[
{
"msg_contents": "Hi:\n\nTake the following example:\n\ninsert into cte1 select i, i from generate_series(1, 1000000)i;\ncreate index on cte1(a);\n\nexplain\nwith cte1 as (select * from cte1)\nselect * from c where a = 1;\n\nIt needs to do seq scan on the above format, however it is pretty\nquick if we change the query to\nselect * from (select * from cte1) c where a = 1;\n\nI know how we treat cte and subqueries differently currently,\nI just don't know why we can't treat cte as a subquery, so lots of\nsubquery related technology can apply to it. Do we have any\ndiscussion about this?\n\nThanks\n\n-- \nBest Regards\nAndy Fan\n\nHi: Take the following example: insert into cte1 select i, i from generate_series(1, 1000000)i;create index on cte1(a);explain with cte1 as (select * from cte1)select * from c where a = 1;It needs to do seq scan on the above format, however it is prettyquick if we change the query to select * from (select * from cte1) c where a = 1; I know how we treat cte and subqueries differently currently,I just don't know why we can't treat cte as a subquery, so lots ofsubquery related technology can apply to it. Do we have any discussion about this? Thanks-- Best RegardsAndy Fan",
"msg_date": "Sat, 14 Nov 2020 14:03:57 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Have we tried to treat CTE as SubQuery in planner?"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> Take the following example:\n\n> insert into cte1 select i, i from generate_series(1, 1000000)i;\n> create index on cte1(a);\n\n> explain\n> with cte1 as (select * from cte1)\n> select * from c where a = 1;\n\n> It needs to do seq scan on the above format, however it is pretty\n> quick if we change the query to\n> select * from (select * from cte1) c where a = 1;\n\nThis example seems both confused and out of date. Since we changed\nthe rules on materializing CTEs (in 608b167f9), I get\n\nregression=# create table c as select i as a, i from generate_series(1, 1000000)i;\nSELECT 1000000\nregression=# create index on c(a);\nCREATE INDEX\nregression=# explain\nregression-# with cte1 as (select * from c)\nregression-# select * from cte1 where a = 1;\n QUERY PLAN \n--------------------------------------------------------------------------\n Bitmap Heap Scan on c (cost=95.17..4793.05 rows=5000 width=8)\n Recheck Cond: (a = 1)\n -> Bitmap Index Scan on c_a_idx (cost=0.00..93.92 rows=5000 width=0)\n Index Cond: (a = 1)\n(4 rows)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 14 Nov 2020 01:14:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Have we tried to treat CTE as SubQuery in planner?"
},
{
"msg_contents": "On Sat, Nov 14, 2020 at 2:04 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi:\n>\n> Take the following example:\n>\n> insert into cte1 select i, i from generate_series(1, 1000000)i;\n> create index on cte1(a);\n>\n> explain\n> with cte1 as (select * from cte1)\n> select * from c where a = 1;\n>\n> It needs to do seq scan on the above format, however it is pretty\n> quick if we change the query to\n> select * from (select * from cte1) c where a = 1;\n>\n> I know how we treat cte and subqueries differently currently,\n> I just don't know why we can't treat cte as a subquery, so lots of\n> subquery related technology can apply to it. Do we have any\n> discussion about this?\n\nSee https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=608b167f9f9c4553c35bb1ec0eab9ddae643989b\n\n\n",
"msg_date": "Sat, 14 Nov 2020 14:15:17 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have we tried to treat CTE as SubQuery in planner?"
},
{
"msg_contents": "Hi,\n\nOn Fri, Nov 13, 2020 at 10:04 PM Andy Fan wrote:\n>\n> Hi:\n>\n> Take the following example:\n>\n> insert into cte1 select i, i from generate_series(1, 1000000)i;\n> create index on cte1(a);\n>\n> explain\n> with cte1 as (select * from cte1)\n> select * from c where a = 1;\n>\n\nITYM:\n\nEXPLAIN\nWITH c AS (SELECT * FROM cte1)\nSELECT * FROM c WHERE a = 1;\n\nI'm also guessing your table DDL is:\n\nCREATE TABLE cte1 (a int, b int);\n\n> It needs to do seq scan on the above format, however it is pretty\n> quick if we change the query to\n> select * from (select * from cte1) c where a = 1;\n\nDoes it? On HEAD, I got the following plan:\n\n(without stats):\n Bitmap Heap Scan on foo\n Recheck Cond: (a = 1)\n -> Bitmap Index Scan on foo_a_idx\n Index Cond: (a = 1)\n\n(with stats):\n Index Scan using foo_a_idx on foo\n Index Cond: (a = 1)\n\n\n>\n> I know how we treat cte and subqueries differently currently,\n> I just don't know why we can't treat cte as a subquery, so lots of\n> subquery related technology can apply to it. Do we have any\n> discussion about this?\n\nThis was brought up a few times, the most recent one I can recall was a\nlittle bit over two years ago [1]\n\n[1] https://postgr.es/m/87sh48ffhb.fsf@news-spur.riddles.org.uk\n\n\n",
"msg_date": "Fri, 13 Nov 2020 22:43:51 -0800",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Have we tried to treat CTE as SubQuery in planner?"
},
{
"msg_contents": "On Sat, Nov 14, 2020 at 2:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > Take the following example:\n>\n> > insert into cte1 select i, i from generate_series(1, 1000000)i;\n> > create index on cte1(a);\n>\n> > explain\n> > with cte1 as (select * from cte1)\n> > select * from c where a = 1;\n>\n> > It needs to do seq scan on the above format, however it is pretty\n> > quick if we change the query to\n> > select * from (select * from cte1) c where a = 1;\n>\n> This example seems both confused and out of date. Since we changed\n> the rules on materializing CTEs (in 608b167f9), I get\n>\n>\nSorry, I should have tested it again on the HEAD, and 608b167f9 is exactly\nthe thing I mean.\n\nregression=# create table c as select i as a, i from generate_series(1,\n> 1000000)i;\n> SELECT 1000000\n> regression=# create index on c(a);\n> CREATE INDEX\n> regression=# explain\n> regression-# with cte1 as (select * from c)\n> regression-# select * from cte1 where a = 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Bitmap Heap Scan on c (cost=95.17..4793.05 rows=5000 width=8)\n> Recheck Cond: (a = 1)\n> -> Bitmap Index Scan on c_a_idx (cost=0.00..93.92 rows=5000 width=0)\n> Index Cond: (a = 1)\n> (4 rows)\n>\n> regards, tom lane\n>\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Sat, Nov 14, 2020 at 2:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Andy Fan <zhihui.fan1213@gmail.com> writes:\n> Take the following example:\n\n> insert into cte1 select i, i from generate_series(1, 1000000)i;\n> create index on cte1(a);\n\n> explain\n> with cte1 as (select * from cte1)\n> select * from c where a = 1;\n\n> It needs to do seq scan on the above format, however it is pretty\n> quick if we change the query to\n> select * from (select * from cte1) c where a = 1;\n\nThis example seems both confused and out of date. 
Since we changed\nthe rules on materializing CTEs (in 608b167f9), I get\n Sorry, I should have tested it again on the HEAD, and 608b167f9 is exactlythe thing I mean. \nregression=# create table c as select i as a, i from generate_series(1, 1000000)i;\nSELECT 1000000\nregression=# create index on c(a);\nCREATE INDEX\nregression=# explain\nregression-# with cte1 as (select * from c)\nregression-# select * from cte1 where a = 1;\n QUERY PLAN \n--------------------------------------------------------------------------\n Bitmap Heap Scan on c (cost=95.17..4793.05 rows=5000 width=8)\n Recheck Cond: (a = 1)\n -> Bitmap Index Scan on c_a_idx (cost=0.00..93.92 rows=5000 width=0)\n Index Cond: (a = 1)\n(4 rows)\n\n regards, tom lane\n-- Best RegardsAndy Fan",
"msg_date": "Sat, 14 Nov 2020 15:15:41 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Have we tried to treat CTE as SubQuery in planner?"
},
{
"msg_contents": "On Sat, Nov 14, 2020 at 2:44 PM Jesse Zhang <sbjesse@gmail.com> wrote:\n\n> Hi,\n>\n> On Fri, Nov 13, 2020 at 10:04 PM Andy Fan wrote:\n> >\n> > Hi:\n> >\n> > Take the following example:\n> >\n> > insert into cte1 select i, i from generate_series(1, 1000000)i;\n> > create index on cte1(a);\n> >\n> > explain\n> > with cte1 as (select * from cte1)\n> > select * from c where a = 1;\n> >\n>\n> ITYM:\n>\n> EXPLAIN\n> WITH c AS (SELECT * FROM cte1)\n> SELECT * FROM c WHERE a = 1;\n>\n> I'm also guessing your table DDL is:\n>\n> CREATE TABLE cte1 (a int, b int);\n>\n> > It needs to do seq scan on the above format, however it is pretty\n> > quick if we change the query to\n> > select * from (select * from cte1) c where a = 1;\n>\n> Does it? On HEAD, I got the following plan:\n>\n>\nYou understand me correctly, just too busy recently and make\nme make mistakes like this. Sorry about that:(\n\n\n\n\n> (without stats):\n> Bitmap Heap Scan on foo\n> Recheck Cond: (a = 1)\n> -> Bitmap Index Scan on foo_a_idx\n> Index Cond: (a = 1)\n>\n> (with stats):\n> Index Scan using foo_a_idx on foo\n> Index Cond: (a = 1)\n>\n>\n>\n\n> >\n> > I know how we treat cte and subqueries differently currently,\n> > I just don't know why we can't treat cte as a subquery, so lots of\n> > subquery related technology can apply to it. 
Do we have any\n> > discussion about this?\n>\n> This was brought up a few times, the most recent one I can recall was a\n> little bit over two years ago [1]\n>\n> [1] https://postgr.es/m/87sh48ffhb.fsf@news-spur.riddles.org.uk\n\n\nAnd I should have searched \"CTE\" at least for a while..\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Sat, Nov 14, 2020 at 2:44 PM Jesse Zhang <sbjesse@gmail.com> wrote:Hi,\n\nOn Fri, Nov 13, 2020 at 10:04 PM Andy Fan wrote:\n>\n> Hi:\n>\n> Take the following example:\n>\n> insert into cte1 select i, i from generate_series(1, 1000000)i;\n> create index on cte1(a);\n>\n> explain\n> with cte1 as (select * from cte1)\n> select * from c where a = 1;\n>\n\nITYM:\n\nEXPLAIN\nWITH c AS (SELECT * FROM cte1)\nSELECT * FROM c WHERE a = 1;\n\nI'm also guessing your table DDL is:\n\nCREATE TABLE cte1 (a int, b int);\n\n> It needs to do seq scan on the above format, however it is pretty\n> quick if we change the query to\n> select * from (select * from cte1) c where a = 1;\n\nDoes it? On HEAD, I got the following plan:\n You understand me correctly, just too busy recently and makeme make mistakes like this. Sorry about that:( \n(without stats):\n Bitmap Heap Scan on foo\n Recheck Cond: (a = 1)\n -> Bitmap Index Scan on foo_a_idx\n Index Cond: (a = 1)\n\n(with stats):\n Index Scan using foo_a_idx on foo\n Index Cond: (a = 1)\n\n \n>\n> I know how we treat cte and subqueries differently currently,\n> I just don't know why we can't treat cte as a subquery, so lots of\n> subquery related technology can apply to it. Do we have any\n> discussion about this?\n\nThis was brought up a few times, the most recent one I can recall was a\nlittle bit over two years ago [1]\n\n[1] https://postgr.es/m/87sh48ffhb.fsf@news-spur.riddles.org.ukAnd I should have searched \"CTE\" at least for a while..-- Best RegardsAndy Fan",
"msg_date": "Sat, 14 Nov 2020 15:23:48 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Have we tried to treat CTE as SubQuery in planner?"
}
]
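The first thread above hinges on commit 608b167f9 (PostgreSQL 12), which lets the planner inline a side-effect-free CTE that is referenced only once. As an illustrative sketch only (assuming the intended table is c(a int, b int), per Jesse Zhang's reading of the garbled example):

```sql
-- Setup (the table the example apparently intended):
CREATE TABLE c (a int, b int);
INSERT INTO c SELECT i, i FROM generate_series(1, 1000000) i;
CREATE INDEX ON c (a);

-- Since PostgreSQL 12 this CTE is inlined like a subquery,
-- so the planner can use the index on a:
EXPLAIN WITH cte1 AS (SELECT * FROM c)
SELECT * FROM cte1 WHERE a = 1;

-- The pre-12 always-materialize behavior can still be requested explicitly:
EXPLAIN WITH cte1 AS MATERIALIZED (SELECT * FROM c)
SELECT * FROM cte1 WHERE a = 1;
```

With MATERIALIZED, the CTE is evaluated once into a tuplestore and the outer `a = 1` qual is applied on top of a scan of that result, so the index cannot be used; without it (or with NOT MATERIALIZED) the qual is pushed down into the CTE, which is what the commit changed.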
[
{
"msg_contents": "Hello hackers,\n\nAfter fixing bug #16161 (pg_ctl inability to open just deleted\npostmaster.pid) there are still some errors related to the same\nWindows-only issue.\nNamely, pg_upgradeCheck failures seen on\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=fairywren&br=REL_13_STABLE\nHere pg_ls_dir_files couldn't handle the \"delete pending\" state too.\n\nI've made a simple test to reproduce this behavior with pg_ls_waldir,\nthat fails with:\nerror running SQL: 'psql:<stdin>:1: ERROR: could not stat file\n\"pg_wal/dummy\": Permission denied'\n\nAs noted in [1], a sensible solution would be putting the same \"retry on\nERROR_ACCESS_DENIED\" action in a wrapper for stat().\nAnd bed90759f brought in master the _pgstat64() function, where such\nerror handling should be placed.\nSo please look at the patch (based on the previous one made to fix\nbug#16161), that makes the attached test pass.\n\nAnd I have a few questions.\nFor now genfile.c needs to include \"port.h\" explicitly to override\ndefinitions in \"sys/stat.h\", that override \"stat\" redefinition for\nWindows included implicitly above via \"postgres.h\". Without it the\npg_ls_dir_files() function just uses system' stat().\nSo I wonder, is there a way to make all stat-calling code use the\nimproved stat()?\nAnd if so, maybe the stat() call in pgwin32_open should be replaced with\nmicrosoft_native_stat().\n\nAnd regarding released versions, are there any reasons to not backport\nbed90759f (with the proposed fix or alike)?\n\n[1] https://www.postgresql.org/message-id/396837.1605298573%40sss.pgh.pa.us\n\nBest regards,\nAlexander",
"msg_date": "Sat, 14 Nov 2020 13:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "More time spending with \"delete pending\""
},
{
"msg_contents": "On Sat, Nov 14, 2020 at 01:00:00PM +0300, Alexander Lakhin wrote:\n> As noted in [1], a sensible solution would be putting the same \"retry on\n> ERROR_ACCESS_DENIED\" action in a wrapper for stat().\n> And bed90759f brought in master the _pgstat64() function, where such\n> error handling should be placed.\n> So please look at the patch (based on the previous one made to fix\n> bug#16161), that makes the attached test pass.\n\nYour patch introduces a \"loops\", but doesn't use it to escape the loop.\n\n> diff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c\n> index d34182a7b0..922df49125 100644\n> --- a/src/backend/utils/adt/genfile.c\n> +++ b/src/backend/utils/adt/genfile.c\n> @@ -28,6 +28,7 @@\n> #include \"funcapi.h\"\n> #include \"mb/pg_wchar.h\"\n> #include \"miscadmin.h\"\n> +#include \"port.h\"\n> #include \"postmaster/syslogger.h\"\n> #include \"storage/fd.h\"\n> #include \"utils/acl.h\"\n> diff --git a/src/port/win32stat.c b/src/port/win32stat.c\n> index 4351aa4d08..c2b55c7fa6 100644\n> --- a/src/port/win32stat.c\n> +++ b/src/port/win32stat.c\n> @@ -170,6 +170,7 @@ _pgstat64(const char *name, struct stat *buf)\n> \tSECURITY_ATTRIBUTES sa;\n> \tHANDLE\t\thFile;\n> \tint\t\t\tret;\n> +\tint\t\t\tloops = 0;\n> #if _WIN32_WINNT < 0x0600\n> \tIO_STATUS_BLOCK ioStatus;\n> \tFILE_STANDARD_INFORMATION standardInfo;\n> @@ -182,31 +183,60 @@ _pgstat64(const char *name, struct stat *buf)\n> \t\terrno = EINVAL;\n> \t\treturn -1;\n> \t}\n> -\n> \t/* fast not-exists check */\n> \tif (GetFileAttributes(name) == INVALID_FILE_ATTRIBUTES)\n> \t{\n> -\t\t_dosmaperr(GetLastError());\n> -\t\treturn -1;\n> +\t\tDWORD\t\terr = GetLastError();\n> +\n> +\t\tif (err != ERROR_ACCESS_DENIED) {\n> +\t\t\t_dosmaperr(err);\n> +\t\t\treturn -1;\n> +\t\t}\n> \t}\n> \n> \t/* get a file handle as lightweight as we can */\n> \tsa.nLength = sizeof(SECURITY_ATTRIBUTES);\n> \tsa.bInheritHandle = TRUE;\n> \tsa.lpSecurityDescriptor = NULL;\n> -\thFile = 
CreateFile(name,\n> -\t\t\t\t\t GENERIC_READ,\n> -\t\t\t\t\t (FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE),\n> -\t\t\t\t\t &sa,\n> -\t\t\t\t\t OPEN_EXISTING,\n> -\t\t\t\t\t (FILE_FLAG_NO_BUFFERING | FILE_FLAG_BACKUP_SEMANTICS |\n> -\t\t\t\t\t\tFILE_FLAG_OVERLAPPED),\n> -\t\t\t\t\t NULL);\n> -\tif (hFile == INVALID_HANDLE_VALUE)\n> +\twhile ((hFile = CreateFile(name,\n> +\t\t\t\t\t\t\t GENERIC_READ,\n> +\t\t\t\t\t\t\t (FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE),\n> +\t\t\t\t\t\t\t &sa,\n> +\t\t\t\t\t\t\t OPEN_EXISTING,\n> +\t\t\t\t\t\t\t (FILE_FLAG_NO_BUFFERING | FILE_FLAG_BACKUP_SEMANTICS |\n> +\t\t\t\t\t\t\t FILE_FLAG_OVERLAPPED),\n> +\t\t\t\t\t\t\t NULL)) == INVALID_HANDLE_VALUE)\n> \t{\n> \t\tDWORD\t\terr = GetLastError();\n> \n> -\t\tCloseHandle(hFile);\n> +\t\t/*\n> +\t\t * ERROR_ACCESS_DENIED is returned if the file is deleted but not yet\n> +\t\t * gone (Windows NT status code is STATUS_DELETE_PENDING). In that\n> +\t\t * case we want to wait a bit and try again, giving up after 1 second\n> +\t\t * (since this condition should never persist very long). However,\n> +\t\t * there are other commonly-hit cases that return ERROR_ACCESS_DENIED,\n> +\t\t * so care is needed. In particular that happens if we try to open a\n> +\t\t * directory, or of course if there's an actual file-permissions\n> +\t\t * problem. To distinguish these cases, try a stat(). In the\n> +\t\t * delete-pending case, it will either also get STATUS_DELETE_PENDING,\n> +\t\t * or it will see the file as gone and fail with ENOENT. In other\n> +\t\t * cases it will usually succeed. 
The only somewhat-likely case where\n> +\t\t * this coding will uselessly wait is if there's a permissions problem\n> +\t\t * with a containing directory, which we hope will never happen in any\n> +\t\t * performance-critical code paths.\n> +\t\t */\n> +\t\tif (err == ERROR_ACCESS_DENIED)\n> +\t\t{\n> +\t\t\tstruct microsoft_native_stat st;\n> +\n> +\t\t\tif (microsoft_native_stat(name, &st) != 0)\n> +\t\t\t{\n> +\t\t\t\tpg_usleep(100000);\n> +\t\t\t\tloops++;\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\t\t}\n> +\n> \t\t_dosmaperr(err);\n> \t\treturn -1;\n> \t}\n\n\n",
"msg_date": "Sat, 14 Nov 2020 19:11:12 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "15.11.2020 04:11, Justin Pryzby wrote:\n> On Sat, Nov 14, 2020 at 01:00:00PM +0300, Alexander Lakhin wrote:\n>> As noted in [1], a sensible solution would be putting the same \"retry on\n>> ERROR_ACCESS_DENIED\" action in a wrapper for stat().\n>> And bed90759f brought in master the _pgstat64() function, where such\n>> error handling should be placed.\n>> So please look at the patch (based on the previous one made to fix\n>> bug#16161), that makes the attached test pass.\n> Your patch introduces a \"loops\", but doesn't use it to escape the loop.\nIndeed, this is my mistake. Please look at the corrected patch (now that\ncode corresponds to the pgwin32_open() as intended).\n\nAnd it rises another question, what if pg_ls_dir_files() is called for a\ndirectory where hundreds or thousands of files are really inaccessible\ndue to restrictions?\nI mean that using CreateFile() in the modified stat() implementation can\nbe rather expensive for an arbitrary file (and worse yet, for many files).\nOn the positive side, for now pg_ls_dir_files() is called only from\npg_ls_logdir, pg_ls_waldir, pg_ls_tmpdir, pg_ls_archive_statusdir, where\nhaving a bunch of files that are inaccessible for the postgres user is\nnot expected anyway.\n\nBut probably getting directory contents with correct file sizes (>4GB)\nin pg_ls_dir_files() can be implemented without calling\nCreateFile()/stat() at all (as ProcMon shows, the \"dir\" command doesn't\ncall CreateFile() (or any other system function) for each file in the\ntarget directory).\n\nBest regards,\nAlexander",
"msg_date": "Sun, 15 Nov 2020 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "15.11.2020 08:00, Alexander Lakhin wrote:\n> And it rises another question, what if pg_ls_dir_files() is called for a\n> directory where hundreds or thousands of files are really inaccessible\n> due to restrictions?\n> I mean that using CreateFile() in the modified stat() implementation can\n> be rather expensive for an arbitrary file (and worse yet, for many files).\n> On the positive side, for now pg_ls_dir_files() is called only from\n> pg_ls_logdir, pg_ls_waldir, pg_ls_tmpdir, pg_ls_archive_statusdir, where\n> having a bunch of files that are inaccessible for the postgres user is\n> not expected anyway.\nI've missed the fact that pg_ls_dir_files() just fails on first error,\nso the existing coding would not cause performance issues. But such\nbehavior is somewhat dubious, because having an inaccessible file in a\ndirectory of interest would make any of those calls fail\n> But probably getting directory contents with correct file sizes (>4GB)\n> in pg_ls_dir_files() can be implemented without calling\n> CreateFile()/stat() at all (as ProcMon shows, the \"dir\" command doesn't\n> call CreateFile() (or any other system function) for each file in the\n> target directory).\nAs I've found out, readdir() replacement for Windows in fact gets all\nthe needed information (correct file size, attributes...) in\nWIN32_FIND_DATA, but it just leaves it inside and returns only fields of\nthe dirent structure. So pg_ls_dir_files() (that calls\nReadDir()/readdir()) needs to get this information again, that leads to\nopening a file on Windows.\nI think it can be done more effectively by adding a new function\nReadDirExtendedInfo(), that will additionally accept a pointer to\n\"struct dirent_extra\" with fields {valid (indicating that the structure\nis filled), attributes, file size, mtime}. 
Maybe the advanced function\ncould also invoke stat() inside (not on Windows).\n\nAs to the patch I proposed before, I think it's still needed to fully\nsupport the following usage pattern:\nstat();\nif (no_error) {\n do_something();\n} else if (file_not_found) {\n do_something_else();\n} else {\n error();\n}\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 15 Nov 2020 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Sun, Nov 15, 2020 at 4:00 PM Alexander Lakhin <exclusion@gmail.com>\nwrote:\n\n> As I've found out, readdir() replacement for Windows in fact gets all\n> the needed information (correct file size, attributes...) in\n> WIN32_FIND_DATA, but it just leaves it inside and returns only fields of\n> the dirent structure. So pg_ls_dir_files() (that calls\n> ReadDir()/readdir()) needs to get this information again, that leads to\n> opening a file on Windows.\n> I think it can be done more effectively by adding a new function\n> ReadDirExtendedInfo(), that will additionally accept a pointer to\n> \"struct dirent_extra\" with fields {valid (indicating that the structure\n> is filled), attributes, file size, mtime}. Maybe the advanced function\n> could also invoke stat() inside (not on Windows).\n>\n> As to patch I proposed before, I think it's still needed to fully\n> support the following usage pattern:\n> stat();\n> if (no_error) {\n> do_something();\n> } else if (file_not_found) {\n> do_something_else();\n> } else {\n> error();\n> }\n\n\nWe are currently missing a WIN32 lstat() port. I was thinking about\nproposing a patch to implement it using GetFileAttributesEx(). That might\nwork as fall back to the CreateFile() if the file attribute is not a\nFILE_ATTRIBUTE_REPARSE_POINT.\n\nAnyhow, I do not think any retry logic should be in the stat() function,\nbut in the caller.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>\n>\n\nOn Sun, Nov 15, 2020 at 4:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\nAs I've found out, readdir() replacement for Windows in fact gets all\nthe needed information (correct file size, attributes...) in\nWIN32_FIND_DATA, but it just leaves it inside and returns only fields of\nthe dirent structure. 
So pg_ls_dir_files() (that calls\nReadDir()/readdir()) needs to get this information again, that leads to\nopening a file on Windows.\nI think it can be done more effectively by adding a new function\nReadDirExtendedInfo(), that will additionally accept a pointer to\n\"struct dirent_extra\" with fields {valid (indicating that the structure\nis filled), attributes, file size, mtime}. Maybe the advanced function\ncould also invoke stat() inside (not on Windows).\n\nAs to patch I proposed before, I think it's still needed to fully\nsupport the following usage pattern:\nstat();\nif (no_error) {\n do_something();\n} else if (file_not_found) {\n do_something_else();\n} else {\n error();\n}We are currently missing a WIN32 lstat() port. I was thinking about proposing a patch to implement it using GetFileAttributesEx(). That might work as fall back to the CreateFile() if the file attribute is not a FILE_ATTRIBUTE_REPARSE_POINT.Anyhow, I do not think any retry logic should be in the stat() function, but in the caller.Regards,Juan José Santamaría Flecha",
"msg_date": "Mon, 16 Nov 2020 13:52:03 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 15.11.2020 04:11, Justin Pryzby wrote:\n>> Your patch introduces a \"loops\", but doesn't use it to escape the loop.\n\n> Indeed, this is my mistake. Please look at the corrected patch (now that\n> code corresponds to the pgwin32_open() as intended).\n\nSo what you're saying (and what the buildfarm seems to confirm, since\nwe are still seeing \"Permission denied\" failures from time to time,\neg [1]) is that all that code that bed90759f added to deal with\ndelete-pending is just junk, because we never get that far if the\nfile is delete-pending. So shouldn't we remove all of it? (That's\nlines 214-276 in win32stat.c, plus some earlier declarations that\nwould now be unneeded.)\n\nI'd be kind of inclined to change pgwin32_open() to use\nmicrosoft_native_stat, as well, because these two functions\nreally ought to be handling this exactly alike.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-11-18%2018%3A02%3A04\n\n\n",
"msg_date": "Wed, 18 Nov 2020 13:53:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "BTW ... scraping the buildfarm logs for \"could not open ... Permission\ndenied\" failures suggests that pgwin32_open() isn't the pinnacle of\nperfection either. In the last three months I found these instances:\n\n dory | REL_11_STABLE | 2020-08-21 22:15:14 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/REL_11_STABLE/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n fairywren | REL_11_STABLE | 2020-09-01 01:04:55 | Check | pg_ctl: could not open PID file \"C:/tools/msys64/home/pgrunner/bf/root/REL_11_STABLE/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n dory | HEAD | 2020-09-16 02:40:17 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n walleye | HEAD | 2020-09-29 18:35:46 | pg_upgradeCheck | 2020-09-29 15:20:04.298 EDT [896:31] pg_regress/gist WARNING: could not open statistics file \"pg_stat_tmp/global.stat\": Permission denied\n frogmouth | REL_10_STABLE | 2020-10-01 00:30:15 | StopDb-C:5 | waiting for server to shut down....pg_ctl: could not open PID file \"data-C/postmaster.pid\": Permission denied\\r\n dory | REL_11_STABLE | 2020-10-05 02:15:06 | pg_upgradeCheck | waiting for server to shut down..............pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/REL_11_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/data.old/postmaster.pid\": Permission denied\\r\n bowerbird | REL_10_STABLE | 2020-10-21 23:19:22 | scriptsCheck | waiting for server to shut down....pg_ctl: could not open PID file \"H:/prog/bf/root/REL_10_STABLE/pgsql.build/src/bin/scripts/tmp_check/data_main_cH68/pgdata/postmaster.pid\": Permission denied\\r\n jacana | REL_10_STABLE | 2020-10-22 12:42:51 | pg_upgradeCheck | Oct 22 09:08:20 waiting for server to shut down....pg_ctl: could not open PID file 
\"c:/mingw/msys/1.0/home/pgrunner/bf/root/REL_10_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/data.old/postmaster.pid\": Permission denied\\r\n dory | HEAD | 2020-10-22 23:30:16 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n dory | REL_11_STABLE | 2020-10-27 18:15:08 | StopDb-C:2 | waiting for server to shut down....pg_ctl: could not open PID file \"data-C/postmaster.pid\": Permission denied\\r\n fairywren | REL_11_STABLE | 2020-10-28 04:01:17 | recoveryCheck | waiting for server to shut down....pg_ctl: could not open PID file \"C:/tools/msys64/home/pgrunner/bf/root/REL_11_STABLE/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/pgdata/postmaster.pid\": Permission denied\\r\n hamerkop | REL_11_STABLE | 2020-11-01 11:31:54 | pg_upgradeCheck | waiting for server to shut down....pg_ctl: could not open PID file \"c:/build-farm-local/buildroot/REL_11_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/data/postmaster.pid\": Permission denied\\r\n dory | HEAD | 2020-11-04 05:50:17 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n walleye | HEAD | 2020-11-09 09:45:41 | Check | 2020-11-09 05:06:19.672 EST [4408:31] pg_regress/gist WARNING: could not open statistics file \"pg_stat_tmp/global.stat\": Permission denied\n dory | HEAD | 2020-11-10 23:35:18 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n\nNow, the ones on the 10 and 11 branches are all from pg_ctl, which\ndoes not use pgwin32_open() in those branches, only native open().\nSo those aren't fair to count against it. But we have nearly as\nmany similar failures in HEAD, which surely is going through\npgwin32_open(). 
So either we don't really have adequate protection\nagainst delete-pending files, or there is some other effect we haven't\nexplained yet.\n\nA simple theory about that is that the 1-second wait is not always\nenough on heavily loaded buildfarm machines. Is it sensible to\nincrease the timeout?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Nov 2020 15:39:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Hello Tom,\n18.11.2020 23:39, Tom Lane wrote:\n> BTW ... scraping the buildfarm logs for \"could not open ... Permission\n> denied\" failures suggests that pgwin32_open() isn't the pinnacle of\n> perfection either. In the last three months I found these instances:\n>\n> dory | REL_11_STABLE | 2020-08-21 22:15:14 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/REL_11_STABLE/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n> fairywren | REL_11_STABLE | 2020-09-01 01:04:55 | Check | pg_ctl: could not open PID file \"C:/tools/msys64/home/pgrunner/bf/root/REL_11_STABLE/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n> dory | HEAD | 2020-09-16 02:40:17 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n> walleye | HEAD | 2020-09-29 18:35:46 | pg_upgradeCheck | 2020-09-29 15:20:04.298 EDT [896:31] pg_regress/gist WARNING: could not open statistics file \"pg_stat_tmp/global.stat\": Permission denied\n> frogmouth | REL_10_STABLE | 2020-10-01 00:30:15 | StopDb-C:5 | waiting for server to shut down....pg_ctl: could not open PID file \"data-C/postmaster.pid\": Permission denied\\r\n> dory | REL_11_STABLE | 2020-10-05 02:15:06 | pg_upgradeCheck | waiting for server to shut down..............pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/REL_11_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/data.old/postmaster.pid\": Permission denied\\r\n> bowerbird | REL_10_STABLE | 2020-10-21 23:19:22 | scriptsCheck | waiting for server to shut down....pg_ctl: could not open PID file \"H:/prog/bf/root/REL_10_STABLE/pgsql.build/src/bin/scripts/tmp_check/data_main_cH68/pgdata/postmaster.pid\": Permission denied\\r\n> jacana | REL_10_STABLE | 2020-10-22 12:42:51 | pg_upgradeCheck | Oct 22 09:08:20 waiting for server to shut down....pg_ctl: could not open PID 
file \"c:/mingw/msys/1.0/home/pgrunner/bf/root/REL_10_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/data.old/postmaster.pid\": Permission denied\\r\n> dory | HEAD | 2020-10-22 23:30:16 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n> dory | REL_11_STABLE | 2020-10-27 18:15:08 | StopDb-C:2 | waiting for server to shut down....pg_ctl: could not open PID file \"data-C/postmaster.pid\": Permission denied\\r\n> fairywren | REL_11_STABLE | 2020-10-28 04:01:17 | recoveryCheck | waiting for server to shut down....pg_ctl: could not open PID file \"C:/tools/msys64/home/pgrunner/bf/root/REL_11_STABLE/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/pgdata/postmaster.pid\": Permission denied\\r\n> hamerkop | REL_11_STABLE | 2020-11-01 11:31:54 | pg_upgradeCheck | waiting for server to shut down....pg_ctl: could not open PID file \"c:/build-farm-local/buildroot/REL_11_STABLE/pgsql.build/src/bin/pg_upgrade/tmp_check/data/postmaster.pid\": Permission denied\\r\n> dory | HEAD | 2020-11-04 05:50:17 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n> walleye | HEAD | 2020-11-09 09:45:41 | Check | 2020-11-09 05:06:19.672 EST [4408:31] pg_regress/gist WARNING: could not open statistics file \"pg_stat_tmp/global.stat\": Permission denied\n> dory | HEAD | 2020-11-10 23:35:18 | Check | pg_ctl: could not open PID file \"C:/pgbuildfarm/pgbuildroot/HEAD/pgsql.build/src/test/regress/./tmp_check/data/postmaster.pid\": Permission denied\\r\n>\n> Now, the ones on the 10 and 11 branches are all from pg_ctl, which\n> does not use pgwin32_open() in those branches, only native open().\n> So those aren't fair to count against it. But we have nearly as\n> many similar failures in HEAD, which surely is going through\n> pgwin32_open(). 
So either we don't really have adequate protection\n> against delete-pending files, or there is some other effect we haven't\n> explained yet.\nCan you confirm that there are no such failures on REL_12_STABLE and\nREL_13_STABLE for the last three (or maybe six) months? Maybe something\nchanged in HEAD then?\n> A simple theory about that is that the 1-second wait is not always\n> enough on heavily loaded buildfarm machines. Is it sensible to\n> increase the timeout?\nI'm going to explore the issue this weekend; comparing the duration of\nthe tests before failures will probably say something about the load on\nthe machines...\nIt's very doubtful that that state could last for as long as 1 second...\n\nBest regards,\nAlexander\n\n\n\n",
"msg_date": "Thu, 19 Nov 2020 00:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 18.11.2020 23:39, Tom Lane wrote:\n>> Now, the ones on the 10 and 11 branches are all from pg_ctl, which\n>> does not use pgwin32_open() in those branches, only native open().\n>> So those aren't fair to count against it. But we have nearly as\n>> many similar failures in HEAD, which surely is going through\n>> pgwin32_open(). So either we don't really have adequate protection\n>> against delete-pending files, or there is some other effect we haven't\n>> explained yet.\n\n> Can you confirm that there are no such failures on REL_12_STABLE and\n> REL_13_STABLE for last three (or maybe six) months? Maybe something\n> changed in HEAD then?\n\nHmm, that is an interesting question isn't it. Here's a search going\nback a full year. There are a few in v12 --- interestingly, all on\nthe statistics file, none from pg_ctl --- and none in v13. Of course,\nv13 has only existed as a separate branch since 2020-06-07.\n\nThere's also a buildfarm test-coverage artifact involved. The bulk\nof the HEAD reports are from dory and walleye, neither of which are\nbuilding v13 as yet, because of unclear problems [1]. I think those\ntwo animals build much more frequently than our other Windows animals,\ntoo, so the fact that they have more may be just because of that and\nnot because they're somehow more susceptible. Because of that, I'm not\nsure that we have enough evidence to say that v13 is better than HEAD.\nIf there is some new bug, it's post-v12, but maybe not post-v13. But\nv12 is evidently not perfect either.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BBEBhvHhM-Bn628pf-LsjqRh3Ang7qCSBG0Ga%2B7KwhGqrNUPw%40mail.gmail.com",
"msg_date": "Wed, 18 Nov 2020 17:28:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "19.11.2020 01:28, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> 18.11.2020 23:39, Tom Lane wrote:\n>>> Now, the ones on the 10 and 11 branches are all from pg_ctl, which\n>>> does not use pgwin32_open() in those branches, only native open().\n>>> So those aren't fair to count against it. But we have nearly as\n>>> many similar failures in HEAD, which surely is going through\n>>> pgwin32_open(). So either we don't really have adequate protection\n>>> against delete-pending files, or there is some other effect we haven't\n>>> explained yet.\n>> Can you confirm that there are no such failures on REL_12_STABLE and\n>> REL_13_STABLE for last three (or maybe six) months? Maybe something\n>> changed in HEAD then?\n> Hmm, that is an interesting question isn't it. Here's a search going\n> back a full year. There are a few in v12 --- interestingly, all on\n> the statistics file, none from pg_ctl --- and none in v13. Of course,\n> v13 has only existed as a separate branch since 2020-06-07.\n>\n> There's also a buildfarm test-coverage artifact involved. The bulk\n> of the HEAD reports are from dory and walleye, neither of which are\n> building v13 as yet, because of unclear problems [1]. I think those\n> two animals build much more frequently than our other Windows animals,\n> too, so the fact that they have more may be just because of that and\n> not because they're somehow more susceptible. Because of that, I'm not\n> sure that we have enough evidence to say that v13 is better than HEAD.\n> If there is some new bug, it's post-v12, but maybe not post-v13. But\n> v12 is evidently not perfect either.\nThanks for the complete list! As I can see from it, only lorikeet fails\non REL_12_STABLE. 
And it has a simple explanation.\nlorikeet uses cygwin, not win32, but improved open() and fopen()\nfunctions are not defined on cygwin:\n#if defined(WIN32) && !defined(__CYGWIN__)\n\n/*\n * open() and fopen() replacements to allow deletion of open files and\n * passing of other special options.\n */\n#define O_DIRECT 0x80000000\nextern int pgwin32_open(const char *, int,...);\nextern FILE *pgwin32_fopen(const char *, const char *);\n#define open(a,b,c) pgwin32_open(a,b,c)\n#define fopen(a,b) pgwin32_fopen(a,b)\n...\n\n\nAnd aside from the \"Permission denied\" failures (not necessarily related\nto the \"delete pending\" state), we can see the \"Device or resource busy\"\nfailures on the same global.stat file (e. g., [1], [2], [3]). All these\nfailures occur when VACUUM (or REINDEX [3]) tries to access statistics\nsimultaneously with the statistics collector.\n\nI've tried to make open()/fopen() replacements for cygwin, but haven't\nsucceeded yet (I need more time to find out how to get the original\nGetLastError() [4], as knowing errno is not sufficient to implement\npgwin32_open()'s logic). (And I'm inclined to file a separate bug related\nto this cygwin behavior when I get more info.)\n\nThe failures on HEAD (on dory and walleye) need another investigation\n(maybe they are caused by the modified stat()...).\n\n[1]\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=lorikeet&dt=2020-09-07%2020%3A31%3A16&stg=install-check-C\n[2]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2019-10-27%2009%3A36%3A07\n[3]\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=lorikeet&dt=2020-09-02%2008%3A28%3A53&stg=install-check-C\n[4] https://github.com/openunix/cygwin/blob/master/winsup/cygwin/errno.cc\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 22 Nov 2020 18:59:59 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Hello hackers,\n\n22.11.2020 18:59, Alexander Lakhin wrote:\n> 19.11.2020 01:28, Tom Lane wrote:\n>\n>> Hmm, that is an interesting question isn't it. Here's a search going\n>> back a full year. There are a few in v12 --- interestingly, all on\n>> the statistics file, none from pg_ctl --- and none in v13. Of course,\n>> v13 has only existed as a separate branch since 2020-06-07.\n>>\n>> There's also a buildfarm test-coverage artifact involved. The bulk\n>> of the HEAD reports are from dory and walleye, neither of which are\n>> building v13 as yet, because of unclear problems [1]. I think those\n>> two animals build much more frequently than our other Windows animals,\n>> too, so the fact that they have more may be just because of that and\n>> not because they're somehow more susceptible. Because of that, I'm not\n>> sure that we have enough evidence to say that v13 is better than HEAD.\n>> If there is some new bug, it's post-v12, but maybe not post-v13. But\n>> v12 is evidently not perfect either.\n> The failures on HEAD (on dory and walleye) need another investigation\n> (maybe they are caused by modified stat()...).\nTo revive this topic and make sure that the issue still persists, I've\nsearched through the buildfarm logs (for HEAD and REL_13_STABLE) and\ncollected the following failures of the misc_functions test with the\n\"Permission denied\" error:\n\nbowerbird\nHEAD (13 runs last week, 500 total, 18 failed)\n-\n\nREL_13_STABLE (4 runs last week, 237 total, 8 failed)\n-\n\ndrongo\nHEAD (25 runs last week, 500 total, 24 failed)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2020-12-27%2019%3A01%3A50\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-01-18%2001%3A01%3A55\n\nREL_13_STABLE (5 runs last week, 255 total, 2 failed)\n-\n\nhamerkop\nHEAD - (8 runs last week, 501 total, 20 failed)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2021-02-22%2010%3A20%3A00\n\nfairywren\nHEAD - 
(27 runs last week, 500 total, 28 failed)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-02-22%2012%3A03%3A35\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-01-28%2000%3A04%3A50\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-01-19%2023%3A29%3A38\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-01-17%2018%3A08%3A11\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-11-30%2009%3A57%3A15\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-11-19%2006%3A02%3A22\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-11-18%2018%3A02%3A04\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-09-16%2019%3A48%3A21\n\nREL_13_STABLE (5 runs last week, 302 total, 5 failed)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-02-26%2003%3A03%3A42\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-10-20%2007%3A50%3A44\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-09-10%2006%3A02%3A12\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-09-04%2019%3A58%3A39\n\nwoodlouse\nHEAD (33 runs last week, 500 total, 5 failed)\n-\n\nREL_13_STABLE (4 runs last week, 325 total, 1 failed)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=woodlouse&dt=2021-01-20%2011%3A42%3A30\n\ndory\nHEAD (52 runs last week, 500 total, 25 failed)\n-\n\nwhelk\nHEAD (31 runs last week, 500 total, 11 failed)\n-\n\nREL_13_STABLE (7 runs last week, 224 total, 1 failed)\n- \n\n(I've not considered lorikeet, jacana, and walleye as they are using\nalternative compilers.)\nSo on some machines there wasn't a single failure (probably it depends\non hardware), but where it fails, such failures account for a\nsignificant share of all failures.\nI believe that the patch attached to [1] should fix this issue. 
The\npatch still applies to master and makes the demotest (attached to [2])\npass. Also I've prepared a trivial patch that makes pgwin32_open() use\nthe original stat() function (as in the proposed change for _pgstat64()).\n\n[1]\nhttps://www.postgresql.org/message-id/979f4ee6-9960-e37f-f3ee-f458e87777b0%40gmail.com\n[2]\nhttps://www.postgresql.org/message-id/c3427edf-d7c0-ff57-90f6-b5de3bb62709%40gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Sun, 14 Mar 2021 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 06:00:00PM +0300, Alexander Lakhin wrote:\n> I believe that the patch attached to [1] should fix this issue. The\n> patch still applies to master and makes the demotest (attached to [2])\n> pass. Also I've prepared a trivial patch that makes pgwin32_open() use\n> the original stat() function (as in the proposed change for _pgstat64()).\n\nHmm. Knowing that _pgfstat64() has some special handling related to\nfiles pending for deletion, do we really need that on HEAD?\n\n> -\t\t\t\tstruct stat st;\n> +\t\t\t\tstruct microsoft_native_stat st;\n> \n> -\t\t\t\tif (stat(fileName, &st) != 0)\n> +\t\t\t\tif (microsoft_native_stat(fileName, &st) != 0)\n> \t\t\t\t{\n> \t\t\t\t\tpg_usleep(100000);\n> \t\t\t\t\tloops++;\n\nThis change looks like a good idea for the WIN32 emulation of open(),\ntaken independently.\n--\nMichael",
"msg_date": "Tue, 6 Jul 2021 17:33:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Hello Michael,\n\n06.07.2021 11:33, Michael Paquier wrote:\n> On Sun, Mar 14, 2021 at 06:00:00PM +0300, Alexander Lakhin wrote:\n>> I believe that the patch attached to [1] should fix this issue. The\n>> patch still applies to master and makes the demotest (attached to [2])\n>> pass. Also I've prepared a trivial patch that makes pgwin32_open() use\n>> the original stat() function (as in the proposed change for _pgstat64()).\n> Hmm. Knowing that _pgfstat64() has some special handling related to\n> files pending for deletion, do we really need that on HEAD?\nYes, this fix is needed for HEAD (b9734c13) as the simple test attached\nto [1] shows:\n(You can put that script in src\\test\\modules\\test_misc\\t and perform\n`vcregress taptest src\\test\\modules\\test_misc`.)\nt/001_delete_pending.pl ......... # Looks like your test exited with 2\nbefore it could output anything.\n\nand\nsrc\\test\\modules\\test_misc\\tmp_check\\log\\regress_log_001_delete_pending\ncontains:\n# Postmaster PID for node \"node\" is 1616\nerror running SQL: 'psql:<stdin>:1: ERROR: could not stat file\n\"pg_wal/dummy\": Permission denied'\nwhile running 'psql -XAtq -d port=64889 host=127.0.0.1 dbname='postgres'\n-f - -v ON_ERROR_STOP=1' with sql 'select count(*) > 0 as ok from\npg_ls_waldir();' at C:/src/postgresql/src/test/perl/PostgresNode.pm line\n1771.\n\nAs Tom Lane noted above, the code added with bed90759f is dubious\n(_NtQueryInformationFile() can not be used to handle the \"delete\npending\" state as CreateFile() returns INVALID_HANDLE_VALUE in this case.)\nProbably that change should be reverted. Should I do it along with the\nproposed fix?\n\n[1]\nhttps://www.postgresql.org/message-id/c3427edf-d7c0-ff57-90f6-b5de3bb62709%40gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 8 Jul 2021 07:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 07:00:01AM +0300, Alexander Lakhin wrote:\n> As Tom Lane noted above, the code added with bed90759f is dubious\n> (_NtQueryInformationFile() can not be used to handle the \"delete\n> pending\" state as CreateFile() returns INVALID_HANDLE_VALUE in this case.)\n> Probably that change should be reverted. Should I do it along with the\n> proposed fix?\n\nAh, I see. I have managed to miss your point. If\n_NtQueryInformationFile() cannot be used, then we'd actually miss the\ncontents for standardInfo and the pending deletion. If you could send\neverything you have, that would be helpful! I'd like to test that\nstuff by myself, with all the contents discussed at disposal for a\nproper evaluation.\n--\nMichael",
"msg_date": "Thu, 8 Jul 2021 16:47:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "08.07.2021 10:47, Michael Paquier wrote:\n> On Thu, Jul 08, 2021 at 07:00:01AM +0300, Alexander Lakhin wrote:\n>> As Tom Lane noted above, the code added with bed90759f is dubious\n>> (_NtQueryInformationFile() can not be used to handle the \"delete\n>> pending\" state as CreateFile() returns INVALID_HANDLE_VALUE in this case.)\n>> Probably that change should be reverted. Should I do it along with the\n>> proposed fix?\n> Ah, I see. I have managed to miss your point. If\n> _NtQueryInformationFile() cannot be used, then we'd actually miss the\n> contents for standardInfo and the pending deletion. If you could send\n> everything you have, that would be helpful! I'd like to test that\n> stuff by myself, with all the contents discussed at disposal for a\n> proper evaluation.\n> --\nBeside the aforementioned test I can only propose the extended patch,\nthat incorporates the undo of the changes brought by bed90759f.\nWith this patch that test is passed.\n\nBest regards,\nAlexander",
"msg_date": "Thu, 8 Jul 2021 23:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 11:00:00PM +0300, Alexander Lakhin wrote:\n> Beside the aforementioned test I can only propose the extended patch,\n> that incorporates the undo of the changes brought by bed90759f.\n> With this patch that test is passed.\n\nChecked and confirmed. It is a nice test with IPC::Run you have here.\nMaking things in win32stat.c more consistent with open.c surely is\nappealing. One thing that I'd like to introduce in this patch, and\nalso mentioned upthread, is to change the stat() call in open.c to use\nmicrosoft_native_stat().\n\nI have let pgbench run for a couple of hours with some concurrent\nactivity using genfile.c, without noticing problems. My environment\nis not representative of everything we can find out there on Windows,\nbut it brings some confidence.\n--\nMichael",
"msg_date": "Fri, 9 Jul 2021 14:52:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Hello Michael,\n09.07.2021 08:52, Michael Paquier wrote:\n> On Thu, Jul 08, 2021 at 11:00:00PM +0300, Alexander Lakhin wrote:\n>> Beside the aforementioned test I can only propose the extended patch,\n>> that incorporates the undo of the changes brought by bed90759f.\n>> With this patch that test is passed.\n> Checked and confirmed. It is a nice test with IPC::Run you have here.\n> Making things in win32stat.c more consistent with open.c surely is\n> appealing. One thing that I'd like to introduce in this patch, and\n> also mentioned upthread, is to change the stat() call in open.c to use\n> microsoft_native_stat().\n>\n> I have let pgbench run for a couple of hours with some concurrent\n> activity using genfile.c, without noticing problems. My environment\n> is not representative of everything we can find out there on Windows,\n> but it brings some confidence.\nThank you! I agree with your improvement. I can't remember why I\ninjected 'include \"port.h\"' in genfile.c.\nToday I've rechecked the whole chain of includes and I see that stat() is\nredefined as _pgstat64() in win32_port.h, which includes <sys/stat.h>.\ngenfile.c includes \"postgres.h\" (which includes win32_port.h indirectly)\nand then includes <sys/stat.h> again, but the latter include should be\nignored due to \"#pragma once\" in stat.h.\nSo I have no objection to the removal of that include.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 9 Jul 2021 21:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 09:00:00PM +0300, Alexander Lakhin wrote:\n> Thank you! I agree with your improvement. I can't remember why did I\n> inject 'include \"port.h\"' in genfile.c.\n> Today I've rechecked all the chain of includes and I see that stat() is\n> redefined as _pgstat64() in win32_port.h, that includes <sys/stat.h>.\n> genfile.c includes \"postgres.h\" (that includes win32_port.h indirectly)\n> and then includes <sys/stat.h> again, but the later include should be\n> ignored due \"#pragma once\" in stat.h.\n> So I have no objection to the removal of that include.\n\nThanks, that matches my impression. There was one comment at the top\nof _pgstat64() that was still incorrect. I have spent more time\nchecking and tested this stuff today, and that looks fine. One\nquestion pending is if we should increase the timeout used by WIN32's\nstat() and open(), but the buildfarm should be able to tell us more\nhere.\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 13:07:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 01:07:17PM +0900, Michael Paquier wrote:\n> Thanks, that matches my impression. There was one comment at the top\n> of _pgstat64() that was still incorrect. I have spent more time\n> checking and tested this stuff today, and that looks fine. One\n> question pending is if we should increase the timeout used by WIN32's\n> stat() and open(), but the buildfarm should be able to tell us more\n> here.\n\nAnd fairywren, that uses MinGW, is unhappy:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-07-12%2004%3A47%3A13\nLooking at it now.\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 14:09:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 02:09:41PM +0900, Michael Paquier wrote:\n> And fairywren, that uses MinGW, is unhappy:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-07-12%2004%3A47%3A13\n> Looking at it now.\n\nI am not sure what to do here to cool down MinGW, so the patch has\nbeen reverted for now:\n- Plain stat() is not enough to do a proper detection of the files\npending for deletion on MSVC, so reverting only the part with\nmicrosoft_native_stat() is not going to help.\n- We are going to need an evaluation of the stat() implementation in\nMinGW and check if is it possible to rely on it more. If yes, we\ncould get away with more tweaks based on __MINGW32__/__MINGW64__.\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 14:52:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Hello Michael,\n12.07.2021 08:52, Michael Paquier wrote:\n> On Mon, Jul 12, 2021 at 02:09:41PM +0900, Michael Paquier wrote:\n>> And fairywren, that uses MinGW, is unhappy:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-07-12%2004%3A47%3A13\n>> Looking at it now.\n> I am not sure what to do here to cool down MinGW, so the patch has\n> been reverted for now:\n> - Plain stat() is not enough to do a proper detection of the files\n> pending for deletion on MSVC, so reverting only the part with\n> microsoft_native_stat() is not going to help.\n> - We are going to need an evaluation of the stat() implementation in\n> MinGW and check if is it possible to rely on it more. If yes, we\n> could get away with more tweaks based on __MINGW32__/__MINGW64__.\nI'll take care of it and will prepare mingw-compatible patch tomorrow.\n\nBest regards,\nAlexander\n\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:06:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 09:06:01AM +0300, Alexander Lakhin wrote:\n> I'll take care of it and will prepare mingw-compatible patch tomorrow.\n\nThanks. Do you have an environment with 32b or 64b MinGW?\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 15:23:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "12.07.2021 09:23, Michael Paquier wrote:\n> On Mon, Jul 12, 2021 at 09:06:01AM +0300, Alexander Lakhin wrote:\n>> I'll take care of it and will prepare mingw-compatible patch tomorrow.\n> Thanks. Do you have an environment with 32b or 64b MinGW?\nYes, I have VMs with 32- and 64-bit mingw environments.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:40:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Hello Michael,\n12.07.2021 08:52, Michael Paquier wrote:\n> On Mon, Jul 12, 2021 at 02:09:41PM +0900, Michael Paquier wrote:\n>> And fairywren, that uses MinGW, is unhappy:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-07-12%2004%3A47%3A13\n>> Looking at it now.\n> I am not sure what to do here to cool down MinGW, so the patch has\n> been reverted for now:\n> - Plain stat() is not enough to do a proper detection of the files\n> pending for deletion on MSVC, so reverting only the part with\n> microsoft_native_stat() is not going to help.\n> - We are going to need an evaluation of the stat() implementation in\n> MinGW and check if is it possible to rely on it more. If yes, we\n> could get away with more tweaks based on __MINGW32__/__MINGW64__.\nI've managed to adapt open/win32stat for MinGW by preventing the stat\noverride at the implementation level. Thus, the code that calls stat()\nwill use the overridden struct/function(s), but inside of\nopen/win32_stat we can access the original stat and refer to\n__pgstat64 where we want to use ours.\nTested on MSVC, MinGW64 and MinGW32; the fixed code compiles everywhere.\n\nBut when using perl from msys (not ActivePerl) while testing I've got\nthe same test failure due to another error condition:\n\n\n\nIn this case CreateFile() fails with ERROR_SHARING_VIOLATION (not\nERROR_ACCESS_DENIED) and it leads to the same \"Permission denied\" error.\nThis condition is handled in pgwin32_open() but not in _pgstat64() yet.\n\nI think I should develop another test for the MSVC environment that will\ndemonstrate such a failure too.\nBut as it's not related to the DELETE_PENDING state, I wonder whether\nthis should be fixed in a separate patch.\n\nBest regards,\nAlexander",
"msg_date": "Tue, 13 Jul 2021 21:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "> On 13 Jul 2021, at 20:00, Alexander Lakhin <exclusion@gmail.com> wrote:\n> \n> Hello Michael,\n> 12.07.2021 08:52, Michael Paquier wrote:\n>> On Mon, Jul 12, 2021 at 02:09:41PM +0900, Michael Paquier wrote:\n>> \n>>> And fairywren, that uses MinGW, is unhappy:\n>>> \n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-07-12%2004%3A47%3A13\n>>> \n>>> Looking at it now.\n>>> \n>> I am not sure what to do here to cool down MinGW, so the patch has\n>> been reverted for now:\n>> - Plain stat() is not enough to do a proper detection of the files\n>> pending for deletion on MSVC, so reverting only the part with\n>> microsoft_native_stat() is not going to help.\n>> - We are going to need an evaluation of the stat() implementation in\n>> MinGW and check if is it possible to rely on it more. If yes, we\n>> could get away with more tweaks based on __MINGW32__/__MINGW64__.\n>> \n> I've managed to adapt open/win32stat for MinGW by preventing stat overriding at the implementation level. Thus, the code that calls stat() will use the overridden struct/function(s), but inside of open/win32_stat we could access original stat and reference to __pgstat64 where we want to use our.\n> Tested on MSVC, MinGW64 and MinGW32 — the fixed code compiles everywhere.\n> \n> But when using perl from msys (not ActivePerl) while testing I've got the same test failure due to another error condition:\n> \n> <ghffjapimnefdgac.png>\n> \n> In this case CreateFile() fails with ERROR_SHARING_VIOLATION (not ERROR_ACCESS_DENIED) and it leads to the same \"Permission denied\" error. This condition is handled in the pgwin32_open() but not in _pgstat64() yet.\n> \n> I think I should develop another test for MSVC environment that will demonstrate a such failure too.\n> But as it's not related to the DELETE_PENDING state, I wonder whether this should be fixed in a separate patch.\n\nMichael, have you had a chance to look at the updated patch here? 
I'm moving\nthis patch entry from Waiting on Author to Needs Review.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 1 Dec 2021 11:51:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "On Wed, Dec 01, 2021 at 11:51:58AM +0100, Daniel Gustafsson wrote:\n> Michael, have you had a chance to look at the updated patch here? I'm moving\n> this patch entry from Waiting on Author to Needs Review.\n\nNo, I haven't. Before being touched more, this requires first a bit\nmore of testing infrastructure based on MinGW, and I don't have that\nin place yet :/\n--\nMichael",
"msg_date": "Thu, 2 Dec 2021 14:03:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: More time spending with \"delete pending\""
},
{
"msg_contents": "Hello Daniel and Michael,\n02.12.2021 08:03, Michael Paquier wrote:\n> On Wed, Dec 01, 2021 at 11:51:58AM +0100, Daniel Gustafsson wrote:\n>> Michael, have you had a chance to look at the updated patch here? I'm moving\n>> this patch entry from Waiting on Author to Needs Review.\n> No, I haven't. Before being touched more, this requires first a bit\n> more of testing infrastructure based on MinGW, and I don't have that\n> in place yet :/\nI think that my patch should be withdrawn in favor of\nhttps://commitfest.postgresql.org/34/3326/ (\"Check for\nSTATUS_DELETE_PENDING on Windows\" by Thomas Munro), which uses a deeper\nunderstanding of the problematic status, not an indirect indication of it.\n\nBarring objections, I'll withdraw my commitfest entry.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 3 Dec 2021 06:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: More time spending with \"delete pending\""
}
] |
[
{
"msg_contents": "Hi, Hackers.\n\n Yesterday, the OR REPLACE clause was added to the CREATE TRIGGER statement, so I wrote a patch for tab completion in psql.\nThe patch adds TRIGGER to the tab completion for the CREATE OR REPLACE statement, so the CREATE TRIGGER and CREATE OR REPLACE TRIGGER statements are completed in the same way.\nI referred to the tab completion code for the CREATE RULE statement.\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Mon, 16 Nov 2020 08:12:05 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSI)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 08:12:05AM +0000, Shinoda, Noriyoshi (PN Japan FSI) wrote:\n> Yesterday, OR REPLACE clause was provided to the CREATE TRIGGER\n> statement, so I wrote a patch for tab completion for the psql\n> command.\n\nThanks, the logic looks fine. I'll apply if there are no objections.\nPlease note that git diff --check and that the indentation does not\nseem quite right, but that's easy enough to fix.\n--\nMichael",
"msg_date": "Tue, 17 Nov 2020 11:02:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Nov 16, 2020 at 08:12:05AM +0000, Shinoda, Noriyoshi (PN Japan FSI) wrote:\n>> Yesterday, OR REPLACE clause was provided to the CREATE TRIGGER\n>> statement, so I wrote a patch for tab completion for the psql\n>> command.\n\n> Thanks, the logic looks fine. I'll apply if there are no objections.\n> Please note that git diff --check and that the indentation does not\n> seem quite right, but that's easy enough to fix.\n\nIt's kind of depressing how repetitive the patch is. There's\nprobably not much to be done about that in the short run, but\nit seems to point up the need to start thinking about how to\nrefactor tab-complete.c more thoroughly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Nov 2020 21:14:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 09:14:51PM -0500, Tom Lane wrote:\n> It's kind of depressing how repetitive the patch is. There's\n> probably not much to be done about that in the short run, but\n> it seems to point up the need to start thinking about how to\n> refactor tab-complete.c more thoroughly.\n\nAgreed. One thing that I'd think could help is a new wild card to\nmake some of the words conditional in the list of items. But that may\nbe tricky once you consider the case of a group of words.\n\nI don't think that this is a requirement for this thread, though.\n--\nMichael",
"msg_date": "Tue, 17 Nov 2020 11:55:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I don't think that this is a requirement for this thread, though.\n\nAgreed, I'm not trying to block this patch. Just wishing\nthere were a better way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Nov 2020 22:14:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 10:14:10PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> I don't think that this is a requirement for this thread, though.\n> \n> Agreed, I'm not trying to block this patch. Just wishing\n> there were a better way.\n\nOkay. I have tweaked a couple of things in the patch and applied it.\nI am wondering if libreadline gives the possibility to implement an\noptional grouping of words to complete, but diving into its\ndocumentation I have not found something that we could use.\n--\nMichael",
"msg_date": "Wed, 18 Nov 2020 14:06:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "> Okay. I have tweaked a couple of things in the patch and applied it.\n\nThank you so much.\nThe next time I post a patch, be careful with git --diff check and indentation.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Michael Paquier [mailto:michael@paquier.xyz] \nSent: Wednesday, November 18, 2020 2:07 PM\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: Shinoda, Noriyoshi (PN Japan FSI) <noriyoshi.shinoda@hpe.com>; pgsql-hackers@lists.postgresql.org\nSubject: Re: Tab complete for CREATE OR REPLACE TRIGGER statement\n\nOn Mon, Nov 16, 2020 at 10:14:10PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> I don't think that this is a requirement for this thread, though.\n> \n> Agreed, I'm not trying to block this patch. Just wishing there were a \n> better way.\n\nOkay. I have tweaked a couple of things in the patch and applied it.\nI am wondering if libreadline gives the possibility to implement an optional grouping of words to complete, but diving into its documentation I have not found something that we could use.\n--\nMichael\n\n\n",
"msg_date": "Wed, 18 Nov 2020 06:06:07 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSI)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "RE: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "On 2020-11-18 06:06, Michael Paquier wrote:\n> On Mon, Nov 16, 2020 at 10:14:10PM -0500, Tom Lane wrote:\n>> Michael Paquier <michael@paquier.xyz> writes:\n>>> I don't think that this is a requirement for this thread, though.\n>> \n>> Agreed, I'm not trying to block this patch. Just wishing\n>> there were a better way.\n> \n> Okay. I have tweaked a couple of things in the patch and applied it.\n> I am wondering if libreadline gives the possibility to implement an\n> optional grouping of words to complete, but diving into its\n> documentation I have not found something that we could use.\n\nTo me the code looks like a prime candidate for \"data-driven\" \nrefactoring.\n\nIt should be redone as generic code that reads a table of rules with\nparams and then checks and applies each. Thus the repetitive code would\nbe replaced by a bit more generic code and a lot of code-free data.\n\n-- \nBest regards,\n\nTels\n\n\n",
"msg_date": "Wed, 18 Nov 2020 16:25:44 +0100",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "Tels <nospam-pg-abuse@bloodgate.com> writes:\n> On 2020-11-18 06:06, Michael Paquier wrote:\n>> On Mon, Nov 16, 2020 at 10:14:10PM -0500, Tom Lane wrote:\n>>> Agreed, I'm not trying to block this patch. Just wishing\n>>> there were a better way.\n\n> To me the code looks like a prime candidate for \"data-driven\" \n> refactoring.\n> It should be redone as generic code that reads a table of rules with\n> params and then checks and applies each. Thus the repetitive code would\n> be replaced by a bit more generic code and a lot of code-free data.\n\nIn the past I've looked into whether the rules could be autogenerated\nfrom the backend's grammar. It did not look very promising though.\nThe grammar isn't really factorized appropriately -- for instance,\ntab-complete has a lot of knowledge about which kinds of objects can\nbe named in particular places, while the Bison productions only know\nthat's a \"name\". Still, the precedent of ECPG suggests it might be\npossible to process the grammar rules somehow to get to a useful\nresult.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Nov 2020 10:49:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
},
{
"msg_contents": "Hello Tom,\n\nOn 2020-11-18 16:49, Tom Lane wrote:\n> Tels <nospam-pg-abuse@bloodgate.com> writes:\n>> On 2020-11-18 06:06, Michael Paquier wrote:\n>>> On Mon, Nov 16, 2020 at 10:14:10PM -0500, Tom Lane wrote:\n>>>> Agreed, I'm not trying to block this patch. Just wishing\n>>>> there were a better way.\n> \n>> To me the code looks like a prime candidate for \"data-driven\"\n>> refactoring.\n>> It should be redone as generic code that reads a table of rules with\n>> params and then checks and applies each. Thus the repetitive code \n>> would\n>> be replaced by a bit more generic code and a lot of code-free data.\n> \n> In the past I've looked into whether the rules could be autogenerated\n> from the backend's grammar. It did not look very promising though.\n> The grammar isn't really factorized appropriately -- for instance,\n> tab-complete has a lot of knowledge about which kinds of objects can\n> be named in particular places, while the Bison productions only know\n> that's a \"name\". Still, the precedent of ECPG suggests it might be\n> possible to process the grammar rules somehow to get to a useful\n> result.\n\nHm, that would be even better, for now I was just thinking that\ncode like this:\n\n IF RULE_A_MATCHES THEN\n DO STUFF A\n ELSE IF RULE_B_MATCHES THEN\n DO_STUFF_B\n ELSE IF RULE_C_MATCHES THEN\n DO_STUFF_C\n ...\n\nshould be replaced by\n\n RULE_A MATCH STUFF\n RULE_B MATCH STUFF\n RULE_C MATCH STUFF\n ...\n\n FOREACH RULE DO\n IF RULE.MATCH THEN\n DO RULE.STUFF\n END FOREACH\n\nEven if the rules would be manually created (converted from the current\ncode), that would be more compact and probably less error-prone.\n\nCreating the rule automatically turns this into a completely different\nstory.\n\n-- \nBest regards,\n\nTels\n\n\n",
"msg_date": "Thu, 19 Nov 2020 18:19:44 +0100",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: Tab complete for CREATE OR REPLACE TRIGGER statement"
}
] |
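Tels's data-driven suggestion in the thread above — a table of (match, completion) rules scanned by one generic loop instead of a long if/else chain — can be sketched as follows. This is an illustrative model only: psql's real tab-complete.c is C code built around `Matches()` macros, and the rule table and names below are invented for the sketch.

```python
# Each rule pairs a trailing word sequence with the completion to offer.
# More specific sequences must come before their prefixes, since the
# first matching rule wins.
RULES = [
    ("CREATE OR REPLACE TRIGGER", "<trigger name>"),
    ("CREATE TRIGGER", "<trigger name>"),
    ("CREATE OR REPLACE", "TRIGGER"),
    ("CREATE OR", "REPLACE"),
]

def complete(input_so_far):
    """Scan the rule table in order and return the first completion
    whose word sequence matches the end of the input."""
    for suffix, completion in RULES:
        if input_so_far.endswith(suffix):
            return completion
    return None
```

The ordering sensitivity visible even in this tiny table — CREATE OR REPLACE TRIGGER must be checked before CREATE OR REPLACE — is part of what makes both manual maintenance repetitive and automatic generation from the Bison grammar difficult, as Tom notes above.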
[
{
"msg_contents": "Hi,\n\nThis is more of a head-ups than anything else, as I suspect this may come\nup in various forums.\n\nThe PostgreSQL installers for macOS (from EDB, possibly others too) create\nthe data directory in /Library/PostgreSQL/<major_ver>/data. This has been\nthe case since the first release, 10+ years ago.\n\nIt looks like the Big Sur upgrade has taken it upon itself to \"fix\" any\nfilesystem permissions it doesn't like. On my system, this resulted in the\ndata directory having 0755 permissions, which meant that PostgreSQL refused\nto start. Manually changing the permissions back to 0700 (0750 should also\nwork) fixes the issue.\n\nI'm not sure there's much we can do about this - systems that are likely to\nbe affected are already out there, and we obviously don't want to relax the\npermissions Postgres requires.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nHi,This is more of a head-ups than anything else, as I suspect this may come up in various forums.The PostgreSQL installers for macOS (from EDB, possibly others too) create the data directory in /Library/PostgreSQL/<major_ver>/data. This has been the case since the first release, 10+ years ago.It looks like the Big Sur upgrade has taken it upon itself to \"fix\" any filesystem permissions it doesn't like. On my system, this resulted in the data directory having 0755 permissions, which meant that PostgreSQL refused to start. Manually changing the permissions back to 0700 (0750 should also work) fixes the issue.I'm not sure there's much we can do about this - systems that are likely to be affected are already out there, and we obviously don't want to relax the permissions Postgres requires.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Nov 2020 09:27:52 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations"
},
{
"msg_contents": "On 11/16/20 4:27 AM, Dave Page wrote:\n> Hi,\n> \n> This is more of a head-ups than anything else, as I suspect this may\n> come up in various forums.\n> \n> The PostgreSQL installers for macOS (from EDB, possibly others too)\n> create the data directory in /Library/PostgreSQL/<major_ver>/data. This\n> has been the case since the first release, 10+ years ago.\n> \n> It looks like the Big Sur upgrade has taken it upon itself to \"fix\" any\n> filesystem permissions it doesn't like. On my system, this resulted in\n> the data directory having 0755 permissions, which meant that PostgreSQL\n> refused to start. Manually changing the permissions back to 0700 (0750\n> should also work) fixes the issue.\n> \n> I'm not sure there's much we can do about this - systems that are likely\n> to be affected are already out there, and we obviously don't want to\n> relax the permissions Postgres requires.\n\nThanks for raising this. We should provide some guidance on upgrading\nthis when upgrading to Big Sur.\n\nDo we know where the other macOS installers place their data\ndirectories? We should reach out to the installer maintainers to see if\nthey are seeing the same behavior so we know what guidance to issue.\n\nThanks,\n\nJonathan",
"msg_date": "Mon, 16 Nov 2020 10:55:27 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL\n installations"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 3:55 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> On 11/16/20 4:27 AM, Dave Page wrote:\n> > Hi,\n> >\n> > This is more of a head-ups than anything else, as I suspect this may\n> > come up in various forums.\n> >\n> > The PostgreSQL installers for macOS (from EDB, possibly others too)\n> > create the data directory in /Library/PostgreSQL/<major_ver>/data. This\n> > has been the case since the first release, 10+ years ago.\n> >\n> > It looks like the Big Sur upgrade has taken it upon itself to \"fix\" any\n> > filesystem permissions it doesn't like. On my system, this resulted in\n> > the data directory having 0755 permissions, which meant that PostgreSQL\n> > refused to start. Manually changing the permissions back to 0700 (0750\n> > should also work) fixes the issue.\n> >\n> > I'm not sure there's much we can do about this - systems that are likely\n> > to be affected are already out there, and we obviously don't want to\n> > relax the permissions Postgres requires.\n>\n> Thanks for raising this. We should provide some guidance on upgrading\n> this when upgrading to Big Sur.\n>\n> Do we know where the other macOS installers place their data\n> directories? We should reach out to the installer maintainers to see if\n> they are seeing the same behavior so we know what guidance to issue.\n>\n\nI believe postgres.app only installs for the current user, and puts it's\ndata under ~/Library/Application Support/Postgres.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Mon, Nov 16, 2020 at 3:55 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:On 11/16/20 4:27 AM, Dave Page wrote:\n> Hi,\n> \n> This is more of a head-ups than anything else, as I suspect this may\n> come up in various forums.\n> \n> The PostgreSQL installers for macOS (from EDB, possibly others too)\n> create the data directory in /Library/PostgreSQL/<major_ver>/data. 
This\n> has been the case since the first release, 10+ years ago.\n> \n> It looks like the Big Sur upgrade has taken it upon itself to \"fix\" any\n> filesystem permissions it doesn't like. On my system, this resulted in\n> the data directory having 0755 permissions, which meant that PostgreSQL\n> refused to start. Manually changing the permissions back to 0700 (0750\n> should also work) fixes the issue.\n> \n> I'm not sure there's much we can do about this - systems that are likely\n> to be affected are already out there, and we obviously don't want to\n> relax the permissions Postgres requires.\n\nThanks for raising this. We should provide some guidance on upgrading\nthis when upgrading to Big Sur.\n\nDo we know where the other macOS installers place their data\ndirectories? We should reach out to the installer maintainers to see if\nthey are seeing the same behavior so we know what guidance to issue.I believe postgres.app only installs for the current user, and puts it's data under ~/Library/Application Support/Postgres. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Nov 2020 16:21:14 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL\n installations"
},
{
"msg_contents": "I suppose there are many ways to have PG on OSX i.e. package managers\n(Homebrew, Macports), App installers etc and so many places anyone can find\nhis data directory reside in. Generally I prefer data directory to be\nsomewhere inside the user home dir as OSX will take care of possible\nbackups and will not generally modify its contents during migration betweeb\nosx versions and/or different machines. It is not only the question of\npermissions.\n\nAny options inside user homedir are equally suitable IMO.\n\n---\nBest regards,\nPavel Borisov\n\nPostgres professional: http://postgrespro.com\n\nI suppose there are many ways to have PG on OSX i.e. package managers (Homebrew, Macports), App installers etc and so many places anyone can find his data directory reside in. Generally I prefer data directory to be somewhere inside the user home dir as OSX will take care of possible backups and will not generally modify its contents during migration betweeb osx versions and/or different machines. It is not only the question of permissions.Any options inside user homedir are equally suitable IMO.---Best regards,Pavel BorisovPostgres professional: http://postgrespro.com",
"msg_date": "Mon, 16 Nov 2020 20:45:05 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL\n installations"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 4:45 PM Pavel Borisov <pashkin.elfe@gmail.com>\nwrote:\n\n> I suppose there are many ways to have PG on OSX i.e. package managers\n> (Homebrew, Macports), App installers etc and so many places anyone can find\n> his data directory reside in. Generally I prefer data directory to be\n> somewhere inside the user home dir as OSX will take care of possible\n> backups and will not generally modify its contents during migration betweeb\n> osx versions and/or different machines. It is not only the question of\n> permissions.\n>\n> Any options inside user homedir are equally suitable IMO.\n>\n\nIt is in the user's homedir - it's just that that isn't under /Users:\n\nhal:~ postgres$ echo $HOME\n/Library/PostgreSQL/13\n\nWith the EDB installers (unlike postgres.app), PostgreSQL runs as a\nservice, much as it would on Linux or BSD.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nOn Mon, Nov 16, 2020 at 4:45 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:I suppose there are many ways to have PG on OSX i.e. package managers (Homebrew, Macports), App installers etc and so many places anyone can find his data directory reside in. Generally I prefer data directory to be somewhere inside the user home dir as OSX will take care of possible backups and will not generally modify its contents during migration betweeb osx versions and/or different machines. It is not only the question of permissions.Any options inside user homedir are equally suitable IMO.It is in the user's homedir - it's just that that isn't under /Users:hal:~ postgres$ echo $HOME/Library/PostgreSQL/13With the EDB installers (unlike postgres.app), PostgreSQL runs as a service, much as it would on Linux or BSD. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Nov 2020 16:49:12 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL\n installations"
},
{
"msg_contents": "FYI, both Jonathan and I have now tested this on additional machines and\nhave been unable to reproduce the issue, so it seems like something odd\nhappened on my original upgrade rather than a general issue.\n\nApologies for the noise.\n\nOn Mon, Nov 16, 2020 at 9:27 AM Dave Page <dpage@pgadmin.org> wrote:\n\n> Hi,\n>\n> This is more of a head-ups than anything else, as I suspect this may come\n> up in various forums.\n>\n> The PostgreSQL installers for macOS (from EDB, possibly others too) create\n> the data directory in /Library/PostgreSQL/<major_ver>/data. This has been\n> the case since the first release, 10+ years ago.\n>\n> It looks like the Big Sur upgrade has taken it upon itself to \"fix\" any\n> filesystem permissions it doesn't like. On my system, this resulted in the\n> data directory having 0755 permissions, which meant that PostgreSQL refused\n> to start. Manually changing the permissions back to 0700 (0750 should also\n> work) fixes the issue.\n>\n> I'm not sure there's much we can do about this - systems that are likely\n> to be affected are already out there, and we obviously don't want to relax\n> the permissions Postgres requires.\n>\n> --\n> Dave Page\n> Blog: http://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EDB: http://www.enterprisedb.com\n>\n>\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com\n\nFYI, both Jonathan and I have now tested this on additional machines and have been unable to reproduce the issue, so it seems like something odd happened on my original upgrade rather than a general issue.Apologies for the noise.On Mon, Nov 16, 2020 at 9:27 AM Dave Page <dpage@pgadmin.org> wrote:Hi,This is more of a head-ups than anything else, as I suspect this may come up in various forums.The PostgreSQL installers for macOS (from EDB, possibly others too) create the data directory in /Library/PostgreSQL/<major_ver>/data. 
This has been the case since the first release, 10+ years ago.It looks like the Big Sur upgrade has taken it upon itself to \"fix\" any filesystem permissions it doesn't like. On my system, this resulted in the data directory having 0755 permissions, which meant that PostgreSQL refused to start. Manually changing the permissions back to 0700 (0750 should also work) fixes the issue.I'm not sure there's much we can do about this - systems that are likely to be affected are already out there, and we obviously don't want to relax the permissions Postgres requires.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com\n-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 17 Nov 2020 10:58:42 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL\n installations"
}
] |
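For anyone hitting the permission problem described in this thread, the check the server performs at startup is simple to reproduce: the data directory must be mode 0700 (or 0750 on v11 and later, which added group-read support), exactly as Dave notes above. A minimal sketch of checking and restoring the expected mode — the function names are illustrative, not part of any PostgreSQL tool:

```python
import os
import stat

def data_dir_mode_ok(path):
    """Return True if the directory's permission bits are ones the
    PostgreSQL server will accept: 0700 always, 0750 with group-read
    support (v11+)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode in (0o700, 0o750)

def restore_data_dir_mode(path):
    """Put the directory back to the strict 0700 the server expects,
    as the thread recommends after an OS upgrade loosens it."""
    if not data_dir_mode_ok(path):
        os.chmod(path, 0o700)
```

Running `restore_data_dir_mode("/Library/PostgreSQL/13/data")` (as the postgres user) would be the scripted equivalent of the manual `chmod 0700` fix described above.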
[
{
"msg_contents": "Hello,\n\nI've changed the BEGIN WAIT FOR LSN statement to core functions \npg_waitlsn, pg_waitlsn_infinite and pg_waitlsn_no_wait.\nCurrently the functions work inside repeatable read transactions, but \nwaitlsn creates a snapshot if called first in a transaction block, which \ncan possibly lead the transaction to working incorrectly, so the \nfunction gives a warning.\n\nUsage examples\n==========\nselect pg_waitlsn(‘LSN’, timeout);\nselect pg_waitlsn_infinite(‘LSN’);\nselect pg_waitlsn_no_wait(‘LSN’);",
"msg_date": "Mon, 16 Nov 2020 13:09:30 +0300",
"msg_from": "a.pervushina@postgrespro.ru",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi!\n\nOn Mon, Nov 16, 2020 at 1:09 PM <a.pervushina@postgrespro.ru> wrote:\n> I've changed the BEGIN WAIT FOR LSN statement to core functions\n> pg_waitlsn, pg_waitlsn_infinite and pg_waitlsn_no_wait.\n> Currently the functions work inside repeatable read transactions, but\n> waitlsn creates a snapshot if called first in a transaction block, which\n> can possibly lead the transaction to working incorrectly, so the\n> function gives a warning.\n>\n> Usage examples\n> ==========\n> select pg_waitlsn(‘LSN’, timeout);\n> select pg_waitlsn_infinite(‘LSN’);\n> select pg_waitlsn_no_wait(‘LSN’);\n\nThe name pg_waitlsn_no_wait() looks confusing. I've tried to see how\nit's documented, but the patch doesn't contain documentation...\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 16 Nov 2020 14:31:29 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
}
] |
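The pg_waitlsn functions above come from the proposed patch, but the ordering test they perform is easy to state: PostgreSQL prints an LSN as two 32-bit hex halves separated by a slash (e.g. `0/16B3748`), and a wait for an LSN completes once the standby's last-replayed LSN compares greater than or equal to the target. A small illustrative sketch — the function names are mine, not the patch's:

```python
def parse_lsn(text):
    """Parse PostgreSQL's textual LSN format 'hi/lo', where hi and lo
    are the high and low 32 bits of the 64-bit WAL position in hex."""
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def replay_has_reached(target, replayed):
    """True once the replayed position is at or past the target, i.e.
    the condition a wait-for-LSN call is polling for."""
    return parse_lsn(replayed) >= parse_lsn(target)
```

A client-side fallback without the patch would poll `pg_last_wal_replay_lsn()` on the standby and apply this comparison against the primary's `pg_current_wal_lsn()`.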
[
{
"msg_contents": "This one seems to have fallen by the wayside.\n\nErik",
"msg_date": "Mon, 16 Nov 2020 13:26:36 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "doc CREATE INDEX"
},
{
"msg_contents": "On 2020-Nov-16, Erik Rijkers wrote:\n\n> This one seems to have fallen by the wayside.\n\nSo it did.\n\nPushed to all three branches, thanks.\n\n\n\n",
"msg_date": "Mon, 16 Nov 2020 11:04:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: doc CREATE INDEX"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 11:04:57AM -0300, �lvaro Herrera wrote:\n> On 2020-Nov-16, Erik Rijkers wrote:\n> \n> > This one seems to have fallen by the wayside.\n> \n> So it did.\n> \n> Pushed to all three branches, thanks.\n\nI have applied with to PG 9.5-11, which also had the odd wording.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 16 Nov 2020 11:15:11 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc CREATE INDEX"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a PoC/WIP patch adding support for extended statistics on\nexpressions. This is by no means \"ready\" - most of the stuff works, but\noften in a rather hackish way. I certainly don't expect this to pass\nregression tests, for example.\n\nThere's an example demonstrating how this works for two queries at the\nend of this message. Now let's talk about the main parts of the patch:\n\n1) extending grammar to allow expressions, not just plain columns\n\n Fairly straighforward, I think. I'm sure the logic which expressions\n are allowed is not 100% (e.g. volatile functions etc.) but that's\n a detail we can deal with later.\n\n2) store the expressions in pg_statistic_ext catalog\n\n I ended up adding a separate column, similar to indexprs, except that\n the order of columns/expressions does not matter, so we don't need to\n bother with storing 0s in stxkeys - we simply consider expressions to\n be \"after\" all the simple columns.\n\n3) build statistics\n\n This should work too, for all three types of statistics we have (mcv,\n dependencies and ndistinct). This should work too, although the code\n changes are often very hackish \"to make it work\".\n\n The main challenge here was how to represent the expressions in the\n statistics - e.g. in ndistinct, which track ndistinct estimates for\n combinations of parameters, and so far we used attnums for that. I\n decided the easiest way it to keep doing that, but offset the\n expressions by MaxHeapAttributeNumber. That seems to work, but maybe\n there's a better way.\n\n4) apply the statistics\n\n This is the hard part, really, and the exact state of the support\n depends on type of statistics.\n\n For ndistinct coefficients, it generally works. I'm sure there may be\n bugs in estimate_num_groups, etc. but in principle it works.\n\n For MCV lists, it generally works too - you can define statistics on\n the expressions and the estimates should improve. 
The main downside\n here is that it requires at least two expressions, otherwise we can't\n build/apply the extended statistics. So for example\n\n SELECT * FROM t WHERE mod(a,100) = 10 AND mod(b,11) = 0\n\n may be estimated \"correctly\", once you drop any of the conditions it\n gets much worse as we don't have stats for individual expressions.\n That's rather annoying - it does not break the extended MCV, but the\n behavior will certainly cause confusion.\n\n For functional dependencies, the estimation does not work yet. Also,\n the missing per-column statistics have bigger impact than on MCV,\n because while MCV can work fine without it, the dependencies heavily\n rely on the per-column estimates. We only apply \"corrections\" based\n on the dependency degree, so we still need (good) per-column\n estimates, which does not quite work with the expressions.\n\n\n Of course, the lack of per-expression statistics may be somewhat\n fixed by adding indexes on expressions, but that's kinda expensive.\n\n\nNow, a simple example demonstrating how this improves estimates - let's\ncreate a table with 1M rows, and do queries with mod() expressions on\nit. 
It might be date_trunc() or something similar, that'd work too.\n\n\ntable with 1M rows\n==================\n\ntest=# create table t (a int);\ntest=# insert into t select i from generate_series(1,1000000) s(i);\ntest=# analyze t;\n\n\npoorly estimated queries\n========================\n\ntest=# explain (analyze, timing off) select * from t where mod(a,3) = 0\nand mod(a,7) = 0;\n QUERY PLAN\n\n----------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..24425.00 rows=25 width=4) (actual rows=47619\nloops=1)\n Filter: ((mod(a, 3) = 0) AND (mod(a, 7) = 0))\n Rows Removed by Filter: 952381\n Planning Time: 0.329 ms\n Execution Time: 156.675 ms\n(5 rows)\n\ntest=# explain (analyze, timing off) select mod(a,3), mod(a,7) from t\ngroup by 1, 2;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------\n HashAggregate (cost=75675.00..98487.50 rows=1000000 width=8) (actual\nrows=21 loops=1)\n Group Key: mod(a, 3), mod(a, 7)\n Planned Partitions: 32 Batches: 1 Memory Usage: 1561kB\n -> Seq Scan on t (cost=0.00..19425.00 rows=1000000 width=8) (actual\nrows=1000000 loops=1)\n Planning Time: 0.277 ms\n Execution Time: 502.803 ms\n(6 rows)\n\n\nimproved estimates\n==================\n\ntest=# create statistics s1 (ndistinct) on mod(a,3), mod(a,7) from t;\ntest=# analyze t;\n\ntest=# explain (analyze, timing off) select mod(a,3), mod(a,7) from t\ngroup by 1, 2;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------\n HashAggregate (cost=24425.00..24425.31 rows=21 width=8) (actual\nrows=21 loops=1)\n Group Key: mod(a, 3), mod(a, 7)\n Batches: 1 Memory Usage: 24kB\n -> Seq Scan on t (cost=0.00..19425.00 rows=1000000 width=8) (actual\nrows=1000000 loops=1)\n Planning Time: 0.135 ms\n Execution Time: 500.092 ms\n(6 rows)\n\ntest=# create statistics s2 (mcv) on mod(a,3), mod(a,7) from t;\ntest=# analyze t;\n\ntest=# explain 
(analyze, timing off) select * from t where mod(a,3) = 0\nand mod(a,7) = 0;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..24425.00 rows=46433 width=4) (actual\nrows=47619 loops=1)\n Filter: ((mod(a, 3) = 0) AND (mod(a, 7) = 0))\n Rows Removed by Filter: 952381\n Planning Time: 0.702 ms\n Execution Time: 152.280 ms\n(5 rows)\n\n\nClearly, estimates for both queries are significantly improved. Of\ncourse, this example is kinda artificial/simplistic.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 16 Nov 2020 14:49:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 11/16/20 2:49 PM, Tomas Vondra wrote:\n> Hi,\n>\n> ...\n>\n> 4) apply the statistics\n> \n> This is the hard part, really, and the exact state of the support\n> depends on type of statistics.\n> \n> For ndistinct coefficients, it generally works. I'm sure there may be\n> bugs in estimate_num_groups, etc. but in principle it works.\n> \n> For MCV lists, it generally works too - you can define statistics on\n> the expressions and the estimates should improve. The main downside\n> here is that it requires at least two expressions, otherwise we can't\n> build/apply the extended statistics. So for example\n> \n> SELECT * FROM t WHERE mod(a,100) = 10 AND mod(b,11) = 0\n> \n> may be estimated \"correctly\", once you drop any of the conditions it\n> gets much worse as we don't have stats for individual expressions.\n> That's rather annoying - it does not break the extended MCV, but the\n> behavior will certainly cause confusion.\n> \n> For functional dependencies, the estimation does not work yet. Also,\n> the missing per-column statistics have bigger impact than on MCV,\n> because while MCV can work fine without it, the dependencies heavily\n> rely on the per-column estimates. We only apply \"corrections\" based\n> on the dependency degree, so we still need (good) per-column\n> estimates, which does not quite work with the expressions.\n> \n> \n> Of course, the lack of per-expression statistics may be somewhat\n> fixed by adding indexes on expressions, but that's kinda expensive.\n> \n\nFWIW after re-reading [1], I think the plan to build pg_statistic rows\nfor expressions and stash them in pg_statistic_ext_data is the way to\ngo. I was thinking that maybe we'll need some new statistics type to\nrequest this, e.g.\n\n CREATE STATISTICS s (expressions) ON ...\n\nbut on second thought I think we should just build this whenever there\nare expressions in the definition. It'll require some changes (e.g. 
we\nrequire at least two items in the list, but we'll want to allow building\nstats on a single expression too, I guess), but that's doable.\n\nOf course, we don't have any catalogs with composite types yet, so it's\nnot 100% sure this will work, but it's worth a try.\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/6331.1579041473%40sss.pgh.pa.us#5ec6af7583e84cef2ca6a9e8a713511e\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Nov 2020 16:27:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n\nattached is a significantly improved version of the patch, allowing\ndefining extended statistics on expressions. This fixes most of the\nproblems in the previous WIP version and AFAICS it does pass all\nregression tests (including under valgrind). There's a bunch of FIXMEs\nand a couple loose ends, but overall I think it's ready for reviews.\n\n\nOverall, the patch does two main things:\n\n* it adds a new \"expressions\" statistics kind, building per-expression\nstatistics (i.e it's similar to having expression index)\n\n* it allows using expressions in definition of extended statistics, and\nproperly handles that in all existing statistics kinds (dependencies,\nmcv, ndistinct)\n\n\nThe expression handling mostly copies what we do for indexes, with\nsimilar restrictions - no volatile functions, aggregates etc. The list\nof expressions is stored in pg_statistic_ext catalog, but unlike for\nindexes we don't need to worry about the exact order of elements, so\nthere are no \"0\" for expressions in stxkeys etc. We simply assume the\nexpressions come after simple columns, and that's it.\n\nTo reference expressions in the built statistics (e.g. in a dependency)\nwe use \"special attnums\" computed from the expression index by adding\nMaxHeapAttributeNumber. 
So the first expression has attnum 1601 etc.\n\nThis mapping expressions to attnums is used both while building and\napplying the statistics to clauses, as it makes the whole process much\nsimpler than dealing with attnums and expressions entirely separately.\n\n\nThe first part allows us to do something like this:\n\n CREATE TABLE t (a int);\n INSERT INTO t SELECT i FROM generate_series(1,1000000) s(i);\n ANALYZE t;\n\n EXPLAIN (ANALYZE, TIMING OFF)\n SELECT * FROM t WHERE mod(a,10) = 0;\n\n CREATE STATISTICS s (expressions) ON mod(a,10) FROM t;\n ANALYZE t;\n\n EXPLAIN (ANALYZE, TIMING OFF)\n SELECT * FROM t WHERE mod(a,10) = 0;\n\nWithout the statistics we get this:\n\n QUERY PLAN\n --------------------------------------------------------\n Seq Scan on t (cost=0.00..19425.00 rows=5000 width=4)\n (actual rows=100000 loops=1)\n Filter: (mod(a, 10) = 0)\n Rows Removed by Filter: 900000\n Planning Time: 0.216 ms\n Execution Time: 157.552 ms\n (5 rows)\n\nwhile with the statistics we get this\n\n QUERY PLAN\n ----------------------------------------------------------\n Seq Scan on t (cost=0.00..19425.00 rows=100900 width=4)\n (actual rows=100000 loops=1)\n Filter: (mod(a, 10) = 0)\n Rows Removed by Filter: 900000\n Planning Time: 0.399 ms\n Execution Time: 157.530 ms\n (5 rows)\n\nSo that's pretty nice improvement. In practice you could get the same\neffect by creating an expression index\n\n CREATE INDEX ON t (mod(a,10));\n\nbut of course that's far from free - there's cost to maintain the index,\nit blocks HOT, and it takes space on disk. The statistics have none of\nthese issues.\n\nImplementation-wise, this simply builds per-column statistics for each\nexpression, and stashes them into a new column in pg_statistic_ext_data\ncatalog as an array of elements with pg_statistic composite type. And\nthen in selfuncs.c we look not just at indexes, but also at this when\nlooking for expression stats.\n\nSo that gives us the per-expression stats. 
This is enabled by default\nwhen you don't specify the statistics type and the definition includes\nany expression that is not a simple column reference. Otherwise you may\nalso request it explicitly by using \"expressions\" in the CREATE.\n\n\nNow, the second part is really just a natural extension of the existing\nstats to also work with expressions. The easiest thing is probably to\nshow some examples, so consider this:\n\n CREATE TABLE t (a INT, b INT, c INT);\n INSERT INTO t SELECT i, i, i FROM generate_series(1,1000000) s(i);\n ANALYZE t;\n\nwhich without any statistics gives us this:\n\n EXPLAIN (ANALYZE, TIMING OFF)\n SELECT 1 FROM t WHERE mod(a,10) = 0 AND mod(b,5) = 0;\n\n QUERY PLAN\n ------------------------------------------------------\n Seq Scan on t (cost=0.00..25406.00 rows=25 width=4)\n (actual rows=100000 loops=1)\n Filter: ((mod(a, 10) = 0) AND (mod(b, 5) = 0))\n Rows Removed by Filter: 900000\n Planning Time: 0.080 ms\n Execution Time: 161.445 ms\n (5 rows)\n\n\n EXPLAIN (ANALYZE, TIMING OFF)\n SELECT 1 FROM t GROUP BY mod(a,10), mod(b,5);\n\n QUERY PLAN\n ------------------------------------------------------------------\n HashAggregate (cost=76656.00..99468.50 rows=1000000 width=12)\n (actual rows=10 loops=1)\n Group Key: mod(a, 10), mod(b, 5)\n Planned Partitions: 32 Batches: 1 Memory Usage: 1561kB\n -> Seq Scan on t (cost=0.00..20406.00 rows=1000000 width=8)\n (actual rows=1000000 loops=1)\n Planning Time: 0.232 ms\n Execution Time: 514.446 ms\n (6 rows)\n\nand now let's add statistics on the expressions:\n\n CREATE STATISTICS s ON mod(a,10), mod(b,5) FROM t;\n ANALYZE t;\n\nwhich ends up with these spot-on estimates:\n\n EXPLAIN (ANALYZE, TIMING OFF)\n SELECT 1 FROM t WHERE mod(a,10) = 0 AND mod(b,5) = 0;\n\n QUERY PLAN\n ---------------------------------------------------------\n Seq Scan on t (cost=0.00..25406.00 rows=97400 width=4)\n (actual rows=100000 loops=1)\n Filter: ((mod(a, 10) = 0) AND (mod(b, 5) = 0))\n Rows Removed by Filter: 
900000\n Planning Time: 0.366 ms\n Execution Time: 159.207 ms\n (5 rows)\n\n EXPLAIN (ANALYZE, TIMING OFF)\n SELECT 1 FROM t GROUP BY mod(a,10), mod(b,5);\n\n QUERY PLAN\n -----------------------------------------------------------------\n HashAggregate (cost=25406.00..25406.15 rows=10 width=12)\n (actual rows=10 loops=1)\n Group Key: mod(a, 10), mod(b, 5)\n Batches: 1 Memory Usage: 24kB\n -> Seq Scan on t (cost=0.00..20406.00 rows=1000000 width=8)\n (actual rows=1000000 loops=1)\n Planning Time: 0.299 ms\n Execution Time: 530.793 ms\n (6 rows)\n\nOf course, this is a very simple query, but hopefully you get the idea.\n\n\nThere's about two main areas where I think might be hidden issues:\n\n1) We're kinda faking the pg_statistic entries, and I suppose there\nmight be some loose ends (e.g. with respect to ACLs etc.).\n\n2) Memory management while evaluating the expressions during analyze is\nkinda simplistic, we're probably keeping the memory around longer than\nneeded etc.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 22 Nov 2020 20:03:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 08:03:51PM +0100, Tomas Vondra wrote:\n> attached is a significantly improved version of the patch, allowing\n> defining extended statistics on expressions. This fixes most of the\n> problems in the previous WIP version and AFAICS it does pass all\n> regression tests (including under valgrind). There's a bunch of FIXMEs\n> and a couple loose ends, but overall I think it's ready for reviews.\n\nI was looking at the previous patch, so now read this one instead, and attach\nsome proposed fixes.\n\n+ * This matters especially for * expensive expressions, of course.\n\n+ The expression can refer only to columns of the underlying table, but\n+ it can use all columns, not just the ones the statistics is defined\n+ on.\n\nI don't know what these are trying to say?\n\n+ errmsg(\"statistics expressions and predicates can refer only to the table being indexed\")));\n+ * partial-index predicates. Create it in the per-index context to be\n\nI think these are copied and shouldn't mention \"indexes\" or \"predicates\". Or\nshould statistics support predicates, too ?\n\nIdea: if a user specifies no stakinds, and there's no expression specified,\nthen you automatically build everything except for expressional stats. But if\nthey specify only one statistics \"column\", it gives an error. If that's a\nnon-simple column reference, should that instead build *only* expressional\nstats (possibly with a NOTICE, since the user might be intending to make MV\nstats).\n\nI think pg_stats_ext should allow inspecting the pg_statistic data in\npg_statistic_ext_data.stxdexprs. I guess array_agg() should be ordered by\nsomething, so maybe it should use ORDINALITY (?)\n\nI hacked more on bootstrap.c so included that here.\n\n-- \nJustin",
"msg_date": "Sun, 22 Nov 2020 20:26:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 11/23/20 3:26 AM, Justin Pryzby wrote:\n> On Sun, Nov 22, 2020 at 08:03:51PM +0100, Tomas Vondra wrote:\n>> attached is a significantly improved version of the patch, allowing\n>> defining extended statistics on expressions. This fixes most of the\n>> problems in the previous WIP version and AFAICS it does pass all\n>> regression tests (including under valgrind). There's a bunch of FIXMEs\n>> and a couple loose ends, but overall I think it's ready for reviews.\n> \n> I was looking at the previous patch, so now read this one instead, and attach\n> some proposed fixes.\n> \n> + * This matters especially for * expensive expressions, of course.\n> \n\nThe point this was trying to make is that we evaluate the expressions\nonly once, and use the results to build all extended statistics. Instead\nof leaving it up to every \"build\" to re-evaluate it.\n\n> + The expression can refer only to columns of the underlying table, but\n> + it can use all columns, not just the ones the statistics is defined\n> + on.\n> \n> I don't know what these are trying to say?\n> \n\nD'oh. That's bogus para, copied from the CREATE INDEX docs (where it\ntalked about the index predicate, which is irrelevant here).\n\n> + errmsg(\"statistics expressions and predicates can refer only to the table being indexed\")));\n> + * partial-index predicates. Create it in the per-index context to be\n> \n> I think these are copied and shouldn't mention \"indexes\" or \"predicates\". Or\n> should statistics support predicates, too ?\n> \n\nRight. Stupid copy-pasto.\n\n> Idea: if a user specifies no stakinds, and there's no expression specified,\n> then you automatically build everything except for expressional stats. But if\n> they specify only one statistics \"column\", it gives an error. 
If that's a\n> non-simple column reference, should that instead build *only* expressional\n> stats (possibly with a NOTICE, since the user might be intending to make MV\n> stats).\n> \n\nRight, that was the intention - but I messed up and it works only if you\nspecify the \"expressions\" kind explicitly (and I also added the ERROR\nmessage to expected output by mistake). I agree we should handle this\nautomatically, so that\n\n CREATE STATISTICS s ON (a+b) FROM t\n\nworks and only creates statistics for the expression.\n\n\n> I think pg_stats_ext should allow inspecting the pg_statistic data in\n> pg_statistic_ext_data.stxdexprs. I guess array_agg() should be ordered by\n> something, so maybe it should use ORDINALITY (?)\n> \n\nI agree we should expose the expression statistics, but I'm not\nconvinced we should do that in the pg_stats_ext view itself. The problem\nis that it's a table nested in a table, essentially, with non-trivial\nstructure, so I was thinking about adding a separate view exposing just\nthis one part. Something like pg_stats_ext_expressions, with about the\nsame structure as pg_stats, or something.\n\n> I hacked more on bootstrap.c so included that here.\n\nThanks. As for the 0004-0007 patches:\n\n0004 - Seems fine. IMHO not really \"silly errors\" but OK.\n\n0005 - Mostly OK. The docs wording mostly comes from CREATE INDEX docs,\nthough. The paragraph about \"t1\" is old, so if we want to reword it then\nmaybe we should backpatch too.\n\n0006 - Not sure. I think CreateStatistics can be fixed with less code,\nkeeping it more like PG13 (good for backpatching). Not sure why rename\nextended statistics to multi-variate statistics - we use \"extended\"\neverywhere. Not sure what's the point of serialize_expr_stats changes,\nthat's code is mostly copy-paste from update_attstats.\n\n0007 - I suspect this makes the pg_stats_ext too complex to work with,\nIMHO we should move this to a separate view.\n\n\nThanks for the review! 
I'll try to look more closely at those patches\nsometime next week, and merge most of it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 Nov 2020 04:30:26 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n\nAttached is an updated version of the patch series, merging some of the\nchanges proposed by Justin. I've kept the bootstrap patches separate, at\nleast for now.\n\nAs for the individual 0004-0007 patches:\n\n1) 0004 - merged as is\n\n2) 0005 - I've merged some of the docs changes, but some of the wording\nwas copied from CREATE INDEX docs in which case I've kept that. I've\nalso not merged changed to pre-existing docs, like the t1 example which\nis unrelated to this patch.\n\nOTOH I've corrected the t3 example description, which was somewhat bogus\nand unrelated to what the example actually did. I've also removed the\nirrelevant para which originally described index predicates and was\ncopied from CREATE INDEX docs by mistake.\n\n3) 0006 - I've committed something similar / less invasive, achieving\nthe same goals (I think), and I've added a couple regression tests.\n\n4) 0007 - I agreed we need a way to expose the stats, but including this\nin pg_stats_ext seems rather inconvenient (table in a table is difficult\nto work with). Instead I've added a new catalog pg_stats_ext_exprs with\nstructure similar to pg_stats. I've also added the expressions to the\npg_stats_ext catalog, which was only showing the attributes, and some\nbasic docs for the catalog changes.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 24 Nov 2020 00:30:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 04:30:26AM +0100, Tomas Vondra wrote:\n> 0004 - Seems fine. IMHO not really \"silly errors\" but OK.\n\nThis is one of the same issues you pointed out - shadowing a variable.\nCould be backpatched.\n\nOn Mon, Nov 23, 2020 at 04:30:26AM +0100, Tomas Vondra wrote:\n> > + errmsg(\"statistics expressions and predicates can refer only to the table being indexed\")));\n> > + * partial-index predicates. Create it in the per-index context to be\n> > \n> > I think these are copied and shouldn't mention \"indexes\" or \"predicates\". Or\n> > should statistics support predicates, too ?\n> > \n> \n> Right. Stupid copy-pasto.\n\nRight, but then I was wondering if CREATE STATS should actually support\npredicates, since one use case is to do what indexes do without their overhead.\nI haven't thought about it enough yet.\n\n> 0006 - Not sure. I think CreateStatistics can be fixed with less code,\n> keeping it more like PG13 (good for backpatching). Not sure why rename\n> extended statistics to multi-variate statistics - we use \"extended\"\n> everywhere.\n\n- if (build_expressions && (list_length(stxexprs) == 0))\n+ if (!build_expressions_only && (list_length(stmt->exprs) < 2))\n ereport(ERROR, \n (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n- errmsg(\"extended expression statistics require at least one expression\")));\n+ errmsg(\"multi-variate statistics require at least two columns\")));\n\nI think all of \"CREATE STATISTICS\" has been known as \"extended stats\", so I\nthink it may be confusing to say that it requires two columns for the general\nfacility.\n\n> Not sure what's the point of serialize_expr_stats changes,\n> that's code is mostly copy-paste from update_attstats.\n\nRight. 
I think \"i\" is poor variable name when it isn't a loop variable and not\nof limited scope.\n\n> 0007 - I suspect this makes the pg_stats_ext too complex to work with,\n> IMHO we should move this to a separate view.\n\nRight - then unnest() the whole thing and return one row per expression rather\nthan array, as you've done. Maybe the docs should say that this returns one\nrow per expression.\n\nLooking quickly at your new patch: I guess you know there's a bunch of\nlingering references to \"indexes\" and \"predicates\":\n\nI don't know if you want to go to the effort to prohibit this.\npostgres=# CREATE STATISTICS asf ON (tableoid::int+1) FROM t;\nCREATE STATISTICS\n\nI think a lot of people will find this confusing:\n\npostgres=# CREATE STATISTICS asf ON i FROM t;\nERROR: extended statistics require at least 2 columns\npostgres=# CREATE STATISTICS asf ON (i) FROM t;\nCREATE STATISTICS\n\npostgres=# CREATE STATISTICS asf (expressions) ON i FROM t;\nERROR: extended expression statistics require at least one expression\npostgres=# CREATE STATISTICS asf (expressions) ON (i) FROM t;\nCREATE STATISTICS\n\nI haven't looked, but is it possible to make it work without parens ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 24 Nov 2020 10:23:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 11/24/20 5:23 PM, Justin Pryzby wrote:\n> On Mon, Nov 23, 2020 at 04:30:26AM +0100, Tomas Vondra wrote:\n>> 0004 - Seems fine. IMHO not really \"silly errors\" but OK.\n> \n> This is one of the same issues you pointed out - shadowing a variable.\n> Could be backpatched.\n> \n> On Mon, Nov 23, 2020 at 04:30:26AM +0100, Tomas Vondra wrote:\n>>> + errmsg(\"statistics expressions and predicates can refer only to the table being indexed\")));\n>>> + * partial-index predicates. Create it in the per-index context to be\n>>>\n>>> I think these are copied and shouldn't mention \"indexes\" or \"predicates\". Or\n>>> should statistics support predicates, too ?\n>>>\n>>\n>> Right. Stupid copy-pasto.\n> \n> Right, but then I was wondering if CREATE STATS should actually support\n> predicates, since one use case is to do what indexes do without their overhead.\n> I haven't thought about it enough yet.\n> \n\nWell, it's not supported now, so the message is bogus. I'm not against\nsupporting \"partial statistics\" with predicates in the future, but it's\ngoing to be non-trivial project on it's own. It's not something I can\nbolt onto the current patch easily.\n\n>> 0006 - Not sure. I think CreateStatistics can be fixed with less code,\n>> keeping it more like PG13 (good for backpatching). 
Not sure why rename\n>> extended statistics to multi-variate statistics - we use \"extended\"\n>> everywhere.\n> \n> - if (build_expressions && (list_length(stxexprs) == 0))\n> + if (!build_expressions_only && (list_length(stmt->exprs) < 2))\n> ereport(ERROR, \n> (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> - errmsg(\"extended expression statistics require at least one expression\")));\n> + errmsg(\"multi-variate statistics require at least two columns\")));\n> \n> I think all of \"CREATE STATISTICS\" has been known as \"extended stats\", so I\n> think it may be confusing to say that it requires two columns for the general\n> facility.\n> \n>> Not sure what's the point of serialize_expr_stats changes,\n>> that's code is mostly copy-paste from update_attstats.\n> \n> Right. I think \"i\" is poor variable name when it isn't a loop variable and not\n> of limited scope.\n> \n\nOK, I understand. I'll consider tweaking that.\n\n>> 0007 - I suspect this makes the pg_stats_ext too complex to work with,\n>> IMHO we should move this to a separate view.\n> \n> Right - then unnest() the whole thing and return one row per expression rather\n> than array, as you've done. 
Maybe the docs should say that this returns one\n> row per expression.\n> \n> Looking quickly at your new patch: I guess you know there's a bunch of\n> lingering references to \"indexes\" and \"predicates\":\n> \n> I don't know if you want to go to the effort to prohibit this.\n> postgres=# CREATE STATISTICS asf ON (tableoid::int+1) FROM t;\n> CREATE STATISTICS\n> \n\nHmm, we're already rejecting system attributes, I suppose we should do\nthe same thing for expressions on system attributes.\n\n> I think a lot of people will find this confusing:\n> \n> postgres=# CREATE STATISTICS asf ON i FROM t;\n> ERROR: extended statistics require at least 2 columns\n> postgres=# CREATE STATISTICS asf ON (i) FROM t;\n> CREATE STATISTICS\n> \n> postgres=# CREATE STATISTICS asf (expressions) ON i FROM t;\n> ERROR: extended expression statistics require at least one expression\n> postgres=# CREATE STATISTICS asf (expressions) ON (i) FROM t;\n> CREATE STATISTICS\n> \n> I haven't looked, but is it possible to make it work without parens ?\n> \n\nHmm, you're right that may be surprising. I suppose we could walk the\nexpressions while creating the statistics, and replace such trivial\nexpressions with the nested variable, but I haven't tried. I wonder what\nthe CREATE INDEX behavior would be in these cases.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 24 Nov 2020 18:29:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n\nAttached is a patch series rebased on top of 25a9e54d2d which improves\nestimation of OR clauses. There were only a couple minor conflicts.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 3 Dec 2020 16:23:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Thu, 3 Dec 2020 at 15:23, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> Attached is a patch series rebased on top of 25a9e54d2d.\n\nAfter reading this thread and [1], I think I prefer the name\n\"standard\" rather than \"expressions\", because it is meant to describe\nthe kind of statistics being built rather than what they apply to, but\nmaybe that name doesn't actually need to be exposed to the end user:\n\nLooking at the current behaviour, there are a couple of things that\nseem a little odd, even though they are understandable. For example,\nthe fact that\n\n CREATE STATISTICS s (expressions) ON (expr), col FROM tbl;\n\nfails, but\n\n CREATE STATISTICS s (expressions, mcv) ON (expr), col FROM tbl;\n\nsucceeds and creates both \"expressions\" and \"mcv\" statistics. Also, the syntax\n\n CREATE STATISTICS s (expressions) ON (expr1), (expr2) FROM tbl;\n\ntends to suggest that it's going to create statistics on the pair of\nexpressions, describing their correlation, when actually it builds 2\nindependent statistics. Also, this error text isn't entirely accurate:\n\n CREATE STATISTICS s ON col FROM tbl;\n ERROR: extended statistics require at least 2 columns\n\nbecause extended statistics don't always require 2 columns, they can\nalso just have an expression, or multiple expressions and 0 or 1\ncolumns.\n\nI think a lot of this stems from treating \"expressions\" in the same\nway as the other (multi-column) stats kinds, and it might actually be\nneater to have separate documented syntaxes for single- and\nmulti-column statistics:\n\n CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n ON (expression)\n FROM table_name\n\n CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n [ ( statistics_kind [, ... 
] ) ]\n ON { column_name | (expression) } , { column_name | (expression) } [, ...]\n FROM table_name\n\nThe first syntax would create single-column stats, and wouldn't accept\na statistics_kind argument, because there is only one kind of\nsingle-column statistic. Maybe that might change in the future, but if\nso, it's likely that the kinds of single-column stats will be\ndifferent from the kinds of multi-column stats.\n\nIn the second syntax, the only accepted kinds would be the current\nmulti-column stats kinds (ndistinct, dependencies, and mcv), and it\nwould always build stats describing the correlations between the\ncolumns listed. It would continue to build standard/expression stats\non any expressions in the list, but that's more of an implementation\ndetail.\n\nIt would no longer be possible to do \"CREATE STATISTICS s\n(expressions) ON (expr1), (expr2) FROM tbl\". Instead, you'd have to\nissue 2 separate \"CREATE STATISTICS\" commands, but that seems more\nlogical, because they're independent stats.\n\nThe parsing code might not change much, but some of the errors would\nbe different. For example, the errors \"building only extended\nexpression statistics on simple columns not allowed\" and \"extended\nexpression statistics require at least one expression\" would go away,\nand the error \"extended statistics require at least 2 columns\" might\nbecome more specific, depending on the stats kind.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/1009.1579038764%40sss.pgh.pa.us#8624792a20ae595683b574f5933dae53\n\n\n",
"msg_date": "Mon, 7 Dec 2020 09:56:00 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 12/7/20 10:56 AM, Dean Rasheed wrote:\n> On Thu, 3 Dec 2020 at 15:23, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Attached is a patch series rebased on top of 25a9e54d2d.\n> \n> After reading this thread and [1], I think I prefer the name\n> \"standard\" rather than \"expressions\", because it is meant to describe\n> the kind of statistics being built rather than what they apply to, but\n> maybe that name doesn't actually need to be exposed to the end user:\n> \n> Looking at the current behaviour, there are a couple of things that\n> seem a little odd, even though they are understandable. For example,\n> the fact that\n> \n> CREATE STATISTICS s (expressions) ON (expr), col FROM tbl;\n> \n> fails, but\n> \n> CREATE STATISTICS s (expressions, mcv) ON (expr), col FROM tbl;\n> \n> succeeds and creates both \"expressions\" and \"mcv\" statistics. Also, the syntax\n> \n> CREATE STATISTICS s (expressions) ON (expr1), (expr2) FROM tbl;\n> \n> tends to suggest that it's going to create statistics on the pair of\n> expressions, describing their correlation, when actually it builds 2\n> independent statistics. Also, this error text isn't entirely accurate:\n> \n> CREATE STATISTICS s ON col FROM tbl;\n> ERROR: extended statistics require at least 2 columns\n> \n> because extended statistics don't always require 2 columns, they can\n> also just have an expression, or multiple expressions and 0 or 1\n> columns.\n> \n> I think a lot of this stems from treating \"expressions\" in the same\n> way as the other (multi-column) stats kinds, and it might actually be\n> neater to have separate documented syntaxes for single- and\n> multi-column statistics:\n> \n> CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n> ON (expression)\n> FROM table_name\n> \n> CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n> [ ( statistics_kind [, ... 
] ) ]\n> ON { column_name | (expression) } , { column_name | (expression) } [, ...]\n> FROM table_name\n> \n> The first syntax would create single-column stats, and wouldn't accept\n> a statistics_kind argument, because there is only one kind of\n> single-column statistic. Maybe that might change in the future, but if\n> so, it's likely that the kinds of single-column stats will be\n> different from the kinds of multi-column stats.\n> \n> In the second syntax, the only accepted kinds would be the current\n> multi-column stats kinds (ndistinct, dependencies, and mcv), and it\n> would always build stats describing the correlations between the\n> columns listed. It would continue to build standard/expression stats\n> on any expressions in the list, but that's more of an implementation\n> detail.\n> \n> It would no longer be possible to do \"CREATE STATISTICS s\n> (expressions) ON (expr1), (expr2) FROM tbl\". Instead, you'd have to\n> issue 2 separate \"CREATE STATISTICS\" commands, but that seems more\n> logical, because they're independent stats.\n> \n> The parsing code might not change much, but some of the errors would\n> be different. For example, the errors \"building only extended\n> expression statistics on simple columns not allowed\" and \"extended\n> expression statistics require at least one expression\" would go away,\n> and the error \"extended statistics require at least 2 columns\" might\n> become more specific, depending on the stats kind.\n> \n\nI think it makes sense in general. I see two issues with this approach,\nthough:\n\n* By adding expression/standard stats for individual statistics, it\nmakes the list of statistics longer - I wonder if this might have\nmeasurable impact on lookups in this list.\n\n* I'm not sure it's a good idea that the second syntax would always\nbuild the per-expression stats. Firstly, it seems a bit strange that it\nbehaves differently than the other kinds. 
Secondly, I wonder if there\nare cases where it'd be desirable to explicitly disable building these\nper-expression stats. For example, what if we have multiple extended\nstatistics objects, overlapping on a couple expressions. It seems\npointless to build the stats for all of them.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Dec 2020 15:15:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Mon, 7 Dec 2020 at 14:15, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/7/20 10:56 AM, Dean Rasheed wrote:\n> > it might actually be\n> > neater to have separate documented syntaxes for single- and\n> > multi-column statistics:\n> >\n> > CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n> > ON (expression)\n> > FROM table_name\n> >\n> > CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n> > [ ( statistics_kind [, ... ] ) ]\n> > ON { column_name | (expression) } , { column_name | (expression) } [, ...]\n> > FROM table_name\n>\n> I think it makes sense in general. I see two issues with this approach,\n> though:\n>\n> * By adding expression/standard stats for individual statistics, it\n> makes the list of statistics longer - I wonder if this might have\n> measurable impact on lookups in this list.\n>\n> * I'm not sure it's a good idea that the second syntax would always\n> build the per-expression stats. Firstly, it seems a bit strange that it\n> behaves differently than the other kinds. Secondly, I wonder if there\n> are cases where it'd be desirable to explicitly disable building these\n> per-expression stats. For example, what if we have multiple extended\n> statistics objects, overlapping on a couple expressions. It seems\n> pointless to build the stats for all of them.\n>\n\nHmm, I'm not sure it would really be a good idea to build MCV stats on\nexpressions without also building the standard stats for those\nexpressions, otherwise the assumptions that\nmcv_combine_selectivities() makes about simple_sel and mcv_basesel\nwouldn't really hold. But then, if multiple MCV stats shared the same\nexpression, it would be quite wasteful to build standard stats on the\nexpression more than once.\n\nIt feels like it should build a single extended stats object for each\nunique expression, with appropriate dependencies for any MCV stats\nthat used those expressions, but I'm not sure how complex that would\nbe. 
Dropping the last MCV stat object using a standard expression stat\nobject might reasonably drop the expression stats ... except if they\nwere explicitly created by the user, independently of any MCV stats.\nThat could get quite messy.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 7 Dec 2020 16:02:00 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
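[Editor's note: to make the duplication scenario discussed above concrete, here is a hypothetical SQL sketch; the table and statistics names are illustrative, not taken from the thread or the patch.]

```sql
-- Two MCV statistics objects overlapping on the expression (a + b).
-- Under the approach discussed, standard per-expression stats for
-- (a + b) would be built once per object, i.e. twice here.
CREATE TABLE t (a int, b int, c int, d int);

CREATE STATISTICS t_s1 (mcv) ON (a + b), c FROM t;
CREATE STATISTICS t_s2 (mcv) ON (a + b), d FROM t;

ANALYZE t;  -- builds (and duplicates) the expression stats
```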
{
"msg_contents": "\n\nOn 12/7/20 5:02 PM, Dean Rasheed wrote:\n> On Mon, 7 Dec 2020 at 14:15, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 12/7/20 10:56 AM, Dean Rasheed wrote:\n>>> it might actually be\n>>> neater to have separate documented syntaxes for single- and\n>>> multi-column statistics:\n>>>\n>>> CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n>>> ON (expression)\n>>> FROM table_name\n>>>\n>>> CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n>>> [ ( statistics_kind [, ... ] ) ]\n>>> ON { column_name | (expression) } , { column_name | (expression) } [, ...]\n>>> FROM table_name\n>>\n>> I think it makes sense in general. I see two issues with this approach,\n>> though:\n>>\n>> * By adding expression/standard stats for individual statistics, it\n>> makes the list of statistics longer - I wonder if this might have\n>> measurable impact on lookups in this list.\n>>\n>> * I'm not sure it's a good idea that the second syntax would always\n>> build the per-expression stats. Firstly, it seems a bit strange that it\n>> behaves differently than the other kinds. Secondly, I wonder if there\n>> are cases where it'd be desirable to explicitly disable building these\n>> per-expression stats. For example, what if we have multiple extended\n>> statistics objects, overlapping on a couple expressions. It seems\n>> pointless to build the stats for all of them.\n>>\n> \n> Hmm, I'm not sure it would really be a good idea to build MCV stats on\n> expressions without also building the standard stats for those\n> expressions, otherwise the assumptions that\n> mcv_combine_selectivities() makes about simple_sel and mcv_basesel\n> wouldn't really hold. But then, if multiple MCV stats shared the same\n> expression, it would be quite wasteful to build standard stats on the\n> expression more than once.\n> \n\nYeah. You're right it'd be problematic to build MCV on expressions\nwithout having the per-expression stats. 
In fact, that's exactly the\nproblem that forced me to add the per-expression stats to this patch.\nOriginally I planned to address it in a later patch, but I had to move\nit forward.\n\nSo I think you're right that we need to ensure we have standard stats for\neach expression at least once, to make this work well.\n\n> It feels like it should build a single extended stats object for each\n> unique expression, with appropriate dependencies for any MCV stats\n> that used those expressions, but I'm not sure how complex that would\n> be. Dropping the last MCV stat object using a standard expression stat\n> object might reasonably drop the expression stats ... except if they\n> were explicitly created by the user, independently of any MCV stats.\n> That could get quite messy.\n> \n\nPossibly. But I don't think it's worth the extra complexity. I don't\nexpect people to have a lot of overlapping stats, so the amount of\nwasted space and CPU time is expected to be fairly limited.\n\nSo I don't think it's worth spending too much time on this now. Let's\njust do what you proposed, and revisit this later if needed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Dec 2020 13:44:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Tue, 8 Dec 2020 at 12:44, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> Possibly. But I don't think it's worth the extra complexity. I don't\n> expect people to have a lot of overlapping stats, so the amount of\n> wasted space and CPU time is expected to be fairly limited.\n>\n> So I don't think it's worth spending too much time on this now. Let's\n> just do what you proposed, and revisit this later if needed.\n>\n\nYes, I think that's a reasonable approach to take. As long as the\ndocumentation makes it clear that building MCV stats also causes\nstandard expression stats to be built on any expressions included in\nthe list, then the user will know and can avoid duplication most of\nthe time. I don't think there's any need for code to try to prevent\nthat -- just as we don't bother with code to prevent a user building\nmultiple indexes on the same column.\n\nThe only case where duplication won't be avoidable is where there are\nmultiple MCV stats sharing the same expression, but that's probably\nquite unlikely in practice, and it seems acceptable to leave improving\nthat as a possible future optimisation.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 11 Dec 2020 12:58:36 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 12/11/20 1:58 PM, Dean Rasheed wrote:\n> On Tue, 8 Dec 2020 at 12:44, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Possibly. But I don't think it's worth the extra complexity. I don't\n>> expect people to have a lot of overlapping stats, so the amount of\n>> wasted space and CPU time is expected to be fairly limited.\n>>\n>> So I don't think it's worth spending too much time on this now. Let's\n>> just do what you proposed, and revisit this later if needed.\n>>\n> \n> Yes, I think that's a reasonable approach to take. As long as the\n> documentation makes it clear that building MCV stats also causes\n> standard expression stats to be built on any expressions included in\n> the list, then the user will know and can avoid duplication most of\n> the time. I don't think there's any need for code to try to prevent\n> that -- just as we don't bother with code to prevent a user building\n> multiple indexes on the same column.\n> \n> The only case where duplication won't be avoidable is where there are\n> multiple MCV stats sharing the same expression, but that's probably\n> quite unlikely in practice, and it seems acceptable to leave improving\n> that as a possible future optimisation.\n> \n\nOK. Attached is an updated version, reworking it this way.\n\nI tried tweaking the grammar to differentiate these two syntax variants,\nbut that led to shift/reduce conflicts with the existing ones. I tried\nfixing that, but I ended up doing that in CreateStatistics().\n\nThe other thing is that we probably can't tie this to just MCV, because\nfunctional dependencies need the per-expression stats too. 
So I simply\nbuild expression stats whenever there's at least one expression.\n\nI also decided to keep the \"expressions\" statistics kind - it's not\nallowed to specify it in CREATE STATISTICS, but it's useful internally\nas it allows deciding whether to build the stats in a single place.\nOtherwise we'd need to do that every time we build the statistics, etc.\n\nI added a brief explanation to the sgml docs, not sure if that's good\nenough - maybe it needs more details.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 11 Dec 2020 21:17:40 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
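[Editor's note: a brief sketch of how the syntax variants described above behave; object names are illustrative, and the error case is the behavior proposed in this message, not verified output.]

```sql
-- Single-expression variant: builds only standard per-expression stats;
-- no statistics kinds may be specified.
CREATE STATISTICS s_expr ON (a * b) FROM t;

-- Multi-column variant: kinds may be listed; per-expression stats for
-- (a * b) are built automatically whenever an expression is present.
CREATE STATISTICS s_multi (mcv, ndistinct) ON a, b, (a * b) FROM t;

-- The internal "expressions" kind cannot be requested directly:
-- CREATE STATISTICS s_bad (expressions) ON a, b FROM t;  -- ERROR
```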
{
"msg_contents": "On Fri, 11 Dec 2020 at 20:17, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> OK. Attached is an updated version, reworking it this way.\n\nCool. I think this is an exciting development, so I hope it makes it\ninto the next release.\n\nI have started looking at it. So far I have only looked at the\ncatalog, parser and client changes, but I thought it's worth posting\nmy comments so far.\n\n> I tried tweaking the grammar to differentiate these two syntax variants,\n> but that led to shift/reduce conflicts with the existing ones. I tried\n> fixing that, but I ended up doing that in CreateStatistics().\n\nYeah, that makes sense. I wasn't expecting the grammar to change.\n\n> The other thing is that we probably can't tie this to just MCV, because\n> functional dependencies need the per-expression stats too. So I simply\n> build expression stats whenever there's at least one expression.\n\nMakes sense.\n\n> I also decided to keep the \"expressions\" statistics kind - it's not\n> allowed to specify it in CREATE STATISTICS, but it's useful internally\n> as it allows deciding whether to build the stats in a single place.\n> Otherwise we'd need to do that every time we build the statistics, etc.\n\nYes, I thought that would be the easiest way to do it. Essentially the\n\"expressions\" stats kind is an internal implementation detail, hidden\nfrom the user, because it's built automatically when required, so you\ndon't need to (and can't) explicitly ask for it. This new behaviour\nseems much more logical to me.\n\n> I added a brief explanation to the sgml docs, not sure if that's good\n> enough - maybe it needs more details.\n\nYes, I think that could use a little tidying up, but I haven't looked\ntoo closely yet.\n\n\nSome other comments:\n\n* I'm not sure I understand the need for 0001. 
Wasn't there an earlier\nversion of this patch that just did it by re-populating the type\narray, but which still had it as an array rather than turning it into\na list? Making it a list falsifies some of the comments and\nfunction/variable name choices in that file.\n\n* There's a comment typo in catalog/Makefile -- \"are are reputedly\nother...\", should be \"there are reputedly other...\".\n\n* Looking at the pg_stats_ext view, I think perhaps expressions stats\nshould be omitted entirely from that view, since it doesn't show any\nuseful information about them. So it could remove \"e\" from the \"kinds\"\narray, and exclude rows whose only kind is \"e\", since such rows have\nno interesting data in them. Essentially, the new view\npg_stats_ext_exprs makes having any expression stats in pg_stats_ext\nredundant. Hiding this data in pg_stats_ext would also be consistent\nwith making the \"expressions\" stats kind hidden from the user.\n\n* In gram.y, it wasn't quite obvious why you converted the column list\nfor CREATE STATISTICS from an expr_list to a stats_params list. I\nfigured it out, and it makes sense, but I think it could use a\ncomment, perhaps something along the lines of the one for index_elem,\ne.g.:\n\n/*\n * Statistics attributes can be either simple column references, or arbitrary\n * expressions in parens. For compatibility with index attributes permitted\n * in CREATE INDEX, we allow an expression that's just a function call to be\n * written without parens.\n */\n\n* In parse_func.c and parse_agg.c, there are a few new error strings\nthat use the abbreviation \"stats expressions\", whereas most other\nerrors refer to \"statistics expressions\". For consistency, I think\nthey should all be the latter.\n\n* In generateClonedExtStatsStmt(), I think the \"expressions\" stats\nkind needs to be explicitly excluded, otherwise CREATE TABLE (LIKE\n...) 
fails if the source table has expression stats.\n\n* CreateStatistics() uses ShareUpdateExclusiveLock, but in\ntcop/utility.c the relation is opened with a ShareLock. ISTM that the\nlatter lock mode should be made to match CreateStatistics().\n\n* Why does the new code in tcop/utility.c not use\nRangeVarGetRelidExtended together with RangeVarCallbackOwnsRelation?\nThat seems preferable to doing the ACL check in CreateStatistics().\nFor one thing, as it stands, it allows the lock to be taken even if\nthe user doesn't own the table. Is it intentional that the current\ncode allows extended stats to be created on system catalogs? That\nwould be one thing that using RangeVarCallbackOwnsRelation would\nchange, but I can't see a reason to allow it.\n\n* In src/bin/psql/describe.c, I think the \\d output should also\nexclude the \"expressions\" stats kind and just list the other kinds (or\nhave no kinds list at all, if there are no other kinds), to make it\nconsistent with the CREATE STATISTICS syntax.\n\n* The pg_dump output for a stats object whose only kind is\n\"expressions\" is broken -- it includes a spurious \"()\" for the kinds\nlist.\n\nThat's it for now. I'll look at the optimiser changes next, and try to\npost more comments later this week.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 4 Jan 2021 15:34:08 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Mon, Jan 04, 2021 at 03:34:08PM +0000, Dean Rasheed wrote:\n> * I'm not sure I understand the need for 0001. Wasn't there an earlier\n> version of this patch that just did it by re-populating the type\n> array, but which still had it as an array rather than turning it into\n> a list? Making it a list falsifies some of the comments and\n> function/variable name choices in that file.\n\nThis part is from me.\n\nI can review the names if it's desired , but it'd be fine to fall back to the\nearlier patch. I thought a pglist was cleaner, but it's not needed.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 4 Jan 2021 09:45:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 1/4/21 4:34 PM, Dean Rasheed wrote:\n>\n> ... \n> \n> Some other comments:\n> \n> * I'm not sure I understand the need for 0001. Wasn't there an earlier\n> version of this patch that just did it by re-populating the type\n> array, but which still had it as an array rather than turning it into\n> a list? Making it a list falsifies some of the comments and\n> function/variable name choices in that file.\n> \n\nThat's a bit done to Justin - I think it's fine to use the older version \nrepopulating the type array, but that question is somewhat unrelated to \nthis patch.\n\n> * There's a comment typo in catalog/Makefile -- \"are are reputedly\n> other...\", should be \"there are reputedly other...\".\n> \n> * Looking at the pg_stats_ext view, I think perhaps expressions stats\n> should be omitted entirely from that view, since it doesn't show any\n> useful information about them. So it could remove \"e\" from the \"kinds\"\n> array, and exclude rows whose only kind is \"e\", since such rows have\n> no interesting data in them. Essentially, the new view\n> pg_stats_ext_exprs makes having any expression stats in pg_stats_ext\n> redundant. Hiding this data in pg_stats_ext would also be consistent\n> with making the \"expressions\" stats kind hidden from the user.\n> \n\nHmmm, not sure. I'm not sure removing 'e' from the array is a good idea. \nOn the one hand it's internal detail, on the other hand most of that \nview is internal detail too. Excluding rows with only 'e' seems \nreasonable, though. I need to think about this.\n\n> * In gram.y, it wasn't quite obvious why you converted the column list\n> for CREATE STATISTICS from an expr_list to a stats_params list. I\n> figured it out, and it makes sense, but I think it could use a\n> comment, perhaps something along the lines of the one for index_elem,\n> e.g.:\n> \n> /*\n> * Statistics attributes can be either simple column references, or arbitrary\n> * expressions in parens. 
For compatibility with index attributes permitted\n> * in CREATE INDEX, we allow an expression that's just a function call to be\n> * written without parens.\n> */\n> \n\nOH, right. I'd have trouble figuring this myself, and I wrote that code \nmyself only one or two months ago.\n\n> * In parse_func.c and parse_agg.c, there are a few new error strings\n> that use the abbreviation \"stats expressions\", whereas most other\n> errors refer to \"statistics expressions\". For consistency, I think\n> they should all be the latter.\n> \n\nOK, will fix.\n\n> * In generateClonedExtStatsStmt(), I think the \"expressions\" stats\n> kind needs to be explicitly excluded, otherwise CREATE TABLE (LIKE\n> ...) fails if the source table has expression stats.\n> \n\nYeah, will fix. I guess this also means we're missing some tests.\n\n> * CreateStatistics() uses ShareUpdateExclusiveLock, but in\n> tcop/utility.c the relation is opened with a ShareLock. ISTM that the\n> latter lock mode should be made to match CreateStatistics().\n> \n\nNot sure, will check.\n\n> * Why does the new code in tcop/utility.c not use\n> RangeVarGetRelidExtended together with RangeVarCallbackOwnsRelation?\n> That seems preferable to doing the ACL check in CreateStatistics().\n> For one thing, as it stands, it allows the lock to be taken even if\n> the user doesn't own the table. Is it intentional that the current\n> code allows extended stats to be created on system catalogs? That\n> would be one thing that using RangeVarCallbackOwnsRelation would\n> change, but I can't see a reason to allow it.\n> \n\nI think I copied the code from somewhere - probably expression indexes, \nor something like that. 
Not a proof that it's the right/better way to do \nthis, though.\n\n> * In src/bin/psql/describe.c, I think the \\d output should also\n> exclude the \"expressions\" stats kind and just list the other kinds (or\n> have no kinds list at all, if there are no other kinds), to make it\n> consistent with the CREATE STATISTICS syntax.\n> \n\nNot sure I understand. Why would this make it consistent with CREATE \nSTATISTICS? Can you elaborate?\n\n> * The pg_dump output for a stats object whose only kind is\n> \"expressions\" is broken -- it includes a spurious \"()\" for the kinds\n> list.\n> \n\nWill fix. Again, this suggests there are TAP tests missing.\n\n> That's it for now. I'll look at the optimiser changes next, and try to\n> post more comments later this week.\n> \n\nThanks!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 5 Jan 2021 01:45:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
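[Editor's note: the view filtering debated above could be inspected with something like the query below; the column names follow the existing pg_stats_ext definition, and 'e' is the internal "expressions" kind under discussion.]

```sql
-- Rows whose only kind is the internal 'e' carry no useful data in
-- pg_stats_ext; this lists the objects that the proposed filtering
-- would hide from the view.
SELECT statistics_schemaname, statistics_name, kinds
FROM pg_stats_ext
WHERE kinds = '{e}';
```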
{
"msg_contents": "On Tue, 5 Jan 2021 at 00:45, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> On 1/4/21 4:34 PM, Dean Rasheed wrote:\n> >\n> > * In src/bin/psql/describe.c, I think the \\d output should also\n> > exclude the \"expressions\" stats kind and just list the other kinds (or\n> > have no kinds list at all, if there are no other kinds), to make it\n> > consistent with the CREATE STATISTICS syntax.\n>\n> Not sure I understand. Why would this make it consistent with CREATE\n> STATISTICS? Can you elaborate?\n>\n\nThis isn't absolutely essential, but I think it would be neater. For\nexample, if I have a table with stats like this:\n\nCREATE TABLE foo(a int, b int);\nCREATE STATISTICS foo_s_ab (mcv) ON a,b FROM foo;\n\nthen the \\d output is as follows:\n\n\\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\nStatistics objects:\n \"public\".\"foo_s_ab\" (mcv) ON a, b FROM foo\n\nand the stats line matches the DDL used to create the stats. 
It could,\nfor example, be copy-pasted and tweaked to create similar stats on\nanother table, but even if that's not very likely, it's neat that it\nreflects how the stats were created.\n\nOTOH, if there are expressions in the list, it produces something like this:\n\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\nStatistics objects:\n \"public\".\"foo_s_ab\" (mcv, expressions) ON a, b, ((a * b)) FROM foo\n\nwhich no longer matches the DDL used, and isn't part of an accepted\nsyntax, so seems a bit inconsistent.\n\nIn general, if we're making the \"expressions\" kind an internal\nimplementation detail that just gets built automatically when needed,\nthen I think we should hide it from this sort of output, so the list\nof kinds matches the list that the user used when the stats were\ncreated.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 5 Jan 2021 14:10:08 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 1/5/21 3:10 PM, Dean Rasheed wrote:\n> On Tue, 5 Jan 2021 at 00:45, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 1/4/21 4:34 PM, Dean Rasheed wrote:\n>>>\n>>> * In src/bin/psql/describe.c, I think the \\d output should also\n>>> exclude the \"expressions\" stats kind and just list the other kinds (or\n>>> have no kinds list at all, if there are no other kinds), to make it\n>>> consistent with the CREATE STATISTICS syntax.\n>>\n>> Not sure I understand. Why would this make it consistent with CREATE\n>> STATISTICS? Can you elaborate?\n>>\n> \n> This isn't absolutely essential, but I think it would be neater. For\n> example, if I have a table with stats like this:\n> \n> CREATE TABLE foo(a int, b int);\n> CREATE STATISTICS foo_s_ab (mcv) ON a,b FROM foo;\n> \n> then the \\d output is as follows:\n> \n> \\d foo\n> Table \"public.foo\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> a | integer | | |\n> b | integer | | |\n> Statistics objects:\n> \"public\".\"foo_s_ab\" (mcv) ON a, b FROM foo\n> \n> and the stats line matches the DDL used to create the stats. 
It could,\n> for example, be copy-pasted and tweaked to create similar stats on\n> another table, but even if that's not very likely, it's neat that it\n> reflects how the stats were created.\n> \n> OTOH, if there are expressions in the list, it produces something like this:\n> \n> Table \"public.foo\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> a | integer | | |\n> b | integer | | |\n> Statistics objects:\n> \"public\".\"foo_s_ab\" (mcv, expressions) ON a, b, ((a * b)) FROM foo\n> \n> which no longer matches the DDL used, and isn't part of an accepted\n> syntax, so seems a bit inconsistent.\n> \n> In general, if we're making the \"expressions\" kind an internal\n> implementation detail that just gets built automatically when needed,\n> then I think we should hide it from this sort of output, so the list\n> of kinds matches the list that the user used when the stats were\n> created.\n> \n\nHmm, I see. You're probably right it's not necessary to show this, given \nthe modified handling of expression stats (which makes them an internal \ndetail, not exposed to users). I'll tweak this.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 5 Jan 2021 16:09:19 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Looking over the statscmds.c changes, there are a few XXX's and\nFIXME's that need resolving, and I had a couple of other minor\ncomments:\n\n+ /*\n+ * An expression using mutable functions is probably wrong,\n+ * since if you aren't going to get the same result for the\n+ * same data every time, it's not clear what the index entries\n+ * mean at all.\n+ */\n+ if (CheckMutability((Expr *) expr))\n+ ereport(ERROR,\n\nThat comment is presumably copied from the index code, so needs updating.\n\n\n+ /*\n+ * Disallow data types without a less-than operator\n+ *\n+ * XXX Maybe allow this, but only for EXPRESSIONS stats and\n+ * prevent building e.g. MCV etc.\n+ */\n+ atttype = exprType(expr);\n+ type = lookup_type_cache(atttype, TYPECACHE_LT_OPR);\n+ if (type->lt_opr == InvalidOid)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"expression cannot be used in\nstatistics because its type %s has no default btree operator class\",\n+ format_type_be(atttype))));\n\nAs the comment suggests, it's probably worth skipping this check if\nnumcols is 1 so that single-column stats can be built for more types\nof expressions. (I'm assuming that it's basically no more effort to\nmake that work, so I think it falls into the might-as-well-do-it\ncategory.)\n\n\n+ /*\n+ * Parse the statistics kinds. Firstly, check that this is not the\n+ * variant building statistics for a single expression, in which case\n+ * we don't allow specifying any statistis kinds. 
The simple variant\n+ * only has one expression, and does not allow statistics kinds.\n+ */\n+ if ((list_length(stmt->exprs) == 1) && (list_length(stxexprs) == 1))\n+ {\n\nTypo: \"statistis\"\nNit-picking, this test could just be:\n\n+ if ((numcols == 1) && (list_length(stxexprs) == 1))\n\nwhich IMO is a little more readable, and matches a similar test a\nlittle further down.\n\n\n+ /*\n+ * If there are no simply-referenced columns, give the statistics an\n+ * auto dependency on the whole table. In most cases, this will\n+ * be redundant, but it might not be if the statistics expressions\n+ * contain no Vars (which might seem strange but possible).\n+ *\n+ * XXX This is copied from index_create, not sure if it's applicable\n+ * to extended statistics too.\n+ */\n\nSeems right to me.\n\n\n+ /*\n+ * FIXME use 'expr' for expressions, which have empty column names.\n+ * For indexes this is handled in ChooseIndexColumnNames, but we\n+ * have no such function for stats.\n+ */\n+ if (!name)\n+ name = \"expr\";\n\nIn theory, this function could be made to duplicate the logic used for\nindexes, creating names like \"expr1\", \"expr2\", etc. To be honest\nthough, I don't think it's worth the effort. The code for indexes\nisn't really bulletproof anyway -- for example there might be a column\ncalled \"expr\" that is or isn't included in the index, which would make\nthe generated name ambiguous. And in any case, a name like\n\"tbl_cola_expr_colb_expr1_colc_stat\" isn't really any more useful than\n\"tbl_cola_expr_colb_expr_colc_stat\". So I'd be tempted to leave that\ncode as it is.\n\n\n+\n+/*\n+ * CheckMutability\n+ * Test whether given expression is mutable\n+ *\n+ * FIXME copied from indexcmds.c, maybe use some shared function?\n+ */\n+static bool\n+CheckMutability(Expr *expr)\n+{\n\nAs the comment says, it's quite messy duplicating this code, but I'm\nwondering whether it would be OK to just skip this check entirely. 
I\nthink someone else suggested that elsewhere, and I think it might not\nbe a bad idea.\n\nFor indexes, it could easily lead to wrong query results, but for\nstats the most likely problem is that the stats would get out of date\n(which they tend to do all by themselves anyway) and need rebuilding.\n\nIf you ignore intentionally crazy examples (which are still possible\neven with this check), then there are probably many legitimate cases\nwhere someone might want to use non-immutable functions in stats, and\nthis check just forces them to create an immutable wrapper function.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 6 Jan 2021 15:11:20 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
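[Editor's note: an illustration of the immutable-wrapper workaround mentioned in the last point; the function, table, and column names are made up.]

```sql
-- date_trunc() on timestamptz is only STABLE (it depends on the session
-- time zone), so the mutability check discussed here would reject it.
-- Pinning the time zone inside an IMMUTABLE wrapper works around that:
CREATE FUNCTION ts_day_utc(timestamptz) RETURNS timestamp
    IMMUTABLE LANGUAGE sql
    AS $$ SELECT date_trunc('day', $1 AT TIME ZONE 'UTC') $$;

CREATE STATISTICS s_day ON (ts_day_utc(created)), user_id FROM events;
```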
{
"msg_contents": "Starting to look at the planner code, I found an oversight in the way\nexpression stats are read at the start of planning -- it is necessary\nto call ChangeVarNodes() on any expressions if the relid isn't 1,\notherwise the stats expressions may contain Var nodes referring to the\nwrong relation. Possibly the easiest place to do that would be in\nget_relation_statistics(), if rel->relid != 1.\n\nHere's a simple test case:\n\nCREATE TABLE foo AS SELECT x FROM generate_series(1,100000) g(x);\nCREATE STATISTICS foo_s ON (x%10) FROM foo;\nANALYSE foo;\n\nEXPLAIN SELECT * FROM foo WHERE x%10 = 0;\nEXPLAIN SELECT * FROM (SELECT 1) t, foo WHERE x%10 = 0;\n\n(in the second query, the stats don't get applied).\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 7 Jan 2021 13:49:50 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n\nAttached is a patch fixing most of the issues. There are a couple \nexceptions:\n\n\n > * Looking at the pg_stats_ext view, I think perhaps expressions stats\n > should be omitted entirely from that view, since it doesn't show any\n > useful information about them. So it could remove \"e\" from the \"kinds\"\n > array, and exclude rows whose only kind is \"e\", since such rows have\n > no interesting data in them. Essentially, the new view\n > pg_stats_ext_exprs makes having any expression stats in pg_stats_ext\n > redundant. Hiding this data in pg_stats_ext would also be consistent\n > with making the \"expressions\" stats kind hidden from the user.\n\nI haven't removed the expressions stats from pg_stats_ext view yet. I'm \nnot 100% sure about it yet.\n\n\n > * Why does the new code in tcop/utility.c not use\n > RangeVarGetRelidExtended together with RangeVarCallbackOwnsRelation?\n > That seems preferable to doing the ACL check in CreateStatistics().\n > For one thing, as it stands, it allows the lock to be taken even if\n > the user doesn't own the table. Is it intentional that the current\n > code allows extended stats to be created on system catalogs? That\n > would be one thing that using RangeVarCallbackOwnsRelation would\n > change, but I can't see a reason to allow it.\n\nI haven't switched utility.c to RangeVarGetRelidExtended together with \nRangeVarCallbackOwnsRelation, because the current code allows checking \nfor object type first. I don't recall why exactly was it done this way, \nbut I didn't feel like changing that in this patch.\n\nYou're however right it should not be possible to create statistics on \nsystem catalogs. For regular users that should be rejected thanks to the \nownership check, but superuser may create it. 
I've added a proper check to \nCreateStatistics() - this is probably worth backpatching.\n\n\n > * In src/bin/psql/describe.c, I think the \\d output should also\n > exclude the \"expressions\" stats kind and just list the other kinds (or\n > have no kinds list at all, if there are no other kinds), to make it\n > consistent with the CREATE STATISTICS syntax.\n\nI've done this, but I went one step further - we hide the list of kinds \nusing the same rules as pg_dump, i.e. we don't list the kinds if all of \nthem are selected. Not sure if that's the right thing, though.\n\n\nThe rest of the issues should be fixed, I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 8 Jan 2021 01:57:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
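[Editor's note: a sketch of the system-catalog rejection described above; the error text shown is illustrative, not verified output from the patch.]

```sql
-- With the added check in CreateStatistics(), this now fails even for
-- a superuser (regular users were already rejected by the ownership
-- check):
CREATE STATISTICS s_cat ON relname, relnamespace FROM pg_class;
-- ERROR: creating statistics on a system catalog is not allowed
```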
{
"msg_contents": "On Fri, Jan 08, 2021 at 01:57:29AM +0100, Tomas Vondra wrote:\n> Attached is a patch fixing most of the issues. There are a couple\n> exceptions:\n\nIn the docs:\n\n+ — at the cost that its schema must be extended whenever the structure \n+ of statistics <link linkend=\"catalog-pg-statistic\"><structname>pg_statistic</structname></link> changes. \n\nshould say \"of statistics *IN* pg_statistics changes\" ?\n\n+ to an expression index. The full variant allows defining statistics objects \n+ on multiple columns and expressions, and pick which statistics kinds will \n+ be built. The per-expression statistics are built automatically when there \n\n\"and pick\" is wrong - maybe say \"and selecting which..\"\n\n+ and run a query using an expression on that column. Without the \n\nremove \"the\" ?\n\n+ extended statistics, the planner has no information about data \n+ distribution for reasults of those expression, and uses default \n\n*results\n\n+ estimates as illustrated by the first query. The planner also does \n+ not realize the value of the second column fully defines the value \n+ of the other column, because date truncated to day still identifies \n+ the month). Then expression and ndistinct statistics are built on \n\nThe \")\" is unbalanced\n\n+ /* all parts of thi expression are covered by this statistics */ \n\nthis\n\n+ * GrouExprInfos, but only if it's not known equal to any of the existing \n\nGroup\n\n+ * we don't allow specifying any statistis kinds. The simple variant \n\nstatistics\n\n+ * If no statistic type was specified, build them all (but request \n\nSay \"kind\" not \"type\" ?\n\n+ * expression is a simple Var. OTOH we check that there's at least one \n+ * statistics matching the expression. 
\n\none statistic (singular) ?\n\n+ * the future, we might consider \n+ */ \n\nconsider ???\n\n+-- (not it fails, when there are no simple column references) \n\nnote?\n\nThere's some remaining copy/paste stuff from index expressions:\n\nerrmsg(\"statistics expressions and predicates can refer only to the table being indexed\")));\nleft behind by evaluating the predicate or index expressions.\nSet up for predicate or expression evaluation\nNeed an EState for evaluation of index expressions and\n/* Compute and save index expression values */\nleft behind by evaluating the predicate or index expressions.\nFetch function for analyzing index expressions.\npartial-index predicates. Create it in the per-index context to be\n* When analyzing an expression index, believe the expression tree's type \n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 7 Jan 2021 20:35:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 1/8/21 3:35 AM, Justin Pryzby wrote:\n> On Fri, Jan 08, 2021 at 01:57:29AM +0100, Tomas Vondra wrote:\n>> Attached is a patch fixing most of the issues. There are a couple\n>> exceptions:\n> \n> In the docs:\n> \n> ...\n>\n\nThanks! Checking the docs and comments is a tedious work, I appreciate\nyou going through all that. I'll fix that in the next version.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 9 Jan 2021 01:04:35 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Attached is an updated version of the patch series, fixing a couple issues:\n\n1) docs issues, pointed out by Justin Pryzby\n\n2) adds ACL check to statext_extract_expression to verify access to \nattributes in the expression(s)\n\n3) adds comment to statext_is_compatible_clause_internal explaining the \nambiguity in extracting expressions for extended stats\n\n4) fixes/improves memory management in compute_expr_stats\n\n5) a bunch of minor comment and code fixes\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 16 Jan 2021 17:48:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Sat, Jan 16, 2021 at 05:48:43PM +0100, Tomas Vondra wrote:\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>expr</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Expression the extended statistics is defined on\n> + </para></entry>\n\nExpression the extended statistics ARE defined on\nOr maybe say \"on which the extended statistics are defined\"\n\n> + <para>\n> + The <command>CREATE STATISTICS</command> command has two basic forms. The\n> + simple variant allows to build statistics for a single expression, does\n\n.. ALLOWS BUILDING statistics for a single expression, AND does (or BUT does)\n\n> + Expression statistics are per-expression and are similar to creating an\n> + index on the expression, except that they avoid the overhead of the index.\n\nMaybe say \"overhead of index maintenance\"\n\n> + All functions and operators used in a statistics definition must be\n> + <quote>immutable</quote>, that is, their results must depend only on\n> + their arguments and never on any outside influence (such as\n> + the contents of another table or the current time). This restriction\n\nsay \"outside factor\" or \"external factor\"\n\n> + results of those expression, and uses default estimates as illustrated\n> + by the first query. The planner also does not realize the value of the\n\nrealize THAT\n\n> + second column fully defines the value of the other column, because date\n> + truncated to day still identifies the month. Then expression and\n> + ndistinct statistics are built on those two columns:\n\nI got an error doing this:\n\nCREATE TABLE t AS SELECT generate_series(1,9) AS i;\nCREATE STATISTICS s ON (i+1) ,(i+1+0) FROM t;\nANALYZE t;\nSELECT i+1 FROM t GROUP BY 1;\nERROR: corrupt MVNDistinct entry\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 16 Jan 2021 17:22:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 1/17/21 12:22 AM, Justin Pryzby wrote:\n> On Sat, Jan 16, 2021 at 05:48:43PM +0100, Tomas Vondra wrote:\n>> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> + <structfield>expr</structfield> <type>text</type>\n>> + </para>\n>> + <para>\n>> + Expression the extended statistics is defined on\n>> + </para></entry>\n> \n> Expression the extended statistics ARE defined on\n> Or maybe say \"on which the extended statistics are defined\"\n> \n\nI'm pretty sure \"is\" is correct because \"expression\" is singular.\n\n>> + <para>\n>> + The <command>CREATE STATISTICS</command> command has two basic forms. The\n>> + simple variant allows to build statistics for a single expression, does\n> \n> .. ALLOWS BUILDING statistics for a single expression, AND does (or BUT does)\n> \n>> + Expression statistics are per-expression and are similar to creating an\n>> + index on the expression, except that they avoid the overhead of the index.\n> \n> Maybe say \"overhead of index maintenance\"\n> \n\nYeah, that sounds better.\n\n>> + All functions and operators used in a statistics definition must be\n>> + <quote>immutable</quote>, that is, their results must depend only on\n>> + their arguments and never on any outside influence (such as\n>> + the contents of another table or the current time). This restriction\n> \n> say \"outside factor\" or \"external factor\"\n> \n\nIn fact, we've removed the immutability restriction, so this paragraph \nshould have been removed.\n\n>> + results of those expression, and uses default estimates as illustrated\n>> + by the first query. The planner also does not realize the value of the\n> \n> realize THAT\n> \n\nOK, changed.\n\n>> + second column fully defines the value of the other column, because date\n>> + truncated to day still identifies the month. 
Then expression and\n>> + ndistinct statistics are built on those two columns:\n> \n> I got an error doing this:\n> \n> CREATE TABLE t AS SELECT generate_series(1,9) AS i;\n> CREATE STATISTICS s ON (i+1) ,(i+1+0) FROM t;\n> ANALYZE t;\n> SELECT i+1 FROM t GROUP BY 1;\n> ERROR: corrupt MVNDistinct entry\n> \n\nThanks. There was a thinko in estimate_multivariate_ndistinct, resulting \nin mismatching the ndistinct coefficient items. The attached patch fixes \nthat, but I've realized the way we pick the \"best\" statistics may need \nsome improvements (I added an XXX comment about that).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 17 Jan 2021 01:23:39 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n+ * Check that only the base rel is mentioned. (This should be dead code\n+ * now that add_missing_from is history.)\n+ */\n+ if (list_length(pstate->p_rtable) != 1)\n\nIf it is dead code, it can be removed, right ?\n\nFor statext_mcv_clauselist_selectivity:\n\n+ // bms_free(list_attnums[listidx]);\n\nThe commented line can be removed.\n\n+bool\n+examine_clause_args2(List *args, Node **exprp, Const **cstp, bool\n*expronleftp)\n\nBetter add some comment for examine_clause_args2 since there\nis examine_clause_args() already.\n\n+ if (!ok || stats->compute_stats == NULL || stats->minrows <= 0)\n\nWhen would stats->minrows have negative value ?\n\nFor serialize_expr_stats():\n\n+ sd = table_open(StatisticRelationId, RowExclusiveLock);\n+\n+ /* lookup OID of composite type for pg_statistic */\n+ typOid = get_rel_type_id(StatisticRelationId);\n+ if (!OidIsValid(typOid))\n+ ereport(ERROR,\n\nLooks like the table_open() call can be made after the typOid check.\n\n+ Datum values[Natts_pg_statistic];\n+ bool nulls[Natts_pg_statistic];\n+ HeapTuple stup;\n+\n+ if (!stats->stats_valid)\n\nIt seems the local arrays can be declared after the validity check.\n\n+ if (enabled[i] == STATS_EXT_NDISTINCT)\n+ ndistinct_enabled = true;\n+ if (enabled[i] == STATS_EXT_DEPENDENCIES)\n+ dependencies_enabled = true;\n+ if (enabled[i] == STATS_EXT_MCV)\n\nthe second and third if should be preceded with 'else'\n\n+ReleaseDummy(HeapTuple tuple)\n+{\n+ pfree(tuple);\n\nSince ReleaseDummy() is just a single pfree call, maybe you don't need this\nmethod - call pfree in its place.\n\nCheers\n\nOn Sat, Jan 16, 2021 at 4:24 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 1/17/21 12:22 AM, Justin Pryzby wrote:\n> > On Sat, Jan 16, 2021 at 05:48:43PM +0100, Tomas Vondra wrote:\n> >> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> >> + <structfield>expr</structfield> <type>text</type>\n> >> + </para>\n> >> + <para>\n> >> + Expression the 
extended statistics is defined on\n> >> + </para></entry>\n> >\n> > Expression the extended statistics ARE defined on\n> > Or maybe say \"on which the extended statistics are defined\"\n> >\n>\n> I'm pretty sure \"is\" is correct because \"expression\" is singular.\n>\n> >> + <para>\n> >> + The <command>CREATE STATISTICS</command> command has two basic\n> forms. The\n> >> + simple variant allows to build statistics for a single expression,\n> does\n> >\n> > .. ALLOWS BUILDING statistics for a single expression, AND does (or BUT\n> does)\n> >\n> >> + Expression statistics are per-expression and are similar to\n> creating an\n> >> + index on the expression, except that they avoid the overhead of the\n> index.\n> >\n> > Maybe say \"overhead of index maintenance\"\n> >\n>\n> Yeah, that sounds better.\n>\n> >> + All functions and operators used in a statistics definition must be\n> >> + <quote>immutable</quote>, that is, their results must depend only on\n> >> + their arguments and never on any outside influence (such as\n> >> + the contents of another table or the current time). This\n> restriction\n> >\n> > say \"outside factor\" or \"external factor\"\n> >\n>\n> In fact, we've removed the immutability restriction, so this paragraph\n> should have been removed.\n>\n> >> + results of those expression, and uses default estimates as\n> illustrated\n> >> + by the first query. The planner also does not realize the value of\n> the\n> >\n> > realize THAT\n> >\n>\n> OK, changed.\n>\n> >> + second column fully defines the value of the other column, because\n> date\n> >> + truncated to day still identifies the month. Then expression and\n> >> + ndistinct statistics are built on those two columns:\n> >\n> > I got an error doing this:\n> >\n> > CREATE TABLE t AS SELECT generate_series(1,9) AS i;\n> > CREATE STATISTICS s ON (i+1) ,(i+1+0) FROM t;\n> > ANALYZE t;\n> > SELECT i+1 FROM t GROUP BY 1;\n> > ERROR: corrupt MVNDistinct entry\n> >\n>\n> Thanks. 
There was a thinko in estimate_multivariate_ndistinct, resulting\n> in mismatching the ndistinct coefficient items. The attached patch fixes\n> that, but I've realized the way we pick the \"best\" statistics may need\n> some improvements (I added an XXX comment about that).\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n",
"msg_date": "Sat, 16 Jan 2021 18:55:20 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Sun, Jan 17, 2021 at 01:23:39AM +0100, Tomas Vondra wrote:\n> diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml\n> index 4363be50c3..5b8eb8d248 100644\n> --- a/doc/src/sgml/ref/create_statistics.sgml\n> +++ b/doc/src/sgml/ref/create_statistics.sgml\n> @@ -21,9 +21,13 @@ PostgreSQL documentation\n> \n> <refsynopsisdiv>\n> <synopsis>\n> +CREATE STATISTICS [ IF NOT EXISTS ] <replaceable class=\"parameter\">statistics_name</replaceable>\n> + ON ( <replaceable class=\"parameter\">expression</replaceable> )\n> + FROM <replaceable class=\"parameter\">table_name</replaceable>\n\n> CREATE STATISTICS [ IF NOT EXISTS ] <replaceable class=\"parameter\">statistics_name</replaceable>\n> [ ( <replaceable class=\"parameter\">statistics_kind</replaceable> [, ... ] ) ]\n> - ON <replaceable class=\"parameter\">column_name</replaceable>, <replaceable class=\"parameter\">column_name</replaceable> [, ...]\n> + ON { <replaceable class=\"parameter\">column_name</replaceable> | ( <replaceable class=\"parameter\">expression</replaceable> ) } [, ...]\n> FROM <replaceable class=\"parameter\">table_name</replaceable>\n> </synopsis>\n\nI think this part is wrong, since it's not possible to specify a single column\nin form#2.\n\nIf I'm right, the current way is:\n\n - form#1 allows expression statistics of a single expression, and doesn't\n allow specifying \"kinds\";\n\n - form#2 allows specifying \"kinds\", but always computes expression statistics,\n and requires multiple columns/expressions.\n\nSo it'd need to be column_name|expression, column_name|expression [,...]\n\n> @@ -39,6 +43,16 @@ CREATE STATISTICS [ IF NOT EXISTS ] <replaceable class=\"parameter\">statistics_na\n> database and will be owned by the user issuing the command.\n> </para>\n> \n> + <para>\n> + The <command>CREATE STATISTICS</command> command has two basic forms. 
The\n> + simple variant allows building statistics for a single expression, does\n> + not allow specifying any statistics kinds and provides benefits similar\n> + to an expression index. The full variant allows defining statistics objects\n> + on multiple columns and expressions, and selecting which statistics kinds will\n> + be built. The per-expression statistics are built automatically when there\n> + is at least one expression.\n> + </para>\n\n> + <varlistentry>\n> + <term><replaceable class=\"parameter\">expression</replaceable></term>\n> + <listitem>\n> + <para>\n> + The expression to be covered by the computed statistics. In this case\n> + only a single expression is required, in which case only the expression\n> + statistics kind is allowed. The order of expressions is insignificant.\n\nI think this part is wrong now ?\nI guess there's no user-facing expression \"kind\" left in the CREATE command.\nI guess \"in which case\" means \"if only one expr is specified\".\n\"expression\" could be either form#1 or form#2.\n\nMaybe it should just say:\n> + An expression to be covered by the computed statistics.\n\nMaybe somewhere else, say:\n> In the second form of the command, the order of expressions is insignificant.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 16 Jan 2021 23:29:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 1/17/21 3:55 AM, Zhihong Yu wrote:\n> Hi,\n> + * Check that only the base rel is mentioned. (This should be dead code\n> + * now that add_missing_from is history.)\n> + */\n> + if (list_length(pstate->p_rtable) != 1)\n> \n> If it is dead code, it can be removed, right ?\n> \n\nMaybe. The question is whether it really is dead code. It's copied from \ntransformIndexStmt so I kept it for now.\n\n> For statext_mcv_clauselist_selectivity:\n> \n> + // bms_free(list_attnums[listidx]);\n> > The commented line can be removed.\n>\n\nActually, this should probably do list_free on the list_exprs.\n\n> +bool\n> +examine_clause_args2(List *args, Node **exprp, Const **cstp, bool \n> *expronleftp)\n> \n> Better add some comment for examine_clause_args2 since there \n> is examine_clause_args() already.\n> \n\nYeah, this probably needs a better name. I have a feeling this may need \na refactoring to reuse more of the existing code for the expressions.\n\n> + if (!ok || stats->compute_stats == NULL || stats->minrows <= 0)\n> \n> When would stats->minrows have negative value ?\n> \n\nThe typanalyze function (e.g. std_typanalyze) can set it to negative \nvalue. 
This is the same check as in examine_attribute, and we need the \nsame protections I think.\n\n> For serialize_expr_stats():\n> \n> + sd = table_open(StatisticRelationId, RowExclusiveLock);\n> +\n> + /* lookup OID of composite type for pg_statistic */\n> + typOid = get_rel_type_id(StatisticRelationId);\n> + if (!OidIsValid(typOid))\n> + ereport(ERROR,\n> \n> Looks like the table_open() call can be made after the typOid check.\n> \n\nProbably, but this failure is unlikely (should never happen) so I don't \nthink this makes any real difference.\n\n> + Datum values[Natts_pg_statistic];\n> + bool nulls[Natts_pg_statistic];\n> + HeapTuple stup;\n> +\n> + if (!stats->stats_valid)\n> \n> It seems the local arrays can be declared after the validity check.\n> \n\nNope, that would be against C99.\n\n> + if (enabled[i] == STATS_EXT_NDISTINCT)\n> + ndistinct_enabled = true;\n> + if (enabled[i] == STATS_EXT_DEPENDENCIES)\n> + dependencies_enabled = true;\n> + if (enabled[i] == STATS_EXT_MCV)\n> \n> the second and third if should be preceded with 'else'\n> \n\nYeah, although this just moves existing code. But you're right it could \nuse else.\n\n> +ReleaseDummy(HeapTuple tuple)\n> +{\n> + pfree(tuple);\n> \n> Since ReleaseDummy() is just a single pfree call, maybe you don't need \n> this method - call pfree in its place.\n> \n\nNo, that's not possible because the freefunc callback expects signature\n\n void (*)(HeapTupleData *)\n\nand pfree() does not match that.\n\n\nthanks\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Jan 2021 00:14:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 1/17/21 6:29 AM, Justin Pryzby wrote:\n> On Sun, Jan 17, 2021 at 01:23:39AM +0100, Tomas Vondra wrote:\n>> diff --git a/doc/src/sgml/ref/create_statistics.sgml b/doc/src/sgml/ref/create_statistics.sgml\n>> index 4363be50c3..5b8eb8d248 100644\n>> --- a/doc/src/sgml/ref/create_statistics.sgml\n>> +++ b/doc/src/sgml/ref/create_statistics.sgml\n>> @@ -21,9 +21,13 @@ PostgreSQL documentation\n>> \n>> <refsynopsisdiv>\n>> <synopsis>\n>> +CREATE STATISTICS [ IF NOT EXISTS ] <replaceable class=\"parameter\">statistics_name</replaceable>\n>> + ON ( <replaceable class=\"parameter\">expression</replaceable> )\n>> + FROM <replaceable class=\"parameter\">table_name</replaceable>\n> \n>> CREATE STATISTICS [ IF NOT EXISTS ] <replaceable class=\"parameter\">statistics_name</replaceable>\n>> [ ( <replaceable class=\"parameter\">statistics_kind</replaceable> [, ... ] ) ]\n>> - ON <replaceable class=\"parameter\">column_name</replaceable>, <replaceable class=\"parameter\">column_name</replaceable> [, ...]\n>> + ON { <replaceable class=\"parameter\">column_name</replaceable> | ( <replaceable class=\"parameter\">expression</replaceable> ) } [, ...]\n>> FROM <replaceable class=\"parameter\">table_name</replaceable>\n>> </synopsis>\n> \n> I think this part is wrong, since it's not possible to specify a single column\n> in form#2.\n> \n> If I'm right, the current way is:\n> \n> - form#1 allows expression statistics of a single expression, and doesn't\n> allow specifying \"kinds\";\n> \n> - form#2 allows specifying \"kinds\", but always computes expression statistics,\n> and requires multiple columns/expressions.\n> \n> So it'd need to be column_name|expression, column_name|expression [,...]\n> \n\nStrictly speaking you're probably correct - there should be at least two \nelements. 
But I'm somewhat hesitant about making this more complex, \nbecause it'll be harder to understand.\n\n\n>> @@ -39,6 +43,16 @@ CREATE STATISTICS [ IF NOT EXISTS ] <replaceable class=\"parameter\">statistics_na\n>> database and will be owned by the user issuing the command.\n>> </para>\n>> \n>> + <para>\n>> + The <command>CREATE STATISTICS</command> command has two basic forms. The\n>> + simple variant allows building statistics for a single expression, does\n>> + not allow specifying any statistics kinds and provides benefits similar\n>> + to an expression index. The full variant allows defining statistics objects\n>> + on multiple columns and expressions, and selecting which statistics kinds will\n>> + be built. The per-expression statistics are built automatically when there\n>> + is at least one expression.\n>> + </para>\n> \n>> + <varlistentry>\n>> + <term><replaceable class=\"parameter\">expression</replaceable></term>\n>> + <listitem>\n>> + <para>\n>> + The expression to be covered by the computed statistics. In this case\n>> + only a single expression is required, in which case only the expression\n>> + statistics kind is allowed. The order of expressions is insignificant.\n> \n> I think this part is wrong now ?\n> I guess there's no user-facing expression \"kind\" left in the CREATE command.\n> I guess \"in which case\" means \"if only one expr is specified\".\n> \"expression\" could be either form#1 or form#2.\n> \n> Maybe it should just say:\n>> + An expression to be covered by the computed statistics.\n> \n> Maybe somewhere else, say:\n>> In the second form of the command, the order of expressions is insignificant.\n> \n\nYeah, this is a leftover from when there was \"expressions\" kind. I'll \nreword this a bit.\n\n\nthanks\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Jan 2021 00:26:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Sun, Jan 17, 2021 at 01:23:39AM +0100, Tomas Vondra wrote:\n> > CREATE TABLE t AS SELECT generate_series(1,9) AS i;\n> > CREATE STATISTICS s ON (i+1) ,(i+1+0) FROM t;\n> > ANALYZE t;\n> > SELECT i+1 FROM t GROUP BY 1;\n> > ERROR: corrupt MVNDistinct entry\n> \n> Thanks. There was a thinko in estimate_multivariate_ndistinct, resulting in\n> mismatching the ndistinct coefficient items. The attached patch fixes that,\n> but I've realized the way we pick the \"best\" statistics may need some\n> improvements (I added an XXX comment about that).\n\nThat maybe indicates a deficiency in testing and code coverage.\n\n| postgres=# CREATE TABLE t(i int);\n| postgres=# CREATE STATISTICS s2 ON (i+1) ,(i+1+0) FROM t;\n| postgres=# \\d t\n| Table \"public.t\"\n| Column | Type | Collation | Nullable | Default \n| --------+---------+-----------+----------+---------\n| i | integer | | | \n| Indexes:\n| \"t_i_idx\" btree (i)\n| Statistics objects:\n| \"public\".\"s2\" (ndistinct, dependencies, mcv) ON FROM t\n\non ... what ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 17 Jan 2021 22:53:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Looking through extended_stats.c, I found a corner case that can lead\nto a seg-fault:\n\nCREATE TABLE foo();\nCREATE STATISTICS s ON (1) FROM foo;\nANALYSE foo;\n\nThis crashes in lookup_var_attr_stats(), because it isn't expecting\nnvacatts to be 0. I can't think of any case where building stats on a\ntable with no analysable columns is useful, so it should probably just\nexit early in that case.\n\n\nIn BuildRelationExtStatistics(), it looks like min_attrs should be\ndeclared assert-only.\n\n\nIn evaluate_expressions():\n\n+ /* set the pointers */\n+ result = (ExprInfo *) ptr;\n+ ptr += sizeof(ExprInfo);\n\nI think that should probably have a MAXALIGN().\n\n\nA slightly bigger issue that I don't like is the way it assigns\nattribute numbers for expressions starting from\nMaxHeapAttributeNumber+1, so the first expression has an attnum of\n1601. That leads to pretty inefficient use of Bitmapsets, since most\ntables only contain a handful of columns, and there's a large block of\nunused space in the middle the Bitmapset.\n\nAn alternative approach might be to use regular attnums for columns\nand use negative indexes -1, -2, -3, ... for expressions in the stored\nstats. Then when storing and retrieving attnums from Bitmapsets, it\ncould just offset by STATS_MAX_DIMENSIONS (8) to avoid negative values\nin the Bitmapsets, since there can't be more than that many\nexpressions (just like other code stores system attributes using\nFirstLowInvalidHeapAttributeNumber).\n\nThat would be a somewhat bigger change, but hopefully fairly\nmechanical, and then some code like add_expressions_to_attributes()\nwould go away.\n\n\nLooking at the new view pg_stats_ext_exprs, I noticed that it fails to\nshow expressions until the statistics have been built. 
For example:\n\nCREATE TABLE foo(a int, b int);\nCREATE STATISTICS s ON (a+b), (a*b) FROM foo;\nSELECT statistics_name, tablename, expr, n_distinct FROM pg_stats_ext_exprs;\n\n statistics_name | tablename | expr | n_distinct\n-----------------+-----------+------+------------\n s | foo | |\n(1 row)\n\nbut after populating and analysing the table, this becomes:\n\n statistics_name | tablename | expr | n_distinct\n-----------------+-----------+---------+------------\n s | foo | (a + b) | 11\n s | foo | (a * b) | 11\n(2 rows)\n\nI think it should show the expressions even before the stats have been built.\n\nAnother issue is that it returns rows for non-expression stats as\nwell. For example:\n\nCREATE TABLE foo(a int, b int);\nCREATE STATISTICS s ON a, b FROM foo;\nSELECT statistics_name, tablename, expr, n_distinct FROM pg_stats_ext_exprs;\n\n statistics_name | tablename | expr | n_distinct\n-----------------+-----------+------+------------\n s | foo | |\n(1 row)\n\nand those values will never be populated, since they're not\nexpressions, so I would expect them to not be shown in the view.\n\nSo basically, instead of\n\n+ LEFT JOIN LATERAL (\n+ SELECT\n+ *\n+ FROM (\n+ SELECT\n+\nunnest(pg_get_statisticsobjdef_expressions(s.oid)) AS expr,\n+ unnest(sd.stxdexpr)::pg_statistic AS a\n+ ) x\n+ ) stat ON sd.stxdexpr IS NOT NULL;\n\nperhaps just\n\n+ JOIN LATERAL (\n+ SELECT\n+ *\n+ FROM (\n+ SELECT\n+\nunnest(pg_get_statisticsobjdef_expressions(s.oid)) AS expr,\n+ unnest(sd.stxdexpr)::pg_statistic AS a\n+ ) x\n+ ) stat ON true;\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 18 Jan 2021 15:48:28 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
        "msg_contents": "\nOn 1/18/21 4:48 PM, Dean Rasheed wrote:\n> Looking through extended_stats.c, I found a corner case that can lead\n> to a seg-fault:\n> \n> CREATE TABLE foo();\n> CREATE STATISTICS s ON (1) FROM foo;\n> ANALYSE foo;\n> \n> This crashes in lookup_var_attr_stats(), because it isn't expecting\n> nvacatts to be 0. I can't think of any case where building stats on a\n> table with no analysable columns is useful, so it should probably just\n> exit early in that case.\n> \n> \n> In BuildRelationExtStatistics(), it looks like min_attrs should be\n> declared assert-only.\n> \n> \n> In evaluate_expressions():\n> \n> + /* set the pointers */\n> + result = (ExprInfo *) ptr;\n> + ptr += sizeof(ExprInfo);\n> \n> I think that should probably have a MAXALIGN().\n> \n\nThanks, I'll fix all of that.\n\n> \n> A slightly bigger issue that I don't like is the way it assigns\n> attribute numbers for expressions starting from\n> MaxHeapAttributeNumber+1, so the first expression has an attnum of\n> 1601. That leads to pretty inefficient use of Bitmapsets, since most\n> tables only contain a handful of columns, and there's a large block of\n> unused space in the middle of the Bitmapset.\n> \n> An alternative approach might be to use regular attnums for columns\n> and use negative indexes -1, -2, -3, ... for expressions in the stored\n> stats. Then when storing and retrieving attnums from Bitmapsets, it\n> could just offset by STATS_MAX_DIMENSIONS (8) to avoid negative values\n> in the Bitmapsets, since there can't be more than that many\n> expressions (just like other code stores system attributes using\n> FirstLowInvalidHeapAttributeNumber).\n> \n> That would be a somewhat bigger change, but hopefully fairly\n> mechanical, and then some code like add_expressions_to_attributes()\n> would go away.\n> \n\nWell, I tried this but unfortunately it's not that simple. We still need \nto build the bitmaps, so I don't think add_expression_to_attributes can \nbe just removed. I mean, we need to do the offsetting somewhere, even if \nwe change how we do it.\n\nBut the main issue is that in some cases the number of expressions is \nnot really limited by STATS_MAX_DIMENSIONS - for example when applying \nfunctional dependencies, we \"merge\" multiple statistics, so we may end \nup with more expressions. So we can't just use STATS_MAX_DIMENSIONS.\n\nAlso, if we offset regular attnums by STATS_MAX_DIMENSIONS, that inverts \nthe order of processing (so far we've assumed expressions are after \nregular attnums). So the changes are more extensive - I tried doing that \nanyway, and I'm still struggling with crashes and regression failures. \nOf course, that doesn't mean we shouldn't do it, but it's far from \nmechanical. (Some of that is probably a sign this code needs a bit more \nwork to polish.)\n\nBut I wonder if it'd be easier to just calculate the actual max attnum \nand then use it instead of MaxHeapAttributeNumber ...\n\n> \n> Looking at the new view pg_stats_ext_exprs, I noticed that it fails to\n> show expressions until the statistics have been built. For example:\n> \n> CREATE TABLE foo(a int, b int);\n> CREATE STATISTICS s ON (a+b), (a*b) FROM foo;\n> SELECT statistics_name, tablename, expr, n_distinct FROM pg_stats_ext_exprs;\n> \n> statistics_name | tablename | expr | n_distinct\n> -----------------+-----------+------+------------\n> s | foo | |\n> (1 row)\n> \n> but after populating and analysing the table, this becomes:\n> \n> statistics_name | tablename | expr | n_distinct\n> -----------------+-----------+---------+------------\n> s | foo | (a + b) | 11\n> s | foo | (a * b) | 11\n> (2 rows)\n> \n> I think it should show the expressions even before the stats have been built.\n> \n> Another issue is that it returns rows for non-expression stats as\n> well. For example:\n> \n> CREATE TABLE foo(a int, b int);\n> CREATE STATISTICS s ON a, b FROM foo;\n> SELECT statistics_name, tablename, expr, n_distinct FROM pg_stats_ext_exprs;\n> \n> statistics_name | tablename | expr | n_distinct\n> -----------------+-----------+------+------------\n> s | foo | |\n> (1 row)\n> \n> and those values will never be populated, since they're not\n> expressions, so I would expect them to not be shown in the view.\n> \n> So basically, instead of\n> \n> + LEFT JOIN LATERAL (\n> + SELECT\n> + *\n> + FROM (\n> + SELECT\n> +\n> unnest(pg_get_statisticsobjdef_expressions(s.oid)) AS expr,\n> + unnest(sd.stxdexpr)::pg_statistic AS a\n> + ) x\n> + ) stat ON sd.stxdexpr IS NOT NULL;\n> \n> perhaps just\n> \n> + JOIN LATERAL (\n> + SELECT\n> + *\n> + FROM (\n> + SELECT\n> +\n> unnest(pg_get_statisticsobjdef_expressions(s.oid)) AS expr,\n> + unnest(sd.stxdexpr)::pg_statistic AS a\n> + ) x\n> + ) stat ON true;\n\nWill fix.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 19 Jan 2021 02:57:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
        "msg_contents": "On Tue, 19 Jan 2021 at 01:57, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> > A slightly bigger issue that I don't like is the way it assigns\n> > attribute numbers for expressions starting from\n> > MaxHeapAttributeNumber+1, so the first expression has an attnum of\n> > 1601. That leads to pretty inefficient use of Bitmapsets, since most\n> > tables only contain a handful of columns, and there's a large block of\n> > unused space in the middle of the Bitmapset.\n> >\n> > An alternative approach might be to use regular attnums for columns\n> > and use negative indexes -1, -2, -3, ... for expressions in the stored\n> > stats. Then when storing and retrieving attnums from Bitmapsets, it\n> > could just offset by STATS_MAX_DIMENSIONS (8) to avoid negative values\n> > in the Bitmapsets, since there can't be more than that many\n> > expressions (just like other code stores system attributes using\n> > FirstLowInvalidHeapAttributeNumber).\n>\n> Well, I tried this but unfortunately it's not that simple. We still need\n> to build the bitmaps, so I don't think add_expression_to_attributes can\n> be just removed. I mean, we need to do the offsetting somewhere, even if\n> we change how we do it.\n\nHmm, I was imagining that the offsetting would happen in each place\nthat adds or retrieves an attnum from a Bitmapset, much like a lot of\nother code does for system attributes, and then you'd know you had an\nexpression if the resulting attnum was negative.\n\nI was also thinking that it would be these negative attnums that would\nbe stored in the stats data, so instead of something like \"1, 2 =>\n1601\", it would be \"1, 2 => -1\", so in some sense \"-1\" would be the\n\"real\" attnum associated with the expression.\n\n> But the main issue is that in some cases the number of expressions is\n> not really limited by STATS_MAX_DIMENSIONS - for example when applying\n> functional dependencies, we \"merge\" multiple statistics, so we may end\n> up with more expressions. So we can't just use STATS_MAX_DIMENSIONS.\n\nAh, I see. I hadn't really fully understood what that code was doing.\n\nISTM though that this is really an internal problem to the\ndependencies code, in that these \"merged\" Bitmapsets containing attrs\nfrom multiple different stats objects do not (and must not) ever go\noutside that local code that uses them. So that code would be free to\nuse a different offset for its own purposes -- e.g., collect all the\ndistinct expressions across all the stats objects and then offset by\nthe number of distinct expressions.\n\n> Also, if we offset regular attnums by STATS_MAX_DIMENSIONS, that inverts\n> the order of processing (so far we've assumed expressions are after\n> regular attnums). So the changes are more extensive - I tried doing that\n> anyway, and I'm still struggling with crashes and regression failures.\n> Of course, that doesn't mean we shouldn't do it, but it's far from\n> mechanical. (Some of that is probably a sign this code needs a bit more\n> work to polish.)\n\nInteresting. What code assumes expressions come after attributes?\nIdeally, I think it would be cleaner if no code assumed any particular\norder, but I can believe that it might be convenient in some\ncircumstances.\n\n> But I wonder if it'd be easier to just calculate the actual max attnum\n> and then use it instead of MaxHeapAttributeNumber ...\n\nHmm, I'm not sure how that would work. There still needs to be an\nattnum that gets stored in the database, and it has to continue to\nwork if the user adds columns to the table. That's why I was\nadvocating storing negative values, though I haven't actually tried it\nto see what might go wrong.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 21 Jan 2021 11:11:55 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
        "msg_contents": "On 1/21/21 12:11 PM, Dean Rasheed wrote:\n> On Tue, 19 Jan 2021 at 01:57, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>> A slightly bigger issue that I don't like is the way it assigns\n>>> attribute numbers for expressions starting from\n>>> MaxHeapAttributeNumber+1, so the first expression has an attnum of\n>>> 1601. That leads to pretty inefficient use of Bitmapsets, since most\n>>> tables only contain a handful of columns, and there's a large block of\n>>> unused space in the middle of the Bitmapset.\n>>>\n>>> An alternative approach might be to use regular attnums for columns\n>>> and use negative indexes -1, -2, -3, ... for expressions in the stored\n>>> stats. Then when storing and retrieving attnums from Bitmapsets, it\n>>> could just offset by STATS_MAX_DIMENSIONS (8) to avoid negative values\n>>> in the Bitmapsets, since there can't be more than that many\n>>> expressions (just like other code stores system attributes using\n>>> FirstLowInvalidHeapAttributeNumber).\n>>\n>> Well, I tried this but unfortunately it's not that simple. We still need\n>> to build the bitmaps, so I don't think add_expression_to_attributes can\n>> be just removed. I mean, we need to do the offsetting somewhere, even if\n>> we change how we do it.\n> \n> Hmm, I was imagining that the offsetting would happen in each place\n> that adds or retrieves an attnum from a Bitmapset, much like a lot of\n> other code does for system attributes, and then you'd know you had an\n> expression if the resulting attnum was negative.\n> \n> I was also thinking that it would be these negative attnums that would\n> be stored in the stats data, so instead of something like \"1, 2 =>\n> 1601\", it would be \"1, 2 => -1\", so in some sense \"-1\" would be the\n> \"real\" attnum associated with the expression.\n> \n>> But the main issue is that in some cases the number of expressions is\n>> not really limited by STATS_MAX_DIMENSIONS - for example when applying\n>> functional dependencies, we \"merge\" multiple statistics, so we may end\n>> up with more expressions. So we can't just use STATS_MAX_DIMENSIONS.\n> \n> Ah, I see. I hadn't really fully understood what that code was doing.\n> \n> ISTM though that this is really an internal problem to the\n> dependencies code, in that these \"merged\" Bitmapsets containing attrs\n> from multiple different stats objects do not (and must not) ever go\n> outside that local code that uses them. So that code would be free to\n> use a different offset for its own purposes -- e.g., collect all the\n> distinct expressions across all the stats objects and then offset by\n> the number of distinct expressions.\n> \n\n>> Also, if we offset regular attnums by STATS_MAX_DIMENSIONS, that inverts\n>> the order of processing (so far we've assumed expressions are after\n>> regular attnums). So the changes are more extensive - I tried doing that\n>> anyway, and I'm still struggling with crashes and regression failures.\n>> Of course, that doesn't mean we shouldn't do it, but it's far from\n>> mechanical. (Some of that is probably a sign this code needs a bit more\n>> work to polish.)\n> \n> Interesting. What code assumes expressions come after attributes?\n> Ideally, I think it would be cleaner if no code assumed any particular\n> order, but I can believe that it might be convenient in some\n> circumstances.\n> \n\nWell, in a bunch of places we look at the index (from the bitmap) and \nuse it to determine whether the value is a regular attribute or an \nexpression, because the values are stored in separate arrays.\n\nThis is solvable, all I'm saying (both here and in the preceding part \nabout dependencies) is that it's not an entirely mechanical task. But it \nmight be better to rethink the separation of simple values and \nexpressions, and make it more \"unified\" so that most of the code does not \nreally need to deal with these differences.\n\n>> But I wonder if it'd be easier to just calculate the actual max attnum\n>> and then use it instead of MaxHeapAttributeNumber ...\n> \n> Hmm, I'm not sure how that would work. There still needs to be an\n> attnum that gets stored in the database, and it has to continue to\n> work if the user adds columns to the table. That's why I was\n> advocating storing negative values, though I haven't actually tried it\n> to see what might go wrong.\n> \n\nWell, yeah, we need to identify the expression in some statistics (e.g. \nin dependencies or ndistinct items). And yeah, offsetting the expression \nattnums by MaxHeapAttributeNumber may be inefficient in this case.\n\n\nAttached is an updated version of the patch, hopefully addressing all \nissues pointed out by you, Justin and Zhihong, with the exception of the \nexpression attnums discussed here.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 22 Jan 2021 02:26:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "This already needs to be rebased on 55dc86eca.\n\nAnd needs to update rules.out.\n\nAnd doesn't address this one:\n\nOn Sun, Jan 17, 2021 at 10:53:31PM -0600, Justin Pryzby wrote:\n> | postgres=# CREATE TABLE t(i int);\n> | postgres=# CREATE STATISTICS s2 ON (i+1) ,(i+1+0) FROM t;\n> | postgres=# \\d t\n> | Table \"public.t\"\n> | Column | Type | Collation | Nullable | Default \n> | --------+---------+-----------+----------+---------\n> | i | integer | | | \n> | Indexes:\n> | \"t_i_idx\" btree (i)\n> | Statistics objects:\n> | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON FROM t\n> \n> on ... what ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jan 2021 20:29:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 1/22/21 3:29 AM, Justin Pryzby wrote:\n> This already needs to be rebased on 55dc86eca.\n> \n> And needs to update rules.out.\n> \n\nWhooops. A fixed version attached.\n\n> And doesn't address this one:\n> \n> On Sun, Jan 17, 2021 at 10:53:31PM -0600, Justin Pryzby wrote:\n>> | postgres=# CREATE TABLE t(i int);\n>> | postgres=# CREATE STATISTICS s2 ON (i+1) ,(i+1+0) FROM t;\n>> | postgres=# \\d t\n>> | Table \"public.t\"\n>> | Column | Type | Collation | Nullable | Default\n>> | --------+---------+-----------+----------+---------\n>> | i | integer | | |\n>> | Indexes:\n>> | \"t_i_idx\" btree (i)\n>> | Statistics objects:\n>> | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON FROM t\n>>\n>> on ... what ?\n> \n\nUmm, for me that prints:\n\ntest=# CREATE TABLE t(i int);\nCREATE TABLE\ntest=# CREATE STATISTICS s2 ON (i+1) ,(i+1+0) FROM t;\nCREATE STATISTICS\ntest=# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n i | integer | | |\nStatistics objects:\n \"public\".\"s2\" ON ((i + 1)), (((i + 1) + 0)) FROM t\n\nwhich I think is OK. But maybe there's something else to trigger the \nproblem?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 22 Jan 2021 04:49:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 04:49:51AM +0100, Tomas Vondra wrote:\n> > > | Statistics objects:\n> > > | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON FROM t\n> \n> Umm, for me that prints:\n\n> \"public\".\"s2\" ON ((i + 1)), (((i + 1) + 0)) FROM t\n> \n> which I think is OK. But maybe there's something else to trigger the\n> problem?\n\nOh. It's because I was using /usr/bin/psql and not ./src/bin/psql.\nI think it's considered ok if old client's \\d commands don't work on new\nserver, but it's not clear to me if it's ok if they misbehave. It's almost\nbetter it made an ERROR.\n\nIn any case, why are there so many parentheses ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jan 2021 22:01:01 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 10:01:01PM -0600, Justin Pryzby wrote:\n> On Fri, Jan 22, 2021 at 04:49:51AM +0100, Tomas Vondra wrote:\n> > > > | Statistics objects:\n> > > > | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON FROM t\n> > \n> > Umm, for me that prints:\n> \n> > \"public\".\"s2\" ON ((i + 1)), (((i + 1) + 0)) FROM t\n> > \n> > which I think is OK. But maybe there's something else to trigger the\n> > problem?\n> \n> Oh. It's because I was using /usr/bin/psql and not ./src/bin/psql.\n> I think it's considered ok if old client's \\d commands don't work on new\n> server, but it's not clear to me if it's ok if they misbehave. It's almost\n> better it made an ERROR.\n\nI think you'll maybe have to do something better - this seems a bit too weird:\n\n| postgres=# CREATE STATISTICS s2 ON (i+1) ,i FROM t;\n| postgres=# \\d t\n| ...\n| \"public\".\"s2\" (ndistinct, dependencies, mcv) ON i FROM t\n\nIt suggests including additional columns in stxkeys for each expression.\nMaybe that also helps give direction to response to Dean's concern?\n\nThat doesn't make old psql do anything more desirable, though.\nUnless you also added attributes, all you can do is make it say things like\n\"columns: ctid\".\n\n> In any case, why are there so many parentheses ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jan 2021 22:46:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Fri, 22 Jan 2021 at 04:46, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> I think you'll maybe have to do something better - this seems a bit too weird:\n>\n> | postgres=# CREATE STATISTICS s2 ON (i+1) ,i FROM t;\n> | postgres=# \\d t\n> | ...\n> | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON i FROM t\n>\n\nI guess that's not surprising, given that old psql knows nothing about\nexpressions in stats.\n\nIn general, I think connecting old versions of psql to newer servers\nis not supported. You're lucky if \\d works at all. So it shouldn't be\nthis patch's responsibility to make that output nicer.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 22 Jan 2021 09:00:40 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 1/22/21 10:00 AM, Dean Rasheed wrote:\n> On Fri, 22 Jan 2021 at 04:46, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> I think you'll maybe have to do something better - this seems a bit too weird:\n>>\n>> | postgres=# CREATE STATISTICS s2 ON (i+1) ,i FROM t;\n>> | postgres=# \\d t\n>> | ...\n>> | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON i FROM t\n>>\n> \n> I guess that's not surprising, given that old psql knows nothing about\n> expressions in stats.\n> \n> In general, I think connecting old versions of psql to newer servers\n> is not supported. You're lucky if \\d works at all. So it shouldn't be\n> this patch's responsibility to make that output nicer.\n> \n\nYeah. It's not clear to me what exactly could we do with this, without \n\"backpatching\" the old psql or making the ruleutils.c consider version \nof the psql. Neither of these seems possible/acceptable.\n\nI'm sure this is not the only place showing \"incomplete\" information in \nold psql on new server.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Jan 2021 14:06:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 1/22/21 5:01 AM, Justin Pryzby wrote:\n> On Fri, Jan 22, 2021 at 04:49:51AM +0100, Tomas Vondra wrote:\n>>>> | Statistics objects:\n>>>> | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON FROM t\n>>\n>> Umm, for me that prints:\n> \n>> \"public\".\"s2\" ON ((i + 1)), (((i + 1) + 0)) FROM t\n>>\n>> which I think is OK. But maybe there's something else to trigger the\n>> problem?\n> \n> Oh. It's because I was using /usr/bin/psql and not ./src/bin/psql.\n> I think it's considered ok if old client's \\d commands don't work on new\n> server, but it's not clear to me if it's ok if they misbehave. It's almost\n> better it made an ERROR.\n> \n\nWell, how would the server know to throw an error? We can't quite patch \nthe old psql (if we could, we could just tweak the query).\n\n> In any case, why are there so many parentheses ?\n> \n\nThat's a bug in pg_get_statisticsobj_worker, probably. It shouldn't be \nadding extra parentheses, on top of what deparse_expression_pretty does. \nWill fix.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Jan 2021 14:09:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
        "msg_contents": "On Fri, 22 Jan 2021 at 03:49, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Whooops. A fixed version attached.\n>\n\nThe change to pg_stats_ext_exprs isn't quite right, because now it\ncross joins expressions and their stats, which leads to too many rows,\nwith the wrong stats being listed against expressions. For example:\n\nCREATE TABLE foo (a int, b text);\nINSERT INTO foo SELECT 1, 'xxx' FROM generate_series(1,1000);\nCREATE STATISTICS foo_s ON (a*10), upper(b) FROM foo;\nANALYSE foo;\n\nSELECT tablename, statistics_name, expr, most_common_vals\n  FROM pg_stats_ext_exprs;\n\n tablename | statistics_name | expr | most_common_vals\n-----------+-----------------+----------+------------------\n foo | foo_s | (a * 10) | {10}\n foo | foo_s | (a * 10) | {XXX}\n foo | foo_s | upper(b) | {10}\n foo | foo_s | upper(b) | {XXX}\n(4 rows)\n\n\nMore protection is still required for tables with no analysable\ncolumns. For example:\n\nCREATE TABLE foo();\nCREATE STATISTICS foo_s ON (1) FROM foo;\nINSERT INTO foo SELECT FROM generate_series(1,1000);\nANALYSE foo;\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x000000000090e9d4 in lookup_var_attr_stats (rel=0x7f7766b37598, attrs=0x0,\n exprs=0x216b258, nvacatts=0, vacatts=0x216cb40) at extended_stats.c:664\n664 stats[i]->tupDesc = vacatts[0]->tupDesc;\n\n#0 0x000000000090e9d4 in lookup_var_attr_stats (rel=0x7f7766b37598,\n attrs=0x0, exprs=0x216b258, nvacatts=0, vacatts=0x216cb40)\n at extended_stats.c:664\n#1 0x000000000090da93 in BuildRelationExtStatistics (onerel=0x7f7766b37598,\n totalrows=1000, numrows=100, rows=0x216d040, natts=0,\n vacattrstats=0x216cb40) at extended_stats.c:161\n#2 0x000000000066ea97 in do_analyze_rel (onerel=0x7f7766b37598,\n params=0x7ffc06f7d450, va_cols=0x0,\n acquirefunc=0x66f71a <acquire_sample_rows>, relpages=4, inh=false,\n in_outer_xact=false, elevel=13) at analyze.c:595\n\n\nAttached is an incremental update fixing those issues, together with a\nfew more suggested improvements:\n\nThere was quite a bit of code duplication in extended_stats.c which I\nattempted to reduce by\n\n1). Deleting examine_opclause_expression() in favour of examine_clause_args().\n2). Deleting examine_opclause_expression2() in favour of examine_clause_args2().\n3). Merging examine_clause_args() and examine_clause_args2(), renaming\nit examine_opclause_args() (which was actually the name it had in its\noriginal doc comment, despite the name in the code being different).\n4). Merging statext_extract_expression() and\nstatext_extract_expression_internal() into\nstatext_is_compatible_clause() and\nstatext_is_compatible_clause_internal() respectively.\n\nThat last change goes beyond just removing code duplication. It allows\nsupport for compound clauses that contain a mix of attribute and\nexpression clauses, for example, this simple test case wasn't\npreviously estimated well:\n\nCREATE TABLE foo (a int, b int, c int);\nINSERT INTO foo SELECT x/100, x/100, x/100 FROM generate_series(1,10000) g(x);\nCREATE STATISTICS foo_s on a,b,(c*c) FROM foo;\nANALYSE foo;\nEXPLAIN ANALYSE SELECT * FROM foo WHERE a=1 AND (b=1 OR c*c=1);\n\nI didn't add any new regression tests, but perhaps it would be worth\nadding something to test a case like that.\n\nI changed choose_best_statistics() in a couple of ways. Firstly, I\nthink it wants to only count expressions from fully covered clauses,\njust as we only count attributes if the stat covers all the attributes\nfrom a clause, since otherwise the stat cannot estimate the clause, so\nit shouldn't count. Secondly, I think the number of expressions in the\nstat needs to be added to its number of keys, so that the choice of\nnarrowest stat with the same number of matches counts expressions in\nthe same way as attributes.\n\nI simplified the code in statext_mcv_clauselist_selectivity(), by\nattempting to handle expressions and attributes together in the same\nway, making it much closer to the original code. I don't think that\nthe check for the existence of a stat covering all the expressions in\na clause was necessary when pre-processing the list of clauses, since\nthat's checked later on, so it's enough to just detect compatible\nclauses. Also, it now checks for stats that cover both the attributes\nand the expressions from each clause, rather than one or the other, to\ncope with examples like the one above. I also updated the check for\nsimple_clauses -- what's wanted there is to identify clauses that only\nreference a single column or a single expression, so that the later\ncode doesn't apply multi-column estimates to it.\n\nI'm attaching it as an incremental patch (0004) on top of your patches,\nbut if 0003 and 0004 are collapsed together, the total number of diffs\nis less than 0003 alone.\n\nRegards,\nDean",
"msg_date": "Wed, 27 Jan 2021 11:02:51 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
        "msg_contents": "\n\nOn 1/27/21 12:02 PM, Dean Rasheed wrote:\n> On Fri, 22 Jan 2021 at 03:49, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Whooops. A fixed version attached.\n>>\n> \n> The change to pg_stats_ext_exprs isn't quite right, because now it\n> cross joins expressions and their stats, which leads to too many rows,\n> with the wrong stats being listed against expressions. For example:\n> \n> CREATE TABLE foo (a int, b text);\n> INSERT INTO foo SELECT 1, 'xxx' FROM generate_series(1,1000);\n> CREATE STATISTICS foo_s ON (a*10), upper(b) FROM foo;\n> ANALYSE foo;\n> \n> SELECT tablename, statistics_name, expr, most_common_vals\n> FROM pg_stats_ext_exprs;\n> \n> tablename | statistics_name | expr | most_common_vals\n> -----------+-----------------+----------+------------------\n> foo | foo_s | (a * 10) | {10}\n> foo | foo_s | (a * 10) | {XXX}\n> foo | foo_s | upper(b) | {10}\n> foo | foo_s | upper(b) | {XXX}\n> (4 rows)\n> \n> \n> More protection is still required for tables with no analysable\n> columns. For example:\n> \n> CREATE TABLE foo();\n> CREATE STATISTICS foo_s ON (1) FROM foo;\n> INSERT INTO foo SELECT FROM generate_series(1,1000);\n> ANALYSE foo;\n> \n> Program received signal SIGSEGV, Segmentation fault.\n> 0x000000000090e9d4 in lookup_var_attr_stats (rel=0x7f7766b37598, attrs=0x0,\n> exprs=0x216b258, nvacatts=0, vacatts=0x216cb40) at extended_stats.c:664\n> 664 stats[i]->tupDesc = vacatts[0]->tupDesc;\n> \n> #0 0x000000000090e9d4 in lookup_var_attr_stats (rel=0x7f7766b37598,\n> attrs=0x0, exprs=0x216b258, nvacatts=0, vacatts=0x216cb40)\n> at extended_stats.c:664\n> #1 0x000000000090da93 in BuildRelationExtStatistics (onerel=0x7f7766b37598,\n> totalrows=1000, numrows=100, rows=0x216d040, natts=0,\n> vacattrstats=0x216cb40) at extended_stats.c:161\n> #2 0x000000000066ea97 in do_analyze_rel (onerel=0x7f7766b37598,\n> params=0x7ffc06f7d450, va_cols=0x0,\n> acquirefunc=0x66f71a <acquire_sample_rows>, relpages=4, inh=false,\n> in_outer_xact=false, elevel=13) at analyze.c:595\n> \n> \n> Attached is an incremental update fixing those issues, together with a\n> few more suggested improvements:\n> \n> There was quite a bit of code duplication in extended_stats.c which I\n> attempted to reduce by\n> \n> 1). Deleting examine_opclause_expression() in favour of examine_clause_args().\n> 2). Deleting examine_opclause_expression2() in favour of examine_clause_args2().\n> 3). Merging examine_clause_args() and examine_clause_args2(), renaming\n> it examine_opclause_args() (which was actually the name it had in its\n> original doc comment, despite the name in the code being different).\n> 4). Merging statext_extract_expression() and\n> statext_extract_expression_internal() into\n> statext_is_compatible_clause() and\n> statext_is_compatible_clause_internal() respectively.\n> \n> That last change goes beyond just removing code duplication. It allows\n> support for compound clauses that contain a mix of attribute and\n> expression clauses, for example, this simple test case wasn't\n> previously estimated well:\n> \n> CREATE TABLE foo (a int, b int, c int);\n> INSERT INTO foo SELECT x/100, x/100, x/100 FROM generate_series(1,10000) g(x);\n> CREATE STATISTICS foo_s on a,b,(c*c) FROM foo;\n> ANALYSE foo;\n> EXPLAIN ANALYSE SELECT * FROM foo WHERE a=1 AND (b=1 OR c*c=1);\n> \n> I didn't add any new regression tests, but perhaps it would be worth\n> adding something to test a case like that.\n> \n> I changed choose_best_statistics() in a couple of ways. Firstly, I\n> think it wants to only count expressions from fully covered clauses,\n> just as we only count attributes if the stat covers all the attributes\n> from a clause, since otherwise the stat cannot estimate the clause, so\n> it shouldn't count. Secondly, I think the number of expressions in the\n> stat needs to be added to its number of keys, so that the choice of\n> narrowest stat with the same number of matches counts expressions in\n> the same way as attributes.\n> \n> I simplified the code in statext_mcv_clauselist_selectivity(), by\n> attempting to handle expressions and attributes together in the same\n> way, making it much closer to the original code. I don't think that\n> the check for the existence of a stat covering all the expressions in\n> a clause was necessary when pre-processing the list of clauses, since\n> that's checked later on, so it's enough to just detect compatible\n> clauses. Also, it now checks for stats that cover both the attributes\n> and the expressions from each clause, rather than one or the other, to\n> cope with examples like the one above. I also updated the check for\n> simple_clauses -- what's wanted there is to identify clauses that only\n> reference a single column or a single expression, so that the later\n> code doesn't apply multi-column estimates to it.\n> \n> I'm attaching it as an incremental patch (0004) on top of your patches,\n> but if 0003 and 0004 are collapsed together, the total number of diffs\n> is less than 0003 alone.\n> \n\nThanks. All of this seems like a clear improvement, both removing the\nduplicate copy-pasted code, and fixing the handling of the cases that\nmix plain variables and expressions. FWIW I agree there should be a\nregression test for this, so I'll add one.\n\nI think the main remaining issue is how we handle the expressions in\nbitmapsets. I've been experimenting with this a bit, but shifting the\nregular attnums and stashing expressions before them seems quite\ncomplex, especially when we don't know how many expressions there are\n(e.g. when merging functional dependencies). It's true using attnums\nabove MaxHeapAttributeNumber for expressions wastes ~200B, but is that\nreally an issue, considering it's a very short-lived allocation?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Feb 2021 19:40:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
        "msg_contents": "Hi,\n\nAttached is a rebased patch series, merging the changes from the last\nreview into the 0003 patch, and with a WIP patch 0004 reworking the\ntracking of expressions (to address the inefficiency due to relying on\nMaxHeapAttributeNumber).\n\nThe 0004 is very much an experimental patch with a lot of ad hoc\nchanges. It passes make check, but it definitely needs much more work,\ncleanup and testing. At this point it's more a demonstration of what\nwould be needed to rework it like this.\n\nThe main change is that instead of handling expressions by assigning\nthem attnums above MaxHeapAttributeNumber, we assign them system-like\nattnums, i.e. negative ones. So the first one gets -1, the second one\n-2, etc. And then we shift all attnums above 0, to allow using the\nbitmapset as before.\n\nOverall, this works, but the shifting is kinda pointless - it allows us\nto build a bitmapset, but it's mostly useless because it depends on how\nmany expressions are in the statistics definition. So we can't compare\nor combine bitmapsets for different statistics, and (more importantly)\nwe can't easily compare bitmapsets on attnums from clauses.\n\nUsing MaxHeapAttributeNumber allowed using the bitmapsets at least for\nregular attributes. Not sure if that's a major advantage, outweighing\nwasting some space.\n\nI wonder if we should just ditch the bitmapsets, and just use simple\narrays of attnums. I don't think we expect too many elements here,\nespecially when dealing with individual statistics. So now we're just\nbuilding and rebuilding the bitmapsets ... seems pointless.\n\n\nOne thing I'd like to improve (independently of what we do with the\nbitmapsets) is getting rid of the distinction between attributes and\nexpressions when building the statistics - currently all the various\nplaces have to care about whether the item is attribute or expression,\nand look either into the tuple or array of pre-calculated value, do\nvarious shifts to get the indexes, etc. That's quite tedious, and I've\nmade a lot of errors in that (and I'm sure there are more). So IMO we\nshould simplify this by replacing this with something containing values\nfor both attributes and expressions, handling it in a unified way.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 18 Feb 2021 02:31:44 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n\nAttached is a slightly improved version of the patch series, addressing\nmost of the issues raised in the previous message.\n\n\n0001-bootstrap-convert-Typ-to-a-List-20210304.patch\n0002-Allow-composite-types-in-bootstrap-20210304.patch\n\nThese two parts are without any changes.\n\n\n0003-Extended-statistics-on-expressions-20210304.patch\n\nMostly unchanged, The one improvement is removing some duplicate code in\nin mvc.c. When building the match bitmap for clauses, some of the clause\ntypes had one block for plain attributes, then a nearly identical block\nfor expressions. I got rid of that - the only thing that is really\ndifferent is determining the statistics dimension.\n\n\n0004-WIP-rework-tracking-of-expressions-20210304.patch\n\nThis is mostly unchanged of the patch reworking how we assign artificial\nattnums to expressions (negative instead of (MaxHeapAttributeNumber+i)).\nI said I want to do some cleanup, but I ended up doing most of that in\nthe 0005 patch - and I plan to squash both parts into 0003 in the end. I\nleft them separate to make 0005 easier to review for now.\n\n\n0005-WIP-unify-handling-of-attributes-and-expres-20210304.patch\n\nThis reworks how we build statistics on attributes and expressions.\nInstead of treating attributes and expressions separately, this allows\nhandling them uniformly.\n\nUntil now, the various \"build\" functions (for different statistics\nkinds) extracted attribute values from sampled tuples, but expressions\nwere pre-calculated in a separate array. 
Firstly to save CPU time (not\nhaving to evaluate expensive expressions repeatedly) and to keep the\ndifferent stats consistent (there might be volatile functions etc.).\n\nSo the build functions had to look at the attnum, determine if it's\nattribute or expression, and in some cases it was tricky / easy to get\nwrong.\n\nThis patch replaces this \"split\" view with a simple \"consistent\"\nrepresentation merging values from attributes and expressions, and just\npasses that to the build functions. There's no need to check the attnum,\nand handle expressions in some special way, so the build functions are\nmuch simpler / easier to understand (at least I think so).\n\nThe build data is represented by \"StatsBuildData\" struct - not sure if\nthere's a better name.\n\nI'm mostly happy with how this turned out. I'm sure there's a bit more\ncleanup needed (e.g. the merging/remapping of dependencies needs some\nrefactoring, I think) but overall this seems reasonable.\n\nI did some performance testing, I don't think there's any measurable\nperformance degradation. I'm actually wondering if we need to transform\nthe AttrNumber arrays into bitmaps in various places - maybe we should\njust do a plain linear search. We don't really expect many elements, as\neach statistics has 8 attnums at most. So maybe building the bitmapsets\nis a net loss? The one exception might be functional dependencies, where\nwe can \"merge\" multiple statistics together. But even then it'd require\nmany statistics objects to make a difference.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 4 Mar 2021 23:16:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Mon, Jan 04, 2021 at 09:45:24AM -0600, Justin Pryzby wrote:\n> On Mon, Jan 04, 2021 at 03:34:08PM +0000, Dean Rasheed wrote:\n> > * I'm not sure I understand the need for 0001. Wasn't there an earlier\n> > version of this patch that just did it by re-populating the type\n> > array, but which still had it as an array rather than turning it into\n> > a list? Making it a list falsifies some of the comments and\n> > function/variable name choices in that file.\n> \n> This part is from me.\n> \n> I can review the names if it's desired , but it'd be fine to fall back to the\n> earlier patch. I thought a pglist was cleaner, but it's not needed.\n\nThis updates the preliminary patches to address the issues Dean raised.\n\nOne advantage of using a pglist is that we can free it by calling\nlist_free_deep(Typ), rather than looping to free each of its elements.\nBut maybe for bootstrap.c it doesn't matter, and we can just write:\n| Typ = NULL; /* Leak the old Typ array */\n\n-- \nJustin",
"msg_date": "Thu, 4 Mar 2021 18:43:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 3/5/21 1:43 AM, Justin Pryzby wrote:\n> On Mon, Jan 04, 2021 at 09:45:24AM -0600, Justin Pryzby wrote:\n>> On Mon, Jan 04, 2021 at 03:34:08PM +0000, Dean Rasheed wrote:\n>>> * I'm not sure I understand the need for 0001. Wasn't there an earlier\n>>> version of this patch that just did it by re-populating the type\n>>> array, but which still had it as an array rather than turning it into\n>>> a list? Making it a list falsifies some of the comments and\n>>> function/variable name choices in that file.\n>>\n>> This part is from me.\n>>\n>> I can review the names if it's desired , but it'd be fine to fall back to the\n>> earlier patch. I thought a pglist was cleaner, but it's not needed.\n> \n> This updates the preliminary patches to address the issues Dean raised.\n> \n> One advantage of using a pglist is that we can free it by calling\n> list_free_deep(Typ), rather than looping to free each of its elements.\n> But maybe for bootstrap.c it doesn't matter, and we can just write:\n> | Typ = NULL; /* Leak the old Typ array */\n> \n\nThanks. I'll switch this in the next version of the patch series.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 5 Mar 2021 02:17:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Thu, 4 Mar 2021 at 22:16, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> Attached is a slightly improved version of the patch series, addressing\n> most of the issues raised in the previous message.\n\nCool. Sorry for the delay replying.\n\n> 0003-Extended-statistics-on-expressions-20210304.patch\n>\n> Mostly unchanged, The one improvement is removing some duplicate code in\n> in mvc.c.\n>\n> 0004-WIP-rework-tracking-of-expressions-20210304.patch\n>\n> This is mostly unchanged of the patch reworking how we assign artificial\n> attnums to expressions (negative instead of (MaxHeapAttributeNumber+i)).\n\nLooks good.\n\nI see you undid the change to get_relation_statistics() in plancat.c,\nwhich offset the attnums of plain attributes in the StatisticExtInfo\nstruct. I was going to suggest that as a simplification to the\nprevious 0004 patch. Related to that, is this comment in\ndependencies_clauselist_selectivity():\n\n /*\n * Count matching attributes - we have to undo two attnum offsets.\n * First, the dependency is offset using the number of expressions\n * for that statistics, and then (if it's a plain attribute) we\n * need to apply the same offset as above, by unique_exprs_cnt.\n */\n\nwhich needs updating, since there is now just one attnum offset, not\ntwo. Only the unique_exprs_cnt offset is relevant now.\n\nAlso, related to that change, I don't think that\nstat_covers_attributes() is needed anymore. 
I think that the code that\ncalls it can now just be reverted back to using bms_is_subset(), since\nthat bitmapset holds plain attributes that aren't offset.\n\n> 0005-WIP-unify-handling-of-attributes-and-expres-20210304.patch\n>\n> This reworks how we build statistics on attributes and expressions.\n> Instead of treating attributes and expressions separately, this allows\n> handling them uniformly.\n>\n> Until now, the various \"build\" functions (for different statistics\n> kinds) extracted attribute values from sampled tuples, but expressions\n> were pre-calculated in a separate array. Firstly to save CPU time (not\n> having to evaluate expensive expressions repeatedly) and to keep the\n> different stats consistent (there might be volatile functions etc.).\n>\n> So the build functions had to look at the attnum, determine if it's\n> attribute or expression, and in some cases it was tricky / easy to get\n> wrong.\n>\n> This patch replaces this \"split\" view with a simple \"consistent\"\n> representation merging values from attributes and expressions, and just\n> passes that to the build functions. There's no need to check the attnum,\n> and handle expressions in some special way, so the build functions are\n> much simpler / easier to understand (at least I think so).\n>\n> The build data is represented by \"StatsBuildData\" struct - not sure if\n> there's a better name.\n>\n> I'm mostly happy with how this turned out. I'm sure there's a bit more\n> cleanup needed (e.g. the merging/remapping of dependencies needs some\n> refactoring, I think) but overall this seems reasonable.\n\nAgreed. That's a nice improvement.\n\nI wonder if dependency_is_compatible_expression() can be merged with\ndependency_is_compatible_clause() to reduce code duplication. 
It\nprobably also ought to be possible to support \"Expr IN Array\" there,\nin a similar way to the other code in statext_is_compatible_clause().\nAlso, should this check rinfo->clause_relids against the passed-in\nrelid to rule out clauses referencing other relations, in the same way\nthat statext_is_compatible_clause() does?\n\n> I did some performance testing, I don't think there's any measurable\n> performance degradation. I'm actually wondering if we need to transform\n> the AttrNumber arrays into bitmaps in various places - maybe we should\n> just do a plain linear search. We don't really expect many elements, as\n> each statistics has 8 attnums at most. So maybe building the bitmapsets\n> is a net loss? The one exception might be functional dependencies, where\n> we can \"merge\" multiple statistics together. But even then it'd require\n> many statistics objects to make a difference.\n\nPossibly. There's a danger in trying to change too much at once\nthough. As it stands, I think it's fairly close to being committable,\nwith just a little more tidying up.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 5 Mar 2021 14:32:29 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n\nHere is an updated version of the patch series, addressing most of the \nissues raised so far. It also adapts the new version of the bootstrap \npatches shared by Justin on March 3.\n\nI've merged the two patches reworking tracking of expressions, that I \nkept separate to make review easier. But as we agree it's a good \napproach, I've merged them into the main patch. FWIW I agree trying to \nundo the bitmapsets entirely would be a step too far - the patch is \nalready quite large, so I'll leave it for the future.\n\nAs for the changes, I did a bunch of cleanup in the code supporting \nfunctional dependencies and mcv. Most of this was cosmetic in the \"not \nchanging\" behavior - comments, undoing some unnecessary changes to make \nthe code more like before, regression tests, etc.\n\nThere are a couple notable changes, worth mentioning explicitly.\n\n\n1) functional dependencies\n\nI looked at merging dependency_is_compatible_expression and \ndependency_is_compatible_clause, and I've made the code of those \nfunctions much more similar. I didn't go as far as actually merging \nthose functions. Maybe we actually should do that to correctly handle \n\"nested cases\" with Vars in expressions, but I'm not sure about that.\n\nI added support for the SAOP and OR clauses. Not sure about the OR case, \nbut SAOP was missing mostly because that feature was added after this \npatch was created.\n\nThis also required rethinking the handling of RestrictInfo - it was \nrequired for expressions but not for Vars, for some reason. But that \nwould not work for OR clauses. So now it's mostly what we do for Vars.\n\nThere's a couple minor FIXMEs remaining, I'll look into those next.\n\n\n2) ndistinct\n\nSo far the code in selfuncs.c using ndistinct stats to estimate GROUP BY \nwas quite WIP / experimental, and when I started looking at it, adding \nregression tests etc., I discovered a bunch of bugs. 
Some of that was \ndue to the reworks in tracking expressions, but not all. I fixed all of \nthat and cleaned the code quite a bit. I'm not going to claim it's bug \nfree, but I think it's in a much better shape now.\n\nThere's one thing that's bugging me, in how we handle \"partial\" matches. \nFor each expression we track both the original expression and the Vars \nwe extract from it. If we can't find a statistics matching the whole \nexpression, we try to match those individual Vars, and we remove the \nmatching ones from the list. And in the end we multiply the estimates \nfor the remaining Vars.\n\nThis works fine with one matching ndistinct statistics. Consider for example\n\n GROUP BY (a+b), (c+d)\n\nwith statistics on [(a+b),c] - that is, expression and one column. We \nparse the expressions into two GroupExprInfo\n\n {expr: (a+b), vars: [a, b]}\n {expr: (c+d), vars: [c, d]}\n\nand the statistics matches the first item exactly (the expression). The \nsecond expression is not in the statistics, but we match \"c\". So we end \nup with an estimate for \"(a+b), c\" and have one remaining GroupExprInfo:\n\n {expr: (c+d), vars: [d]}\n\nWithout any other statistics we estimate that as ndistinct for \"d\", so \nwe end up with\n\n ndistinct((a+b), c) * ndistinct(d)\n\nwhich mostly makes sense. It assumes ndistinct(c+d) is product of the \nndistinct estimates, but that's kinda what we've been always doing.\n\nBut now consider we have another statistics on just (c+d). In the second \nloop we end up matching this expression exactly, so we end up with\n\n ndistinct((a+b), c) * ndistinct((c+d))\n\ni.e. we kinda use the \"c\" twice. Which is a bit unfortunate. 
I think \nwhat we should do after the first loop is just discarding the whole \nexpression and \"expand\" into per-variable GroupExprInfo, so in the \nsecond step we would not match the (c+d) statistics.\n\nOf course, maybe there's a better way to pick the statistics, but I \nthink our conclusion so far was that people should just create \nstatistics covering all the columns in the query, to not have to match \nmultiple statistics like this.\n\n\n3) regression tests\n\nThe patch adds a bunch of regression tests - I admit I've been adding \nthe tests a bit arbitrarily, mostly copy-paste of existing tests and \ntweaking them to use expressions. This helped with identifying bugs, but \nthe runtime of the stats_ext test suite grew quite a lot - maybe 2-3x, \nand it's not one of the slowest cases on my system (~3 seconds). I think \nwe need to either reduce the number of new tests, or maybe move some of \nthe tests into a separate parallel test suite.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 7 Mar 2021 22:10:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Sun, 7 Mar 2021 at 21:10, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> 2) ndistinct\n>\n> There's one thing that's bugging me, in how we handle \"partial\" matches.\n> For each expression we track both the original expression and the Vars\n> we extract from it. If we can't find a statistics matching the whole\n> expression, we try to match those individual Vars, and we remove the\n> matching ones from the list. And in the end we multiply the estimates\n> for the remaining Vars.\n>\n> This works fine with one matching ndistinct statistics. Consider for example\n>\n> GROUP BY (a+b), (c+d)\n>\n> with statistics on [(a+b),c] - that is, expression and one column.\n\nI've just been going over this example, and I think it actually works\nslightly differently from how you described, though I haven't worked\nout the full general implications of that.\n\n> We parse the expressions into two GroupExprInfo\n>\n> {expr: (a+b), vars: [a, b]}\n> {expr: (c+d), vars: [c, d]}\n>\n\nHere, I think what you actually get, in the presence of stats on\n[(a+b),c] is actually the following two GroupExprInfos:\n\n {expr: (a+b), vars: []}\n {expr: (c+d), vars: [c, d]}\n\nbecause of the code immediately after this comment in estimate_num_groups():\n\n /*\n * If examine_variable is able to deduce anything about the GROUP BY\n * expression, treat it as a single variable even if it's really more\n * complicated.\n */\n\nAs it happens, that makes no difference in this case, where there is\njust this one stats object, but it does change things when there are\ntwo stats objects.\n\n> and the statistics matches the first item exactly (the expression). The\n> second expression is not in the statistics, but we match \"c\". 
So we end\n> up with an estimate for \"(a+b), c\" and have one remaining GroupExprInfo:\n>\n> {expr: (c+d), vars: [d]}\n\nRight.\n\n> Without any other statistics we estimate that as ndistinct for \"d\", so\n> we end up with\n>\n> ndistinct((a+b), c) * ndistinct(d)\n>\n> which mostly makes sense. It assumes ndistinct(c+d) is product of the\n> ndistinct estimates, but that's kinda what we've been always doing.\n\nYes, that appears to be what happens, and it's probably the best that\ncan be done with the available stats.\n\n> But now consider we have another statistics on just (c+d). In the second\n> loop we end up matching this expression exactly, so we end up with\n>\n> ndistinct((a+b), c) * ndistinct((c+d))\n\nIn this case, with stats on (c+d) as well, the two GroupExprInfos\nbuilt at the start change to:\n\n {expr: (a+b), vars: []}\n {expr: (c+d), vars: []}\n\nso it actually ends up not using any multivariate stats, but instead\nuses the two univariate expression stats, giving\n\n ndistinct((a+b)) * ndistinct((c+d))\n\nwhich actually seems pretty good, and gave very good estimates in the\nsimple test case I tried.\n\n> i.e. we kinda use the \"c\" twice. Which is a bit unfortunate. I think\n> what we should do after the first loop is just discarding the whole\n> expression and \"expand\" into per-variable GroupExprInfo, so in the\n> second step we would not match the (c+d) statistics.\n\nNot using the (c+d) stats would give either\n\n ndistinct((a+b)) * ndistinct(c) * ndistinct(d)\n\nor\n\n ndistinct((a+b), c) * ndistinct(d)\n\ndepending on exactly how the algorithm was changed. In my test, these\nboth gave worse estimates, but there are probably other datasets for\nwhich they might do better. 
It all depends on how much correlation\nthere is between (a+b) and c.\n\nI suspect that there is no optimal strategy for handling overlapping\nstats that works best for all datasets, but the current algorithm\nseems to do a pretty decent job.\n\n> Of course, maybe there's a better way to pick the statistics, but I\n> think our conclusion so far was that people should just create\n> statistics covering all the columns in the query, to not have to match\n> multiple statistics like this.\n\nYes, I think that's always likely to work better, especially for\nndistinct stats, where all possible permutations of subsets of the\ncolumns are included, so a single ndistinct stat can work well for a\nrange of different queries.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 17 Mar 2021 15:55:41 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi,\n\nOn 3/17/21 4:55 PM, Dean Rasheed wrote:\n> On Sun, 7 Mar 2021 at 21:10, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> 2) ndistinct\n>>\n>> There's one thing that's bugging me, in how we handle \"partial\" matches.\n>> For each expression we track both the original expression and the Vars\n>> we extract from it. If we can't find a statistics matching the whole\n>> expression, we try to match those individual Vars, and we remove the\n>> matching ones from the list. And in the end we multiply the estimates\n>> for the remaining Vars.\n>>\n>> This works fine with one matching ndistinct statistics. Consider for example\n>>\n>> GROUP BY (a+b), (c+d)\n>>\n>> with statistics on [(a+b),c] - that is, expression and one column.\n> \n> I've just been going over this example, and I think it actually works\n> slightly differently from how you described, though I haven't worked\n> out the full general implications of that.\n> \n>> We parse the expressions into two GroupExprInfo\n>>\n>> {expr: (a+b), vars: [a, b]}\n>> {expr: (c+d), vars: [c, d]}\n>>\n> \n> Here, I think what you actually get, in the presence of stats on\n> [(a+b),c] is actually the following two GroupExprInfos:\n> \n> {expr: (a+b), vars: []}\n> {expr: (c+d), vars: [c, d]}\n> \n\nYeah, right. To be precise, we get\n\n {expr: (a+b), vars: [(a+b)]}\n\nbecause in the first case we pass NIL, so add_unique_group_expr treats\nthe whole expression as a var (a bit strange, but OK).\n\n> because of the code immediately after this comment in estimate_num_groups():\n> \n> /*\n> * If examine_variable is able to deduce anything about the GROUP BY\n> * expression, treat it as a single variable even if it's really more\n> * complicated.\n> */\n> \n> As it happens, that makes no difference in this case, where there is\n> just this one stats object, but it does change things when there are\n> two stats objects.\n> \n>> and the statistics matches the first item exactly (the expression). 
The\n>> second expression is not in the statistics, but we match \"c\". So we end\n>> up with an estimate for \"(a+b), c\" and have one remaining GroupExprInfo:\n>>\n>> {expr: (c+d), vars: [d]}\n> \n> Right.\n> \n>> Without any other statistics we estimate that as ndistinct for \"d\", so\n>> we end up with\n>>\n>> ndistinct((a+b), c) * ndistinct(d)\n>>\n>> which mostly makes sense. It assumes ndistinct(c+d) is product of the\n>> ndistinct estimates, but that's kinda what we've been always doing.\n> \n> Yes, that appears to be what happens, and it's probably the best that\n> can be done with the available stats.\n> \n>> But now consider we have another statistics on just (c+d). In the second\n>> loop we end up matching this expression exactly, so we end up with\n>>\n>> ndistinct((a+b), c) * ndistinct((c+d))\n> \n> In this case, with stats on (c+d) as well, the two GroupExprInfos\n> built at the start change to:\n> \n> {expr: (a+b), vars: []}\n> {expr: (c+d), vars: []}\n> \n> so it actually ends up not using any multivariate stats, but instead\n> uses the two univariate expression stats, giving\n> \n> ndistinct((a+b)) * ndistinct((c+d))\n> \n> which actually seems pretty good, and gave very good estimates in the\n> simple test case I tried.\n> \n\nYeah, that works pretty well in this case.\n\nI wonder if we'd be better off extracting the Vars and doing what I\nmistakenly described as the current behavior. That's essentially mean\nextracting the Vars even in the case where we now pass NIL.\n\nMy concern is that the current behavior (where we prefer expression\nstats over multi-column stats to some extent) works fine as long as the\nparts are independent, but once there's dependency it's probably more\nlikely to produce underestimates. I think underestimates for grouping\nestimates were a risk in the past, so let's not make that worse.\n\n>> i.e. we kinda use the \"c\" twice. Which is a bit unfortunate. 
I think\n>> what we should do after the first loop is just discarding the whole\n>> expression and \"expand\" into per-variable GroupExprInfo, so in the\n>> second step we would not match the (c+d) statistics.\n> \n> Not using the (c+d) stats would give either\n> \n> ndistinct((a+b)) * ndistinct(c) * ndistinct(d)\n> \n> or\n> \n> ndistinct((a+b), c) * ndistinct(d)\n> \n> depending on exactly how the algorithm was changed. In my test, these\n> both gave worse estimates, but there are probably other datasets for\n> which they might do better. It all depends on how much correlation\n> there is between (a+b) and c.\n> \n> I suspect that there is no optimal strategy for handling overlapping\n> stats that works best for all datasets, but the current algorithm\n> seems to do a pretty decent job.\n> \n>> Of course, maybe there's a better way to pick the statistics, but I\n>> think our conclusion so far was that people should just create\n>> statistics covering all the columns in the query, to not have to match\n>> multiple statistics like this.\n> \n> Yes, I think that's always likely to work better, especially for\n> ndistinct stats, where all possible permutations of subsets of the\n> columns are included, so a single ndistinct stat can work well for a\n> range of different queries.\n> \n\nYeah, I agree that's a reasonable mitigation. Ultimately, there's no\nperfect algorithm how to pick and combine stats when we don't know if\nthere even is a statistical dependency between the subsets of columns.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Mar 2021 18:25:58 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, 17 Mar 2021 at 17:26, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> My concern is that the current behavior (where we prefer expression\n> stats over multi-column stats to some extent) works fine as long as the\n> parts are independent, but once there's dependency it's probably more\n> likely to produce underestimates. I think underestimates for grouping\n> estimates were a risk in the past, so let's not make that worse.\n>\n\nI'm not sure the current behaviour really is preferring expression\nstats over multi-column stats. In this example, where we're grouping\nby (a+b), (c+d) and have stats on [(a+b),c] and (c+d), neither of\nthose multi-column stats actually match more than one\ncolumn/expression. If anything, I'd go the other way and say that it\nwas wrong to use the [(a+b),c] stats in the first case, where they\nwere the only stats available, since those stats aren't really\napplicable to (c+d), which probably ought to be treated as\nindependent. IOW, it might have been better to estimate the first case\nas\n\n ndistinct((a+b)) * ndistinct(c) * ndistinct(d)\n\nand the second case as\n\n ndistinct((a+b)) * ndistinct((c+d))\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 17 Mar 2021 18:54:41 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 3/17/21 7:54 PM, Dean Rasheed wrote:\n> On Wed, 17 Mar 2021 at 17:26, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> My concern is that the current behavior (where we prefer expression\n>> stats over multi-column stats to some extent) works fine as long as the\n>> parts are independent, but once there's dependency it's probably more\n>> likely to produce underestimates. I think underestimates for grouping\n>> estimates were a risk in the past, so let's not make that worse.\n>>\n> \n> I'm not sure the current behaviour really is preferring expression\n> stats over multi-column stats. In this example, where we're grouping\n> by (a+b), (c+d) and have stats on [(a+b),c] and (c+d), neither of\n> those multi-column stats actually match more than one\n> column/expression. If anything, I'd go the other way and say that it\n> was wrong to use the [(a+b),c] stats in the first case, where they\n> were the only stats available, since those stats aren't really\n> applicable to (c+d), which probably ought to be treated as\n> independent. IOW, it might have been better to estimate the first case\n> as\n> \n> ndistinct((a+b)) * ndistinct(c) * ndistinct(d)\n> \n> and the second case as\n> \n> ndistinct((a+b)) * ndistinct((c+d))\n> \n\nOK. I might be confused, but isn't that what the algorithm currently\ndoes? Or am I just confused about what the first/second case refers to?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Mar 2021 20:07:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, 17 Mar 2021 at 19:07, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/17/21 7:54 PM, Dean Rasheed wrote:\n> >\n> > it might have been better to estimate the first case as\n> >\n> > ndistinct((a+b)) * ndistinct(c) * ndistinct(d)\n> >\n> > and the second case as\n> >\n> > ndistinct((a+b)) * ndistinct((c+d))\n>\n> OK. I might be confused, but isn't that what the algorithm currently\n> does? Or am I just confused about what the first/second case refers to?\n>\n\nNo, it currently estimates the first case as ndistinct((a+b),c) *\nndistinct(d). Having said that, maybe that's OK after all. It at least\nmakes an effort to account for any correlation between (a+b) and\n(c+d), using the known correlation between (a+b) and c. For reference,\nhere is the test case I was using (which isn't really very good for\ncatching dependence between columns):\n\nDROP TABLE IF EXISTS foo;\nCREATE TABLE foo (a int, b int, c int, d int);\nINSERT INTO foo SELECT x%10, x%11, x%12, x%13 FROM generate_series(1,100000) x;\nSELECT COUNT(DISTINCT a) FROM foo; -- 10\nSELECT COUNT(DISTINCT b) FROM foo; -- 11\nSELECT COUNT(DISTINCT c) FROM foo; -- 12\nSELECT COUNT(DISTINCT d) FROM foo; -- 13\nSELECT COUNT(DISTINCT (a+b)) FROM foo; -- 20\nSELECT COUNT(DISTINCT (c+d)) FROM foo; -- 24\nSELECT COUNT(DISTINCT ((a+b),c)) FROM foo; -- 228\nSELECT COUNT(DISTINCT ((a+b),(c+d))) FROM foo; -- 478\n\n-- First case: stats on [(a+b),c]\nCREATE STATISTICS s1(ndistinct) ON (a+b),c FROM foo;\nANALYSE foo;\nEXPLAIN ANALYSE\nSELECT (a+b), (c+d) FROM foo GROUP BY (a+b), (c+d);\n -- Estimate = 2964, Actual = 478\n -- This estimate is ndistinct((a+b),c) * ndistinct(d) = 228*13\n\n-- Second case: stats on (c+d) as well\nCREATE STATISTICS s2 ON (c+d) FROM foo;\nANALYSE foo;\nEXPLAIN ANALYSE\nSELECT (a+b), (c+d) FROM foo GROUP BY (a+b), (c+d);\n -- Estimate = 480, Actual = 478\n -- This estimate is ndistinct((a+b)) * ndistinct((c+d)) = 20*24\n\nI think that's probably pretty reasonable 
behaviour, given incomplete\nstats (the estimate with no extended stats is capped at 10000).\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 17 Mar 2021 20:48:51 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
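[The estimates Dean quotes above are plain products of per-group-key ndistinct values. A minimal Python sketch of that arithmetic — the helper name is invented for illustration; this is not PostgreSQL planner code:]

```python
from functools import reduce
from operator import mul

def independent_group_estimate(ndistincts):
    """Multiply per-group-key ndistinct values, assuming independence."""
    return reduce(mul, ndistincts, 1)

# First case: stats on [(a+b),c] exist but nothing covers (c+d), so the
# estimate is ndistinct((a+b),c) * ndistinct(d) = 228 * 13
print(independent_group_estimate([228, 13]))  # 2964 (actual: 478)

# Second case: expression stats on (c+d) as well:
# ndistinct((a+b)) * ndistinct((c+d)) = 20 * 24
print(independent_group_estimate([20, 24]))   # 480 (actual: 478)
```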
{
"msg_contents": "On Wed, 17 Mar 2021 at 20:48, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> For reference, here is the test case I was using (which isn't really very good for\n> catching dependence between columns):\n>\n\nAnd here's a test case with much more dependence between the columns:\n\nDROP TABLE IF EXISTS foo;\nCREATE TABLE foo (a int, b int, c int, d int);\nINSERT INTO foo SELECT x%2, x%5, x%10, x%15 FROM generate_series(1,100000) x;\nSELECT COUNT(DISTINCT a) FROM foo; -- 2\nSELECT COUNT(DISTINCT b) FROM foo; -- 5\nSELECT COUNT(DISTINCT c) FROM foo; -- 10\nSELECT COUNT(DISTINCT d) FROM foo; -- 15\nSELECT COUNT(DISTINCT (a+b)) FROM foo; -- 6\nSELECT COUNT(DISTINCT (c+d)) FROM foo; -- 20\nSELECT COUNT(DISTINCT ((a+b),c)) FROM foo; -- 10\nSELECT COUNT(DISTINCT ((a+b),(c+d))) FROM foo; -- 30\n\n-- First case: stats on [(a+b),c]\nCREATE STATISTICS s1(ndistinct) ON (a+b),c FROM foo;\nANALYSE foo;\nEXPLAIN ANALYSE\nSELECT (a+b), (c+d) FROM foo GROUP BY (a+b), (c+d);\n -- Estimate = 150, Actual = 30\n -- This estimate is ndistinct((a+b),c) * ndistinct(d) = 10*15,\n -- which is much better than ndistinct((a+b)) * ndistinct(c) *\nndistinct(d) = 6*10*15 = 900\n -- Estimate with no stats = 1500\n\n-- Second case: stats on (c+d) as well\nCREATE STATISTICS s2 ON (c+d) FROM foo;\nANALYSE foo;\nEXPLAIN ANALYSE\nSELECT (a+b), (c+d) FROM foo GROUP BY (a+b), (c+d);\n -- Estimate = 120, Actual = 30\n -- This estimate is ndistinct((a+b)) * ndistinct((c+d)) = 6*20\n\nAgain, I'd say the current behaviour is pretty good.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 17 Mar 2021 20:58:22 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 3/17/21 9:58 PM, Dean Rasheed wrote:\n> On Wed, 17 Mar 2021 at 20:48, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> For reference, here is the test case I was using (which isn't really very good for\n>> catching dependence between columns):\n>>\n> \n> And here's a test case with much more dependence between the columns:\n> \n> DROP TABLE IF EXISTS foo;\n> CREATE TABLE foo (a int, b int, c int, d int);\n> INSERT INTO foo SELECT x%2, x%5, x%10, x%15 FROM generate_series(1,100000) x;\n> SELECT COUNT(DISTINCT a) FROM foo; -- 2\n> SELECT COUNT(DISTINCT b) FROM foo; -- 5\n> SELECT COUNT(DISTINCT c) FROM foo; -- 10\n> SELECT COUNT(DISTINCT d) FROM foo; -- 15\n> SELECT COUNT(DISTINCT (a+b)) FROM foo; -- 6\n> SELECT COUNT(DISTINCT (c+d)) FROM foo; -- 20\n> SELECT COUNT(DISTINCT ((a+b),c)) FROM foo; -- 10\n> SELECT COUNT(DISTINCT ((a+b),(c+d))) FROM foo; -- 30\n> \n> -- First case: stats on [(a+b),c]\n> CREATE STATISTICS s1(ndistinct) ON (a+b),c FROM foo;\n> ANALYSE foo;\n> EXPLAIN ANALYSE\n> SELECT (a+b), (c+d) FROM foo GROUP BY (a+b), (c+d);\n> -- Estimate = 150, Actual = 30\n> -- This estimate is ndistinct((a+b),c) * ndistinct(d) = 10*15,\n> -- which is much better than ndistinct((a+b)) * ndistinct(c) *\n> ndistinct(d) = 6*10*15 = 900\n> -- Estimate with no stats = 1500\n> \n> -- Second case: stats on (c+d) as well\n> CREATE STATISTICS s2 ON (c+d) FROM foo;\n> ANALYSE foo;\n> EXPLAIN ANALYSE\n> SELECT (a+b), (c+d) FROM foo GROUP BY (a+b), (c+d);\n> -- Estimate = 120, Actual = 30\n> -- This estimate is ndistinct((a+b)) * ndistinct((c+d)) = 6*20\n> \n> Again, I'd say the current behaviour is pretty good.\n> \n\nThanks!\n\nI agree applying at least the [(a+b),c] stats is probably the right\napproach, as it means we're considering at least the available\ninformation about dependence between the columns.\n\nI think to improve this, we'll need to teach the code to use overlapping\nstatistics, a bit like conditional probability. 
In this case we might do\nsomething like this:\n\n ndistinct((a+b),c) * (ndistinct((c+d)) / ndistinct(c))\n\nWhich in this case would be either, for the \"less correlated\" case\n\n 228 * 24 / 12 = 456 (actual = 478, current estimate = 480)\n\nor, for the \"more correlated\" case\n\n 10 * 20 / 10 = 20 (actual = 30, current estimate = 120)\n\nBut that's clearly a matter for a future patch, and I'm sure there are\ncases where this will produce worse estimates.\n\n\nAnyway, I plan to go over the patches one more time, and start pushing\nthem sometime early next week. I don't want to leave it until the very\nlast moment in the CF.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Mar 2021 22:30:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
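[The overlapping-statistics correction sketched in the message above can be written as a tiny function. This is only an illustration of the arithmetic, not planner code; note that 228 * 24 / 12 works out to 456:]

```python
def conditional_ndistinct(nd_joint_with_c, nd_cd, nd_c):
    """ndistinct((a+b),c) * ndistinct((c+d)) / ndistinct(c) --
    a conditional-probability style use of overlapping statistics."""
    return nd_joint_with_c * nd_cd // nd_c

# "less correlated" case: 228 * 24 / 12
print(conditional_ndistinct(228, 24, 12))  # 456 (actual 478, current 480)

# "more correlated" case: 10 * 20 / 10
print(conditional_ndistinct(10, 20, 10))   # 20 (actual 30, current 120)
```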
{
"msg_contents": "On Wed, 17 Mar 2021 at 21:31, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I agree applying at least the [(a+b),c] stats is probably the right\n> approach, as it means we're considering at least the available\n> information about dependence between the columns.\n>\n> I think to improve this, we'll need to teach the code to use overlapping\n> statistics, a bit like conditional probability. In this case we might do\n> something like this:\n>\n> ndistinct((a+b),c) * (ndistinct((c+d)) / ndistinct(c))\n\nYes, I was thinking the same thing. That would be equivalent to\napplying a multiplicative \"correction\" factor of\n\n ndistinct(a,b,c,...) / ( ndistinct(a) * ndistinct(b) * ndistinct(c) * ... )\n\nfor each multivariate stat applicable to more than one\ncolumn/expression, regardless of whether those columns were already\ncovered by other multivariate stats. That might well simplify the\nimplementation, as well as probably produce better estimates.\n\n> But that's clearly a matter for a future patch, and I'm sure there are\n> cases where this will produce worse estimates.\n\nAgreed.\n\n> Anyway, I plan to go over the patches one more time, and start pushing\n> them sometime early next week. I don't want to leave it until the very\n> last moment in the CF.\n\n+1. I think they're in good enough shape for that process to start.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 18 Mar 2021 07:54:38 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
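[Dean's multiplicative correction factor can be sketched as follows. The data structures here are hypothetical, chosen only to show how each multivariate statistic would scale the independent product:]

```python
from functools import reduce
from operator import mul

def corrected_estimate(per_expr_ndistinct, multivariate_stats):
    """Start from the independent product, then apply
    ndistinct(a,b,...) / (ndistinct(a) * ndistinct(b) * ...)
    for every multivariate statistic covering more than one expression."""
    est = reduce(mul, per_expr_ndistinct.values(), 1.0)
    for exprs, joint_nd in multivariate_stats:
        independent = reduce(mul, (per_expr_ndistinct[e] for e in exprs), 1.0)
        est *= joint_nd / independent
    return est

nd = {"a+b": 6, "c+d": 20}          # from the correlated test case
print(corrected_estimate(nd, []))   # 120.0 -- plain independent product
# with a (hypothetical) statistic on ((a+b),(c+d)) whose ndistinct is 30:
print(corrected_estimate(nd, [(("a+b", "c+d"), 30)]))  # 30.0
```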
{
"msg_contents": "Hi,\n\nI've pushed the first two parts, dealing with composite types during\nbootstrap. I've decided to commit both, including the array->list\nconversion, as that makes the reloads simpler. I've made two tweaks:\n\n1) I've renamed the function to reload_typ_list, which I think is better\n(and it used to be reload_typ_array).\n\n2) I've removed the did_reread assert. It'd allow just a single reload,\nwhich blocks recursive composite types - seems unnecessary, although we\ndon't need that now. I think we can't have infinite recursion, because\nwe can only load types from already defined catalogs (so no cycles).\n\n\nI've rebased and cleaned up the main part of the patch. There's a bunch\nof comments slightly reworded / cleaned up, etc. The more significant\nchanges are:\n\n1) The explanation of the example added to create_statistics.sgml was\nsomewhat wrong, so I corrected that.\n\n2) I've renamed StatBuildData to StatsBuildData.\n\n3) I've resolved the FIXMEs in examine_expression.\n\nWe don't need to do anything special about the collation, because unlike\nindexes it's not possible to specify \"collate\" for the attributes. It's\npossible to do that in the expression, but exprCollation handles that.\n\nFor statistics target, we simply use the value determined for the\nstatistics itself. There's no way to specify that for expressions.\n\n4) I've updated the comments about ndistinct estimates in selfuncs.c,\nbecause some of it was a bit wrong/misleading - per the discussion we\nhad about matching stats to expressions.\n\n5) I've also tweaked the add_unique_group_expr so that we don't have to\nrun examine_variable() repeatedly if we already have it.\n\n6) Resolved the FIXME about acl_ok in examine_variable. Instead of just\nsetting it to 'true' it now mimics what we do for indexes. I think it's\ncorrect, but this is probably worth a review.\n\n7) psql now prints expressions only for (version > 14). 
I've considered\ntweaking the existing block, but that was quite incomprehensible so I\njust made a modified copy.\n\nI think this is 99.999% ready to be pushed, so barring objections I'll\ndo that in a day or two.\n\nThe part 0003 is a small tweak I'll consider, preferring exact matches\nof expressions over Var matches. It's not a clear win (but what is, in a\ngreedy algorithm), but it does improve one of the regression tests.\nMinor change, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 24 Mar 2021 01:51:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Most importantly, it looks like this forgets to update catalog documentation\nfor stxexprs and stxkind='e'\n\nIt seems like you're preferring to use pluralized \"statistics\" in a lot of\nplaces that sound wrong to me. For example:\n> Currently the first statistics wins, which seems silly.\nI can write more separately, but I think this is resolved and clarified if you\nwrite \"statistics object\" and not just \"statistics\". \n\n> + Name of schema containing table\n\nI don't know about the nearby descriptions, but this one sounds too much like a\n\"schema-containing\" table. Say \"Name of the schema which contains the table\" ?\n\n> + Name of table\n\nSay \"name of table on which the extended statistics are defined\"\n\n> + Name of extended statistics\n\n\"Name of the extended statistic object\"\n\n> + Owner of the extended statistics\n\n..object\n\n> + Expression the extended statistics is defined on\n\nI think it should say \"the extended statistic\", or \"the extended statistics\nobject\". Maybe \"..on which the extended statistic is defined\"\n\n> + of random access to the disk. (This expression is null if the expression\n> + data type does not have a <literal><</literal> operator.)\n\nexpression's data type\n\n> + much-too-small row count estimate in the first two queries. Moreover, the\n\nmaybe say \"dramatically underestimates the rowcount\"\n\n> + planner has no information about relationship between the expressions, so it\n\nthe relationship\n\n> + assumes the two <literal>WHERE</literal> and <literal>GROUP BY</literal>\n> + conditions are independent, and multiplies their selectivities together to\n> + arrive at a much-too-high group count estimate in the aggregate query.\n\nsevere overestimate ?\n\n> + This is further exacerbated by the lack of accurate statistics for the\n> + expressions, forcing the planner to use default ndistinct estimate for the\n\nuse *a default\n\n> + expression derived from ndistinct for the column. 
With such statistics, the\n> + planner recognizes that the conditions are correlated and arrives at much\n> + more accurate estimates.\n\nare correlated comma\n\n> +\t\t\tif (type->lt_opr == InvalidOid)\n\nThese could be !OidIsValid\n\n> +\t * expressions. It's either expensive or very easy to defeat for\n> +\t * determined used, and there's no risk if we allow such statistics (the\n> +\t * statistics is useless, but harmless).\n\nI think it's meant to say \"for a determined user\" ?\n\n> +\t * If there are no simply-referenced columns, give the statistics an auto\n> +\t * dependency on the whole table. In most cases, this will be redundant,\n> +\t * but it might not be if the statistics expressions contain no Vars\n> +\t * (which might seem strange but possible).\n> +\t */\n> +\tif (!nattnums)\n> +\t{\n> +\t\tObjectAddressSet(parentobject, RelationRelationId, relid);\n> +\t\trecordDependencyOn(&myself, &parentobject, DEPENDENCY_AUTO);\n> +\t}\n\nCan this be unconditional ?\n\n> +\t * Translate the array of indexs to regular attnums for the dependency (we\n\nsp: indexes\n\n> +\t\t\t\t\t * Not found a matching expression, so we can simply skip\n\nFound no matching expr\n\n> +\t\t\t\t/* if found a matching, */\n\nmatching ..\n\n> +examine_attribute(Node *expr)\n\nMaybe you should rename this to something distinct ? So it's easy to add a\nbreakpoint there, for example.\n\n> +\tstats->anl_context = CurrentMemoryContext;\t/* XXX should be using\n> +\t\t\t\t\t\t\t\t\t\t\t\t * something else? 
*/\n\n> +\t\tbool\t\tnulls[Natts_pg_statistic];\n...\n> +\t\t * Construct a new pg_statistic tuple\n> +\t\t */\n> +\t\tfor (i = 0; i < Natts_pg_statistic; ++i)\n> +\t\t{\n> +\t\t\tnulls[i] = false;\n> +\t\t}\n\nShouldn't you just write nulls[Natts_pg_statistic] = {false};\nor at least: memset(nulls, 0, sizeof(nulls));\n\n> +\t\t\t\t * We don't store collations used to build the statistics, but\n> +\t\t\t\t * we can use the collation for the attribute itself, as\n> +\t\t\t\t * stored in varcollid. We do reset the statistics after a\n> +\t\t\t\t * type change (including collation change), so this is OK. We\n> +\t\t\t\t * may need to relax this after allowing extended statistics\n> +\t\t\t\t * on expressions.\n\nThis text should be updated or removed ?\n\n> @@ -2705,7 +2705,108 @@ describeOneTableDetails(const char *schemaname,\n> \t\t}\n> \n> \t\t/* print any extended statistics */\n> -\t\tif (pset.sversion >= 100000)\n> +\t\tif (pset.sversion >= 140000)\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&buf,\n> +\t\t\t\t\t\t\t \"SELECT oid, \"\n> +\t\t\t\t\t\t\t \"stxrelid::pg_catalog.regclass, \"\n> +\t\t\t\t\t\t\t \"stxnamespace::pg_catalog.regnamespace AS nsp, \"\n> +\t\t\t\t\t\t\t \"stxname,\\n\"\n> +\t\t\t\t\t\t\t \"pg_get_statisticsobjdef_columns(oid) AS columns,\\n\"\n> +\t\t\t\t\t\t\t \" 'd' = any(stxkind) AS ndist_enabled,\\n\"\n> +\t\t\t\t\t\t\t \" 'f' = any(stxkind) AS deps_enabled,\\n\"\n> +\t\t\t\t\t\t\t \" 'm' = any(stxkind) AS mcv_enabled,\\n\");\n> +\n> +\t\t\tif (pset.sversion >= 130000)\n> +\t\t\t\tappendPQExpBufferStr(&buf, \" stxstattarget\\n\");\n> +\t\t\telse\n> +\t\t\t\tappendPQExpBufferStr(&buf, \" -1 AS stxstattarget\\n\");\n\n >= 130000 is fully determined by >= 14000 :)\n\n> +\t * type of the opclass, which is not interesting for our purposes. 
(Note:\n> +\t * if we did anything with non-expression index columns, we'd need to\n\nindex is wrong ?\n\nI mentioned a bunch of other references to \"index\" and \"predicate\" which are\nstill around:\n\nOn Thu, Jan 07, 2021 at 08:35:37PM -0600, Justin Pryzby wrote:\n> There's some remaining copy/paste stuff from index expressions:\n> \n> errmsg(\"statistics expressions and predicates can refer only to the table being indexed\")));\n> left behind by evaluating the predicate or index expressions.\n> Set up for predicate or expression evaluation\n> Need an EState for evaluation of index expressions and\n> partial-index predicates. Create it in the per-index context to be\n> Fetch function for analyzing index expressions.\n\n\n",
"msg_date": "Wed, 24 Mar 2021 01:24:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "I got this crash running sqlsmith:\n\n#1 0x00007f907574b801 in __GI_abort () at abort.c:79\n#2 0x00005646b95a35f8 in ExceptionalCondition (conditionName=conditionName@entry=0x5646b97411db \"bms_num_members(varnos) == 1\", errorType=errorType@entry=0x5646b95fa00b \"FailedAssertion\", \n fileName=fileName@entry=0x5646b9739dbe \"selfuncs.c\", lineNumber=lineNumber@entry=3332) at assert.c:69\n#3 0x00005646b955c9a1 in add_unique_group_expr (vars=0x5646bbd9e200, expr=0x5646b9eb0c30, exprinfos=0x5646bbd9e100, root=0x5646ba9a0cb0) at selfuncs.c:3332\n#4 add_unique_group_expr (root=0x5646ba9a0cb0, exprinfos=0x5646bbd9e100, expr=0x5646b9eb0c30, vars=0x5646bbd9e200) at selfuncs.c:3307\n#5 0x00005646b955d560 in estimate_num_groups () at selfuncs.c:3558\n#6 0x00005646b93ad004 in create_distinct_paths (input_rel=<optimized out>, root=0x5646ba9a0cb0) at planner.c:4808\n#7 grouping_planner () at planner.c:2238\n#8 0x00005646b93ae0ef in subquery_planner (glob=glob@entry=0x5646ba9a0b98, parse=parse@entry=0x5646ba905d80, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0)\n at planner.c:1024\n#9 0x00005646b93af543 in standard_planner (parse=0x5646ba905d80, query_string=<optimized out>, cursorOptions=256, boundParams=<optimized out>) at planner.c:404\n#10 0x00005646b94873ac in pg_plan_query (querytree=0x5646ba905d80, \n query_string=0x5646b9cd87e0 \"select distinct \\n \\n pg_catalog.variance(\\n cast(pg_catalog.pg_stat_get_bgwriter_timed_checkpoints() as int8)) over (partition by subq_0.c2 order by subq_0.c0) as c0, \\n subq_0.c2 as c1, \\n sub\"..., cursorOptions=256, boundParams=0x0) at postgres.c:821\n#11 0x00005646b94874a1 in pg_plan_queries (querytrees=0x5646baaba250, \n query_string=query_string@entry=0x5646b9cd87e0 \"select distinct \\n \\n pg_catalog.variance(\\n cast(pg_catalog.pg_stat_get_bgwriter_timed_checkpoints() as int8)) over (partition by subq_0.c2 order by subq_0.c0) as c0, \\n subq_0.c2 
as c1, \\n sub\"..., cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:912\n#12 0x00005646b9487888 in exec_simple_query () at postgres.c:1104\n\n2021-03-24 03:06:12.489 CDT postmaster[11653] LOG: server process (PID 11696) was terminated by signal 6: Aborted\n2021-03-24 03:06:12.489 CDT postmaster[11653] DETAIL: Failed process was running: select distinct \n \n pg_catalog.variance(\n cast(pg_catalog.pg_stat_get_bgwriter_timed_checkpoints() as int8)) over (partition by subq_0.c2 order by subq_0.c0) as c0, \n subq_0.c2 as c1, \n subq_0.c0 as c2, \n subq_0.c2 as c3, \n subq_0.c1 as c4, \n subq_0.c1 as c5, \n subq_0.c0 as c6\n from \n (select \n ref_1.foreign_server_catalog as c0, \n ref_1.authorization_identifier as c1, \n sample_2.tgname as c2, \n ref_1.foreign_server_catalog as c3\n from \n pg_catalog.pg_stat_database_conflicts as ref_0\n left join information_schema._pg_user_mappings as ref_1\n on (ref_0.datname < ref_0.datname)\n inner join pg_catalog.pg_amproc as sample_0 tablesample system (5) \n on (cast(null as uuid) < cast(null as uuid))\n left join pg_catalog.pg_aggregate as sample_1 tablesample system (2.9) \n on (sample_0.amprocnum = sample_1.aggnumdirectargs )\n inner join pg_catalog.pg_trigger as sample_2 tablesampl\n\n\n",
"msg_date": "Wed, 24 Mar 2021 03:14:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Hi Justin,\n\nUnfortunately the query is incomplete, so I can't quite determine what\nwent wrong. Can you extract the full query causing the crash, either\nfrom the server log or from a core file?\n\nthanks\n\nOn 3/24/21 9:14 AM, Justin Pryzby wrote:\n> I got this crash running sqlsmith:\n> \n> #1 0x00007f907574b801 in __GI_abort () at abort.c:79\n> #2 0x00005646b95a35f8 in ExceptionalCondition (conditionName=conditionName@entry=0x5646b97411db \"bms_num_members(varnos) == 1\", errorType=errorType@entry=0x5646b95fa00b \"FailedAssertion\", \n> fileName=fileName@entry=0x5646b9739dbe \"selfuncs.c\", lineNumber=lineNumber@entry=3332) at assert.c:69\n> #3 0x00005646b955c9a1 in add_unique_group_expr (vars=0x5646bbd9e200, expr=0x5646b9eb0c30, exprinfos=0x5646bbd9e100, root=0x5646ba9a0cb0) at selfuncs.c:3332\n> #4 add_unique_group_expr (root=0x5646ba9a0cb0, exprinfos=0x5646bbd9e100, expr=0x5646b9eb0c30, vars=0x5646bbd9e200) at selfuncs.c:3307\n> #5 0x00005646b955d560 in estimate_num_groups () at selfuncs.c:3558\n> #6 0x00005646b93ad004 in create_distinct_paths (input_rel=<optimized out>, root=0x5646ba9a0cb0) at planner.c:4808\n> #7 grouping_planner () at planner.c:2238\n> #8 0x00005646b93ae0ef in subquery_planner (glob=glob@entry=0x5646ba9a0b98, parse=parse@entry=0x5646ba905d80, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0)\n> at planner.c:1024\n> #9 0x00005646b93af543 in standard_planner (parse=0x5646ba905d80, query_string=<optimized out>, cursorOptions=256, boundParams=<optimized out>) at planner.c:404\n> #10 0x00005646b94873ac in pg_plan_query (querytree=0x5646ba905d80, \n> query_string=0x5646b9cd87e0 \"select distinct \\n \\n pg_catalog.variance(\\n cast(pg_catalog.pg_stat_get_bgwriter_timed_checkpoints() as int8)) over (partition by subq_0.c2 order by subq_0.c0) as c0, \\n subq_0.c2 as c1, \\n sub\"..., cursorOptions=256, boundParams=0x0) at postgres.c:821\n> #11 0x00005646b94874a1 in 
pg_plan_queries (querytrees=0x5646baaba250, \n> query_string=query_string@entry=0x5646b9cd87e0 \"select distinct \\n \\n pg_catalog.variance(\\n cast(pg_catalog.pg_stat_get_bgwriter_timed_checkpoints() as int8)) over (partition by subq_0.c2 order by subq_0.c0) as c0, \\n subq_0.c2 as c1, \\n sub\"..., cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:912\n> #12 0x00005646b9487888 in exec_simple_query () at postgres.c:1104\n> \n> 2021-03-24 03:06:12.489 CDT postmaster[11653] LOG: server process (PID 11696) was terminated by signal 6: Aborted\n> 2021-03-24 03:06:12.489 CDT postmaster[11653] DETAIL: Failed process was running: select distinct \n> \n> pg_catalog.variance(\n> cast(pg_catalog.pg_stat_get_bgwriter_timed_checkpoints() as int8)) over (partition by subq_0.c2 order by subq_0.c0) as c0, \n> subq_0.c2 as c1, \n> subq_0.c0 as c2, \n> subq_0.c2 as c3, \n> subq_0.c1 as c4, \n> subq_0.c1 as c5, \n> subq_0.c0 as c6\n> from \n> (select \n> ref_1.foreign_server_catalog as c0, \n> ref_1.authorization_identifier as c1, \n> sample_2.tgname as c2, \n> ref_1.foreign_server_catalog as c3\n> from \n> pg_catalog.pg_stat_database_conflicts as ref_0\n> left join information_schema._pg_user_mappings as ref_1\n> on (ref_0.datname < ref_0.datname)\n> inner join pg_catalog.pg_amproc as sample_0 tablesample system (5) \n> on (cast(null as uuid) < cast(null as uuid))\n> left join pg_catalog.pg_aggregate as sample_1 tablesample system (2.9) \n> on (sample_0.amprocnum = sample_1.aggnumdirectargs )\n> inner join pg_catalog.pg_trigger as sample_2 tablesampl\n> \n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Mar 2021 09:54:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 09:54:22AM +0100, Tomas Vondra wrote:\n> Hi Justin,\n> \n> Unfortunately the query is incomplete, so I can't quite determine what\n> went wrong. Can you extract the full query causing the crash, either\n> from the server log or from a core file?\n\nOh, shoot, I didn't realize it was truncated, and I already destroyed the core\nand moved on to something else...\n\nBut this fails well enough, and may be much shorter than the original :)\n\nselect distinct\n pg_catalog.variance(\n cast(pg_catalog.pg_stat_get_bgwriter_timed_checkpoints() as int8)) over (partition by subq_0.c2 order by subq_0.c0) as c0,\n subq_0.c2 as c1, subq_0.c0 as c2, subq_0.c2 as c3, subq_0.c1 as c4, subq_0.c1 as c5, subq_0.c0 as c6\n from\n (select\n ref_1.foreign_server_catalog as c0,\n ref_1.authorization_identifier as c1,\n sample_2.tgname as c2,\n ref_1.foreign_server_catalog as c3\n from\n pg_catalog.pg_stat_database_conflicts as ref_0\n left join information_schema._pg_user_mappings as ref_1\n on (ref_0.datname < ref_0.datname)\n inner join pg_catalog.pg_amproc as sample_0 tablesample system (5)\n on (cast(null as uuid) < cast(null as uuid))\n left join pg_catalog.pg_aggregate as sample_1 tablesample system (2.9)\n on (sample_0.amprocnum = sample_1.aggnumdirectargs )\n inner join pg_catalog.pg_trigger as sample_2 tablesample system (1) on true )subq_0;\n\nTRAP: FailedAssertion(\"bms_num_members(varnos) == 1\", File: \"selfuncs.c\", Line: 3332, PID: 16422)\n\nAlso ... with this patch CREATE STATISTIC is no longer rejecting multiple\ntables, and instead does this:\n\npostgres=# CREATE STATISTICS xt ON a FROM t JOIN t ON true;\nERROR: schema \"i\" does not exist\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Mar 2021 04:17:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Thanks, it seems to be some thinko in handling in PlaceHolderVars, which\nseem to break the code's assumptions about varnos. This fixes it for me,\nbut I need to look at it more closely.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 24 Mar 2021 11:22:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, 24 Mar 2021 at 10:22, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Thanks, it seems to be some thinko in handling in PlaceHolderVars, which\n> seem to break the code's assumptions about varnos. This fixes it for me,\n> but I need to look at it more closely.\n>\n\nI think that makes sense.\n\nReviewing the docs, I noticed a couple of omissions, and had a few\nother suggestions (attached).\n\nRegards,\nDean",
"msg_date": "Wed, 24 Mar 2021 13:36:15 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 3/24/21 2:36 PM, Dean Rasheed wrote:\n> On Wed, 24 Mar 2021 at 10:22, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Thanks, it seems to be some thinko in handling in PlaceHolderVars, which\n>> seem to break the code's assumptions about varnos. This fixes it for me,\n>> but I need to look at it more closely.\n>>\n> \n> I think that makes sense.\n> \n\nAFAIK the primary issue here is that the two places disagree. While\nestimate_num_groups does this\n\n varnos = pull_varnos(root, (Node *) varshere);\n if (bms_membership(varnos) == BMS_SINGLETON)\n { ... }\n\nthe add_unique_group_expr does this\n\n varnos = pull_varnos(root, (Node *) groupexpr);\n\nThat is, one looks at the group expression, while the other look at vars\nextracted from it by pull_var_clause(). Apparently for PlaceHolderVar\nthis can differ, causing the crash.\n\nSo we need to change one of those places - my fix tweaked the second\nplace to also look at the vars, but maybe we should change the other\nplace? Or maybe it's not the right fix for PlaceHolderVars ...\n\n> Reviewing the docs, I noticed a couple of omissions, and had a few\n> other suggestions (attached).\n> \n\nThanks! I'll include that in the next version of the patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Mar 2021 15:47:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, 24 Mar 2021 at 14:48, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> AFAIK the primary issue here is that the two places disagree. While\n> estimate_num_groups does this\n>\n> varnos = pull_varnos(root, (Node *) varshere);\n> if (bms_membership(varnos) == BMS_SINGLETON)\n> { ... }\n>\n> the add_unique_group_expr does this\n>\n> varnos = pull_varnos(root, (Node *) groupexpr);\n>\n> That is, one looks at the group expression, while the other look at vars\n> extracted from it by pull_var_clause(). Apparently for PlaceHolderVar\n> this can differ, causing the crash.\n>\n> So we need to change one of those places - my fix tweaked the second\n> place to also look at the vars, but maybe we should change the other\n> place? Or maybe it's not the right fix for PlaceHolderVars ...\n>\n\nI think that it doesn't make any difference which place is changed.\n\nThis is a case of an expression with no stats. With your change,\nyou'll get a single GroupExprInfo containing a list of\nVariableStatData's for each of its Vars, whereas if you changed it\nthe other way, you'd get a separate GroupExprInfo for each Var. But I\nthink they'd both end up being treated the same by\nestimate_multivariate_ndistinct(), since there wouldn't be any stats\nmatching the expression, only the individual Vars. Maybe changing the\nfirst place would be the more bulletproof fix though.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 24 Mar 2021 16:28:05 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\nOn 3/24/21 7:24 AM, Justin Pryzby wrote:\n> Most importantly, it looks like this forgets to update catalog documentation\n> for stxexprs and stxkind='e'\n> \n\nGood catch.\n\n> It seems like you're preferring to use pluralized \"statistics\" in a lot of\n> places that sound wrong to me. For example:\n>> Currently the first statistics wins, which seems silly.\n> I can write more separately, but I think this is resolved and clarified if you\n> write \"statistics object\" and not just \"statistics\". \n> \n\nOK \"statistics object\" seems better and more consistent.\n\n>> + Name of schema containing table\n> \n> I don't know about the nearby descriptions, but this one sounds too much like a\n> \"schema-containing\" table. Say \"Name of the schema which contains the table\" ?\n> \n\nI think the current spelling is OK / consistent with the other catalogs.\n\n>> + Name of table\n> \n> Say \"name of table on which the extended statistics are defined\"\n> \n\nI've used \"Name of table the statistics object is defined on\".\n\n>> + Name of extended statistics\n> \n> \"Name of the extended statistic object\"\n> \n>> + Owner of the extended statistics\n> \n> ..object\n> \n\nOK\n\n>> + Expression the extended statistics is defined on\n> \n> I think it should say \"the extended statistic\", or \"the extended statistics\n> object\". Maybe \"..on which the extended statistic is defined\"\n> \n\nOK\n\n>> + of random access to the disk. (This expression is null if the expression\n>> + data type does not have a <literal><</literal> operator.)\n> \n> expression's data type\n> \n\nOK\n\n>> + much-too-small row count estimate in the first two queries. Moreover, the\n> \n> maybe say \"dramatically underestimates the rowcount\"\n> \n\nI've changed this to \"... 
results in a significant underestimate of row\ncount\".\n\n>> + planner has no information about relationship between the expressions, so it\n> \n> the relationship\n> \n\nOK\n\n>> + assumes the two <literal>WHERE</literal> and <literal>GROUP BY</literal>\n>> + conditions are independent, and multiplies their selectivities together to\n>> + arrive at a much-too-high group count estimate in the aggregate query.\n> \n> severe overestimate ?\n> \n\nOK\n\n>> + This is further exacerbated by the lack of accurate statistics for the\n>> + expressions, forcing the planner to use default ndistinct estimate for the\n> \n> use *a default\n> \n\nOK\n\n>> + expression derived from ndistinct for the column. With such statistics, the\n>> + planner recognizes that the conditions are correlated and arrives at much\n>> + more accurate estimates.\n> \n> are correlated comma\n> \n\nOK\n\n>> +\t\t\tif (type->lt_opr == InvalidOid)\n> \n> These could be !OidIsValid\n> \n\nMaybe, but it's like this already. I'll leave this alone and then\nfix/backpatch separately.\n\n>> +\t * expressions. It's either expensive or very easy to defeat for\n>> +\t * determined used, and there's no risk if we allow such statistics (the\n>> +\t * statistics is useless, but harmless).\n> \n> I think it's meant to say \"for a determined user\" ?\n> \n\nRight.\n\n>> +\t * If there are no simply-referenced columns, give the statistics an auto\n>> +\t * dependency on the whole table. In most cases, this will be redundant,\n>> +\t * but it might not be if the statistics expressions contain no Vars\n>> +\t * (which might seem strange but possible).\n>> +\t */\n>> +\tif (!nattnums)\n>> +\t{\n>> +\t\tObjectAddressSet(parentobject, RelationRelationId, relid);\n>> +\t\trecordDependencyOn(&myself, &parentobject, DEPENDENCY_AUTO);\n>> +\t}\n> \n> Can this be unconditional ?\n> \n\nWhat would be the benefit? This behavior copied from index_create, so\nI'd prefer keeping it the same for consistency reason. 
Presumably it's\nlike that for some reason (a bit of cargo cult programming, I know).\n\n>> +\t * Translate the array of indexs to regular attnums for the dependency (we\n> \n> sp: indexes\n> \n\nOK\n\n>> +\t\t\t\t\t * Not found a matching expression, so we can simply skip\n> \n> Found no matching expr\n> \n\nOK\n\n>> +\t\t\t\t/* if found a matching, */\n> \n> matching ..\n> \n\nMatching dependency.\n\n>> +examine_attribute(Node *expr)\n> \n> Maybe you should rename this to something distinct ? So it's easy to add a\n> breakpoint there, for example.\n> \n\nWhat would be a better name? It's not difficult to add a breakpoint\nusing line number, for example.\n\n>> +\tstats->anl_context = CurrentMemoryContext;\t/* XXX should be using\n>> +\t\t\t\t\t\t\t\t\t\t\t\t * something else? */\n> \n>> +\t\tbool\t\tnulls[Natts_pg_statistic];\n> ...\n>> +\t\t * Construct a new pg_statistic tuple\n>> +\t\t */\n>> +\t\tfor (i = 0; i < Natts_pg_statistic; ++i)\n>> +\t\t{\n>> +\t\t\tnulls[i] = false;\n>> +\t\t}\n> \n> Shouldn't you just write nulls[Natts_pg_statistic] = {false};\n> or at least: memset(nulls, 0, sizeof(nulls));\n> \n\nMaybe, but it's a copy of what update_attstats() does, so I prefer\nkeeping it the same.\n\n>> +\t\t\t\t * We don't store collations used to build the statistics, but\n>> +\t\t\t\t * we can use the collation for the attribute itself, as\n>> +\t\t\t\t * stored in varcollid. We do reset the statistics after a\n>> +\t\t\t\t * type change (including collation change), so this is OK. We\n>> +\t\t\t\t * may need to relax this after allowing extended statistics\n>> +\t\t\t\t * on expressions.\n> \n> This text should be updated or removed ?\n> \n\nYeah, the last sentence is obsolete. 
Updated.\n\n>> @@ -2705,7 +2705,108 @@ describeOneTableDetails(const char *schemaname,\n>> \t\t}\n>> \n>> \t\t/* print any extended statistics */\n>> -\t\tif (pset.sversion >= 100000)\n>> +\t\tif (pset.sversion >= 140000)\n>> +\t\t{\n>> +\t\t\tprintfPQExpBuffer(&buf,\n>> +\t\t\t\t\t\t\t \"SELECT oid, \"\n>> +\t\t\t\t\t\t\t \"stxrelid::pg_catalog.regclass, \"\n>> +\t\t\t\t\t\t\t \"stxnamespace::pg_catalog.regnamespace AS nsp, \"\n>> +\t\t\t\t\t\t\t \"stxname,\\n\"\n>> +\t\t\t\t\t\t\t \"pg_get_statisticsobjdef_columns(oid) AS columns,\\n\"\n>> +\t\t\t\t\t\t\t \" 'd' = any(stxkind) AS ndist_enabled,\\n\"\n>> +\t\t\t\t\t\t\t \" 'f' = any(stxkind) AS deps_enabled,\\n\"\n>> +\t\t\t\t\t\t\t \" 'm' = any(stxkind) AS mcv_enabled,\\n\");\n>> +\n>> +\t\t\tif (pset.sversion >= 130000)\n>> +\t\t\t\tappendPQExpBufferStr(&buf, \" stxstattarget\\n\");\n>> +\t\t\telse\n>> +\t\t\t\tappendPQExpBufferStr(&buf, \" -1 AS stxstattarget\\n\");\n> \n> >= 130000 is fully determined by >= 14000 :)\n> \n\nAh, right.\n\n>> +\t * type of the opclass, which is not interesting for our purposes. (Note:\n>> +\t * if we did anything with non-expression index columns, we'd need to\n> \n> index is wrong ?\n> \n\nFixed\n\n> I mentioned a bunch of other references to \"index\" and \"predicate\" which are\n> still around:\n> \n\nWhooops, sorry. Fixed.\n\n\nI'll post a cleaned-up version of the patch addressing Dean's review\ncomments too.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Mar 2021 17:38:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 01:24:46AM -0500, Justin Pryzby wrote:\n> It seems like you're preferring to use pluralized \"statistics\" in a lot of\n> places that sound wrong to me. For example:\n> > Currently the first statistics wins, which seems silly.\n> I can write more separately, but I think this is resolved and clarified if you\n> write \"statistics object\" and not just \"statistics\". \n\nIn HEAD:catalogs.sgml, pg_statistic_ext (the table) says \"object\":\n|Name of the statistics object\n|Owner of the statistics object\n|An array of attribute numbers, indicating which table columns are covered by this statistics object;\n\nBut pg_stats_ext (the view) doesn't say \"object\", which sounds wrong:\n|Name of extended statistics\n|Owner of the extended statistics\n|Names of the columns the extended statistics is defined on\n\nOther pre-existing issues: should be singular \"statistic\":\ndoc/src/sgml/perform.sgml: Another type of statistics stored for each column are most-common value\ndoc/src/sgml/ref/psql-ref.sgml: The status of each kind of extended statistics is shown in a column\n\nPre-existing issues: doesn't say \"object\" but I think it should:\nsrc/backend/commands/statscmds.c: errmsg(\"statistics creation on system columns is not supported\")));\nsrc/backend/commands/statscmds.c: errmsg(\"cannot have more than %d columns in statistics\",\nsrc/backend/commands/statscmds.c: * If we got here and the OID is not valid, it means the statistics does\nsrc/backend/commands/statscmds.c: * Select a nonconflicting name for a new statistics.\nsrc/backend/commands/statscmds.c: * Generate \"name2\" for a new statistics given the list of column names for it\nsrc/backend/statistics/extended_stats.c: /* compute statistics target for this statistics */\nsrc/backend/statistics/extended_stats.c: * attributes the statistics is defined on, and then the default statistics\nsrc/backend/statistics/mcv.c: * The input is the OID of the statistics, and there are no 
rows returned if\n\nshould say \"for a statistics object\" or \"for statistics objects\"\nsrc/backend/statistics/extended_stats.c: * target for a statistics objects (from the object target, attribute targets\n\nYour patch adds these:\n\nShould say \"object\":\n+ * Check if we actually have a matching statistics for the expression. \n+ /* evaluate expressions (if the statistics has any) */ \n+ * for the extended statistics. The second option seems more reasonable. \n+ * the statistics had all options enabled on the original version. \n+ * But if the statistics is defined on just a single column, it has to \n+ /* has the statistics expressions? */ \n+ /* expression - see if it's in the statistics */ \n+ * column(s) the statistics depends on. Also require all \n+ * statistics is defined on more than one column/expression). \n+ * statistics is useless, but harmless). \n+ * If there are no simply-referenced columns, give the statistics an auto \n\n\n+ * Then the first statistics matches no expressions and 3 vars, \n+ * while the second statistics matches one expression and 1 var. \n+ * Currently the first statistics wins, which seems silly. \n\n+ * [(a+c), d]. But maybe it's better than failing to match the \n+ * second statistics? \n\nI can make patches for these (separate patches for HEAD and your patch), but I\ndon't think your patch has to wait on it, since the user-facing documentation\nis consistent with what's already there, and the rest are internal comments.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Mar 2021 11:42:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 3/24/21 5:28 PM, Dean Rasheed wrote:\n> On Wed, 24 Mar 2021 at 14:48, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> AFAIK the primary issue here is that the two places disagree. While\n>> estimate_num_groups does this\n>>\n>> varnos = pull_varnos(root, (Node *) varshere);\n>> if (bms_membership(varnos) == BMS_SINGLETON)\n>> { ... }\n>>\n>> the add_unique_group_expr does this\n>>\n>> varnos = pull_varnos(root, (Node *) groupexpr);\n>>\n>> That is, one looks at the group expression, while the other look at vars\n>> extracted from it by pull_var_clause(). Apparently for PlaceHolderVar\n>> this can differ, causing the crash.\n>>\n>> So we need to change one of those places - my fix tweaked the second\n>> place to also look at the vars, but maybe we should change the other\n>> place? Or maybe it's not the right fix for PlaceHolderVars ...\n>>\n> \n> I think that it doesn't make any difference which place is changed.\n> \n> This is a case of an expression with no stats. With your change,\n> you'll get a single GroupExprInfo containing a list of\n> VariableStatData's for each of it's Var's, whereas if you changed it\n> the other way, you'd get a separate GroupExprInfo for each Var. But I\n> think they'd both end up being treated the same by\n> estimate_multivariate_ndistinct(), since there wouldn't be any stats\n> matching the expression, only the individual Var's. Maybe changing the\n> first place would be the more bulletproof fix though.\n> \n\nYeah, I think that's true. I'll do a bit more research / experiments.\n\nAs for the changes proposed in the create_statistics, do we really want\nto use univariate / multivariate there? Yes, the terms are correct, but\nI'm not sure how many people looking at CREATE STATISTICS will\nunderstand them.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Mar 2021 17:48:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, 24 Mar 2021 at 16:48, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> As for the changes proposed in the create_statistics, do we really want\n> to use univariate / multivariate there? Yes, the terms are correct, but\n> I'm not sure how many people looking at CREATE STATISTICS will\n> understand them.\n>\n\nHmm, I think \"univariate\" and \"multivariate\" are pretty ubiquitous,\nwhen used to describe statistics. You could use \"single-column\" and\n\"multi-column\", but then \"column\" isn't really right anymore, since it\nmight be a column or an expression. I can't think of any other terms\nthat fit.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 24 Mar 2021 17:15:46 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 05:15:46PM +0000, Dean Rasheed wrote:\n> On Wed, 24 Mar 2021 at 16:48, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > As for the changes proposed in the create_statistics, do we really want\n> > to use univariate / multivariate there? Yes, the terms are correct, but\n> > I'm not sure how many people looking at CREATE STATISTICS will\n> > understand them.\n> >\n> \n> Hmm, I think \"univariate\" and \"multivariate\" are pretty ubiquitous,\n> when used to describe statistics. You could use \"single-column\" and\n> \"multi-column\", but then \"column\" isn't really right anymore, since it\n> might be a column or an expression. I can't think of any other terms\n> that fit.\n\nWe already use \"multivariate\", just not in create-statistics.sgml\n\ndoc/src/sgml/perform.sgml: <firstterm>multivariate statistics</firstterm>, which can capture\ndoc/src/sgml/perform.sgml: it's impractical to compute multivariate statistics automatically.\ndoc/src/sgml/planstats.sgml: <sect1 id=\"multivariate-statistics-examples\">\ndoc/src/sgml/planstats.sgml: <secondary>multivariate</secondary>\ndoc/src/sgml/planstats.sgml: multivariate statistics on the two columns:\ndoc/src/sgml/planstats.sgml: <sect2 id=\"multivariate-ndistinct-counts\">\ndoc/src/sgml/planstats.sgml: But without multivariate statistics, the estimate for the number of\ndoc/src/sgml/planstats.sgml: This section introduces multivariate variant of <acronym>MCV</acronym>\ndoc/src/sgml/ref/create_statistics.sgml: and <xref linkend=\"multivariate-statistics-examples\"/>.\ndoc/src/sgml/release-13.sgml:2020-01-13 [eae056c19] Apply multiple multivariate MCV lists when possible\n\nSo I think the answer is for create-statistics to expose that word in a\nuser-facing way in its reference to multivariate-statistics-examples.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Mar 2021 12:18:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 2021-Mar-24, Justin Pryzby wrote:\n\n> On Wed, Mar 24, 2021 at 05:15:46PM +0000, Dean Rasheed wrote:\n\n> > Hmm, I think \"univariate\" and \"multivariate\" are pretty ubiquitous,\n> > when used to describe statistics. You could use \"single-column\" and\n> > \"multi-column\", but then \"column\" isn't really right anymore, since it\n> > might be a column or an expression. I can't think of any other terms\n> > that fit.\n\nAgreed. If we need to define the term, we can spend a sentence or two\nin that.\n\n> We already use \"multivariate\", just not in create-statistics.sgml\n\n> So I think the answer is for create-statistics to expose that word in a\n> user-facing way in its reference to multivariate-statistics-examples.\n\n+1\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Wed, 24 Mar 2021 16:50:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 3/24/21 5:48 PM, Tomas Vondra wrote:\n> \n> \n> On 3/24/21 5:28 PM, Dean Rasheed wrote:\n>> On Wed, 24 Mar 2021 at 14:48, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> AFAIK the primary issue here is that the two places disagree. While\n>>> estimate_num_groups does this\n>>>\n>>> varnos = pull_varnos(root, (Node *) varshere);\n>>> if (bms_membership(varnos) == BMS_SINGLETON)\n>>> { ... }\n>>>\n>>> the add_unique_group_expr does this\n>>>\n>>> varnos = pull_varnos(root, (Node *) groupexpr);\n>>>\n>>> That is, one looks at the group expression, while the other look at vars\n>>> extracted from it by pull_var_clause(). Apparently for PlaceHolderVar\n>>> this can differ, causing the crash.\n>>>\n>>> So we need to change one of those places - my fix tweaked the second\n>>> place to also look at the vars, but maybe we should change the other\n>>> place? Or maybe it's not the right fix for PlaceHolderVars ...\n>>>\n>>\n>> I think that it doesn't make any difference which place is changed.\n>>\n>> This is a case of an expression with no stats. With your change,\n>> you'll get a single GroupExprInfo containing a list of\n>> VariableStatData's for each of it's Var's, whereas if you changed it\n>> the other way, you'd get a separate GroupExprInfo for each Var. But I\n>> think they'd both end up being treated the same by\n>> estimate_multivariate_ndistinct(), since there wouldn't be any stats\n>> matching the expression, only the individual Var's. Maybe changing the\n>> first place would be the more bulletproof fix though.\n>>\n> \n> Yeah, I think that's true. I'll do a bit more research / experiments.\n> \n\nActually, I think we don't need that block at all - there's no point in\nkeeping the exact expression, because if there was a statistics matching\nit it'd be matched by the examine_variable. So if we get here, we have\nto just split it into the vars anyway. 
So the second block is entirely\nuseless.\n\nThat however means we don't need the processing with GroupExprInfo and\nGroupVarInfo lists, i.e. we can revert back to the original simpler\nprocessing, with a bit of extra logic to match expressions, that's all.\n\nThe patch 0003 does this (it's a bit crude, but hopefully enough to\ndemonstrate).\n\nhere's an updated patch. 0001 should address most of the today's review\nitems regarding comments etc.\n\n0002 is an attempt to fix an issue I noticed today - we need to handle\ntype changes. Until now we did not have problems with that, because we\nonly had attnums - so we just reset the statistics (with the exception\nof functional dependencies, on the assumption that those remain valid).\n\nWith expressions it's a bit more complicated, though.\n\n1) we need to transform the expressions so that the Vars contain the\nright type info etc. Otherwise an analyze with the old pg_node_tree crashes\n\n2) we need to reset the pg_statistic[] data too, which however makes\nkeeping the functional dependencies a bit less useful, because those\nrely on the expression stats :-(\n\nSo I'm wondering what to do about this. I looked into how ALTER TABLE\nhandles indexes, and 0003 is a PoC to do the same thing for statistics.\nOf couse, this is a bit unfortunate because it recreates the statistics\n(so we don't keep anything, not even functional dependencies).\n\nI think we have two options:\n\na) Make UpdateStatisticsForTypeChange smarter to also transform and\nupdate the expression string, and reset pg_statistics[] data.\n\nb) Just recreate the statistics, just like we do for indexes. Currently\nthis does not force analyze, so it just resets all the stats. Maybe it\nshould do analyze, though.\n\nAny opinions? I need to think about this a bit more, but maybe (b) with\nthe analyze is the right thing to do. Keeping just some of the stats\nalways seemed a bit weird. 
(This is why the 0002 patch breaks one of the\nregression tests.)\n\nBTW I wonder how useful the updated statistics actually is. Consider\nthis example:\n\n========================================================================\nCREATE TABLE t (a int, b int, c int);\n\nINSERT INTO t SELECT mod(i,10), mod(i,10), mod(i,10)\n FROM generate_series(1,1000000) s(i);\n\nCREATE STATISTICS s (ndistinct) ON (a+b), (b+c) FROM t;\n\nANALYZE t;\n\nEXPLAIN ANALYZE SELECT 1 FROM t GROUP BY (a+b), (b+c);\n\ntest=# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\n c | integer | | |\nStatistics objects:\n \"public\".\"s\" (ndistinct) ON ((a + b)), ((b + c)) FROM t\n\ntest=# EXPLAIN SELECT 1 FROM t GROUP BY (a+b), (b+c);\n QUERY PLAN\n-----------------------------------------------------------------\n HashAggregate (cost=25406.00..25406.15 rows=10 width=12)\n Group Key: (a + b), (b + c)\n -> Seq Scan on t (cost=0.00..20406.00 rows=1000000 width=8)\n(3 rows)\n========================================================================\n\nGreat. 
Now let's change one of the data types to something else:\n\n========================================================================\ntest=# alter table t alter column c type numeric;\nALTER TABLE\ntest=# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\n c | numeric | | |\nStatistics objects:\n \"public\".\"s\" (ndistinct) ON ((a + b)), (((b)::numeric + c)) FROM t\n\ntest=# analyze t;\nANALYZE\ntest=# EXPLAIN SELECT 1 FROM t GROUP BY (a+b), (b+c);\n QUERY PLAN\n------------------------------------------------------------------\n HashAggregate (cost=27906.00..27906.17 rows=10 width=40)\n Group Key: (a + b), ((b)::numeric + c)\n -> Seq Scan on t (cost=0.00..22906.00 rows=1000000 width=36)\n(3 rows)\n========================================================================\n\nGreat! Let's change it again:\n\n========================================================================\ntest=# alter table t alter column c type double precision;\nALTER TABLE\ntest=# analyze t;\nANALYZE\ntest=# EXPLAIN SELECT 1 FROM t GROUP BY (a+b), (b+c);\n QUERY PLAN\n------------------------------------------------------------------\n HashAggregate (cost=27906.00..27923.50 rows=1000 width=16)\n Group Key: (a + b), ((b)::double precision + c)\n -> Seq Scan on t (cost=0.00..22906.00 rows=1000000 width=12)\n(3 rows)\n========================================================================\n\nWell, not that great, apparently. We clearly failed to match the second\nexpression, so we ended with (b+c) estimated as (10 * 10). Why? 
Because\nthe expression now looks like this:\n\n========================================================================\n\"public\".\"s\" (ndistinct) ON ((a + b)), ((((b)::numeric)::double\nprecision + c)) FROM t\n========================================================================\n\nBut we're matching it to (((b)::double precision + c)), so that fails.\n\nThis is not specific to extended statistics - indexes have exactly the\nsame issue. Not sure how common this is in practice.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 25 Mar 2021 01:05:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Thu, Mar 25, 2021 at 01:05:37AM +0100, Tomas Vondra wrote:\n> here's an updated patch. 0001 should address most of the today's review\n> items regarding comments etc.\n\nThis is still an issue:\n\npostgres=# CREATE STATISTICS xt ON a FROM t JOIN t ON true;\nERROR: schema \"i\" does not exist\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Mar 2021 19:30:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 3/25/21 1:30 AM, Justin Pryzby wrote:\n> On Thu, Mar 25, 2021 at 01:05:37AM +0100, Tomas Vondra wrote:\n>> here's an updated patch. 0001 should address most of the today's review\n>> items regarding comments etc.\n> \n> This is still an issue:\n> \n> postgres=# CREATE STATISTICS xt ON a FROM t JOIN t ON true;\n> ERROR: schema \"i\" does not exist\n> \n\nAh, right. That's a weird issue. I was really confused about this,\nbecause nothing changes about the grammar or how we check the number of\nrelations. The problem is pretty trivial - the new code in utility.c\njust grabs the first element and casts it to RangeVar, without checking\nthat it actually is RangeVar. With joins it's a JoinExpr, so we get a\nbogus error.\n\nThe attached version fixes it by simply doing the check in utility.c.\nIt's a bit redundant with what's in CreateStatistics() but I don't think\nwe can just postpone it easily - we need to do the transformation here,\nwith access to queryString. But maybe we don't need to pass the relid,\nwhen we have the list of relations in CreateStatsStmt itself ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 25 Mar 2021 02:33:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 3/25/21 1:05 AM, Tomas Vondra wrote:\n> ...\n>\n> 0002 is an attempt to fix an issue I noticed today - we need to handle\n> type changes. Until now we did not have problems with that, because we\n> only had attnums - so we just reset the statistics (with the exception\n> of functional dependencies, on the assumption that those remain valid).\n> \n> With expressions it's a bit more complicated, though.\n> \n> 1) we need to transform the expressions so that the Vars contain the\n> right type info etc. Otherwise an analyze with the old pg_node_tree crashes\n> \n> 2) we need to reset the pg_statistic[] data too, which however makes\n> keeping the functional dependencies a bit less useful, because those\n> rely on the expression stats :-(\n> \n> So I'm wondering what to do about this. I looked into how ALTER TABLE\n> handles indexes, and 0003 is a PoC to do the same thing for statistics.\n> Of couse, this is a bit unfortunate because it recreates the statistics\n> (so we don't keep anything, not even functional dependencies).\n> \n> I think we have two options:\n> \n> a) Make UpdateStatisticsForTypeChange smarter to also transform and\n> update the expression string, and reset pg_statistics[] data.\n> \n> b) Just recreate the statistics, just like we do for indexes. Currently\n> this does not force analyze, so it just resets all the stats. Maybe it\n> should do analyze, though.\n> \n> Any opinions? I need to think about this a bit more, but maybe (b) with\n> the analyze is the right thing to do. Keeping just some of the stats\n> always seemed a bit weird. (This is why the 0002 patch breaks one of the\n> regression tests.)\n> \n\nAfter thinking about this a bit more I think (b) is the right choice,\nand the analyze is not necessary. 
The reason is fairly simple - we drop\nthe per-column statistics, because ATExecAlterColumnType does\n\n RemoveStatistics(RelationGetRelid(rel), attnum);\n\nso the user has to run analyze anyway, to get any reasonable estimates\n(we keep the functional dependencies, but those still rely on per-column\nstatistics quite a bit). And we'll have to do the same thing with\nper-expression stats too. It was a nice idea to keep at least the stats\nthat are not outright broken, but unfortunately it's not a very useful\noptimization. It increases the instability of the system, because now we\nhave estimates with all statistics, no statistics, and something in\nbetween after the partial reset. Not nice.\n\nSo my plan is to get rid of UpdateStatisticsForTypeChange, and just do\nmostly what we do for indexes. It's not perfect (as demonstrated in last\nmessage), but that'd apply even to option (a).\n\nAny better ideas?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 25 Mar 2021 05:07:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Thu, 25 Mar 2021 at 00:05, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Actually, I think we don't need that block at all - there's no point in\n> keeping the exact expression, because if there was a statistics matching\n> it it'd be matched by the examine_variable. So if we get here, we have\n> to just split it into the vars anyway. So the second block is entirely\n> useless.\n\nGood point.\n\n> That however means we don't need the processing with GroupExprInfo and\n> GroupVarInfo lists, i.e. we can revert back to the original simpler\n> processing, with a bit of extra logic to match expressions, that's all.\n>\n> The patch 0003 does this (it's a bit crude, but hopefully enough to\n> demonstrate).\n\nCool. I did wonder about that, but I didn't fully think it through.\nI'll take a look.\n\n> 0002 is an attempt to fix an issue I noticed today - we need to handle\n> type changes.\n>\n> I think we have two options:\n>\n> a) Make UpdateStatisticsForTypeChange smarter to also transform and\n> update the expression string, and reset pg_statistics[] data.\n>\n> b) Just recreate the statistics, just like we do for indexes. Currently\n> this does not force analyze, so it just resets all the stats. Maybe it\n> should do analyze, though.\n\nI'd vote for (b) without an analyse, and I agree with getting rid of\nUpdateStatisticsForTypeChange(). I've always been a bit skeptical\nabout trying to preserve extended statistics after a type change, when\nwe don't preserve regular per-column stats.\n\n> BTW I wonder how useful the updated statistics actually is. 
Consider\n> this example:\n> ...\n> the expression now looks like this:\n>\n> ========================================================================\n> \"public\".\"s\" (ndistinct) ON ((a + b)), ((((b)::numeric)::double\n> precision + c)) FROM t\n> ========================================================================\n>\n> But we're matching it to (((b)::double precision + c)), so that fails.\n>\n> This is not specific to extended statistics - indexes have exactly the\n> same issue. Not sure how common this is in practice.\n\nHmm, that's unfortunate. Maybe it's not that common in practice\nthough. I'm not sure if there is any practical way to fix it, but if\nthere is, I guess we'd want to apply the same fix to both stats and\nindexes, and that certainly seems out of scope for this patch.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 25 Mar 2021 11:35:21 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Thu, 25 Mar 2021 at 00:05, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> here's an updated patch. 0001\n\nThe change to the way that CreateStatistics() records dependencies\nisn't quite right -- recordDependencyOnSingleRelExpr() will not create\nany dependencies if the expression uses only a whole-row Var. However,\npull_varattnos() will include whole-row Vars, and so nattnums_exprs\nwill be non-zero, and CreateStatistics() will not create a whole-table\ndependency when it should.\n\nI suppose that could be fixed up by inspecting the bitmapset returned\nby pull_varattnos() in more detail, but I think it's probably safer to\nrevert to the previous code, which matched what index_create() did.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 25 Mar 2021 13:33:34 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 3/25/21 2:33 PM, Dean Rasheed wrote:\n> On Thu, 25 Mar 2021 at 00:05, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> here's an updated patch. 0001\n> \n> The change to the way that CreateStatistics() records dependencies\n> isn't quite right -- recordDependencyOnSingleRelExpr() will not create\n> any dependencies if the expression uses only a whole-row Var. However,\n> pull_varattnos() will include whole-row Vars, and so nattnums_exprs\n> will be non-zero, and CreateStatistics() will not create a whole-table\n> dependency when it should.\n> \n> I suppose that could be fixed up by inspecting the bitmapset returned\n> by pull_varattnos() in more detail, but I think it's probably safer to\n> revert to the previous code, which matched what index_create() did.\n> \n\nAh, good catch. I haven't realized recordDependencyOnSingleRelExpr works\nlike that, so I've moved it after the whole-table dependency.\n\nAttached is an updated patch series, with all the changes discussed\nhere. I've cleaned up the ndistinct stuff a bit more (essentially\nreverting back from GroupExprInfo to GroupVarInfo name), and got rid of\nthe UpdateStatisticsForTypeChange.\n\nI've also looked at speeding up the stats_ext regression tests. The 0002\npatch reduces the size of a couple of test tables, and removes a bunch\nof queries. I've initially mostly just copied the original tests, but we\ndon't really need that many queries I think. This cuts the runtime about\nin half, so it's mostly in line with other tests. Some of these changes\nare in existing tests, I'll consider moving that into a separate patch\napplied before the main one.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 25 Mar 2021 20:59:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Thu, 25 Mar 2021 at 19:59, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Attached is an updated patch series, with all the changes discussed\n> here. I've cleaned up the ndistinct stuff a bit more (essentially\n> reverting back from GroupExprInfo to GroupVarInfo name), and got rid of\n> the UpdateStatisticsForTypeChange.\n>\n\nI've looked over all that and I think it's in pretty good shape. I\nparticularly like how much simpler the ndistinct code has now become.\n\nSome (hopefully final) review comments:\n\n1). I don't think index.c is the right place for\nStatisticsGetRelation(). I appreciate that it is very similar to the\nadjacent IndexGetRelation() function, but index.c is really only for\nindex-related code, so I think StatisticsGetRelation() should go in\nstatscmds.c\n\n2). Perhaps the error message at statscmds.c:293 should read\n\n \"expression cannot be used in multivariate statistics because its\ntype %s has no default btree operator class\"\n\n(i.e., add the word \"multivariate\", since the same expression *can* be\nused in univariate statistics even though it has no less-than\noperator).\n\n3). The comment for ATExecAddStatistics() should probably mention that\n\"ALTER TABLE ADD STATISTICS\" isn't a command in the grammar, in a\nsimilar way to other similar functions, e.g.:\n\n/*\n * ALTER TABLE ADD STATISTICS\n *\n * This is no such command in the grammar, but we use this internally to add\n * AT_ReAddStatistics subcommands to rebuild extended statistics after a table\n * column type change.\n */\n\n4). The comment at the start of ATPostAlterTypeParse() needs updating\nto mention CREATE STATISTICS statements.\n\n5). 
I think ATPostAlterTypeParse() should also attempt to preserve any\nCOMMENTs attached to statistics objects, i.e., something like:\n\n--- src/backend/commands/tablecmds.c.orig 2021-03-26 10:39:38.328631864 +0000\n+++ src/backend/commands/tablecmds.c 2021-03-26 10:47:21.042279580 +0000\n@@ -12619,6 +12619,9 @@\n CreateStatsStmt *stmt = (CreateStatsStmt *) stm;\n AlterTableCmd *newcmd;\n\n+ /* keep the statistics object's comment */\n+ stmt->stxcomment = GetComment(oldId, StatisticExtRelationId, 0);\n+\n newcmd = makeNode(AlterTableCmd);\n newcmd->subtype = AT_ReAddStatistics;\n newcmd->def = (Node *) stmt;\n\n6). Comment typo at extended_stats.c:2532 - s/statitics/statistics/\n\n7). I don't think that the big XXX comment near the start of\nestimate_multivariate_ndistinct() is really relevant anymore, now that\nthe code has been simplified and we no longer extract Vars from\nexpressions, so perhaps it can just be deleted.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 26 Mar 2021 11:37:37 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 3/26/21 12:37 PM, Dean Rasheed wrote:\n> On Thu, 25 Mar 2021 at 19:59, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Attached is an updated patch series, with all the changes discussed\n>> here. I've cleaned up the ndistinct stuff a bit more (essentially\n>> reverting back from GroupExprInfo to GroupVarInfo name), and got rid of\n>> the UpdateStatisticsForTypeChange.\n>>\n> \n> I've looked over all that and I think it's in pretty good shape. I\n> particularly like how much simpler the ndistinct code has now become.\n> \n> Some (hopefully final) review comments:\n> \n> 1). I don't think index.c is the right place for\n> StatisticsGetRelation(). I appreciate that it is very similar to the\n> adjacent IndexGetRelation() function, but index.c is really only for\n> index-related code, so I think StatisticsGetRelation() should go in\n> statscmds.c\n> \n\nAh, right, I forgot about this. I wonder if we should add\ncatalog/statistics.c, similar to catalog/index.c (instead of adding it\nlocally to statscmds.c).\n\n> 2). Perhaps the error message at statscmds.c:293 should read\n> \n> \"expression cannot be used in multivariate statistics because its\n> type %s has no default btree operator class\"\n> \n> (i.e., add the word \"multivariate\", since the same expression *can* be\n> used in univariate statistics even though it has no less-than\n> operator).\n> \n> 3). The comment for ATExecAddStatistics() should probably mention that\n> \"ALTER TABLE ADD STATISTICS\" isn't a command in the grammar, in a\n> similar way to other similar functions, e.g.:\n> \n> /*\n> * ALTER TABLE ADD STATISTICS\n> *\n> * This is no such command in the grammar, but we use this internally to add\n> * AT_ReAddStatistics subcommands to rebuild extended statistics after a table\n> * column type change.\n> */\n> \n> 4). The comment at the start of ATPostAlterTypeParse() needs updating\n> to mention CREATE STATISTICS statements.\n> \n> 5). 
I think ATPostAlterTypeParse() should also attempt to preserve any\n> COMMENTs attached to statistics objects, i.e., something like:\n> \n> --- src/backend/commands/tablecmds.c.orig 2021-03-26 10:39:38.328631864 +0000\n> +++ src/backend/commands/tablecmds.c 2021-03-26 10:47:21.042279580 +0000\n> @@ -12619,6 +12619,9 @@\n> CreateStatsStmt *stmt = (CreateStatsStmt *) stm;\n> AlterTableCmd *newcmd;\n> \n> + /* keep the statistics object's comment */\n> + stmt->stxcomment = GetComment(oldId, StatisticExtRelationId, 0);\n> +\n> newcmd = makeNode(AlterTableCmd);\n> newcmd->subtype = AT_ReAddStatistics;\n> newcmd->def = (Node *) stmt;\n> \n> 6). Comment typo at extended_stats.c:2532 - s/statitics/statistics/\n> \n> 7). I don't think that the big XXX comment near the start of\n> estimate_multivariate_ndistinct() is really relevant anymore, now that\n> the code has been simplified and we no longer extract Vars from\n> expressions, so perhaps it can just be deleted.\n> \n\nThanks! I'll fix these, and then will consider getting it committed\nsometime later today, once the buildfarm does some testing on the other\nstuff I already committed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 26 Mar 2021 13:54:13 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 3/26/21 1:54 PM, Tomas Vondra wrote:\n> \n> \n> On 3/26/21 12:37 PM, Dean Rasheed wrote:\n>> On Thu, 25 Mar 2021 at 19:59, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> Attached is an updated patch series, with all the changes discussed\n>>> here. I've cleaned up the ndistinct stuff a bit more (essentially\n>>> reverting back from GroupExprInfo to GroupVarInfo name), and got rid of\n>>> the UpdateStatisticsForTypeChange.\n>>>\n>>\n>> I've looked over all that and I think it's in pretty good shape. I\n>> particularly like how much simpler the ndistinct code has now become.\n>>\n>> Some (hopefully final) review comments:\n>>\n>> ...\n>>\n> \n> Thanks! I'll fix these, and then will consider getting it committed\n> sometime later today, once the buildfarm does some testing on the other\n> stuff I already committed.\n> \n\nOK, pushed after a bit more polishing and testing. I've noticed one more\nmissing piece in describe (expressions missing in \\dX), so I fixed that.\n\nMay the buildfarm be merciful ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 27 Mar 2021 01:17:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 3/27/21 1:17 AM, Tomas Vondra wrote:\n> On 3/26/21 1:54 PM, Tomas Vondra wrote:\n>>\n>>\n>> On 3/26/21 12:37 PM, Dean Rasheed wrote:\n>>> On Thu, 25 Mar 2021 at 19:59, Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> Attached is an updated patch series, with all the changes discussed\n>>>> here. I've cleaned up the ndistinct stuff a bit more (essentially\n>>>> reverting back from GroupExprInfo to GroupVarInfo name), and got rid of\n>>>> the UpdateStatisticsForTypeChange.\n>>>>\n>>>\n>>> I've looked over all that and I think it's in pretty good shape. I\n>>> particularly like how much simpler the ndistinct code has now become.\n>>>\n>>> Some (hopefully final) review comments:\n>>>\n>>> ...\n>>>\n>>\n>> Thanks! I'll fix these, and then will consider getting it committed\n>> sometime later today, once the buildfarm does some testing on the other\n>> stuff I already committed.\n>>\n> \n> OK, pushed after a bit more polishing and testing. I've noticed one more\n> missing piece in describe (expressions missing in \\dX), so I fixed that.\n> \n> May the buildfarm be merciful ...\n> \n\nLOL! It failed on *my* buildfarm machine, because apparently some of the\nexpressions used in stats_ext depend on locale and the machine is using\ncs_CZ.UTF-8. Will fix later ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 27 Mar 2021 03:11:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "I suggest to add some kind of reference to stats expressions here.\n\n--- a/doc/src/sgml/maintenance.sgml\n+++ b/doc/src/sgml/maintenance.sgml\n\n <sect2 id=\"vacuum-for-statistics\">\n <title>Updating Planner Statistics</title>\n\n <indexterm zone=\"vacuum-for-statistics\">\n <primary>statistics</primary>\n <secondary>of the planner</secondary>\n </indexterm>\n\n[...]\n\n@@ -330,10 +330,12 @@\n \n <para>\n Also, by default there is limited information available about\n- the selectivity of functions. However, if you create an expression\n+ the selectivity of functions. However, if you create a statistics\n+ expression or an expression\n index that uses a function call, useful statistics will be\n gathered about the function, which can greatly improve query\n plans that use the expression index.\n\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Apr 2021 21:50:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 02:09:04PM +0100, Tomas Vondra wrote:\n> On 1/22/21 5:01 AM, Justin Pryzby wrote:\n> > On Fri, Jan 22, 2021 at 04:49:51AM +0100, Tomas Vondra wrote:\n> > > > > | Statistics objects:\n> > > > > | \"public\".\"s2\" (ndistinct, dependencies, mcv) ON FROM t\n> > > \n> > > Umm, for me that prints:\n> > \n> > > \"public\".\"s2\" ON ((i + 1)), (((i + 1) + 0)) FROM t\n> > > \n> > > which I think is OK. But maybe there's something else to trigger the\n> > > problem?\n> > \n> > Oh. It's because I was using /usr/bin/psql and not ./src/bin/psql.\n> > I think it's considered ok if old client's \\d commands don't work on new\n> > server, but it's not clear to me if it's ok if they misbehave. It's almost\n> > better it made an ERROR.\n> > \n> \n> Well, how would the server know to throw an error? We can't quite patch the\n> old psql (if we could, we could just tweak the query).\n\nTo refresh: stats objects on a v14 server which include expressions are shown\nby pre-v14 psql client with the expressions elided (because the attnums don't\ncorrespond to anything in pg_attribute).\n\nI'm mentioning it again since, even though I knew about this earlier in the\nyear, it caused some confusion for me again just now while testing our\napplication. I had the v14 server installed but the psql symlink still pointed\nto the v13 client.\n\nThere may not be anything we can do about it.\nAnd it may not be a significant issue outside the beta period: more typically,\nthe client version would match the server version.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 2 Jun 2021 21:39:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions (\\d in old client)"
},
{
"msg_contents": "On Sat, Mar 27, 2021 at 01:17:14AM +0100, Tomas Vondra wrote:\n> OK, pushed after a bit more polishing and testing.\n\nThis added a \"transformed\" field to CreateStatsStmt, but it didn't mention\nthat field in src/backend/nodes. Should those functions handle the field?\n\n\n",
"msg_date": "Sat, 5 Jun 2021 22:37:40 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\nOn 6/6/21 7:37 AM, Noah Misch wrote:\n> On Sat, Mar 27, 2021 at 01:17:14AM +0100, Tomas Vondra wrote:\n>> OK, pushed after a bit more polishing and testing.\n> \n> This added a \"transformed\" field to CreateStatsStmt, but it didn't mention\n> that field in src/backend/nodes. Should those functions handle the field?\n> \n\nYup, that's a mistake - it should do whatever CREATE INDEX is doing. Not\nsure if it can result in error/failure or just inefficiency (due to\ntransforming the expressions repeatedly), but it should do whatever\nCREATE INDEX is doing.\n\nThanks for noticing! Fixed by d57ecebd12.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 6 Jun 2021 21:13:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 6/6/21 7:37 AM, Noah Misch wrote:\n>> This added a \"transformed\" field to CreateStatsStmt, but it didn't mention\n>> that field in src/backend/nodes. Should those functions handle the field?\n\n> Yup, that's a mistake - it should do whatever CREATE INDEX is doing. Not\n> sure if it can result in error/failure or just inefficiency (due to\n> transforming the expressions repeatedly), but it should do whatever\n> CREATE INDEX is doing.\n\nI'm curious about how come the buildfarm didn't notice this. The\nanimals using COPY_PARSE_PLAN_TREES should have failed. The fact\nthat they didn't implies that there's no test case that makes use\nof a nonzero value for this field, which seems like a testing gap.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Jun 2021 15:17:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 6/6/21 9:17 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 6/6/21 7:37 AM, Noah Misch wrote:\n>>> This added a \"transformed\" field to CreateStatsStmt, but it didn't mention\n>>> that field in src/backend/nodes. Should those functions handle the field?\n> \n>> Yup, that's a mistake - it should do whatever CREATE INDEX is doing. Not\n>> sure if it can result in error/failure or just inefficiency (due to\n>> transforming the expressions repeatedly), but it should do whatever\n>> CREATE INDEX is doing.\n> \n> I'm curious about how come the buildfarm didn't notice this. The\n> animals using COPY_PARSE_PLAN_TREES should have failed. The fact\n> that they didn't implies that there's no test case that makes use\n> of a nonzero value for this field, which seems like a testing gap.\n> \n\nAFAICS the reason is pretty simple - the COPY_PARSE_PLAN_TREES checks\nlook like this:\n\n List *new_list = copyObject(raw_parsetree_list);\n\n /* This checks both copyObject() and the equal() routines... */\n if (!equal(new_list, raw_parsetree_list))\n elog(WARNING, \"copyObject() failed to produce an equal raw\n parse tree\");\n else\n raw_parsetree_list = new_list;\n }\n\nBut if the field is missing from all the functions, equal() can't detect\nthat copyObject() did not actually copy it. It'd detect a case when the\nfield was added just to one place, but not this. The CREATE INDEX (which\nserved as an example for CREATE STATISTICS) has exactly the same issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 6 Jun 2021 22:01:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 6/6/21 9:17 PM, Tom Lane wrote:\n>> I'm curious about how come the buildfarm didn't notice this. The\n>> animals using COPY_PARSE_PLAN_TREES should have failed. The fact\n>> that they didn't implies that there's no test case that makes use\n>> of a nonzero value for this field, which seems like a testing gap.\n\n> AFAICS the reason is pretty simple - the COPY_PARSE_PLAN_TREES checks\n> look like this:\n> ...\n> But if the field is missing from all the functions, equal() can't detect\n> that copyObject() did not actually copy it.\n\nRight, that code would only detect a missing copyfuncs.c line if\nequalfuncs.c did have the line, which isn't all that likely. However,\nwe then pass the copied node on to further processing, which in principle\nshould result in visible failures when copyfuncs.c is missing a line.\n\nI think the reason it didn't is that the transformed field would always\nbe zero (false) in grammar output. We could only detect a problem if\nwe copied already-transformed nodes and then used them further. Even\nthen it *might* not fail, because the consequence would likely be an\nextra round of parse analysis on the expressions, which is likely to\nbe a no-op.\n\nNot sure if there's a good way to improve that. I hope sometime soon\nwe'll be able to auto-generate these functions, and then the risk of\nthis sort of mistake will go away (he says optimistically).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Jun 2021 16:08:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Sun, Jun 06, 2021 at 09:13:17PM +0200, Tomas Vondra wrote:\n> \n> On 6/6/21 7:37 AM, Noah Misch wrote:\n> > On Sat, Mar 27, 2021 at 01:17:14AM +0100, Tomas Vondra wrote:\n> >> OK, pushed after a bit more polishing and testing.\n> > \n> > This added a \"transformed\" field to CreateStatsStmt, but it didn't mention\n> > that field in src/backend/nodes. Should those functions handle the field?\n> > \n> \n> Yup, that's a mistake - it should do whatever CREATE INDEX is doing. Not\n> sure if it can result in error/failure or just inefficiency (due to\n> transforming the expressions repeatedly), but it should do whatever\n> CREATE INDEX is doing.\n> \n> Thanks for noticing! Fixed by d57ecebd12.\n\nGreat. For future reference, this didn't need a catversion bump. readfuncs.c\nchanges need a catversion bump, since the catalogs might contain input for\neach read function. Other src/backend/nodes functions don't face that. Also,\nsrc/backend/nodes generally process fields in the order that they appear in\nthe struct. The order you used in d57ecebd12 is nicer, being more like\nIndexStmt, so I'm pushing an order change to the struct.\n\n\n",
"msg_date": "Thu, 10 Jun 2021 21:55:46 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 6/11/21 6:55 AM, Noah Misch wrote:\n> On Sun, Jun 06, 2021 at 09:13:17PM +0200, Tomas Vondra wrote:\n>>\n>> On 6/6/21 7:37 AM, Noah Misch wrote:\n>>> On Sat, Mar 27, 2021 at 01:17:14AM +0100, Tomas Vondra wrote:\n>>>> OK, pushed after a bit more polishing and testing.\n>>>\n>>> This added a \"transformed\" field to CreateStatsStmt, but it didn't mention\n>>> that field in src/backend/nodes. Should those functions handle the field?\n>>>\n>>\n>> Yup, that's a mistake - it should do whatever CREATE INDEX is doing. Not\n>> sure if it can result in error/failure or just inefficiency (due to\n>> transforming the expressions repeatedly), but it should do whatever\n>> CREATE INDEX is doing.\n>>\n>> Thanks for noticing! Fixed by d57ecebd12.\n> \n> Great. For future reference, this didn't need a catversion bump. readfuncs.c\n> changes need a catversion bump, since the catalogs might contain input for\n> each read function. Other src/backend/nodes functions don't face that. Also,\n> src/backend/nodes generally process fields in the order that they appear in\n> the struct. The order you used in d57ecebd12 is nicer, being more like\n> IndexStmt, so I'm pushing an order change to the struct.\n> \n\nOK, makes sense. Thanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 11 Jun 2021 13:04:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 1/22/21 5:01 AM, Justin Pryzby wrote:\n> > In any case, why are there so many parentheses ?\n\nOn Fri, Jan 22, 2021 at 02:09:04PM +0100, Tomas Vondra wrote:\n> That's a bug in pg_get_statisticsobj_worker, probably. It shouldn't be\n> adding extra parentheses, on top of what deparse_expression_pretty does.\n> Will fix.\n\nThe extra parens are still here - is it intended ?\n\npostgres=# CREATE STATISTICS s ON i, (1+i), (2+i) FROM t;\nCREATE STATISTICS\npostgres=# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n i | integer | | | \nStatistics objects:\n \"public\".\"s\" ON i, ((1 + i)), ((2 + i)) FROM t\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 15 Aug 2021 20:31:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Mon, Dec 07, 2020 at 03:15:17PM +0100, Tomas Vondra wrote:\n> > Looking at the current behaviour, there are a couple of things that\n> > seem a little odd, even though they are understandable. For example,\n> > the fact that\n> > \n> > CREATE STATISTICS s (expressions) ON (expr), col FROM tbl;\n> > \n> > fails, but\n> > \n> > CREATE STATISTICS s (expressions, mcv) ON (expr), col FROM tbl;\n> > \n> > succeeds and creates both \"expressions\" and \"mcv\" statistics. Also, the syntax\n> > \n> > CREATE STATISTICS s (expressions) ON (expr1), (expr2) FROM tbl;\n> > \n> > tends to suggest that it's going to create statistics on the pair of\n> > expressions, describing their correlation, when actually it builds 2\n> > independent statistics. Also, this error text isn't entirely accurate:\n> > \n> > CREATE STATISTICS s ON col FROM tbl;\n> > ERROR: extended statistics require at least 2 columns\n> > \n> > because extended statistics don't always require 2 columns, they can\n> > also just have an expression, or multiple expressions and 0 or 1\n> > columns.\n> > \n> > I think a lot of this stems from treating \"expressions\" in the same\n> > way as the other (multi-column) stats kinds, and it might actually be\n> > neater to have separate documented syntaxes for single- and\n> > multi-column statistics:\n> > \n> > CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n> > ON (expression)\n> > FROM table_name\n> > \n> > CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n> > [ ( statistics_kind [, ... ] ) ]\n> > ON { column_name | (expression) } , { column_name | (expression) } [, ...]\n> > FROM table_name\n> > \n> > The first syntax would create single-column stats, and wouldn't accept\n> > a statistics_kind argument, because there is only one kind of\n> > single-column statistic. 
Maybe that might change in the future, but if\n> > so, it's likely that the kinds of single-column stats will be\n> > different from the kinds of multi-column stats.\n> > \n> > In the second syntax, the only accepted kinds would be the current\n> > multi-column stats kinds (ndistinct, dependencies, and mcv), and it\n> > would always build stats describing the correlations between the\n> > columns listed. It would continue to build standard/expression stats\n> > on any expressions in the list, but that's more of an implementation\n> > detail.\n> > \n> > It would no longer be possible to do \"CREATE STATISTICS s\n> > (expressions) ON (expr1), (expr2) FROM tbl\". Instead, you'd have to\n> > issue 2 separate \"CREATE STATISTICS\" commands, but that seems more\n> > logical, because they're independent stats.\n> > \n> > The parsing code might not change much, but some of the errors would\n> > be different. For example, the errors \"building only extended\n> > expression statistics on simple columns not allowed\" and \"extended\n> > expression statistics require at least one expression\" would go away,\n> > and the error \"extended statistics require at least 2 columns\" might\n> > become more specific, depending on the stats kind.\n\nThis still seems odd:\n\npostgres=# CREATE STATISTICS asf ON i FROM t;\nERROR: extended statistics require at least 2 columns\npostgres=# CREATE STATISTICS asf ON (i) FROM t;\nCREATE STATISTICS\n\nIt seems wrong that the command works with added parens, but builds expression\nstats on a simple column (which is redundant with what analyze does without\nextended stats).\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 15 Aug 2021 20:32:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 8/16/21 3:31 AM, Justin Pryzby wrote:\n> On 1/22/21 5:01 AM, Justin Pryzby wrote:\n>>> In any case, why are there so many parentheses ?\n> \n> On Fri, Jan 22, 2021 at 02:09:04PM +0100, Tomas Vondra wrote:\n>> That's a bug in pg_get_statisticsobj_worker, probably. It shouldn't be\n>> adding extra parentheses, on top of what deparse_expression_pretty does.\n>> Will fix.\n> \n> The extra parens are still here - is it intended ?\n> \n\nAh, thanks for reminding me! I was looking at this, and the problem is \nthat pg_get_statisticsobj_worker only does this:\n\n prettyFlags = PRETTYFLAG_INDENT;\n\nChanging that to\n\n prettyFlags = PRETTYFLAG_INDENT | PRETTYFLAG_PAREN;\n\nfixes this (not sure we need the INDENT flag - probably not).\n\nI'm a bit confused, though. My assumption was \"PRETTYFLAG_PAREN = true\" \nwould force the deparsing itself to add the parens, if needed, but in \nreality it works the other way around.\n\nI guess it's more complicated due to deparsing multi-level expressions, \nbut unfortunately, there's no comment explaining what it does.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Aug 2021 15:19:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 8/16/21 3:32 AM, Justin Pryzby wrote:\n> On Mon, Dec 07, 2020 at 03:15:17PM +0100, Tomas Vondra wrote:\n>>> Looking at the current behaviour, there are a couple of things that\n>>> seem a little odd, even though they are understandable. For example,\n>>> the fact that\n>>>\n>>> CREATE STATISTICS s (expressions) ON (expr), col FROM tbl;\n>>>\n>>> fails, but\n>>>\n>>> CREATE STATISTICS s (expressions, mcv) ON (expr), col FROM tbl;\n>>>\n>>> succeeds and creates both \"expressions\" and \"mcv\" statistics. Also, the syntax\n>>>\n>>> CREATE STATISTICS s (expressions) ON (expr1), (expr2) FROM tbl;\n>>>\n>>> tends to suggest that it's going to create statistics on the pair of\n>>> expressions, describing their correlation, when actually it builds 2\n>>> independent statistics. Also, this error text isn't entirely accurate:\n>>>\n>>> CREATE STATISTICS s ON col FROM tbl;\n>>> ERROR: extended statistics require at least 2 columns\n>>>\n>>> because extended statistics don't always require 2 columns, they can\n>>> also just have an expression, or multiple expressions and 0 or 1\n>>> columns.\n>>>\n>>> I think a lot of this stems from treating \"expressions\" in the same\n>>> way as the other (multi-column) stats kinds, and it might actually be\n>>> neater to have separate documented syntaxes for single- and\n>>> multi-column statistics:\n>>>\n>>> CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n>>> ON (expression)\n>>> FROM table_name\n>>>\n>>> CREATE STATISTICS [ IF NOT EXISTS ] statistics_name\n>>> [ ( statistics_kind [, ... ] ) ]\n>>> ON { column_name | (expression) } , { column_name | (expression) } [, ...]\n>>> FROM table_name\n>>>\n>>> The first syntax would create single-column stats, and wouldn't accept\n>>> a statistics_kind argument, because there is only one kind of\n>>> single-column statistic. 
Maybe that might change in the future, but if\n>>> so, it's likely that the kinds of single-column stats will be\n>>> different from the kinds of multi-column stats.\n>>>\n>>> In the second syntax, the only accepted kinds would be the current\n>>> multi-column stats kinds (ndistinct, dependencies, and mcv), and it\n>>> would always build stats describing the correlations between the\n>>> columns listed. It would continue to build standard/expression stats\n>>> on any expressions in the list, but that's more of an implementation\n>>> detail.\n>>>\n>>> It would no longer be possible to do \"CREATE STATISTICS s\n>>> (expressions) ON (expr1), (expr2) FROM tbl\". Instead, you'd have to\n>>> issue 2 separate \"CREATE STATISTICS\" commands, but that seems more\n>>> logical, because they're independent stats.\n>>>\n>>> The parsing code might not change much, but some of the errors would\n>>> be different. For example, the errors \"building only extended\n>>> expression statistics on simple columns not allowed\" and \"extended\n>>> expression statistics require at least one expression\" would go away,\n>>> and the error \"extended statistics require at least 2 columns\" might\n>>> become more specific, depending on the stats kind.\n> \n> This still seems odd:\n> \n> postgres=# CREATE STATISTICS asf ON i FROM t;\n> ERROR: extended statistics require at least 2 columns\n> postgres=# CREATE STATISTICS asf ON (i) FROM t;\n> CREATE STATISTICS\n> \n> It seems wrong that the command works with added parens, but builds expression\n> stats on a simple column (which is redundant with what analyze does without\n> extended stats).\n> \n\nWell, yeah. But I think this is a behavior that was discussed somewhere \nin this thread, and the agreement was that it's not worth the \ncomplexity, as this comment explains\n\n * XXX We do only the bare minimum to separate simple attribute and\n * complex expressions - for example \"(a)\" will be treated as a complex\n * expression. 
No matter how elaborate the check is, there'll always be\n * a way around it, if the user is determined (consider e.g. \"(a+0)\"),\n * so it's not worth protecting against it.\n\nOf course, maybe that wasn't the right decision - it's a bit weird that\n\n CREATE INDEX on t ((a), (b))\n\nactually \"extracts\" the column references and stores that in indkeys, \ninstead of treating that as expressions.\n\nPatch 0001 fixes the \"double parens\" issue discussed elsewhere in this \nthread, and patch 0002 tweaks CREATE STATISTICS to treat \"(a)\" as a \nsimple column reference.\n\nBut I'm not sure 0002 is something we can do without catversion bump. \nWhat if someone created such \"bogus\" statistics? It's mostly harmless, \nbecause the statistics is useless anyway (AFAICS we'll just use the \nregular one we have for the column), but if they do pg_dump, that'll \nfail because of this new restriction.\n\nOTOH we're still \"only\" in beta, and IIRC the rule is not to bump \ncatversion after rc1.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 16 Aug 2021 17:41:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "> Patch 0001 fixes the \"double parens\" issue discussed elsewhere in this\n> thread, and patch 0002 tweaks CREATE STATISTICS to treat \"(a)\" as a simple\n> column reference.\n\n> From: Tomas Vondra <tomas.vondra@postgresql.org>\n> Date: Mon, 16 Aug 2021 17:19:33 +0200\n> Subject: [PATCH 2/2] fix: identify single-attribute references\n\n> diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl\n> index a4ee54d516..be1f3a5175 100644\n> --- a/src/bin/pg_dump/t/002_pg_dump.pl\n> +++ b/src/bin/pg_dump/t/002_pg_dump.pl\n> @@ -2811,7 +2811,7 @@ my %tests = (\n> \t\tcreate_sql => 'CREATE STATISTICS dump_test.test_ext_stats_expr\n> \t\t\t\t\t\t\tON (2 * col1) FROM dump_test.test_fifth_table',\n> \t\tregexp => qr/^\n> -\t\t\t\\QCREATE STATISTICS dump_test.test_ext_stats_expr ON ((2 * col1)) FROM dump_test.test_fifth_table;\\E\n> +\t\t\t\\QCREATE STATISTICS dump_test.test_ext_stats_expr ON (2 * col1) FROM dump_test.test_fifth_table;\\E\n\n\nThis hunk should be in 0001, no ?\n\n> But I'm not sure 0002 is something we can do without catversion bump. What\n> if someone created such \"bogus\" statistics? It's mostly harmless, because\n> the statistics is useless anyway (AFAICS we'll just use the regular one we\n> have for the column), but if they do pg_dump, that'll fail because of this\n> new restriction.\n\nI think it's okay if it pg_dump throws an error, since the fix is as easy as\ndropping the stx object. (It wouldn't be okay if it silently misbehaved.)\n\nAndres concluded similarly with the reverted autovacuum patch:\nhttps://www.postgresql.org/message-id/20210817105022.e2t4rozkhqy2myhn@alap3.anarazel.de\n\n+RMT in case someone wants to argue otherwise.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Aug 2021 22:07:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 8/18/21 5:07 AM, Justin Pryzby wrote:\n>> Patch 0001 fixes the \"double parens\" issue discussed elsewhere in this\n>> thread, and patch 0002 tweaks CREATE STATISTICS to treat \"(a)\" as a simple\n>> column reference.\n> \n>> From: Tomas Vondra <tomas.vondra@postgresql.org>\n>> Date: Mon, 16 Aug 2021 17:19:33 +0200\n>> Subject: [PATCH 2/2] fix: identify single-attribute references\n> \n>> diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl\n>> index a4ee54d516..be1f3a5175 100644\n>> --- a/src/bin/pg_dump/t/002_pg_dump.pl\n>> +++ b/src/bin/pg_dump/t/002_pg_dump.pl\n>> @@ -2811,7 +2811,7 @@ my %tests = (\n>> \t\tcreate_sql => 'CREATE STATISTICS dump_test.test_ext_stats_expr\n>> \t\t\t\t\t\t\tON (2 * col1) FROM dump_test.test_fifth_table',\n>> \t\tregexp => qr/^\n>> -\t\t\t\\QCREATE STATISTICS dump_test.test_ext_stats_expr ON ((2 * col1)) FROM dump_test.test_fifth_table;\\E\n>> +\t\t\t\\QCREATE STATISTICS dump_test.test_ext_stats_expr ON (2 * col1) FROM dump_test.test_fifth_table;\\E\n> \n> \n> This hunk should be in 0001, no ?\n>\nYeah, I mixed that up a bit.\n\n>> But I'm not sure 0002 is something we can do without catversion bump. What\n>> if someone created such \"bogus\" statistics? It's mostly harmless, because\n>> the statistics is useless anyway (AFAICS we'll just use the regular one we\n>> have for the column), but if they do pg_dump, that'll fail because of this\n>> new restriction.\n> \n> I think it's okay if it pg_dump throws an error, since the fix is as easy as\n> dropping the stx object. 
(It wouldn't be okay if it silently misbehaved.)\n> \n> Andres concluded similarly with the reverted autovacuum patch:\n> https://www.postgresql.org/message-id/20210817105022.e2t4rozkhqy2myhn@alap3.anarazel.de\n> \n > +RMT in case someone wants to argue otherwise.\n >\n\nI feel a bit uneasy about it, but if there's a precedent ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Aug 2021 13:43:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 05:41:57PM +0200, Tomas Vondra wrote:\n> > This still seems odd:\n> > \n> > postgres=# CREATE STATISTICS asf ON i FROM t;\n> > ERROR: extended statistics require at least 2 columns\n> > postgres=# CREATE STATISTICS asf ON (i) FROM t;\n> > CREATE STATISTICS\n> > \n> > It seems wrong that the command works with added parens, but builds expression\n> > stats on a simple column (which is redundant with what analyze does without\n> > extended stats).\n> \n> Well, yeah. But I think this is a behavior that was discussed somewhere in\n> this thread, and the agreement was that it's not worth the complexity, as\n> this comment explains\n> \n> * XXX We do only the bare minimum to separate simple attribute and\n> * complex expressions - for example \"(a)\" will be treated as a complex\n> * expression. No matter how elaborate the check is, there'll always be\n> * a way around it, if the user is determined (consider e.g. \"(a+0)\"),\n> * so it's not worth protecting against it.\n> \n> Patch 0001 fixes the \"double parens\" issue discussed elsewhere in this\n> thread, and patch 0002 tweaks CREATE STATISTICS to treat \"(a)\" as a simple\n> column reference.\n\n0002 refuses to create expressional stats on a simple column reference like\n(a), which I think is helps to avoid a user accidentally creating useless ext\nstats objects (which are redundant with the table's column stats).\n\n0002 does not attempt to refuse cases like (a+0), which I think is fine:\nwe don't try to reject useless cases if someone insists on it.\nSee 240971675, 701fd0bbc.\n\nSo I am +1 to apply both patches.\n\nI added this as an Opened Item for increased visibility.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 24 Aug 2021 08:13:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 8/24/21 3:13 PM, Justin Pryzby wrote:\n> On Mon, Aug 16, 2021 at 05:41:57PM +0200, Tomas Vondra wrote:\n>>> This still seems odd:\n>>>\n>>> postgres=# CREATE STATISTICS asf ON i FROM t;\n>>> ERROR: extended statistics require at least 2 columns\n>>> postgres=# CREATE STATISTICS asf ON (i) FROM t;\n>>> CREATE STATISTICS\n>>>\n>>> It seems wrong that the command works with added parens, but builds expression\n>>> stats on a simple column (which is redundant with what analyze does without\n>>> extended stats).\n>>\n>> Well, yeah. But I think this is a behavior that was discussed somewhere in\n>> this thread, and the agreement was that it's not worth the complexity, as\n>> this comment explains\n>>\n>> * XXX We do only the bare minimum to separate simple attribute and\n>> * complex expressions - for example \"(a)\" will be treated as a complex\n>> * expression. No matter how elaborate the check is, there'll always be\n>> * a way around it, if the user is determined (consider e.g. \"(a+0)\"),\n>> * so it's not worth protecting against it.\n>>\n>> Patch 0001 fixes the \"double parens\" issue discussed elsewhere in this\n>> thread, and patch 0002 tweaks CREATE STATISTICS to treat \"(a)\" as a simple\n>> column reference.\n> \n> 0002 refuses to create expressional stats on a simple column reference like\n> (a), which I think is helps to avoid a user accidentally creating useless ext\n> stats objects (which are redundant with the table's column stats).\n> \n> 0002 does not attempt to refuse cases like (a+0), which I think is fine:\n> we don't try to reject useless cases if someone insists on it.\n> See 240971675, 701fd0bbc.\n> \n> So I am +1 to apply both patches.\n> \n> I added this as an Opened Item for increased visibility.\n> \n\nI've pushed both fixes, so the open item should be resolved.\n\nHowever while polishing the second patch, I realized we're allowing \nstatistics on expressions referencing system attributes. 
So this fails;\n\nCREATE STATISTICS s ON ctid, x FROM t;\n\nbut this passes:\n\nCREATE STATISTICS s ON (ctid::text), x FROM t;\n\nIMO we should reject such expressions, just like we reject direct \nreferences to system attributes - patch attached.\n\nFurthermore, I wonder if we should reject expressions without any Vars? \nThis works now:\n\nCREATE STATISTICS s ON (11:text) FROM t;\n\nbut it seems rather silly / useless, so maybe we should reject it.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 1 Sep 2021 18:45:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, Sep 01, 2021 at 06:45:29PM +0200, Tomas Vondra wrote:\n> > > Patch 0001 fixes the \"double parens\" issue discussed elsewhere in this\n> > > thread, and patch 0002 tweaks CREATE STATISTICS to treat \"(a)\" as a simple\n> > > column reference.\n> > \n> > 0002 refuses to create expressional stats on a simple column reference like\n> > (a), which I think is helps to avoid a user accidentally creating useless ext\n> > stats objects (which are redundant with the table's column stats).\n> > \n> > 0002 does not attempt to refuse cases like (a+0), which I think is fine:\n> > we don't try to reject useless cases if someone insists on it.\n> > See 240971675, 701fd0bbc.\n> > \n> > So I am +1 to apply both patches.\n> > \n> > I added this as an Opened Item for increased visibility.\n> \n> I've pushed both fixes, so the open item should be resolved.\n\nThank you - I marked it as such.\n\nThere are some typos in 537ca68db (refenrece)\nI'll add them to my typos branch if you don't want to patch them right now or\nwait to see if someone notices anything else.\n\ndiff --git a/src/backend/commands/statscmds.c b/src/backend/commands/statscmds.c\nindex 59369f8736..17cbd97808 100644\n--- a/src/backend/commands/statscmds.c\n+++ b/src/backend/commands/statscmds.c\n@@ -205,27 +205,27 @@ CreateStatistics(CreateStatsStmt *stmt)\n \tnumcols = list_length(stmt->exprs);\n \tif (numcols > STATS_MAX_DIMENSIONS)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_TOO_MANY_COLUMNS),\n \t\t\t\t errmsg(\"cannot have more than %d columns in statistics\",\n \t\t\t\t\t\tSTATS_MAX_DIMENSIONS)));\n \n \t/*\n \t * Convert the expression list to a simple array of attnums, but also keep\n \t * a list of more complex expressions. 
While at it, enforce some\n \t * constraints - we don't allow extended statistics on system attributes,\n-\t * and we require the data type to have less-than operator.\n+\t * and we require the data type to have a less-than operator.\n \t *\n-\t * There are many ways how to \"mask\" a simple attribute refenrece as an\n+\t * There are many ways to \"mask\" a simple attribute reference as an\n \t * expression, for example \"(a+0)\" etc. We can't possibly detect all of\n-\t * them, but we handle at least the simple case with attribute in parens.\n+\t * them, but we handle at least the simple case with the attribute in parens.\n \t * There'll always be a way around this, if the user is determined (like\n \t * the \"(a+0)\" example), but this makes it somewhat consistent with how\n \t * indexes treat attributes/expressions.\n \t */\n \tforeach(cell, stmt->exprs)\n \t{\n \t\tStatsElem *selem = lfirst_node(StatsElem, cell);\n \n \t\tif (selem->name)\t\t/* column reference */\n \t\t{\n \t\t\tchar\t *attname;\n\n\n",
"msg_date": "Wed, 1 Sep 2021 14:38:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "\n\nOn 9/1/21 9:38 PM, Justin Pryzby wrote:\n> On Wed, Sep 01, 2021 at 06:45:29PM +0200, Tomas Vondra wrote:\n>>>> Patch 0001 fixes the \"double parens\" issue discussed elsewhere in this\n>>>> thread, and patch 0002 tweaks CREATE STATISTICS to treat \"(a)\" as a simple\n>>>> column reference.\n>>>\n>>> 0002 refuses to create expressional stats on a simple column reference like\n>>> (a), which I think is helps to avoid a user accidentally creating useless ext\n>>> stats objects (which are redundant with the table's column stats).\n>>>\n>>> 0002 does not attempt to refuse cases like (a+0), which I think is fine:\n>>> we don't try to reject useless cases if someone insists on it.\n>>> See 240971675, 701fd0bbc.\n>>>\n>>> So I am +1 to apply both patches.\n>>>\n>>> I added this as an Opened Item for increased visibility.\n>>\n>> I've pushed both fixes, so the open item should be resolved.\n> \n> Thank you - I marked it as such.\n> \n> There are some typos in 537ca68db (refenrece)\n> I'll add them to my typos branch if you don't want to patch them right now or\n> wait to see if someone notices anything else.\n> \n\nYeah, probably better to wait a bit. Any opinions on rejecting \nexpressions referencing system attributes or no attributes at all?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 Sep 2021 21:56:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On Wed, Sep 01, 2021 at 06:45:29PM +0200, Tomas Vondra wrote:\n> However while polishing the second patch, I realized we're allowing\n> statistics on expressions referencing system attributes. So this fails;\n> \n> CREATE STATISTICS s ON ctid, x FROM t;\n> \n> but this passes:\n> \n> CREATE STATISTICS s ON (ctid::text), x FROM t;\n> \n> IMO we should reject such expressions, just like we reject direct references\n> to system attributes - patch attached.\n\nRight, same as indexes. +1\n\n> Furthermore, I wonder if we should reject expressions without any Vars? This\n> works now:\n> \n> CREATE STATISTICS s ON (11:text) FROM t;\n> \n> but it seems rather silly / useless, so maybe we should reject it.\n\nTo my surprise, this is also allowed for indexes...\n\nBut (maybe this is what I was remembering) it's prohibited to have a constant\nexpression as a partition key.\n\nExpressions without a var seem like a case where the user did something\ndeliberately silly, and dis-similar from the case of making a stats expression\non a simple column - that seemed like it could be a legitimate\nmistake/confusion (it's not unreasonable to write an extra parenthesis, but\nit's strange if that causes it to behave differently).\n\nI think it's not worth too much effort to prohibit this: if they're determined,\nthey can still write an expresion with a var which is constant. I'm not going\nto say it's worth zero effort, though.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 2 Sep 2021 22:56:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
},
{
"msg_contents": "On 9/3/21 5:56 AM, Justin Pryzby wrote:\n> On Wed, Sep 01, 2021 at 06:45:29PM +0200, Tomas Vondra wrote:\n>> However while polishing the second patch, I realized we're allowing\n>> statistics on expressions referencing system attributes. So this fails;\n>>\n>> CREATE STATISTICS s ON ctid, x FROM t;\n>>\n>> but this passes:\n>>\n>> CREATE STATISTICS s ON (ctid::text), x FROM t;\n>>\n>> IMO we should reject such expressions, just like we reject direct references\n>> to system attributes - patch attached.\n> \n> Right, same as indexes. +1\n>\n\nI've pushed this check, disallowing extended stats on expressions \nreferencing system attributes. This means we'll reject both ctid and \n(ctid::text), just like for indexes.\n\n>> Furthermore, I wonder if we should reject expressions without any Vars? This\n>> works now:\n>>\n>> CREATE STATISTICS s ON (11:text) FROM t;\n>>\n>> but it seems rather silly / useless, so maybe we should reject it.\n> \n> To my surprise, this is also allowed for indexes...\n> \n> But (maybe this is what I was remembering) it's prohibited to have a constant\n> expression as a partition key.\n> \n> Expressions without a var seem like a case where the user did something\n> deliberately silly, and dis-similar from the case of making a stats expression\n> on a simple column - that seemed like it could be a legitimate\n> mistake/confusion (it's not unreasonable to write an extra parenthesis, but\n> it's strange if that causes it to behave differently).\n> \n> I think it's not worth too much effort to prohibit this: if they're determined,\n> they can still write an expresion with a var which is constant. I'm not going\n> to say it's worth zero effort, though.\n> \n\nI've decided not to push this. The statistics objects on expressions not \nreferencing any variables seem useless, but maybe not entirely - we \nallow volatile expressions, like\n\n CREATE STATISTICS s ON (random()) FROM t;\n\nwhich I suppose might be useful. 
And we reject similar cases (except for \nthe volatility, of course) for indexes too.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Sep 2021 01:07:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: PoC/WIP: Extended statistics on expressions"
}
] |
[
{
"msg_contents": "The docs are misleading for this feature, since they say:\n\"This option disables all page-skipping behavior, and is\nintended to be used only when the contents of the visibility map are\nsuspect, which should happen only if there is a hardware or software\nissue causing database corruption.\"\n\nThe docs do correctly say \"Pages where all tuples are known to be\nfrozen can always be skipped\". Checking the code, lazy_scan_heap()\ncomments say\n\"we can still skip pages that are all-frozen, since such pages do not\nneed freezing\".\n\nThe code is quite clear: DISABLE_PAGE_SKIPPING makes the vacuum into\nan aggressive vacuum. Line 487, heap_vacuum_rel(). Aggressive vacuums\ncan still skip a page that is frozen, and rely on the visibility map\nfor that information.\n\nSo the docs are wrong - we don't disable *all* page-skipping and it is\nnot appropriate to warn users away from this feature by saying \"is\nintended to be used only when the contents of the visibility map are\nsuspect\".\n\nReworded docs patch attached.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 16 Nov 2020 20:52:31 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 1:52 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n\n> The docs are misleading for this feature, since they say:\n> \"This option disables all page-skipping behavior, and is\n> intended to be used only when the contents of the visibility map are\n> suspect, which should happen only if there is a hardware or software\n> issue causing database corruption.\"\n>\n[...]\n\n>\n> The code is quite clear: DISABLE_PAGE_SKIPPING makes the vacuum into\n> an aggressive vacuum. Line 487, heap_vacuum_rel(). Aggressive vacuums\n> can still skip a page that is frozen, and rely on the visibility map\n> for that information.\n>\n> So the docs are wrong - we don't disable *all* page-skipping and it is\n> not appropriate to warn users away from this feature by saying \"is\n> intended to be used only when the contents of the visibility map are\n> suspect\".\n>\n\nThis patch seems mis-placed, at least in HEAD. If DISABLE_PAGE_SKIPPING\nisn't doing what the documentation says it should, but instead provides\nidentical behavior to FREEZE, then the bug should be fixed in HEAD. I'd\nargue for batch-patching it as well.\n\nDavid J.\n\nOn Mon, Nov 16, 2020 at 1:52 PM Simon Riggs <simon@2ndquadrant.com> wrote:The docs are misleading for this feature, since they say:\n\"This option disables all page-skipping behavior, and is\nintended to be used only when the contents of the visibility map are\nsuspect, which should happen only if there is a hardware or software\nissue causing database corruption.\"\n[...] \n\nThe code is quite clear: DISABLE_PAGE_SKIPPING makes the vacuum into\nan aggressive vacuum. Line 487, heap_vacuum_rel(). 
Aggressive vacuums\ncan still skip a page that is frozen, and rely on the visibility map\nfor that information.\n\nSo the docs are wrong - we don't disable *all* page-skipping and it is\nnot appropriate to warn users away from this feature by saying \"is\nintended to be used only when the contents of the visibility map are\nsuspect\".This patch seems mis-placed, at least in HEAD. If DISABLE_PAGE_SKIPPING isn't doing what the documentation says it should, but instead provides identical behavior to FREEZE, then the bug should be fixed in HEAD. I'd argue for batch-patching it as well.David J.",
"msg_date": "Mon, 16 Nov 2020 15:18:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 5:52 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> The docs are misleading for this feature, since they say:\n> \"This option disables all page-skipping behavior, and is\n> intended to be used only when the contents of the visibility map are\n> suspect, which should happen only if there is a hardware or software\n> issue causing database corruption.\"\n>\n> The docs do correctly say \"Pages where all tuples are known to be\n> frozen can always be skipped\". Checking the code, lazy_scan_heap()\n> comments say\n> \"we can still skip pages that are all-frozen, since such pages do not\n> need freezing\".\n>\n> The code is quite clear: DISABLE_PAGE_SKIPPING makes the vacuum into\n> an aggressive vacuum. Line 487, heap_vacuum_rel(). Aggressive vacuums\n> can still skip a page that is frozen, and rely on the visibility map\n> for that information.\n>\n> So the docs are wrong - we don't disable *all* page-skipping and it is\n> not appropriate to warn users away from this feature by saying \"is\n> intended to be used only when the contents of the visibility map are\n> suspect\".\n\nI don't think the doc is wrong. If DISABLE_PAGE_SKIPPING is specified,\nwe not only set aggressive = true but also skip checking visibility\nmap. For instance, see line 905 and line 963, lazy_scan_heap().\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 17 Nov 2020 07:52:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Mon, 16 Nov 2020 at 22:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Tue, Nov 17, 2020 at 5:52 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n\n> I don't think the doc is wrong. If DISABLE_PAGE_SKIPPING is specified,\n> we not only set aggressive = true but also skip checking visibility\n> map. For instance, see line 905 and line 963, lazy_scan_heap().\n\nOK, so you're saying that the docs illustrate the true intention of\nthe patch, which I immediately accept since I know you were the\nauthor. Forgive me for not discussing it with you first, I thought\nthis was a clear cut case.\n\nBut that then highlights another area where the docs are wrong...\n\n> On Tue, Nov 17, 2020 at 5:52 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > The docs do correctly say \"Pages where all tuples are known to be\n> > frozen can always be skipped\". Checking the code, lazy_scan_heap()\n> > comments say\n> > \"we can still skip pages that are all-frozen, since such pages do not\n> > need freezing\".\n\nThe docs say this:\n\"Pages where all tuples are known to be frozen can always be skipped.\"\nWhy bother to say that if the feature then ignores that point and\nscans them anyway?\nMay I submit a patch to remove that sentence?\n\nAnyway, we're back to where I started: looking for a user-initiated\ncommand option that allows a table to scanned aggressively so that\nrelfrozenxid can be moved forward as quickly as possible. This is what\nI thought that you were trying to achieve with DISABLE_PAGE_SKIPPING\noption, my bad.\n\nNow David J, above, says this would be VACUUM FREEZE, but I don't\nthink that is right. Setting VACUUM FREEZE has these effects: 1) makes\na vacuum aggressive, but it also 2) moves the freeze limit so high\nthat it freezes mostly everything. 
(1) allows the vacuum to reset\nrelfrozenxid, but (2) actually slows down the scan by making it freeze\nmore blocks than it would do normally.\n\nSo we have 3 ways to reset relfrozenxid by a user action:\nVACUUM (DISABLE_PAGE_SKIPPING ON) - scans all blocks, deliberately\nignoring the ones it could have skipped. This certainly slows it down.\nVACUUM (FREEZE ON) - freezes everything in its path, slowing down the\nscan by writing too many blocks.\nVACUUM (FULL on) - rewrites table and rebuilds index, so very slow\n\nWhat I think we need is a 4th option which aims to move relfrozenxid\nforwards as quickly as possible\n* initiates an aggressive scan, so it does not skip blocks because of\nbusy buffer pins\n* skip pages that are all-frozen, as we are allowed to do\n* uses normal freeze limits, so we avoid writing to blocks if possible\n\nIf we do all 3 of those things, the scan will complete as quickly as\npossible and reset relfrozenxid quickly. It would make sense to use\nthat in conjunction with index_cleanup=off\n\nAs an additional optimization, if we do find a row that needs freezing\non a data block, we should simply freeze *all* row versions on the\npage, not just the ones below the selected cutoff. This is justified\nsince writing the block is the biggest cost and it doesn't make much\nsense to leave a few rows unfrozen on a block that we are dirtying.\n\nThoughts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 17 Nov 2020 11:27:36 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 2020-Nov-17, Simon Riggs wrote:\n\n> As an additional optimization, if we do find a row that needs freezing\n> on a data block, we should simply freeze *all* row versions on the\n> page, not just the ones below the selected cutoff. This is justified\n> since writing the block is the biggest cost and it doesn't make much\n> sense to leave a few rows unfrozen on a block that we are dirtying.\n\nYeah. We've had ealier proposals to use high and low watermarks: if any\ntuple is past the high watermark, then freeze all tuples that are past\nthe low watermark. However this is ancient thinking (prior to\nHEAP_XMIN_FROZEN) and we don't need the low watermark to be different\nfrom zero, since the original xid is retained anyway.\n\nSo +1 for this idea.\n\n\n",
"msg_date": "Tue, 17 Nov 2020 23:04:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 8:27 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Mon, 16 Nov 2020 at 22:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Tue, Nov 17, 2020 at 5:52 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> > I don't think the doc is wrong. If DISABLE_PAGE_SKIPPING is specified,\n> > we not only set aggressive = true but also skip checking visibility\n> > map. For instance, see line 905 and line 963, lazy_scan_heap().\n>\n> OK, so you're saying that the docs illustrate the true intention of\n> the patch, which I immediately accept since I know you were the\n> author. Forgive me for not discussing it with you first, I thought\n> this was a clear cut case.\n>\n> But that then highlights another area where the docs are wrong...\n>\n> > On Tue, Nov 17, 2020 at 5:52 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > The docs do correctly say \"Pages where all tuples are known to be\n> > > frozen can always be skipped\". Checking the code, lazy_scan_heap()\n> > > comments say\n> > > \"we can still skip pages that are all-frozen, since such pages do not\n> > > need freezing\".\n>\n> The docs say this:\n> \"Pages where all tuples are known to be frozen can always be skipped.\"\n> Why bother to say that if the feature then ignores that point and\n> scans them anyway?\n> May I submit a patch to remove that sentence?\n\nI think the docs describe the page skipping behaviour first and then\ndescribe what DISABLE_PAGE_SKIPPING option does. When discussing this\nfeature, I thought this sentence would help users to understand that\nbecause users might be confused with that option name without\ndescription of page skipping. So I'm not sure it's unnecessary but if\nthis has confused you, it would need to be somehow improved.\n\n>\n> Anyway, we're back to where I started: looking for a user-initiated\n> command option that allows a table to scanned aggressively so that\n> relfrozenxid can be moved forward as quickly as possible. 
This is what\n> I thought that you were trying to achieve with DISABLE_PAGE_SKIPPING\n> option, my bad.\n>\n> Now David J, above, says this would be VACUUM FREEZE, but I don't\n> think that is right. Setting VACUUM FREEZE has these effects: 1) makes\n> a vacuum aggressive, but it also 2) moves the freeze limit so high\n> that it freezes mostly everything. (1) allows the vacuum to reset\n> relfrozenxid, but (2) actually slows down the scan by making it freeze\n> more blocks than it would do normally.\n>\n> So we have 3 ways to reset relfrozenxid by a user action:\n> VACUUM (DISABLE_PAGE_SKIPPING ON) - scans all blocks, deliberately\n> ignoring the ones it could have skipped. This certainly slows it down.\n> VACUUM (FREEZE ON) - freezes everything in its path, slowing down the\n> scan by writing too many blocks.\n> VACUUM (FULL on) - rewrites table and rebuilds index, so very slow\n>\n> What I think we need is a 4th option which aims to move relfrozenxid\n> forwards as quickly as possible\n> * initiates an aggressive scan, so it does not skip blocks because of\n> busy buffer pins\n> * skip pages that are all-frozen, as we are allowed to do\n> * uses normal freeze limits, so we avoid writing to blocks if possible\n>\n\nThis can be done with VACUUM today by vacuum_freeze_table_age and\nvacuum_multixact_freeze_table_age to 0. Adding an option for this\nbehavior would be good for users to understand and would work well\nwith the optimization.\n\n> If we do all 3 of those things, the scan will complete as quickly as\n> possible and reset relfrozenxid quickly. It would make sense to use\n> that in conjunction with index_cleanup=off\n\nAgreed.\n\n>\n> As an additional optimization, if we do find a row that needs freezing\n> on a data block, we should simply freeze *all* row versions on the\n> page, not just the ones below the selected cutoff. 
This is justified\n> since writing the block is the biggest cost and it doesn't make much\n> sense to leave a few rows unfrozen on a block that we are dirtying.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 18 Nov 2020 19:27:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 10:28, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> > So we have 3 ways to reset relfrozenxid by a user action:\n> > VACUUM (DISABLE_PAGE_SKIPPING ON) - scans all blocks, deliberately\n> > ignoring the ones it could have skipped. This certainly slows it down.\n> > VACUUM (FREEZE ON) - freezes everything in its path, slowing down the\n> > scan by writing too many blocks.\n> > VACUUM (FULL on) - rewrites table and rebuilds index, so very slow\n> >\n> > What I think we need is a 4th option which aims to move relfrozenxid\n> > forwards as quickly as possible\n> > * initiates an aggressive scan, so it does not skip blocks because of\n> > busy buffer pins\n> > * skip pages that are all-frozen, as we are allowed to do\n> > * uses normal freeze limits, so we avoid writing to blocks if possible\n> >\n>\n> This can be done with VACUUM today by vacuum_freeze_table_age and\n> vacuum_multixact_freeze_table_age to 0. Adding an option for this\n> behavior would be good for users to understand and would work well\n> with the optimization.\n>\n> > If we do all 3 of those things, the scan will complete as quickly as\n> > possible and reset relfrozenxid quickly. It would make sense to use\n> > that in conjunction with index_cleanup=off\n>\n> Agreed.\n\nPatches attached.\n1. vacuum_anti_wraparound.v2.patch\n2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n\n> > As an additional optimization, if we do find a row that needs freezing\n> > on a data block, we should simply freeze *all* row versions on the\n> > page, not just the ones below the selected cutoff. This is justified\n> > since writing the block is the biggest cost and it doesn't make much\n> > sense to leave a few rows unfrozen on a block that we are dirtying.\n>\n> +1\n\nI'll work on that.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Wed, 18 Nov 2020 17:53:40 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> Patches attached.\n> 1. vacuum_anti_wraparound.v2.patch\n> 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n\nI don't like the use of ANTI_WRAPAROUND as a name for this new option.\nWouldn't it make more sense to call it AGGRESSIVE? Or maybe something\nelse, but I dislike anti-wraparound. It's neither the most aggressive\nthing we can do to prevent wraparound (that's FREEZE), nor is it the\ncase that vacuums without this option (or indeed any options) can't\nhelp prevent wraparound, because the aggressive strategy may be\nchosen anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Nov 2020 12:59:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > Patches attached.\n> > 1. vacuum_anti_wraparound.v2.patch\n> > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n>\n> I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> else, but I dislike anti-wraparound.\n\n-1 for using the term AGGRESSIVE, which seems likely to offend people.\nI'm sure a more descriptive term exists.\n\n> It's neither the most aggressive\n> thing we can do to prevent wraparound (that's FREEZE),\n\nThe new option is not the same thing as the FREEZE option, as discussed above.\n\n> nor is it the\n> case that vacuums without this option (or indeed any options) can't\n> help prevent wraparound, because the aggressive strategy may be\n> chosen anyway.\n\nMaybe.\n\nThe \"aim [is] to move relfrozenxid forwards as quickly as possible\" so\nas to avoid wraparound, so having an unambiguous command that does\nthat is important for usability. It also allows us to rely on the\nuser's explicit intention to optimize vacuum towards that goal.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 19 Nov 2020 11:02:46 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 02:04, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Nov-17, Simon Riggs wrote:\n>\n> > As an additional optimization, if we do find a row that needs freezing\n> > on a data block, we should simply freeze *all* row versions on the\n> > page, not just the ones below the selected cutoff. This is justified\n> > since writing the block is the biggest cost and it doesn't make much\n> > sense to leave a few rows unfrozen on a block that we are dirtying.\n>\n> Yeah. We've had earlier proposals to use high and low watermarks: if any\n> tuple is past the high watermark, then freeze all tuples that are past\n> the low watermark. However this is ancient thinking (prior to\n> HEAP_XMIN_FROZEN) and we don't need the low watermark to be different\n> from zero, since the original xid is retained anyway.\n>\n> So +1 for this idea.\n\nPatch to do this attached, for discussion.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 19 Nov 2020 21:02:43 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 8:02 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > Patches attached.\n> > > 1. vacuum_anti_wraparound.v2.patch\n> > > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n> >\n> > I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> > Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> > else, but I dislike anti-wraparound.\n>\n> -1 for using the term AGGRESSIVE, which seems likely to offend people.\n> I'm sure a more descriptive term exists.\n\nSince we use the term aggressive scan in the docs, I personally don't\nfeel unnatural about that. But since this option also disables index\ncleanup when not enabled explicitly, I’m concerned a bit if user might\nget confused. I came up with some names like FEEZE_FAST and\nFREEZE_MINIMAL but I'm not sure these are better.\n\nBTW if this option also disables index cleanup for faster freezing,\nwhy don't we disable heap truncation as well?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 20 Nov 2020 10:39:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 6:02 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Wed, 18 Nov 2020 at 02:04, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2020-Nov-17, Simon Riggs wrote:\n> >\n> > > As an additional optimization, if we do find a row that needs freezing\n> > > on a data block, we should simply freeze *all* row versions on the\n> > > page, not just the ones below the selected cutoff. This is justified\n> > > since writing the block is the biggest cost and it doesn't make much\n> > > sense to leave a few rows unfrozen on a block that we are dirtying.\n> >\n> > Yeah. We've had earlier proposals to use high and low watermarks: if any\n> > tuple is past the high watermark, then freeze all tuples that are past\n> > the low watermark. However this is ancient thinking (prior to\n> > HEAP_XMIN_FROZEN) and we don't need the low watermark to be different\n> > from zero, since the original xid is retained anyway.\n> >\n> > So +1 for this idea.\n>\n> Patch to do this attached, for discussion.\n\nThank you for the patch!\n\n+ *\n+ * Once we decide to dirty the data block we may as well freeze\n+ * any tuples that are visible to all, since the additional\n+ * cost of freezing multiple tuples is low.\n\nI'm concerned that always freezing all tuples when we're going to make\nthe page dirty would affect the existing vacuum workload much. The\nadditional cost of freezing multiple tuples would be low but if we\nfreeze tuples we would also need to write WAL, which is not negligible\noverhead I guess. In the worst case, if a table has dead tuples on all\npages we process them, but with this patch, in addition to that, we\nwill end up not only freezing all live tuples but also writing\nXLOG_HEAP2_FREEZE_PAGE WAL for all pages. 
So I think it would be\nbetter either to freeze all tuples if we find a tuple that needs to be\nfrozen or to make this behavior work only if the new VACUUM option is\nspecified.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 20 Nov 2020 12:53:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, 20 Nov 2020 at 01:40, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 19, 2020 at 8:02 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > Patches attached.\n> > > > 1. vacuum_anti_wraparound.v2.patch\n> > > > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n> > >\n> > > I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> > > Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> > > else, but I dislike anti-wraparound.\n> >\n> > -1 for using the term AGGRESSIVE, which seems likely to offend people.\n> > I'm sure a more descriptive term exists.\n>\n> Since we use the term aggressive scan in the docs, I personally don't\n> feel unnatural about that. But since this option also disables index\n> cleanup when not enabled explicitly, I’m concerned a bit if user might\n> get confused. I came up with some names like FEEZE_FAST and\n> FREEZE_MINIMAL but I'm not sure these are better.\n\nFREEZE_FAST seems good.\n\n> BTW if this option also disables index cleanup for faster freezing,\n> why don't we disable heap truncation as well?\n\nGood idea\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Fri, 20 Nov 2020 10:15:50 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, 20 Nov 2020 at 03:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> > > So +1 for this idea.\n> >\n> > Patch to do this attached, for discussion.\n>\n> Thank you for the patch!\n>\n> + *\n> + * Once we decide to dirty the data block we may as well freeze\n> + * any tuples that are visible to all, since the additional\n> + * cost of freezing multiple tuples is low.\n>\n> I'm concerned that always freezing all tuples when we're going to make\n> the page dirty would affect the existing vacuum workload much. The\n> additional cost of freezing multiple tuples would be low but if we\n> freeze tuples we would also need to write WAL, which is not negligible\n> overhead I guess. In the worst case, if a table has dead tuples on all\n> pages we process them, but with this patch, in addition to that, we\n> will end up not only freezing all live tuples but also writing\n> XLOG_HEAP2_FREEZE_PAGE WAL for all pages. So I think it would be\n> better either to freeze all tuples if we find a tuple that needs to be\n> frozen or to make this behavior work only if the new VACUUM option is\n> specified.\n\nThe additional cost of freezing is sizeof(xl_heap_freeze_tuple) = 11 bytes\n\nI guess there is some overhead for writing the WAL record itself, the\nonly question is how much. If that is a concern then we definitely\ndon't want to do that only when using FAST_FREEZE, since that would\nslow it down when we want to speed it up.\n\nI've updated the patch to match your proposal, so we can compare. It\nseems a shorter patch.\n\n(This patch is an optimization that is totally independent to the\nother proposals on this thread).\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 20 Nov 2020 11:15:36 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, 20 Nov 2020 at 10:15, Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Fri, 20 Nov 2020 at 01:40, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Nov 19, 2020 at 8:02 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > >\n> > > On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >\n> > > > On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > > Patches attached.\n> > > > > 1. vacuum_anti_wraparound.v2.patch\n> > > > > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n> > > >\n> > > > I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> > > > Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> > > > else, but I dislike anti-wraparound.\n> > >\n> > > -1 for using the term AGGRESSIVE, which seems likely to offend people.\n> > > I'm sure a more descriptive term exists.\n> >\n> > Since we use the term aggressive scan in the docs, I personally don't\n> > feel unnatural about that. But since this option also disables index\n> > cleanup when not enabled explicitly, I’m concerned a bit if user might\n> > get confused. I came up with some names like FEEZE_FAST and\n> > FREEZE_MINIMAL but I'm not sure these are better.\n>\n> FREEZE_FAST seems good.\n>\n> > BTW if this option also disables index cleanup for faster freezing,\n> > why don't we disable heap truncation as well?\n>\n> Good idea\n\nPatch attached, using the name \"FAST_FREEZE\" instead.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 20 Nov 2020 11:47:37 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 2020-Nov-20, Masahiko Sawada wrote:\n\n> I'm concerned that always freezing all tuples when we're going to make\n> the page dirty would affect the existing vacuum workload much. The\n> additional cost of freezing multiple tuples would be low but if we\n> freeze tuples we would also need to write WAL, which is not negligible\n> overhead I guess. In the worst case, if a table has dead tuples on all\n> pages we process them, but with this patch, in addition to that, we\n> will end up not only freezing all live tuples but also writing\n> XLOG_HEAP2_FREEZE_PAGE WAL for all pages. So I think it would be\n> better either to freeze all tuples if we find a tuple that needs to be\n> frozen or to make this behavior work only if the new VACUUM option is\n> specified.\n\nThere are two costs associated with this processing. One is dirtying\nthe page (which means it needs to be written down when evicted), and the\nother is to write WAL records for each change. The cost for the latter\nis going to be the same in both cases (with this change and without)\nbecause the same WAL will have to be written -- the only difference is\n*when* do you pay it. The cost of the former is quite different; with\nSimon's patch we dirty the page once, and without the patch we may dirty\nit several times before it becomes \"stable\" and no more writes are done\nto it.\n\n(If you have tables whose pages change all the time, there would be no\ndifference with or without the patch.)\n\nDirtying the page less times means less full-page images to WAL, too,\nwhich can be significant.\n\n\n",
"msg_date": "Fri, 20 Nov 2020 11:07:52 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, 20 Nov 2020 at 14:07, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Nov-20, Masahiko Sawada wrote:\n>\n> > I'm concerned that always freezing all tuples when we're going to make\n> > the page dirty would affect the existing vacuum workload much. The\n> > additional cost of freezing multiple tuples would be low but if we\n> > freeze tuples we would also need to write WAL, which is not negligible\n> > overhead I guess. In the worst case, if a table has dead tuples on all\n> > pages we process them, but with this patch, in addition to that, we\n> > will end up not only freezing all live tuples but also writing\n> > XLOG_HEAP2_FREEZE_PAGE WAL for all pages. So I think it would be\n> > better either to freeze all tuples if we find a tuple that needs to be\n> > frozen or to make this behavior work only if the new VACUUM option is\n> > specified.\n>\n> There are two costs associated with this processing. One is dirtying\n> the page (which means it needs to be written down when evicted), and the\n> other is to write WAL records for each change. The cost for the latter\n> is going to be the same in both cases (with this change and without)\n> because the same WAL will have to be written -- the only difference is\n> *when* do you pay it. 
The cost of the former is quite different; with\n> Simon's patch we dirty the page once, and without the patch we may dirty\n> it several times before it becomes \"stable\" and no more writes are done\n> to it.\n>\n> (If you have tables whose pages change all the time, there would be no\n> difference with or without the patch.)\n>\n> Dirtying the page less times means less full-page images to WAL, too,\n> which can be significant.\n\nSummary of patch effects:\n\none_freeze_then_max_freeze.v5.patch\ndoesn't increase/decrease the number of pages that are dirtied by\nfreezing, but when a page will be dirtied **by any activity** it\nmaximises the number of rows frozen, so in some cases it may write a\nWAL freeze record when it would not have done previously.\n\none_freeze_then_max_freeze.v6.patch\ndoesn't increase/decrease the number of pages that are dirtied by\nfreezing, but when a page will be dirtied **by freezing only** it\nmaximises the number of rows frozen, so the number of WAL records for\nfreezing is the same, but they may contain more tuples than before and\nwill increase the probability that the page is marked all_frozen.\n\nSo yes, as you say, the net effect will be to reduce the number of\nwrite I/Os in subsequent vacuums required to move forward\nrelfrozenxid.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 20 Nov 2020 14:47:33 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "Note on heap_prepare_freeze_tuple()'s fifth parameter, it's not valid to\npass OldestXmin; you need a multixact limit there, not an Xid limit. I\nthink the return value of GetOldestMultiXactId is a good choice. AFAICS\nthis means that you'll need to add a new output argument to\nvacuum_set_xid_limits (You *could* call GetOldestMultiXactId again, but\nit seems a waste).\n\nMaybe it's time for vacuum_set_xid_limits to have a caller's-stack-\nallocated struct for input and output values, rather than so many\narguments and output pointers to fill.\n\n\n",
"msg_date": "Fri, 20 Nov 2020 12:29:44 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 2020-Nov-20, Simon Riggs wrote:\n\n> On Fri, 20 Nov 2020 at 01:40, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> > Since we use the term aggressive scan in the docs, I personally don't\n> > feel unnatural about that. But since this option also disables index\n> > cleanup when not enabled explicitly, I’m concerned a bit if user might\n> > get confused. I came up with some names like FEEZE_FAST and\n> > FREEZE_MINIMAL but I'm not sure these are better.\n> \n> FREEZE_FAST seems good.\n\nVACUUM (CHILL) ?\n\n\n\n",
"msg_date": "Fri, 20 Nov 2020 12:33:40 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 9:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> There are two costs associated with this processing. One is dirtying\n> the page (which means it needs to be written down when evicted), and the\n> other is to write WAL records for each change. The cost for the latter\n> is going to be the same in both cases (with this change and without)\n> because the same WAL will have to be written -- the only difference is\n> *when* do you pay it. The cost of the former is quite different; with\n> Simon's patch we dirty the page once, and without the patch we may dirty\n> it several times before it becomes \"stable\" and no more writes are done\n> to it.\n>\n> (If you have tables whose pages change all the time, there would be no\n> difference with or without the patch.)\n>\n> Dirtying the page less times means less full-page images to WAL, too,\n> which can be significant.\n\nYeah, I think dirtying the page fewer times is a big win. However, a\npage may have tuples that are not yet all-visible, and we can't freeze\nthose just because we are freezing others.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Nov 2020 10:38:55 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 2020-Nov-20, Robert Haas wrote:\n\n> Yeah, I think dirtying the page fewer times is a big win. However, a\n> page may have tuples that are not yet all-visible, and we can't freeze\n> those just because we are freezing others.\n\nOf course! We should only freeze tuples that are freezable. I thought\nthat was obvious :-)\n\n\n",
"msg_date": "Fri, 20 Nov 2020 13:02:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 11:02 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2020-Nov-20, Robert Haas wrote:\n> > Yeah, I think dirtying the page fewer times is a big win. However, a\n> > page may have tuples that are not yet all-visible, and we can't freeze\n> > those just because we are freezing others.\n>\n> Of course! We should only freeze tuples that are freezable. I thought\n> that was obvious :-)\n\nI didn't mean to imply that anyone in particular thought the contrary.\nIt's just an easy mistake to make when thinking about these kinds of\ntopics. Ask me how I know.\n\nIt's also easy to forget that both xmin and xmax can require freezing,\nand that the time at which you can do one can be different than the\ntime at which you can do the other.\n\nOr at least, I have found it so.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Nov 2020 11:09:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, 20 Nov 2020 at 11:47, Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Fri, 20 Nov 2020 at 10:15, Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Fri, 20 Nov 2020 at 01:40, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 19, 2020 at 8:02 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > >\n> > > > On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > > > Patches attached.\n> > > > > > 1. vacuum_anti_wraparound.v2.patch\n> > > > > > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n> > > > >\n> > > > > I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> > > > > Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> > > > > else, but I dislike anti-wraparound.\n> > > >\n> > > > -1 for using the term AGGRESSIVE, which seems likely to offend people.\n> > > > I'm sure a more descriptive term exists.\n> > >\n> > > Since we use the term aggressive scan in the docs, I personally don't\n> > > feel unnatural about that. But since this option also disables index\n> > > cleanup when not enabled explicitly, I’m concerned a bit if user might\n> > > get confused. I came up with some names like FEEZE_FAST and\n> > > FREEZE_MINIMAL but I'm not sure these are better.\n> >\n> > FREEZE_FAST seems good.\n> >\n> > > BTW if this option also disables index cleanup for faster freezing,\n> > > why don't we disable heap truncation as well?\n> >\n> > Good idea\n>\n> Patch attached, using the name \"FAST_FREEZE\" instead.\n\nAnd the additional patch also.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Sun, 29 Nov 2020 16:42:22 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, 20 Nov 2020 at 15:29, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Note on heap_prepare_freeze_tuple()'s fifth parameter, it's not valid to\n> pass OldestXmin; you need a multixact limit there, not an Xid limit. I\n> think the return value of GetOldestMultiXactId is a good choice. AFAICS\n> this means that you'll need to add a new output argument to\n> vacuum_set_xid_limits (You *could* call GetOldestMultiXactId again, but\n> it seems a waste).\n>\n> Maybe it's time for vacuum_set_xid_limits to have a caller's-stack-\n> allocated struct for input and output values, rather than so many\n> arguments and output pointers to fill.\n\nThe idea was to maximise freezing because we are already dirtying this\ndata block, so the reasoning doesn't extend directly to multixacts.\nBeing more active with multixacts could easily cause more churn there.\n\nSo I'm changing the patch to only work with Xids not both xids and\nmultixacts. If this gets committed we can think more about multixacts\nand whether to optimize freezing them in the same situation or not.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Sun, 29 Nov 2020 16:53:41 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 8:47 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Fri, 20 Nov 2020 at 10:15, Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Fri, 20 Nov 2020 at 01:40, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 19, 2020 at 8:02 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > >\n> > > > On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > > > Patches attached.\n> > > > > > 1. vacuum_anti_wraparound.v2.patch\n> > > > > > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n> > > > >\n> > > > > I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> > > > > Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> > > > > else, but I dislike anti-wraparound.\n> > > >\n> > > > -1 for using the term AGGRESSIVE, which seems likely to offend people.\n> > > > I'm sure a more descriptive term exists.\n> > >\n> > > Since we use the term aggressive scan in the docs, I personally don't\n> > > feel unnatural about that. But since this option also disables index\n> > > cleanup when not enabled explicitly, I’m concerned a bit if user might\n> > > get confused. 
I came up with some names like FEEZE_FAST and\n> > > FREEZE_MINIMAL but I'm not sure these are better.\n> >\n> > FREEZE_FAST seems good.\n> >\n> > > BTW if this option also disables index cleanup for faster freezing,\n> > > why don't we disable heap truncation as well?\n> >\n> > Good idea\n>\n> Patch attached, using the name \"FAST_FREEZE\" instead.\n>\n\nThank you for updating the patch.\n\nHere are some comments on the patch.\n\n----\n- if (params->options & VACOPT_DISABLE_PAGE_SKIPPING)\n+ if (params->options & VACOPT_DISABLE_PAGE_SKIPPING ||\n+ params->options & VACOPT_FAST_FREEZE)\n\nI think we need to update the following comment that is above this\nchange as well:\n\n /*\n * We request an aggressive scan if the table's frozen Xid is now older\n * than or equal to the requested Xid full-table scan limit; or if the\n * table's minimum MultiXactId is older than or equal to the requested\n * mxid full-table scan limit; or if DISABLE_PAGE_SKIPPING was specified.\n */\n\nThis mentions only DISABLE_PAGE_SKIPPING now. Or the second idea is to\nset both params.freeze_table_age and params.multixact_freeze_table_age\nto 0 at ExecVacuum() instead of getting aggressive turned on here.\nConsidering the consistency between FREEZE and FREEZE_FAST, we might\nwant to take the second option.\n\n---\n+ if (fast_freeze &&\n+ params.index_cleanup == VACOPT_TERNARY_DEFAULT)\n+ params.index_cleanup = VACOPT_TERNARY_DISABLED;\n+\n+ if (fast_freeze &&\n+ params.truncate == VACOPT_TERNARY_DEFAULT)\n+ params.truncate = VACOPT_TERNARY_DISABLED;\n+\n+ if (fast_freeze && freeze)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot specify both FREEZE and FAST_FREEZE\noptions on VACUUM\")));\n+\n\nI guess that you disallow enabling both FREEZE and FAST_FREEZE because\nit's contradictory, which makes sense to me. But it seems to me that\nenabling FAST_FREEZE, INDEX_CLEANUP, and TRUNCATE is also\ncontradictory because it will no longer be “fast”. 
The purpose of this\noption to advance relfrozenxid as fast as possible by disabling index\ncleanup, heap truncation etc. Is there any use case where a user wants\nto enable these options (FAST_FREEZE, INDEX_CLEANUP, and TRUNCATE) at\nthe same time? If not, probably it’s better to either disallow it or\nhave FAST_FREEZE overwrites these two settings even if the user\nspecifies them explicitly.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 1 Dec 2020 10:45:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "Hi Simon,\n\nOn Mon, Nov 30, 2020 at 1:53 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Fri, 20 Nov 2020 at 15:29, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Note on heap_prepare_freeze_tuple()'s fifth parameter, it's not valid to\n> > pass OldestXmin; you need a multixact limit there, not an Xid limit. I\n> > think the return value of GetOldestMultiXactId is a good choice. AFAICS\n> > this means that you'll need to add a new output argument to\n> > vacuum_set_xid_limits (You *could* call GetOldestMultiXactId again, but\n> > it seems a waste).\n> >\n> > Maybe it's time for vacuum_set_xid_limits to have a caller's-stack-\n> > allocated struct for input and output values, rather than so many\n> > arguments and output pointers to fill.\n>\n> The idea was to maximise freezing because we are already dirtying this\n> data block, so the reasoning doesn't extend directly to multixacts.\n> Being more active with multixacts could easily cause more churn there.\n>\n> So I'm changing the patch to only work with Xids not both xids and\n> multixacts. If this gets committed we can think more about multixacts\n> and whether to optimize freezing them in the same situation or not.\n>\n\nYou sent in your patch, one_freeze_then_max_freeze.v7.patch to\npgsql-hackers on Nov 30, but you did not post it to the next\nCommitFest[1]. If this was intentional, then you need to take no\naction. However, if you want your patch to be reviewed as part of the\nupcoming CommitFest, then you need to add it yourself before\n2021-01-01 AoE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 29 Dec 2020 16:42:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 10:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Nov 20, 2020 at 8:47 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Fri, 20 Nov 2020 at 10:15, Simon Riggs <simon@2ndquadrant.com> wrote:\n> > >\n> > > On Fri, 20 Nov 2020 at 01:40, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Nov 19, 2020 at 8:02 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > >\n> > > > > On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > >\n> > > > > > On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > > > > Patches attached.\n> > > > > > > 1. vacuum_anti_wraparound.v2.patch\n> > > > > > > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n> > > > > >\n> > > > > > I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> > > > > > Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> > > > > > else, but I dislike anti-wraparound.\n> > > > >\n> > > > > -1 for using the term AGGRESSIVE, which seems likely to offend people.\n> > > > > I'm sure a more descriptive term exists.\n> > > >\n> > > > Since we use the term aggressive scan in the docs, I personally don't\n> > > > feel unnatural about that. But since this option also disables index\n> > > > cleanup when not enabled explicitly, I’m concerned a bit if user might\n> > > > get confused. 
I came up with some names like FEEZE_FAST and\n> > > > FREEZE_MINIMAL but I'm not sure these are better.\n> > >\n> > > FREEZE_FAST seems good.\n> > >\n> > > > BTW if this option also disables index cleanup for faster freezing,\n> > > > why don't we disable heap truncation as well?\n> > >\n> > > Good idea\n> >\n> > Patch attached, using the name \"FAST_FREEZE\" instead.\n> >\n>\n> Thank you for updating the patch.\n>\n> Here are some comments on the patch.\n>\n> ----\n> - if (params->options & VACOPT_DISABLE_PAGE_SKIPPING)\n> + if (params->options & VACOPT_DISABLE_PAGE_SKIPPING ||\n> + params->options & VACOPT_FAST_FREEZE)\n>\n> I think we need to update the following comment that is above this\n> change as well:\n>\n> /*\n> * We request an aggressive scan if the table's frozen Xid is now older\n> * than or equal to the requested Xid full-table scan limit; or if the\n> * table's minimum MultiXactId is older than or equal to the requested\n> * mxid full-table scan limit; or if DISABLE_PAGE_SKIPPING was specified.\n> */\n>\n> This mentions only DISABLE_PAGE_SKIPPING now. Or the second idea is to\n> set both params.freeze_table_age and params.multixact_freeze_table_age\n> to 0 at ExecVacuum() instead of getting aggressive turned on here.\n> Considering the consistency between FREEZE and FREEZE_FAST, we might\n> want to take the second option.\n>\n> ---\n> + if (fast_freeze &&\n> + params.index_cleanup == VACOPT_TERNARY_DEFAULT)\n> + params.index_cleanup = VACOPT_TERNARY_DISABLED;\n> +\n> + if (fast_freeze &&\n> + params.truncate == VACOPT_TERNARY_DEFAULT)\n> + params.truncate = VACOPT_TERNARY_DISABLED;\n> +\n> + if (fast_freeze && freeze)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot specify both FREEZE and FAST_FREEZE\n> options on VACUUM\")));\n> +\n>\n> I guess that you disallow enabling both FREEZE and FAST_FREEZE because\n> it's contradictory, which makes sense to me. 
But it seems to me that\n> enabling FAST_FREEZE, INDEX_CLEANUP, and TRUNCATE is also\n> contradictory because it will no longer be “fast”. The purpose of this\n> option is to advance relfrozenxid as fast as possible by disabling index\n> cleanup, heap truncation etc. Is there any use case where a user wants\n> to enable these options (FAST_FREEZE, INDEX_CLEANUP, and TRUNCATE) at\n> the same time? If not, probably it’s better to either disallow it or\n> have FAST_FREEZE overwrite these two settings even if the user\n> specifies them explicitly.\n>\n\nI sent some review comments a month ago but the patch was marked as\n\"needs review\", which was incorrect. So I think \"waiting on author\" is\na more appropriate state for this patch. I'm switching the patch to\n\"waiting on author\".\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 4 Jan 2021 23:45:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "Hi Simon,\n\nOn Mon, Jan 4, 2021 at 11:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 1, 2020 at 10:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Nov 20, 2020 at 8:47 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > >\n> > > On Fri, 20 Nov 2020 at 10:15, Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > >\n> > > > On Fri, 20 Nov 2020 at 01:40, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Nov 19, 2020 at 8:02 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > > >\n> > > > > > On Wed, 18 Nov 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Wed, Nov 18, 2020 at 12:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > > > > > > > Patches attached.\n> > > > > > > > 1. vacuum_anti_wraparound.v2.patch\n> > > > > > > > 2. vacuumdb_anti_wrap.v1.patch - depends upon (1)\n> > > > > > >\n> > > > > > > I don't like the use of ANTI_WRAPAROUND as a name for this new option.\n> > > > > > > Wouldn't it make more sense to call it AGGRESSIVE? Or maybe something\n> > > > > > > else, but I dislike anti-wraparound.\n> > > > > >\n> > > > > > -1 for using the term AGGRESSIVE, which seems likely to offend people.\n> > > > > > I'm sure a more descriptive term exists.\n> > > > >\n> > > > > Since we use the term aggressive scan in the docs, I personally don't\n> > > > > feel unnatural about that. But since this option also disables index\n> > > > > cleanup when not enabled explicitly, I’m concerned a bit if user might\n> > > > > get confused. 
I came up with some names like FEEZE_FAST and\n> > > > > FREEZE_MINIMAL but I'm not sure these are better.\n> > > >\n> > > > FREEZE_FAST seems good.\n> > > >\n> > > > > BTW if this option also disables index cleanup for faster freezing,\n> > > > > why don't we disable heap truncation as well?\n> > > >\n> > > > Good idea\n> > >\n> > > Patch attached, using the name \"FAST_FREEZE\" instead.\n> > >\n> >\n> > Thank you for updating the patch.\n> >\n> > Here are some comments on the patch.\n> >\n> > ----\n> > - if (params->options & VACOPT_DISABLE_PAGE_SKIPPING)\n> > + if (params->options & VACOPT_DISABLE_PAGE_SKIPPING ||\n> > + params->options & VACOPT_FAST_FREEZE)\n> >\n> > I think we need to update the following comment that is above this\n> > change as well:\n> >\n> > /*\n> > * We request an aggressive scan if the table's frozen Xid is now older\n> > * than or equal to the requested Xid full-table scan limit; or if the\n> > * table's minimum MultiXactId is older than or equal to the requested\n> > * mxid full-table scan limit; or if DISABLE_PAGE_SKIPPING was specified.\n> > */\n> >\n> > This mentions only DISABLE_PAGE_SKIPPING now. 
Or the second idea is to\n> > set both params.freeze_table_age and params.multixact_freeze_table_age\n> > to 0 at ExecVacuum() instead of getting aggressive turned on here.\n> > Considering the consistency between FREEZE and FREEZE_FAST, we might\n> > want to take the second option.\n> >\n> > ---\n> > + if (fast_freeze &&\n> > + params.index_cleanup == VACOPT_TERNARY_DEFAULT)\n> > + params.index_cleanup = VACOPT_TERNARY_DISABLED;\n> > +\n> > + if (fast_freeze &&\n> > + params.truncate == VACOPT_TERNARY_DEFAULT)\n> > + params.truncate = VACOPT_TERNARY_DISABLED;\n> > +\n> > + if (fast_freeze && freeze)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > + errmsg(\"cannot specify both FREEZE and FAST_FREEZE\n> > options on VACUUM\")));\n> > +\n> >\n> > I guess that you disallow enabling both FREEZE and FAST_FREEZE because\n> > it's contradictory, which makes sense to me. But it seems to me that\n> > enabling FAST_FREEZE, INDEX_CLEANUP, and TRUNCATE is also\n> > contradictory because it will no longer be “fast”. The purpose of this\n> > option to advance relfrozenxid as fast as possible by disabling index\n> > cleanup, heap truncation etc. Is there any use case where a user wants\n> > to enable these options (FAST_FREEZE, INDEX_CLEANUP, and TRUNCATE) at\n> > the same time? If not, probably it’s better to either disallow it or\n> > have FAST_FREEZE overwrites these two settings even if the user\n> > specifies them explicitly.\n> >\n>\n> I sent some review comments a month ago but the patch was marked as\n> \"needs review”, which was incorrect So I think \"waiting on author\" is\n> a more appropriate state for this patch. I'm switching the patch as\n> \"waiting on author\".\n>\n\nStatus update for a commitfest entry.\n\nThis entry has been \"Waiting on Author\" status and the patch has not\nbeen updated since Nov 30. Are you still planning to work on this?\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Jan 2021 21:52:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Thu, 28 Jan 2021 at 12:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> This entry has been \"Waiting on Author\" status and the patch has not\n> been updated since Nov 30. Are you still planning to work on this?\n\nYes, new patch version tomorrow. Thanks for the nudge and the review.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 28 Jan 2021 13:33:58 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 1/28/21 2:33 PM, Simon Riggs wrote:\n> On Thu, 28 Jan 2021 at 12:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n>> This entry has been \"Waiting on Author\" status and the patch has not\n>> been updated since Nov 30. Are you still planning to work on this?\n> \n> Yes, new patch version tomorrow. Thanks for the nudge and the review.\n> \n\nSo, is it tomorrow already? ;-)\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Mar 2021 23:16:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Fri, 12 Mar 2021 at 22:16, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 1/28/21 2:33 PM, Simon Riggs wrote:\n> > On Thu, 28 Jan 2021 at 12:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >> This entry has been \"Waiting on Author\" status and the patch has not\n> >> been updated since Nov 30. Are you still planning to work on this?\n> >\n> > Yes, new patch version tomorrow. Thanks for the nudge and the review.\n> >\n>\n> So, is it tomorrow already? ;-)\n\nBeen a long day. ;-)\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 17 Mar 2021 20:50:27 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 3/17/21 4:50 PM, Simon Riggs wrote:\n> On Fri, 12 Mar 2021 at 22:16, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 1/28/21 2:33 PM, Simon Riggs wrote:\n>>> On Thu, 28 Jan 2021 at 12:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>\n>>>> This entry has been \"Waiting on Author\" status and the patch has not\n>>>> been updated since Nov 30. Are you still planning to work on this?\n>>>\n>>> Yes, new patch version tomorrow. Thanks for the nudge and the review.\n>>>\n>>\n>> So, is it tomorrow already? ;-)\n> \n> Been a long day. ;-)\n\nIt has been five months since this patch was updated, so marking \nReturned with Feedback.\n\nPlease resubmit to the next CF when you have a new patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:23:27 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Thu, 8 Apr 2021 at 16:23, David Steele <david@pgmasters.net> wrote:\n>\n> On 3/17/21 4:50 PM, Simon Riggs wrote:\n> > On Fri, 12 Mar 2021 at 22:16, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 1/28/21 2:33 PM, Simon Riggs wrote:\n> >>> On Thu, 28 Jan 2021 at 12:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>\n> >>>> This entry has been \"Waiting on Author\" status and the patch has not\n> >>>> been updated since Nov 30. Are you still planning to work on this?\n> >>>\n> >>> Yes, new patch version tomorrow. Thanks for the nudge and the review.\n> >>>\n> >>\n> >> So, is it tomorrow already? ;-)\n> >\n> > Been a long day. ;-)\n>\n> It has been five months since this patch was updated, so marking\n> Returned with Feedback.\n>\n> Please resubmit to the next CF when you have a new patch.\n\nThere are 2 separate patch-sets on this thread, with separate CF entries.\n\nOne has requested changes that have not been made. I presume this one\nhas been RwF'd.\n\nWhat is happening about the other patch?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 16:41:41 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 4/8/21 11:41 AM, Simon Riggs wrote:\n> On Thu, 8 Apr 2021 at 16:23, David Steele <david@pgmasters.net> wrote:\n>>\n>> On 3/17/21 4:50 PM, Simon Riggs wrote:\n>>> On Fri, 12 Mar 2021 at 22:16, Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> On 1/28/21 2:33 PM, Simon Riggs wrote:\n>>>>> On Thu, 28 Jan 2021 at 12:53, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>>\n>>>>>> This entry has been \"Waiting on Author\" status and the patch has not\n>>>>>> been updated since Nov 30. Are you still planning to work on this?\n>>>>>\n>>>>> Yes, new patch version tomorrow. Thanks for the nudge and the review.\n>>>>>\n>>>>\n>>>> So, is it tomorrow already? ;-)\n>>>\n>>> Been a long day. ;-)\n>>\n>> It has been five months since this patch was updated, so marking\n>> Returned with Feedback.\n>>\n>> Please resubmit to the next CF when you have a new patch.\n> \n> There are 2 separate patch-sets on this thread, with separate CF entries.\n> \n> One has requested changes that have not been made. I presume this one\n> has been RwF'd.\n> \n> What is happening about the other patch?\n\nHmmm, well https://commitfest.postgresql.org/32/2908 and \nhttps://commitfest.postgresql.org/32/2909 are both linked to the same \nthread with the same patch, so I thought it was a duplicate.\n\nIt's not clear to me which patch is which, so perhaps move one CF entry \nto next CF and clarify which patch is current?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:58:19 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Thu, 8 Apr 2021 at 16:58, David Steele <david@pgmasters.net> wrote:\n\n> >> It has been five months since this patch was updated, so marking\n> >> Returned with Feedback.\n> >>\n> >> Please resubmit to the next CF when you have a new patch.\n> >\n> > There are 2 separate patch-sets on this thread, with separate CF entries.\n> >\n> > One has requested changes that have not been made. I presume this one\n> > has been RwF'd.\n> >\n> > What is happening about the other patch?\n>\n> Hmmm, well https://commitfest.postgresql.org/32/2908 and\n> https://commitfest.postgresql.org/32/2909 are both linked to the same\n> thread with the same patch, so I thought it was a duplicate.\n\nNobody had mentioned any confusion. Multiple patches on one thread is common.\n\n> It's not clear to me which patch is which, so perhaps move one CF entry\n> to next CF and clarify which patch is current?\n\nEntry: Maximize page freezing\nhas this patch, perfectly fine, awaiting review from Alvaro since 29 Nov.\none_freeze_then_max_freeze.v7.patch\n\nOther patch is awaiting changes.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 17:29:25 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 4/8/21 12:29 PM, Simon Riggs wrote:\n> On Thu, 8 Apr 2021 at 16:58, David Steele <david@pgmasters.net> wrote:\n> \n>>>> It has been five months since this patch was updated, so marking\n>>>> Returned with Feedback.\n>>>>\n>>>> Please resubmit to the next CF when you have a new patch.\n>>>\n>>> There are 2 separate patch-sets on this thread, with separate CF entries.\n>>>\n>>> One has requested changes that have not been made. I presume this one\n>>> has been RwF'd.\n>>>\n>>> What is happening about the other patch?\n>>\n>> Hmmm, well https://commitfest.postgresql.org/32/2908 and\n>> https://commitfest.postgresql.org/32/2909 are both linked to the same\n>> thread with the same patch, so I thought it was a duplicate.\n> \n> Nobody had mentioned any confusion. Multiple patches on one thread is common.\n\nYes, but these days they generally get updated in the same reply so the \ncfbot can test them all. In your case only the latest patch was being \ntested and it was not applying cleanly.\n\n>> It's not clear to me which patch is which, so perhaps move one CF entry\n>> to next CF and clarify which patch is current?\n> \n> Entry: Maximize page freezing\n> has this patch, perfectly fine, awaiting review from Alvaro since 29 Nov.\n> one_freeze_then_max_freeze.v7.patch\n\nThat CF entry was marked Waiting for Author on 3/11, so I thought it was \nfor this patch. In fact, both CF entries were Waiting for Author for \nsome time.\n\nMoved to the next CF in Waiting for Author state.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:44:39 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Thu, 8 Apr 2021 at 17:44, David Steele <david@pgmasters.net> wrote:\n>\n> On 4/8/21 12:29 PM, Simon Riggs wrote:\n> > On Thu, 8 Apr 2021 at 16:58, David Steele <david@pgmasters.net> wrote:\n> >\n> >>>> It has been five months since this patch was updated, so marking\n> >>>> Returned with Feedback.\n> >>>>\n> >>>> Please resubmit to the next CF when you have a new patch.\n> >>>\n> >>> There are 2 separate patch-sets on this thread, with separate CF entries.\n> >>>\n> >>> One has requested changes that have not been made. I presume this one\n> >>> has been RwF'd.\n> >>>\n> >>> What is happening about the other patch?\n> >>\n> >> Hmmm, well https://commitfest.postgresql.org/32/2908 and\n> >> https://commitfest.postgresql.org/32/2909 are both linked to the same\n> >> thread with the same patch, so I thought it was a duplicate.\n> >\n> > Nobody had mentioned any confusion. Multiple patches on one thread is common.\n>\n> Yes, but these days they generally get updated in the same reply so the\n> cfbot can test them all. In your case only the latest patch was being\n> tested and it was not applying cleanly.\n>\n> >> It's not clear to me which patch is which, so perhaps move one CF entry\n> >> to next CF and clarify which patch is current?\n> >\n> > Entry: Maximize page freezing\n> > has this patch, perfectly fine, awaiting review from Alvaro since 29 Nov.\n> > one_freeze_then_max_freeze.v7.patch\n>\n> That CF entry was marked Waiting for Author on 3/11, so I thought it was\n> for this patch. In fact, both CF entries were Waiting for Author for\n> some time.\n\nThat was done by someone that hadn't mentioned it to me, or the list.\n\nIt is not Waiting on Author.\n\n> Moved to the next CF in Waiting for Author state.\n\nThat is clearly an incorrect state.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 17:49:46 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On 2021-Apr-08, Simon Riggs wrote:\n\n> On Thu, 8 Apr 2021 at 16:58, David Steele <david@pgmasters.net> wrote:\n\n> > It's not clear to me which patch is which, so perhaps move one CF entry\n> > to next CF and clarify which patch is current?\n> \n> Entry: Maximize page freezing\n> has this patch, perfectly fine, awaiting review from Alvaro since 29 Nov.\n> one_freeze_then_max_freeze.v7.patch\n\nOh dear, was this waiting on me? It was not in my to-do, so I didn't\npay close attention.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Thu, 8 Apr 2021 13:15:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Thu, 8 Apr 2021 at 18:15, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-08, Simon Riggs wrote:\n>\n> > On Thu, 8 Apr 2021 at 16:58, David Steele <david@pgmasters.net> wrote:\n>\n> > > It's not clear to me which patch is which, so perhaps move one CF entry\n> > > to next CF and clarify which patch is current?\n> >\n> > Entry: Maximize page freezing\n> > has this patch, perfectly fine, awaiting review from Alvaro since 29 Nov.\n> > one_freeze_then_max_freeze.v7.patch\n>\n> Oh dear, was this waiting on me? It was not in my to-do, so I didn't\n> pay close attention.\n\nNo problem, if I felt the idea deserved priority attention I would\nhave pinged you.\n\nI think I'll open a new thread later to allow it to be discussed\nwithout confusion.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 19:10:09 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 11:40 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Thu, 8 Apr 2021 at 18:15, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Apr-08, Simon Riggs wrote:\n> >\n> > > On Thu, 8 Apr 2021 at 16:58, David Steele <david@pgmasters.net> wrote:\n> >\n> > > > It's not clear to me which patch is which, so perhaps move one CF entry\n> > > > to next CF and clarify which patch is current?\n> > >\n> > > Entry: Maximize page freezing\n> > > has this patch, perfectly fine, awaiting review from Alvaro since 29 Nov.\n> > > one_freeze_then_max_freeze.v7.patch\n> >\n> > Oh dear, was this waiting on me? It was not in my to-do, so I didn't\n> > pay close attention.\n>\n> No problem, if I felt the idea deserved priority attention I would\n> have pinged you.\n>\n> I think I'll open a new thread later to allow it to be discussed\n> without confusion.\n\nThe patch no longer applies on the head, please post a rebased patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 21:52:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
},
{
"msg_contents": "On Wed, 14 Jul 2021 at 17:22, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Apr 8, 2021 at 11:40 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Thu, 8 Apr 2021 at 18:15, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Apr-08, Simon Riggs wrote:\n> > >\n> > > > On Thu, 8 Apr 2021 at 16:58, David Steele <david@pgmasters.net> wrote:\n> > >\n> > > > > It's not clear to me which patch is which, so perhaps move one CF entry\n> > > > > to next CF and clarify which patch is current?\n> > > >\n> > > > Entry: Maximize page freezing\n> > > > has this patch, perfectly fine, awaiting review from Alvaro since 29 Nov.\n> > > > one_freeze_then_max_freeze.v7.patch\n> > >\n> > > Oh dear, was this waiting on me? It was not in my to-do, so I didn't\n> > > pay close attention.\n> >\n> > No problem, if I felt the idea deserved priority attention I would\n> > have pinged you.\n> >\n> > I think I'll open a new thread later to allow it to be discussed\n> > without confusion.\n>\n> The patch no longer applies on the head, please post a rebased patch.\n\nI've withdrawn this patch/thread to avoid further confusion.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 08:00:08 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM (DISABLE_PAGE_SKIPPING on)"
}
]
[
{
"msg_contents": "Hi Hackers,\n\nWhile investigating a possible file-descriptor issue, I enabled the\nFDDEBUG code in src/backend/storage/file/fd.c, only to find that it\nresulted in a crash as soon as it was invoked.\nIt turns out the crash was in some logging code that was being passed\na NULL fileName.\nI've attached a small patch that corrects the issue.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia",
"msg_date": "Tue, 17 Nov 2020 11:11:22 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": true,
"msg_subject": "Crash in virtual file descriptor FDDEBUG code"
},
{
"msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> While investigating a possible file-descriptor issue, I enabled the\n> FDDEBUG code in src/backend/storage/file/fd.c, only to find that it\n> resulted in a crash as soon as it was invoked.\n\nYeah, I can imagine that code doesn't get tested often.\n\n> I've attached a small patch that corrects the issue.\n\nThanks, will push.\n\nBTW, while looking at this I started to cast an unfriendly eye on the\n_dump_lru() debug function that's hidden under that symbol. That seems\nlike it's spending a lot of cycles to give you information that will\nvery possibly be incomplete (since its 2K buffer would fill up fast).\nAnd it could very easily turn into an infinite loop if there were\nanything actually wrong with the LRU chain. So I'm not seeing where\nthe cost-benefit ratio is there.\n\nMaybe we should change it to just print the first and last entries\nand the number of entries --- which it could count using a loop that\nknows there shouldn't be more entries than the size of the VFD\narray, so as to prevent infinite-looping if the chain is corrupt.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Nov 2020 19:47:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Crash in virtual file descriptor FDDEBUG code"
}
]
[
{
"msg_contents": "Two recent patches that I have reviewed have reminded me that the \nResourceOwner interface is a bit clunky. There are two issues:\n\n1. Performance. It's fast enough in most use, but when I was testing \nKyotaro's catcache patches [1], the Resowner functions to track catcache \nreferences nevertheless showed up, accounting for about 20% of the CPU \ntime used by catcache lookups.\n\n2. It's difficult for extensions to use. There is a callback mechanism \nfor extensions, but it's much less convenient to use than the built-in \nfunctions. The pgcrypto code uses the callbacks currently, and Michael's \npatch [2] would move that support for tracking OpenSSL contexts to the \ncore, which makes it a lot more convenient for pgcrypto. Wouldn't it be \nnice if extensions could have the same ergonomics as built-in code, if \nthey need to track external resources?\n\nAttached patch refactors the ResourceOwner internals to do that.\n\nThe current code in HEAD has a separate array for each \"kind\" of object \nthat can be tracked. The number of different kinds of objects has grown \nover the years, currently there is support for tracking buffers, files, \ncatcache, relcache and plancache references, tupledescs, snapshots, DSM \nsegments and LLVM JIT contexts. And locks, which are a bit special.\n\nIn the patch, I'm using the same array to hold all kinds of objects, and \ncarry another field with each tracked object, to tell what kind of an \nobject it is. An extension can define a new object kind, by defining \nRelease and PrintLeakWarning callback functions for it. The code in \nresowner.c is now much more generic, as it doesn't need to know about \nall the different object kinds anymore (with a couple of exceptions). 
In \nthe new scheme, there is one small fixed-size array to hold a few most \nrecently-added references, that is linear-searched, and older entries \nare moved to a hash table.\n\nI haven't done thorough performance testing of this, but with some quick \ntesting with Kyotaro's \"catcachebench\" to stress-test syscache lookups, \nthis performs a little better. In that test, all the activity happens in \nthe small array portion, but I believe that's true for most use cases.\n\nThoughts? Can anyone suggest test scenarios to verify the performance of \nthis?\n\n[1] \nhttps://www.postgresql.org/message-id/20201106.172958.1103727352904717607.horikyota.ntt%40gmail.com\n\n[2] https://www.postgresql.org/message-id/20201113031429.GB1631@paquier.xyz\n\n- Heikki",
"msg_date": "Tue, 17 Nov 2020 16:21:29 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "ResourceOwner refactoring"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 04:21:29PM +0200, Heikki Linnakangas wrote:\n> 2. It's difficult for extensions to use. There is a callback mechanism for\n> extensions, but it's much less convenient to use than the built-in\n> functions. The pgcrypto code uses the callbacks currently, and Michael's\n> patch [2] would move that support for tracking OpenSSL contexts to the core,\n> which makes it a lot more convenient for pgcrypto. Wouldn't it be nice if\n> extensions could have the same ergonomics as built-in code, if they need to\n> track external resources?\n\n+1. True that the current interface is a bit hard to grasp, one can\neasily miss that showing reference leaks is very important if an\nallocation happens out of the in-core palloc machinery at commit time.\n(The patch you are referring to is not really popular, but that does\nnot prevent this thread to move on on its own.)\n\n> Attached patch refactors the ResourceOwner internals to do that.\n\n+ * Size of the small fixed-size array to hold most-recently remembered resources.\n */\n-#define RESARRAY_INIT_SIZE 16\n+#define RESOWNER_ARRAY_SIZE 8\nWhy did you choose this size for the initial array?\n\n+extern const char *ResourceOwnerGetName(ResourceOwner owner);\nThis is only used in resowner.c, at one place.\n\n@@ -140,7 +139,6 @@ jit_release_context(JitContext *context)\n if (provider_successfully_loaded)\n provider.release_context(context);\n\n- ResourceOwnerForgetJIT(context->resowner, PointerGetDatum(context));\n[...]\n+ ResourceOwnerForget(context->resowner, PointerGetDatum(context), &jit_funcs);\nThis moves the JIT context release from jit.c to llvm.c. I think\nthat's indeed more consistent, and correct. It would be better to\ncheck this one with Andres, though.\n\n> I haven't done thorough performance testing of this, but with some quick\n> testing with Kyotaro's \"catcachebench\" to stress-test syscache lookups, this\n> performs a little better. 
In that test, all the activity happens in the\n> small array portion, but I believe that's true for most use cases.\n\n\n\n> Thoughts? Can anyone suggest test scenarios to verify the performance of\n> this?\n\nPlaying with catcache.c is the first thing that came to my mind.\nAfter that some micro-benchmarking with DSM or snapshots? I am not\nsure if we would notice a difference in a real-life scenario, say even\na pgbench -S with prepared queries.\n--\nMichael",
"msg_date": "Wed, 18 Nov 2020 17:06:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 18/11/2020 10:06, Michael Paquier wrote:\n> On Tue, Nov 17, 2020 at 04:21:29PM +0200, Heikki Linnakangas wrote:\n>> Attached patch refactors the ResourceOwner internals to do that.\n> \n> + * Size of the small fixed-size array to hold most-recently remembered resources.\n> */\n> -#define RESARRAY_INIT_SIZE 16\n> +#define RESOWNER_ARRAY_SIZE 8\n> Why did you choose this size for the initial array?\n\nJust a guess. The old init size was 16 Datums. The entries in the new \narray are twice as large, Datum+pointer.\n\nThe \"RESOWNER_STATS\" #ifdef blocks can be enabled to check how many \nlookups fit in the array. With pgbench, RESOWNER_ARRAY_SIZE 8:\n\nRESOWNER STATS: lookups: array 235, hash 32\n\nIf RESOWNER_ARRAY_SIZE is increased to 16, all the lookups fit in the \narray. But I haven't done any benchmarking to see which is faster.\n\nBTW, I think there would be an easy win in the hashing codepath, by \nchanging to a cheaper hash function. Currently, with or without this \npatch, we use hash_any(). Changing that to murmurhash32() or something \nsimilar would be a drop-in replacement.\n\n- Heikki",
"msg_date": "Wed, 18 Nov 2020 10:50:08 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 10:50:08AM +0200, Heikki Linnakangas wrote:\n> If RESOWNER_ARRAY_STATS is increased to 16, all the lookups fit in the\n> array. But I haven't done any benchmarking to see which is faster.\n\nMy gut tells me that your guess is right, but it would be better to be\nsure.\n\n> BTW, I think there would be an easy win in the hashing codepath, by changing\n> to a cheaper hash function. Currently, with or without this patch, we use\n> hash_any(). Changing that to murmurhash32() or something similar would be a\n> drop-in replacement.\n\nGood idea.\n--\nMichael",
"msg_date": "Thu, 19 Nov 2020 11:16:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 10:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Nov 18, 2020 at 10:50:08AM +0200, Heikki Linnakangas wrote:\n> > If RESOWNER_ARRAY_STATS is increased to 16, all the lookups fit in the\n> > array. But I haven't done any benchmarking to see which is faster.\n>\n> My gut tells me that your guess is right, but it would be better to be\n> sure.\n>\n> > BTW, I think there would be an easy win in the hashing codepath, by changing\n> > to a cheaper hash function. Currently, with or without this patch, we use\n> > hash_any(). Changing that to murmurhash32() or something similar would be a\n> > drop-in replacement.\n>\n> Good idea.\n\n+1, and +1 for this refactoring.\n\nI just saw a minor issue in a comment while reviewing the patch:\n\n[...]\n+ /* Is there space in the hash? If not, enlarge it. */\n\n /* Double the capacity of the array (capacity must stay a power of 2!) */\n- newcap = (oldcap > 0) ? oldcap * 2 : RESARRAY_INIT_SIZE;\n[...]\n\nThe existing comment is kept as-is, but it should mention that it's\nnow the hash capacity that is increased.\n\n\n",
"msg_date": "Thu, 19 Nov 2020 17:27:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "[off-list for now]\n\nOn Tue, Nov 17, 2020 at 10:21 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n>\n> 2. It's difficult for extensions to use. There is a callback mechanism\n> for extensions, but it's much less convenient to use than the built-in\n> functions. The pgcrypto code uses the callbacks currently, and Michael's\n> patch [2] would move that support for tracking OpenSSL contexts to the\n> core, which makes it a lot more convenient for pgcrypto. Wouldn't it be\n> nice if extensions could have the same ergonomics as built-in code, if\n> they need to track external resources?\n>\n\nYes, in general I'm a big fan of making life easier for extensions (see\npostscript), and this looks helpful. It's certainly fairly clear to read.\nI'm in-principle in favour, though I haven't read the old resowner code\nclosely enough to have a strong opinion.\n\nI haven't done thorough performance testing of this, but with some quick\n> testing with Kyotaro's \"catcachebench\" to stress-test syscache lookups,\n> this performs a little better. In that test, all the activity happens in\n> the small array portion, but I believe that's true for most use cases.\n>\n> Thoughts? Can anyone suggest test scenarios to verify the performance of\n> this?\n>\n\nI didn't think of much really.\n\nI have tossed together a patch on top of yours that adds some\nsystemtap/dtrace tracepoints and provides a demo systemtap script that\nshows some basic stats collection done using them. My intention was to make\nit easier to see where the work in the resource owner code was being done.\nBut after I did the initial version I realised that in this case it\nprobably doesn't add a lot of value over using targeted profiling of only\nfunctions in resowner.c and their callees which is easy enough using perf\nand regular DWARF debuginfo. So I didn't land up polishing it and adding\nstats for the other counters. It's attached anyway, in case it's\ninteresting or useful to anyone.\n\nP.S. 
At some point I want to tackle some things similar to what you've done\nhere for other kinds of backend resources. In particular, to allow\nextensions to register their own heavyweight lock types - with the ability\nto handle lock timeouts, and add callbacks in the deadlock detector to\nallow extensions to register dependency edges that are not natively visible\nto the deadlock detector and to choose victim priorities. Also some\nenhancements to how pg_depend tracking can be used by extensions. And I\nwant to help get the proposed custom ProcSignal changes in too. I think\nthere's a whole lot to be done to make extensions easier and safer to\nauthor in general, like providing a simple way to do error trapping and\nrecovery in an extension mainloop without copy/pasting a bunch of\nPostgresMain code, better default signal handlers, startup/shutdown that\nshares more with user backends, etc. Right now it's quite tricky to get\nbgworkers to behave well. </wildhandwaving>",
"msg_date": "Thu, 19 Nov 2020 18:40:19 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 06:40:19PM +0800, Craig Ringer wrote:\n> [off-list for now]\n\nLooks like you have missed something here.\n--\nMichael",
"msg_date": "Thu, 26 Nov 2020 16:29:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "At Thu, 19 Nov 2020 17:27:18 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Thu, Nov 19, 2020 at 10:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Nov 18, 2020 at 10:50:08AM +0200, Heikki Linnakangas wrote:\n> > > If RESOWNER_ARRAY_STATS is increased to 16, all the lookups fit in the\n> > > array. But I haven't done any benchmarking to see which is faster.\n> >\n> > My gut tells me that your guess is right, but it would be better to be\n> > sure.\n> >\n> > > BTW, I think there would be an easy win in the hashing codepath, by changing\n> > > to a cheaper hash function. Currently, with or without this patch, we use\n> > > hash_any(). Changing that to murmurhash32() or something similar would be a\n> > > drop-in replacement.\n> >\n> > Good idea.\n> \n> +1, and +1 for this refactoring.\n\n+1 for making the interface more generic. I thought a similar thing\nwhen I added an resowner array for WaitEventSet (not committed yet).\nAbout performance, though I'm not sure about, there's no reason not to\ndo this as far as the resowner mechanism doesn't get slower, and +2 if\ngets faster.\n\n> I just saw a minor issue in a comment while reviewing the patch:\n> \n> [...]\n> + /* Is there space in the hash? If not, enlarge it. */\n> \n> /* Double the capacity of the array (capacity must stay a power of 2!) */\n> - newcap = (oldcap > 0) ? oldcap * 2 : RESARRAY_INIT_SIZE;\n> [...]\n> \n> The existing comment is kept as-is, but it should mention that it's\n> now the hash capacity that is increased.\n\n+\t\t\t/* And release old array. 
*/\n+\t\t\tpfree(oldhash);\n\n:p\n\n\n+\t\tfor (int i = 0; i < owner->narr; i++)\n \t\t{\n+\t\t\tif (owner->arr[i].kind->phase == phase)\n+\t\t\t{\n+\t\t\t\tDatum\t\tvalue = owner->arr[i].item;\n+\t\t\t\tResourceOwnerFuncs *kind = owner->arr[i].kind;\n+\n+\t\t\t\tif (printLeakWarnings)\n+\t\t\t\t\tkind->PrintLeakWarning(value);\n+\t\t\t\tkind->ReleaseResource(value);\n+\t\t\t\tfound = true;\n+\t\t\t}\n+\t\t/*\n+\t\t * If any resources were released, check again because some of the\n+\t\t * elements might have moved by the callbacks. We don't want to\n+\t\t * miss them.\n+\t\t */\n+\t} while (found && owner->narr > 0);\n\nCoundn't that missing be avoided by just not incrementing i if found?\n\n\n+\t\t/*\n+\t\t * Like with the array, we must check again after we reach the\n+\t\t * end, if any callbacks were called. XXX: We could probably\n+\t\t * stipulate that the callbacks may not do certain thing, like\n+\t\t * remember more references in the same resource owner, to avoid\n+\t\t * that.\n+\t\t */\n\nIf I read this patch correctly, ResourceOwnerForget doesn't seem to do\nsuch a thing for hash?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 26 Nov 2020 17:52:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 26/11/2020 10:52, Kyotaro Horiguchi wrote:\n> +\t\tfor (int i = 0; i < owner->narr; i++)\n> \t\t{\n> +\t\t\tif (owner->arr[i].kind->phase == phase)\n> +\t\t\t{\n> +\t\t\t\tDatum\t\tvalue = owner->arr[i].item;\n> +\t\t\t\tResourceOwnerFuncs *kind = owner->arr[i].kind;\n> +\n> +\t\t\t\tif (printLeakWarnings)\n> +\t\t\t\t\tkind->PrintLeakWarning(value);\n> +\t\t\t\tkind->ReleaseResource(value);\n> +\t\t\t\tfound = true;\n> +\t\t\t}\n> +\t\t/*\n> +\t\t * If any resources were released, check again because some of the\n> +\t\t * elements might have moved by the callbacks. We don't want to\n> +\t\t * miss them.\n> +\t\t */\n> +\t} while (found && owner->narr > 0);\n> \n> Coundn't that missing be avoided by just not incrementing i if found?\n\nHmm, perhaps. ResourceOwnerForget() can move an entry from the end of \nthe array to the beginning, but if it's removing the entry that we're \njust looking at, it probably can't move anything before that entry. I'm \nnot very comfortable with that, though. What if the callback releases \nsomething else as a side effect?\n\nThis isn't super-critical for performance, and given that the array is \nvery small, it's very cheap to loop through it. So I'm inclined to keep \nit safe.\n\n> +\t\t/*\n> +\t\t * Like with the array, we must check again after we reach the\n> +\t\t * end, if any callbacks were called. XXX: We could probably\n> +\t\t * stipulate that the callbacks may not do certain thing, like\n> +\t\t * remember more references in the same resource owner, to avoid\n> +\t\t * that.\n> +\t\t */\n> \n> If I read this patch correctly, ResourceOwnerForget doesn't seem to do\n> such a thing for hash?\n\nHmm, true. I tried removing the loop and hit another issue, however: if \nthe same resource has been remembered twice in the same resource owner, \nthe callback might remove different reference to it than what we're \nlooking at. So we need to loop anyway to handle that. 
Also, what if the \ncallback remembers some other resource in the same resowner, causing the \nhash to grow? I'm not sure if any of the callbacks currently do that, \nbut better safe than sorry. I updated the code and the comments accordingly.\n\n\nHere's an updated version of this patch. I fixed the bit rot, and \naddressed all the other comment issues and such that people pointed out \n(thanks!).\n\nTODO:\n\n1. Performance testing. We discussed this a little bit, but I haven't \ndone any more testing.\n\n2. Before this patch, the ResourceOwnerEnlarge() calls enlarged the \narray for the particular \"kind\" of resource, but now they are all stored \nin the same structure. That could lead to trouble if we do something \nlike this:\n\nResourceOwnerEnlargeAAA()\nResourceOwnerEnlargeBBB()\nResourceOwnerRememberAAA()\nResourceOwnerRememberBBB()\n\nPreviously, this was OK, because resources AAA and BBB were kept in \nseparate arrays. But after this patch, it's not guaranteed that the \nResourceOwnerRememberBBB() will find an empty slot.\n\nI don't think we do that currently, but to be sure, I'm planning to grep \nResourceOwnerRemember and look at each call carefully. And perhaps we \ncan add an assertion for this, although I'm not sure where.\n\n- Heikki",
"msg_date": "Wed, 16 Dec 2020 17:46:33 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Dear Heikki, \r\n\r\nI'm also interested in this patch, but it cannot be applied to the current HEAD...\r\n\r\n$ git apply ~/v2-0001-Make-resowners-more-easily-extensible.patch\r\nerror: patch failed: src/common/cryptohash_openssl.c:57\r\nerror: src/common/cryptohash_openssl.c: patch does not apply\r\nerror: patch failed: src/include/utils/resowner_private.h:1\r\nerror: src/include/utils/resowner_private.h: patch does not apply\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 13 Jan 2021 01:55:15 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: ResourceOwner refactoring"
},
{
"msg_contents": "On 13/01/2021 03:55, kuroda.hayato@fujitsu.com wrote:\n> Dear Heikki,\n> \n> I'm also interested in this patch, but it cannot be applied to the current HEAD...\n> \n> $ git apply ~/v2-0001-Make-resowners-more-easily-extensible.patch\n> error: patch failed: src/common/cryptohash_openssl.c:57\n> error: src/common/cryptohash_openssl.c: patch does not apply\n> error: patch failed: src/include/utils/resowner_private.h:1\n> error: src/include/utils/resowner_private.h: patch does not apply\n\nHere's a rebased version. Thanks!\n\n- Heikki",
"msg_date": "Wed, 13 Jan 2021 09:18:57 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 09:18:57AM +0200, Heikki Linnakangas wrote:\n> --- a/src/common/cryptohash_openssl.c\n> +++ b/src/common/cryptohash_openssl.c\n> +static ResourceOwnerFuncs cryptohash_funcs =\n> +{\n> +\t/* relcache references */\n> +\t.name = \"LLVM JIT context\",\n> +\t.phase = RESOURCE_RELEASE_BEFORE_LOCKS,\n> +\t.ReleaseResource = ResOwnerReleaseCryptoHash,\n> +\t.PrintLeakWarning = ResOwnerPrintCryptoHashLeakWarning,\n> +};\n> +#endif\n\nLooks like a copy-paste error here.\n--\nMichael",
"msg_date": "Wed, 13 Jan 2021 16:22:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Dear Heikki,\r\n\r\nThank you for rebasing it, I confirmed it can be applied.\r\nI will check the source.\r\n\r\nNow I put the very elementary comment.\r\nResourceOwnerEnlarge(), ResourceOwnerRemember(), and ResourceOwnerForget()\r\nare exported routines.\r\nThey should put below L418.\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 13 Jan 2021 07:39:38 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\r\n\r\nI put some comments.\r\n\r\nThroughout, some components don’t have helper functions.\r\n(e.g. catcache has ResOwnerReleaseCatCache, but tupdesc doesn't.)\r\nI think it should be unified.\r\n\r\n[catcache.c]\r\n> +/* support for catcache refcount management */\r\n> +static inline void\r\n> +ResourceOwnerEnlargeCatCacheRefs(ResourceOwner owner)\r\n> +{\r\n> + ResourceOwnerEnlarge(owner);\r\n> +}\r\n\r\nThis function is not needed.\r\n\r\n[syscache.c]\r\n> -static CatCache *SysCache[SysCacheSize];\r\n> + CatCache *SysCache[SysCacheSize];\r\n\r\nIs it right? Compilation is done even if this variable is static...\r\n\r\n[fd.c, dsm.c]\r\nIn these files helper functions are implemented as the define directive.\r\nCould you explain the reason? For the performance?\r\n\r\n> Previously, this was OK, because resources AAA and BBB were kept in \r\n> separate arrays. But after this patch, it's not guaranteed that the \r\n> ResourceOwnerRememberBBB() will find an empty slot.\r\n>\r\n> I don't think we do that currently, but to be sure, I'm planning to grep \r\n> ResourceOwnerRemember and look at each call carefullly. And perhaps we \r\n> can add an assertion for this, although I'm not sure where.\r\n\r\nIndeed, but I think this line works well, isn't it?\r\n\r\n> \tAssert(owner->narr < RESOWNER_ARRAY_SIZE);\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 14 Jan 2021 10:15:35 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: ResourceOwner refactoring"
},
{
"msg_contents": "On 14/01/2021 12:15, kuroda.hayato@fujitsu.com wrote:\n> I put some comments.\n\nThanks for the review!\n\n> Throughout, some components don’t have helper functions.\n> (e.g. catcache has ResOwnerReleaseCatCache, but tupdesc doesn't.)\n> I think it should be unified.\n\nHmm. ResOwnerReleaseTupleDesc() does exist, those functions are needed \nfor the callbacks. I think you meant the wrappers around \nResourceOwnerRemember and ResourceOwnerForget, like \nResourceOwnerRememberCatCacheRef(). I admit those are not fully \nconsistent: I didn't bother with the wrapper functions when there is \nonly one caller.\n\n> [catcache.c]\n>> +/* support for catcache refcount management */\n>> +static inline void\n>> +ResourceOwnerEnlargeCatCacheRefs(ResourceOwner owner)\n>> +{\n>> + ResourceOwnerEnlarge(owner);\n>> +}\n> \n> This function is not needed.\n\nRemoved.\n\n> [syscache.c]\n>> -static CatCache *SysCache[SysCacheSize];\n>> + CatCache *SysCache[SysCacheSize];\n> \n> Is it right? Compilation is done even if this variable is static...\n\nFixed. (It was a leftover from when I was playing with Kyotaro's \n\"catcachebench\" tool from another thread).\n\n> [fd.c, dsm.c]\n> In these files helper functions are implemented as the define directive.\n> Could you explain the reason? For the performance?\n\nNo particular reason. I turned them all into macros for consistency.\n\n>> Previously, this was OK, because resources AAA and BBB were kept in\n>> separate arrays. But after this patch, it's not guaranteed that the\n>> ResourceOwnerRememberBBB() will find an empty slot.\n>>\n>> I don't think we do that currently, but to be sure, I'm planning to grep\n>> ResourceOwnerRemember and look at each call carefullly. 
And perhaps we\n>> can add an assertion for this, although I'm not sure where.\n> \n> Indeed, but I think this line works well, isn't it?\n> \n>> \tAssert(owner->narr < RESOWNER_ARRAY_SIZE);\n\nThat catches cases where you actually overrun the array, but it doesn't \ncatch unsafe patterns when there happens to be enough space left in the \narray. For example, if you have code like this:\n\n/* Make sure there's room for one more entry, but remember *two* things */\nResourceOwnerEnlarge();\nResourceOwnerRemember(foo);\nResourceOwnerRemember(bar);\n\nThat is not safe, but it would only fail the assertion if the first \nResourceOwnerRemember() call happens to consume the last remaining slot \nin the array.\n\n\nI checked all the callers of ResourceOwnerEnlarge() to see if they're \nsafe. A couple of places seemed a bit suspicious. I fixed them by moving \nthe ResourceOwnerEnlarge() calls closer to the ResourceOwnerRemember() \ncalls, so that it's now easier to see that they are correct. See first \nattached patch.\n\nThe second patch is an updated version of the main patch, fixing all the \nlittle things you and Michael pointed out since the last patch version.\n\nI've been working on performance testing too. I'll post more numbers \nlater, but preliminary result from some micro-benchmarking suggests that \nthe new code is somewhat slower, except in the common case that the \nobject to remember and forget fits in the array. When running the \nregression test suite, about 96% of ResourceOwnerForget() calls fit in \nthe array. I think that's acceptable.\n\n- Heikki",
"msg_date": "Thu, 14 Jan 2021 15:34:56 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Dear Heikki,\r\n\r\n> Hmm. ResOwnerReleaseTupleDesc() does exist, those functions are needed \r\n> for the callbacks. I think you meant the wrappers around \r\n> ResourceOwnerRemember and ResourceOwnerForget, like \r\n> ResourceOwnerRememberCatCacheRef(). I admit those are not fully \r\n> consistent: I didn't bother with the wrapper functions when there is \r\n> only one caller.\r\n\r\nYeah, I meant it. And I prefer your policy.\r\n\r\n> Hmm. ResOwnerReleaseTupleDesc() does exist, those functions are needed \r\n> for the callbacks. I think you meant the wrappers around \r\n> ResourceOwnerRemember and ResourceOwnerForget, like \r\n> ResourceOwnerRememberCatCacheRef(). I admit those are not fully \r\n> consistent: I didn't bother with the wrapper functions when there is \r\n> only one caller.\r\n\r\nGood job. I confirmed your fixes, and I confirmed it looks fine.\r\nI will check another ResourceOwnerEnlarge() if I have a time.\r\n\r\n> I've been working on performance testing too.\r\n\r\nI'm looking forward to seeing it.\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 15 Jan 2021 09:10:34 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: ResourceOwner refactoring"
},
{
"msg_contents": "Dear Heikki,\r\n\r\nI apologize for sending again.\r\n\r\n> I will check another ResourceOwnerEnlarge() if I have a time.\r\n\r\nI checked all ResourceOwnerEnlarge() types after applying patch 0001.\r\n(searched by \"grep -rI ResourceOwnerEnlarge\")\r\nNo problem was found.\r\n\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 18 Jan 2021 07:49:20 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: ResourceOwner refactoring"
},
{
"msg_contents": "On 18/01/2021 09:49, kuroda.hayato@fujitsu.com wrote:\n> Dear Heikki,\n> \n> I apologize for sending again.\n> \n>> I will check another ResourceOwnerEnlarge() if I have a time.\n> \n> I checked all ResourceOwnerEnlarge() types after applying patch 0001.\n> (searched by \"grep -rI ResourceOwnerEnlarge\")\n> No problem was found.\n\nGreat, thanks!\n\nHere are more details on the performance testing I did:\n\nUnpatched\n----------\n\npostgres=# \\i snaptest.sql\n numkeep | numsnaps | lifo_time_ns | fifo_time_ns\n---------+----------+--------------+--------------\n 0 | 1 | 38.2 | 34.8\n 0 | 5 | 35.7 | 35.4\n 0 | 10 | 37.6 | 37.6\n 0 | 60 | 35.4 | 39.2\n 0 | 70 | 55.0 | 53.8\n 0 | 100 | 48.6 | 48.9\n 0 | 1000 | 54.8 | 57.0\n 0 | 10000 | 63.9 | 67.1\n 9 | 10 | 39.3 | 38.8\n 9 | 100 | 44.0 | 42.5\n 9 | 1000 | 45.8 | 47.1\n 9 | 10000 | 64.2 | 66.8\n 65 | 70 | 45.7 | 43.7\n 65 | 100 | 42.9 | 43.7\n 65 | 1000 | 46.9 | 45.7\n 65 | 10000 | 65.0 | 64.5\n(16 rows)\n\n\nPatched\n--------\n\npostgres=# \\i snaptest.sql\n numkeep | numsnaps | lifo_time_ns | fifo_time_ns\n---------+----------+--------------+--------------\n 0 | 1 | 35.3 | 33.8\n 0 | 5 | 34.8 | 34.6\n 0 | 10 | 49.8 | 51.4\n 0 | 60 | 53.1 | 57.2\n 0 | 70 | 53.2 | 59.6\n 0 | 100 | 53.1 | 58.2\n 0 | 1000 | 63.1 | 69.7\n 0 | 10000 | 83.3 | 83.5\n 9 | 10 | 35.0 | 35.1\n 9 | 100 | 55.4 | 54.1\n 9 | 1000 | 56.8 | 60.3\n 9 | 10000 | 88.6 | 82.0\n 65 | 70 | 36.4 | 35.1\n 65 | 100 | 52.4 | 53.0\n 65 | 1000 | 55.8 | 59.4\n 65 | 10000 | 79.3 | 80.3\n(16 rows)\n\nAbout the test:\n\nEach test call Register/UnregisterSnapshot in a loop. numsnaps is the \nnumber of snapshots that are registered in each iteration. 
For example, \non the second line with numkeep=0 and numsnaps=5, each iteration in the \nLIFO test does essentially:\n\nrs[0] = RegisterSnapshot()\nrs[1] = RegisterSnapshot()\nrs[2] = RegisterSnapshot()\nrs[3] = RegisterSnapshot()\nrs[4] = RegisterSnapshot()\nUnregisterSnapshot(rs[4]);\nUnregisterSnapshot(rs[3]);\nUnregisterSnapshot(rs[2]);\nUnregisterSnapshot(rs[1]);\nUnregisterSnapshot(rs[0]);\n\nIn the FIFO tests, the Unregister calls are made in reverse order.\n\nWhen numkeep is non-zero, that many snapshots are registered once at the \nbeginning of the test, and released only after the test loop. So this \nsimulates the scenario that you remember a bunch of resources and hold \nthem for a long time, and while holding them, you do many more \nremember/forget calls.\n\nBased on this test, this patch makes things slightly slower overall. \nHowever, *not* in the cases that matter the most I believe. That's the \ncases where the (numsnaps - numkeep) is small, so that the hot action \nhappens in the array and the hashing is not required. In my testing \nearlier, about 95% of the ResourceOwnerRemember calls in the regression \ntests fall into that category.\n\nThere are a few simple things we could do to speed this up, if needed. A \ndifferent hash function might be cheaper, for example. And we could \ninline the fast paths of the ResourceOwnerEnlarge and \nResourceOwnerRemember() into the callers. But I didn't do those things \nas part of this patch, because it wouldn't be a fair comparison; we \ncould do those with the old implementation too. And unless we really \nneed the speed, it's more readable this way.\n\nI'm OK with these results. Any objections?\n\nAttached are the patches again. Same as previous version, except for a \ncouple of little comment changes. And a third patch containing the \nsource for the performance test.\n\n- Heikki",
"msg_date": "Mon, 18 Jan 2021 14:22:33 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 2021-Jan-18, Heikki Linnakangas wrote:\n\n> +static ResourceOwnerFuncs jit_funcs =\n> +{\n> +\t/* relcache references */\n> +\t.name = \"LLVM JIT context\",\n> +\t.phase = RESOURCE_RELEASE_BEFORE_LOCKS,\n> +\t.ReleaseResource = ResOwnerReleaseJitContext,\n> +\t.PrintLeakWarning = ResOwnerPrintJitContextLeakWarning\n> +};\n\nI think you mean jit_resowner_funcs here; \"jit_funcs\" is a bit\nexcessively vague. Also, why did you choose not to define\nResourceOwnerRememberJIT? You do that in other modules and it seems\nbetter.\n\n> +static ResourceOwnerFuncs catcache_funcs =\n> +{\n> +\t/* catcache references */\n> +\t.name = \"catcache reference\",\n> +\t.phase = RESOURCE_RELEASE_AFTER_LOCKS,\n> +\t.ReleaseResource = ResOwnerReleaseCatCache,\n> +\t.PrintLeakWarning = ResOwnerPrintCatCacheLeakWarning\n> +};\n> +\n> +static ResourceOwnerFuncs catlistref_funcs =\n> +{\n> +\t/* catcache-list pins */\n> +\t.name = \"catcache list reference\",\n> +\t.phase = RESOURCE_RELEASE_AFTER_LOCKS,\n> +\t.ReleaseResource = ResOwnerReleaseCatCacheList,\n> +\t.PrintLeakWarning = ResOwnerPrintCatCacheListLeakWarning\n> +};\n\nSimilar naming issue here, I say.\n\n\n> diff --git a/src/common/cryptohash_openssl.c b/src/common/cryptohash_openssl.c\n> index 551ec392b60..642e72d8c0f 100644\n> --- a/src/common/cryptohash_openssl.c\n> +++ b/src/common/cryptohash_openssl.c\n> @@ -60,6 +59,21 @@ struct pg_cryptohash_ctx\n> #endif\n> };\n> \n> +/* ResourceOwner callbacks to hold JitContexts */\n> +#ifndef FRONTEND\n> +static void ResOwnerReleaseCryptoHash(Datum res);\n> +static void ResOwnerPrintCryptoHashLeakWarning(Datum res);\n\nThe comment is wrong -- \"Crypohashes\", not \"JitContexts\".\n\n\nSo according to your performance benchmark, we're willing to accept a\n30% performance loss on an allegedly common operation -- numkeep=0\nnumsnaps=10 becomes 49.8ns from 37.6ns. 
That seems a bit shocking.\nMaybe you can claim that these operations aren't exactly hot spots, and\nso the fact that we remain in the same power-of-ten is sufficient. Is\nthat the argument?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n\n\n",
"msg_date": "Mon, 18 Jan 2021 11:34:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 18/01/2021 16:34, Alvaro Herrera wrote:\n> So according to your performance benchmark, we're willing to accept a\n> 30% performance loss on an allegedly common operation -- numkeep=0\n> numsnaps=10 becomes 49.8ns from 37.6ns. That seems a bit shocking.\n> Maybe you can claim that these operations aren't exactly hot spots, and\n> so the fact that we remain in the same power-of-ten is sufficient. Is\n> that the argument?\n\nThat's right. The fast path is fast, and that's important. The slow path \nbecomes 30% slower, but that's acceptable.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 18 Jan 2021 17:19:04 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 10:19 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 18/01/2021 16:34, Alvaro Herrera wrote:\n> > So according to your performance benchmark, we're willing to accept a\n> > 30% performance loss on an allegedly common operation -- numkeep=0\n> > numsnaps=10 becomes 49.8ns from 37.6ns. That seems a bit shocking.\n> > Maybe you can claim that these operations aren't exactly hot spots, and\n> > so the fact that we remain in the same power-of-ten is sufficient. Is\n> > that the argument?\n>\n> That's right. The fast path is fast, and that's important. The slow path\n> becomes 30% slower, but that's acceptable.\n>\n> - Heikki\n>\n>\n\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Jan 2021 11:11:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 11:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jan 18, 2021 at 10:19 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > On 18/01/2021 16:34, Alvaro Herrera wrote:\n> > > So according to your performance benchmark, we're willing to accept a\n> > > 30% performance loss on an allegedly common operation -- numkeep=0\n> > > numsnaps=10 becomes 49.8ns from 37.6ns. That seems a bit shocking.\n> > > Maybe you can claim that these operations aren't exactly hot spots, and\n> > > so the fact that we remain in the same power-of-ten is sufficient. Is\n> > > that the argument?\n> >\n> > That's right. The fast path is fast, and that's important. The slow path\n> > becomes 30% slower, but that's acceptable.\n\nSorry for the empty message.\n\nI don't know whether a 30% slowdown will hurt anybody, but it seems\nlike kind of a lot, and I'm not sure I understand what corresponding\nbenefit we're getting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Jan 2021 11:11:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 02:22:33PM +0200, Heikki Linnakangas wrote:\n> diff --git a/src/common/cryptohash_openssl.c b/src/common/cryptohash_openssl.c\n> index 551ec392b60..642e72d8c0f 100644\n> --- a/src/common/cryptohash_openssl.c\n> +++ b/src/common/cryptohash_openssl.c\n[...]\n> +/* ResourceOwner callbacks to hold JitContexts */\n\nSlight copy-paste error here.\n\n> \t/*\n> \t * Ensure, while the spinlock's not yet held, that there's a free\n> -\t * refcount entry.\n> +\t * refcount entry and that the resoure owner has room to remember the\n> +\t * pin.\ns/resoure/resource/.\n--\nMichael",
"msg_date": "Tue, 19 Jan 2021 15:48:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 18/01/2021 18:11, Robert Haas wrote:\n> On Mon, Jan 18, 2021 at 11:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Mon, Jan 18, 2021 at 10:19 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> On 18/01/2021 16:34, Alvaro Herrera wrote:\n>>>> So according to your performance benchmark, we're willing to accept a\n>>>> 30% performance loss on an allegedly common operation -- numkeep=0\n>>>> numsnaps=10 becomes 49.8ns from 37.6ns. That seems a bit shocking.\n>>>> Maybe you can claim that these operations aren't exactly hot spots, and\n>>>> so the fact that we remain in the same power-of-ten is sufficient. Is\n>>>> that the argument?\n>>>\n>>> That's right. The fast path is fast, and that's important. The slow path\n>>> becomes 30% slower, but that's acceptable.\n> \n> I don't know whether a 30% slowdown will hurt anybody, but it seems\n> like kind of a lot, and I'm not sure I understand what corresponding\n> benefit we're getting.\n\nThe benefit is to make it easy for extensions to use resource owners to \ntrack whatever resources they need to track. And arguably, the new \nmechanism is nicer for built-in code, too.\n\nI'll see if I can optimize the slow paths, to make it more palatable.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 19 Jan 2021 11:09:16 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 18/01/2021 16:34, Alvaro Herrera wrote:\n> On 2021-Jan-18, Heikki Linnakangas wrote:\n> \n>> +static ResourceOwnerFuncs jit_funcs =\n>> +{\n>> +\t/* relcache references */\n>> +\t.name = \"LLVM JIT context\",\n>> +\t.phase = RESOURCE_RELEASE_BEFORE_LOCKS,\n>> +\t.ReleaseResource = ResOwnerReleaseJitContext,\n>> +\t.PrintLeakWarning = ResOwnerPrintJitContextLeakWarning\n>> +};\n> \n> I think you mean jit_resowner_funcs here; \"jit_funcs\" is a bit\n> excessively vague. Also, why did you choose not to define\n> ResourceOwnerRememberJIT? You do that in other modules and it seems\n> better.\n\nI did it in modules that had more than one ResourceOwnerRemeber/Forget \ncall. Didn't seem worth it in functions like IncrTupleDescRefCount(), \nfor example.\n\nHayato Kuroda also pointed that out, though. So perhaps it's better to \nbe consistent, to avoid the confusion. I'll add the missing wrappers.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 19 Jan 2021 16:45:57 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 19/01/2021 11:09, Heikki Linnakangas wrote:\n> On 18/01/2021 18:11, Robert Haas wrote:\n>> On Mon, Jan 18, 2021 at 11:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>> On Mon, Jan 18, 2021 at 10:19 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>>> On 18/01/2021 16:34, Alvaro Herrera wrote:\n>>>>> So according to your performance benchmark, we're willing to accept a\n>>>>> 30% performance loss on an allegedly common operation -- numkeep=0\n>>>>> numsnaps=10 becomes 49.8ns from 37.6ns. That seems a bit shocking.\n>>>>> Maybe you can claim that these operations aren't exactly hot spots, and\n>>>>> so the fact that we remain in the same power-of-ten is sufficient. Is\n>>>>> that the argument?\n>>>>\n>>>> That's right. The fast path is fast, and that's important. The slow path\n>>>> becomes 30% slower, but that's acceptable.\n>>\n>> I don't know whether a 30% slowdown will hurt anybody, but it seems\n>> like kind of a lot, and I'm not sure I understand what corresponding\n>> benefit we're getting.\n> \n> The benefit is to make it easy for extensions to use resource owners to\n> track whatever resources they need to track. And arguably, the new\n> mechanism is nicer for built-in code, too.\n> \n> I'll see if I can optimize the slow paths, to make it more palatable.\n\nOk, here's a new set of patches, and new test results. I replaced the \nhash function with a cheaper one. I also added the missing wrappers that \nAlvaro and Hayato Kuroda commented on, and fixed the typos that Michael \nPaquier pointed out.\n\nIn the test script, I increased the number of iterations used in the \ntests, to make them run longer and produce more stable results. 
There is \nstill a fair amount of jitter in the results, so take any particular \nnumber with a grain of salt, but the overall trend is repeatable.\n\nThe results now look like this:\n\nUnpatched\n---------\n\npostgres=# \\i snaptest.sql\n numkeep | numsnaps | lifo_time_ns | fifo_time_ns\n---------+----------+--------------+--------------\n 0 | 1 | 32.7 | 32.3\n 0 | 5 | 33.0 | 32.9\n 0 | 10 | 34.1 | 35.5\n 0 | 60 | 32.5 | 38.3\n 0 | 70 | 47.6 | 48.9\n 0 | 100 | 47.9 | 49.3\n 0 | 1000 | 52.9 | 52.7\n 0 | 10000 | 61.7 | 62.4\n 9 | 10 | 38.4 | 37.6\n 9 | 100 | 42.3 | 42.3\n 9 | 1000 | 43.9 | 45.0\n 9 | 10000 | 62.2 | 62.5\n 65 | 70 | 42.4 | 42.9\n 65 | 100 | 43.2 | 43.9\n 65 | 1000 | 44.0 | 45.1\n 65 | 10000 | 62.4 | 62.6\n(16 rows)\n\nPatched\n-------\n\npostgres=# \\i snaptest.sql\n numkeep | numsnaps | lifo_time_ns | fifo_time_ns\n---------+----------+--------------+--------------\n 0 | 1 | 32.8 | 34.2\n 0 | 5 | 33.8 | 32.2\n 0 | 10 | 37.2 | 40.2\n 0 | 60 | 40.2 | 45.1\n 0 | 70 | 40.9 | 48.4\n 0 | 100 | 41.9 | 45.2\n 0 | 1000 | 49.0 | 55.6\n 0 | 10000 | 62.4 | 67.2\n 9 | 10 | 33.6 | 33.0\n 9 | 100 | 39.6 | 39.7\n 9 | 1000 | 42.2 | 45.0\n 9 | 10000 | 60.7 | 63.9\n 65 | 70 | 32.5 | 33.6\n 65 | 100 | 37.5 | 39.7\n 65 | 1000 | 42.3 | 46.3\n 65 | 10000 | 61.9 | 64.8\n(16 rows)\n\nFor easier side-by-side comparison, here are the same numbers with the \npatched and unpatched results side by side, and their ratio (ratio < 1 \nmeans the patched version is faster):\n\nLIFO tests:\n numkeep | numsnaps | unpatched | patched | ratio\n---------+----------+-----------+---------+-------\n 0 | 1 | 32.7 | 32.8 | 1.00\n 0 | 5 | 33.0 | 33.8 | 1.02\n 0 | 10 | 34.1 | 37.2 | 1.09\n 0 | 60 | 32.5 | 40.2 | 1.24\n 0 | 70 | 47.6 | 40.9 | 0.86\n 0 | 100 | 47.9 | 41.9 | 0.87\n 0 | 1000 | 52.9 | 49.0 | 0.93\n 0 | 10000 | 61.7 | 62.4 | 1.01\n 9 | 10 | 38.4 | 33.6 | 0.88\n 9 | 100 | 42.3 | 39.6 | 0.94\n 9 | 1000 | 43.9 | 42.2 | 0.96\n 9 | 10000 | 62.2 | 60.7 | 0.98\n 65 | 70 | 42.4 | 32.5 | 
0.77\n 65 | 100 | 43.2 | 37.5 | 0.87\n 65 | 1000 | 44.0 | 42.3 | 0.96\n 65 | 10000 | 62.4 | 61.9 | 0.99\n(16 rows)\n\n\nFIFO tests:\n numkeep | numsnaps | unpatched | patched | ratio\n---------+----------+-----------+---------+-------\n 0 | 1 | 32.3 | 34.2 | 1.06\n 0 | 5 | 32.9 | 32.2 | 0.98\n 0 | 10 | 35.5 | 40.2 | 1.13\n 0 | 60 | 38.3 | 45.1 | 1.18\n 0 | 70 | 48.9 | 48.4 | 0.99\n 0 | 100 | 49.3 | 45.2 | 0.92\n 0 | 1000 | 52.7 | 55.6 | 1.06\n 0 | 10000 | 62.4 | 67.2 | 1.08\n 9 | 10 | 37.6 | 33.0 | 0.88\n 9 | 100 | 42.3 | 39.7 | 0.94\n 9 | 1000 | 45.0 | 45.0 | 1.00\n 9 | 10000 | 62.5 | 63.9 | 1.02\n 65 | 70 | 42.9 | 33.6 | 0.78\n 65 | 100 | 43.9 | 39.7 | 0.90\n 65 | 1000 | 45.1 | 46.3 | 1.03\n 65 | 10000 | 62.6 | 64.8 | 1.04\n(16 rows)\n\nSummary: In the worst scenario, the patched version is still 24% \nslower than unpatched. But many other scenarios are now faster with the \npatch.\n\n- Heikki",
"msg_date": "Thu, 21 Jan 2021 00:11:37 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Dear Heikki,\r\n\r\nI tested in the same situation, and I confirmed that almost same results are returned.\r\n(results are at the end of the e-mail)\r\n\r\nYou think that these results are acceptable\r\nbecause resowners own many resources(more than 64) in general\r\nand it's mainly faster in such a situation, isn't it?\r\n\r\nI cannot distinguish correctly, but sounds good.\r\n\r\n====test result====\r\n\r\nunpatched\r\n\r\n numkeep | numsnaps | lifo_time_ns | fifo_time_ns \r\n---------+----------+--------------+--------------\r\n 0 | 1 | 68.5 | 69.9\r\n 0 | 5 | 73.2 | 76.8\r\n 0 | 10 | 70.6 | 74.7\r\n 0 | 60 | 68.0 | 75.6\r\n 0 | 70 | 91.3 | 94.8\r\n 0 | 100 | 89.0 | 89.1\r\n 0 | 1000 | 97.9 | 98.9\r\n 0 | 10000 | 116.0 | 115.9\r\n 9 | 10 | 74.7 | 76.6\r\n 9 | 100 | 80.8 | 80.1\r\n 9 | 1000 | 86.0 | 86.2\r\n 9 | 10000 | 116.1 | 116.8\r\n 65 | 70 | 84.7 | 85.3\r\n 65 | 100 | 80.5 | 80.3\r\n 65 | 1000 | 86.3 | 86.2\r\n 65 | 10000 | 115.4 | 115.9\r\n(16 rows)\r\n\r\npatched\r\n\r\n numkeep | numsnaps | lifo_time_ns | fifo_time_ns \r\n---------+----------+--------------+--------------\r\n 0 | 1 | 62.4 | 62.6\r\n 0 | 5 | 68.0 | 66.9\r\n 0 | 10 | 73.6 | 78.1\r\n 0 | 60 | 82.3 | 87.2\r\n 0 | 70 | 83.0 | 89.1\r\n 0 | 100 | 82.8 | 87.9\r\n 0 | 1000 | 88.2 | 96.6\r\n 0 | 10000 | 119.6 | 124.5\r\n 9 | 10 | 62.0 | 62.8\r\n 9 | 100 | 75.3 | 78.0\r\n 9 | 1000 | 82.6 | 89.3\r\n 9 | 10000 | 116.6 | 122.6\r\n 65 | 70 | 66.7 | 66.4\r\n 65 | 100 | 74.6 | 77.2\r\n 65 | 1000 | 82.1 | 88.2\r\n 65 | 10000 | 118.0 | 124.1\r\n(16 rows)\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 21 Jan 2021 02:22:20 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: ResourceOwner refactoring"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 12:11:37AM +0200, Heikki Linnakangas wrote:\n> Summary: In the worst scenario, the patched version is still 24% slower\n> than unpatched. But many other scenarios are now faster with the patch.\n\nIs there a reason explaining the sudden drop for numsnaps within the\n[10,60] range? The gap looks deeper with a low numkeep.\n--\nMichael",
"msg_date": "Thu, 21 Jan 2021 13:14:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 21/01/2021 06:14, Michael Paquier wrote:\n> On Thu, Jan 21, 2021 at 12:11:37AM +0200, Heikki Linnakangas wrote:\n>> Summary: In the the worst scenario, the patched version is still 24% slower\n>> than unpatched. But many other scenarios are now faster with the patch.\n> \n> Is there a reason explaining the sudden drop for numsnaps within the\n> [10,60] range? The gap looks deeper with a low numkeep.\n\nIt's the switch from array to hash table. With the patch, the array \nholds 8 entries. Without the patch, it's 64 entries. So you see a drop \naround those points. I added more data points in that range to get a \nbetter picture:\n\nLIFO tests:\n numkeep | numsnaps | unpatched | patched | ratio\n---------+----------+-----------+---------+-------\n 0 | 1 | 32.8 | 32.9 | 1.00\n 0 | 2 | 31.6 | 32.8 | 1.04\n 0 | 4 | 32.7 | 32.0 | 0.98\n 0 | 6 | 34.1 | 33.9 | 0.99\n 0 | 8 | 35.1 | 32.4 | 0.92\n 0 | 10 | 34.0 | 37.1 | 1.09\n 0 | 15 | 33.1 | 35.9 | 1.08\n 0 | 20 | 33.0 | 38.8 | 1.18\n 0 | 25 | 32.9 | 42.3 | 1.29\n 0 | 30 | 32.9 | 40.5 | 1.23\n 0 | 35 | 33.1 | 39.9 | 1.21\n 0 | 40 | 33.0 | 39.0 | 1.18\n 0 | 45 | 35.3 | 41.1 | 1.16\n 0 | 50 | 33.0 | 40.8 | 1.24\n 0 | 55 | 32.8 | 40.6 | 1.24\n 0 | 58 | 33.0 | 41.5 | 1.26\n 0 | 60 | 33.1 | 41.6 | 1.26\n 0 | 62 | 32.8 | 41.7 | 1.27\n 0 | 64 | 46.8 | 40.9 | 0.87\n 0 | 66 | 47.0 | 42.5 | 0.90\n 0 | 68 | 47.1 | 41.8 | 0.89\n 0 | 70 | 47.8 | 41.7 | 0.87\n(22 rows)\n\nFIFO tests:\n numkeep | numsnaps | unpatched | patched | ratio\n---------+----------+-----------+---------+-------\n 0 | 1 | 32.3 | 32.1 | 0.99\n 0 | 2 | 33.4 | 31.6 | 0.95\n 0 | 4 | 34.0 | 31.4 | 0.92\n 0 | 6 | 35.4 | 33.2 | 0.94\n 0 | 8 | 34.8 | 31.9 | 0.92\n 0 | 10 | 35.4 | 40.2 | 1.14\n 0 | 15 | 35.4 | 40.3 | 1.14\n 0 | 20 | 35.6 | 43.8 | 1.23\n 0 | 25 | 35.4 | 42.4 | 1.20\n 0 | 30 | 36.0 | 43.3 | 1.20\n 0 | 35 | 36.4 | 45.1 | 1.24\n 0 | 40 | 36.9 | 46.6 | 1.26\n 0 | 45 | 37.7 | 45.3 | 1.20\n 0 | 50 | 37.2 | 43.9 | 1.18\n 0 | 55 | 38.4 | 46.8 | 
1.22\n 0 | 58 | 37.6 | 45.0 | 1.20\n 0 | 60 | 37.7 | 46.6 | 1.24\n 0 | 62 | 38.4 | 46.5 | 1.21\n 0 | 64 | 48.7 | 47.6 | 0.98\n 0 | 66 | 48.2 | 45.8 | 0.95\n 0 | 68 | 48.5 | 47.5 | 0.98\n 0 | 70 | 48.4 | 47.3 | 0.98\n(22 rows)\n\nLet's recap the behavior:\n\nWithout patch\n-------------\n\nFor each different kind of resource, there's an array that holds up to \n64 entries. In ResourceOwnerForget(), the array is scanned linearly. If \nthe array fills up, we replace the array with a hash table. After \nswitching, all operations use the hash table.\n\nWith patch\n----------\n\nThere is one array that holds up to 8 entries. It is shared by all \nresources. In ResourceOwnerForget(), the array is always scanned.\n\nIf the array fills up, all the entries are moved to a hash table, \nfreeing up space in the array, and the new entry is added to the array. \nSo the array is used together with the hash table, like an LRU cache of \nthe most recently remembered entries.\n\n\nWhy this change? I was afraid that now that all different resources \nshare the same data structure, remembering e.g. a lot of locks at the \nbeginning of a transaction would cause the switch to the hash table, \nmaking all subsequent remember/forget operations, like for buffer pins, \nslower. That kind of interference seems bad. By continuing to use the \narray for the recently-remembered entries, we avoid that problem. The \n[numkeep, numsnaps] = [65, 70] test is in that regime, and the patched \nversion was significantly faster.\n\nBecause the array is now always scanned, I felt that it needs to be \nsmall, to avoid wasting much time scanning for entries that have already \nbeen moved to the hash table. That's why I made it just 8 entries.\n\nPerhaps 8 entries is too small, after all? Linearly scanning an array is \nvery fast. 
To test that, I bumped up RESOWNER_ARRAY_SIZE to 64, and ran \nthe test again:\n\n numkeep | numsnaps | lifo_time_ns | fifo_time_ns\n---------+----------+--------------+--------------\n 0 | 1 | 35.4 | 31.5\n 0 | 2 | 32.3 | 32.3\n 0 | 4 | 32.8 | 31.0\n 0 | 6 | 34.5 | 33.7\n 0 | 8 | 33.9 | 32.7\n 0 | 10 | 33.7 | 33.0\n 0 | 15 | 34.8 | 33.1\n 0 | 20 | 35.0 | 32.6\n 0 | 25 | 36.9 | 33.0\n 0 | 30 | 38.7 | 33.2\n 0 | 35 | 39.9 | 34.5\n 0 | 40 | 40.8 | 35.5\n 0 | 45 | 42.6 | 36.4\n 0 | 50 | 44.9 | 37.8\n 0 | 55 | 45.4 | 38.5\n 0 | 58 | 47.7 | 39.6\n 0 | 60 | 45.9 | 40.2\n 0 | 62 | 47.6 | 40.9\n 0 | 64 | 51.4 | 41.2\n 0 | 66 | 38.2 | 39.8\n 0 | 68 | 39.7 | 40.3\n 0 | 70 | 38.1 | 40.9\n(22 rows)\n\nHere you can see that as numsnaps increases, the test becomes slower, \nbut then it becomes faster again at 64-66, when it switches to the hash \ntable. So 64 seems too much. 32 seems to be the sweet spot here, that's \nwhere scanning the hash and scanning the array are about the same speed.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 21 Jan 2021 12:14:10 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 5:14 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Here you can see that as numsnaps increases, the test becomes slower,\n> but then it becomes faster again at 64-66, when it switches to the hash\n> table. So 64 seems too much. 32 seems to be the sweet spot here, that's\n> where scanning the hash and scanning the array are about the same speed.\n\nThat sounds good. I mean, it could be that not all hardware behaves\nthe same here. But trying to get it into the right ballpark makes\nsense.\n\nI also like the fact that this now has some cases where it wins by a\nsignificant margin. That's pretty cool; thanks for working on it!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Jan 2021 12:14:42 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 10:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jan 21, 2021 at 5:14 AM Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote:\n> > Here you can see that as numsnaps increases, the test becomes slower,\n> > but then it becomes faster again at 64-66, when it switches to the hash\n> > table. So 64 seems too much. 32 seems to be the sweet spot here, that's\n> > where scanning the hash and scanning the array are about the same speed.\n>\n> That sounds good. I mean, it could be that not all hardware behaves\n> the same here. But trying to get it into the right ballpark makes\n> sense.\n>\n> I also like the fact that this now has some cases where it wins by a\n> significant margin. That's pretty cool; thanks for working on it!\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>\nThe patchset does not apply successfully, there are some hunk failures.\n\nhttp://cfbot.cputube.org/patch_32_2834.log\n\nv6-0002-Make-resowners-more-easily-extensible.patch\n\n1 out of 6 hunks FAILED -- saving rejects to file\nsrc/backend/utils/cache/plancache.c.rej\n2 out of 15 hunks FAILED -- saving rejects to file\nsrc/backend/utils/resowner/resowner.c.rej\n\n\nCan we get a rebase?\n\nI am marking the patch \"Waiting on Author\"\n\n-- \nIbrar Ahmed",
"msg_date": "Mon, 8 Mar 2021 21:47:42 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 08/03/2021 18:47, Ibrar Ahmed wrote:\n> The patchset does not apply successfully, there are some hunk failures.\n> \n> http://cfbot.cputube.org/patch_32_2834.log \n> <http://cfbot.cputube.org/patch_32_2834.log>\n> \n> v6-0002-Make-resowners-more-easily-extensible.patch\n> \n> 1 out of 6 hunks FAILED -- saving rejects to file \n> src/backend/utils/cache/plancache.c.rej\n> 2 out of 15 hunks FAILED -- saving rejects to file \n> src/backend/utils/resowner/resowner.c.rej\n> \n> \n> Can we get a rebase?\n\nHere you go.\n\n- Heikki",
"msg_date": "Tue, 9 Mar 2021 14:40:42 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 6:10 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 08/03/2021 18:47, Ibrar Ahmed wrote:\n> > The patchset does not apply successfully, there are some hunk failures.\n> >\n> > http://cfbot.cputube.org/patch_32_2834.log\n> > <http://cfbot.cputube.org/patch_32_2834.log>\n> >\n> > v6-0002-Make-resowners-more-easily-extensible.patch\n> >\n> > 1 out of 6 hunks FAILED -- saving rejects to file\n> > src/backend/utils/cache/plancache.c.rej\n> > 2 out of 15 hunks FAILED -- saving rejects to file\n> > src/backend/utils/resowner/resowner.c.rej\n> >\n> >\n> > Can we get a rebase?\n>\n> Here you go.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 17:44:42 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 2021-Jul-14, vignesh C wrote:\n\n> On Tue, Mar 9, 2021 at 6:10 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> > Here you go.\n> \n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nSupport for hmac was added by e6bdfd9700eb so the rebase is not trivial.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n",
"msg_date": "Wed, 14 Jul 2021 10:07:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 14/07/2021 17:07, Alvaro Herrera wrote:\n> On 2021-Jul-14, vignesh C wrote:\n> \n>> On Tue, Mar 9, 2021 at 6:10 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n>>> Here you go.\n>>\n>> The patch does not apply on Head anymore, could you rebase and post a\n>> patch. I'm changing the status to \"Waiting for Author\".\n> \n> Support for hmac was added by e6bdfd9700eb so the rebase is not trivial.\n\nYeah, needed some manual fixing, but here you go.\n\n- Heikki",
"msg_date": "Wed, 14 Jul 2021 17:40:27 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 7:40 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 14/07/2021 17:07, Alvaro Herrera wrote:\n> > On 2021-Jul-14, vignesh C wrote:\n> >\n> >> On Tue, Mar 9, 2021 at 6:10 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote:\n> >\n> >>> Here you go.\n> >>\n> >> The patch does not apply on Head anymore, could you rebase and post a\n> >> patch. I'm changing the status to \"Waiting for Author\".\n> >\n> > Support for hmac was added by e6bdfd9700eb so the rebase is not trivial.\n>\n> Yeah, needed some manual fixing, but here you go.\n>\n> - Heikki\n>\nHi,\nFor the loop over the hash:\n\n+ for (int idx = 0; idx < capacity; idx++)\n {\n- if (olditemsarr[i] != resarr->invalidval)\n- ResourceArrayAdd(resarr, olditemsarr[i]);\n+ while (owner->hash[idx].kind != NULL &&\n+ owner->hash[idx].kind->phase == phase)\n...\n+ } while (capacity != owner->capacity);\n\nSince the phase variable doesn't seem to change for the while loop, I\nwonder what benefit the while loop has (since the release is governed by\nphase).\n\nCheers",
"msg_date": "Wed, 14 Jul 2021 08:18:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Thanks for having a look!\n\nOn 14/07/2021 18:18, Zhihong Yu wrote:\n> For the loop over the hash:\n> \n> + for (int idx = 0; idx < capacity; idx++)\n> {\n> - if (olditemsarr[i] != resarr->invalidval)\n> - ResourceArrayAdd(resarr, olditemsarr[i]);\n> + while (owner->hash[idx].kind != NULL &&\n> + owner->hash[idx].kind->phase == phase)\n> ...\n> + } while (capacity != owner->capacity);\n> \n> Since the phase variable doesn't seem to change for the while loop, I \n> wonder what benefit the while loop has (since the release is governed by \n> phase).\n\nHmm, the phase variable doesn't change, but could the element at \n'owner->hash[idx]' change? I'm not sure about that. The loop is supposed \nto handle the case that the hash table grows; could that replace the \nelement at 'owner->hash[idx]' with something else, with different phase? \nThe check is very cheap, so I'm inclined to keep it to be sure.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 14 Jul 2021 18:26:13 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 8:26 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> Thanks for having a look!\n>\n> On 14/07/2021 18:18, Zhihong Yu wrote:\n> > For the loop over the hash:\n> >\n> > + for (int idx = 0; idx < capacity; idx++)\n> > {\n> > - if (olditemsarr[i] != resarr->invalidval)\n> > - ResourceArrayAdd(resarr, olditemsarr[i]);\n> > + while (owner->hash[idx].kind != NULL &&\n> > + owner->hash[idx].kind->phase == phase)\n> > ...\n> > + } while (capacity != owner->capacity);\n> >\n> > Since the phase variable doesn't seem to change for the while loop, I\n> > wonder what benefit the while loop has (since the release is governed by\n> > phase).\n>\n> Hmm, the phase variable doesn't change, but could the element at\n> 'owner->hash[idx]' change? I'm not sure about that. The loop is supposed\n> to handle the case that the hash table grows; could that replace the\n> element at 'owner->hash[idx]' with something else, with different phase?\n> The check is very cheap, so I'm inclined to keep it to be sure.\n>\n> - Heikki\n>\nHi,\nAgreed that ```owner->hash[idx].kind->phase == phase``` can be kept.\n\nI just wonder if we should put a limit on the number of iterations for the\nwhile loop.\nMaybe add a bool variable indicating that kind->ReleaseResource(value) is\ncalled in the inner loop.\nIf there is no releasing when the inner loop finishes, we can come out of\nthe while loop.\n\nCheers",
"msg_date": "Wed, 14 Jul 2021 08:41:43 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi Heikki,\n\n> Yeah, needed some manual fixing, but here you go.\n\nThanks for working on this!\n\nv8-0002 didn't apply to the current master, so I rebased it. See\nattached v9-* patches. I also included v9-0004 with some minor tweaks\nfrom me. I have several notes regarding the code.\n\n1. Not sure if I understand this code of ResourceOwnerReleaseAll():\n```\n /* First handle all the entries in the array. */\n do\n {\n found = false;\n for (int i = 0; i < owner->narr; i++)\n {\n if (owner->arr[i].kind->phase == phase)\n {\n Datum value = owner->arr[i].item;\n ResourceOwnerFuncs *kind = owner->arr[i].kind;\n\n if (printLeakWarnings)\n kind->PrintLeakWarning(value);\n kind->ReleaseResource(value);\n found = true;\n }\n }\n\n /*\n * If any resources were released, check again because some of the\n * elements might have been moved by the callbacks. We don't want to\n * miss them.\n */\n } while (found && owner->narr > 0);\n```\n\nShouldn't we mark the resource as released and/or decrease narr and/or\nsave the last processed i? Why this will not call ReleaseResource()\nendlessly on the same resource (array item)? Same question for the\nfollowing code that iterates over the hash table.\n\n2. Just an idea/observation. It's possible that the performance of\nResourceOwnerEnlarge() can be slightly improved. Since the size of the\nhash table is always a power of 2 and the code always doubles the size\nof the hash table, (idx & mask) value will get one extra bit, which\ncan be 0 or 1. If it's 0, the value is already in its place,\notherwise, it should be moved on the known distance. In other words,\nit's possible to copy `oldhash` to `newhash` and then move only half\nof the items. I don't claim that this code necessarily should be\nfaster, or that this should be checked in the scope of this work.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 8 Sep 2021 13:19:19 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Wed, Sep 08, 2021 at 01:19:19PM +0300, Aleksander Alekseev wrote:\n> Thanks for working on this!\n\nThe latest patch does not apply anymore, and has been waiting on\nauthor for a couple of weeks now, so I have switched it to RwF for now.\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 16:18:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 08/09/2021 13:19, Aleksander Alekseev wrote:\n> Hi Heikki,\n> \n>> Yeah, needed some manual fixing, but here you go.\n> \n> Thanks for working on this!\n> \n> v8-0002 didn't apply to the current master, so I rebased it. See\n> attached v9-* patches. I also included v9-0004 with some minor tweaks\n> from me. I have several notes regarding the code.\n\nThanks!\n\nAttached is a newly rebased version. It includes your tweaks, and a few \nmore comment and indentation tweaks.\n\n> 1. Not sure if I understand this code of ResourceOwnerReleaseAll():\n> ```\n> /* First handle all the entries in the array. */\n> do\n> {\n> found = false;\n> for (int i = 0; i < owner->narr; i++)\n> {\n> if (owner->arr[i].kind->phase == phase)\n> {\n> Datum value = owner->arr[i].item;\n> ResourceOwnerFuncs *kind = owner->arr[i].kind;\n> \n> if (printLeakWarnings)\n> kind->PrintLeakWarning(value);\n> kind->ReleaseResource(value);\n> found = true;\n> }\n> }\n> \n> /*\n> * If any resources were released, check again because some of the\n> * elements might have been moved by the callbacks. We don't want to\n> * miss them.\n> */\n> } while (found && owner->narr > 0);\n> ```\n> \n> Shouldn't we mark the resource as released and/or decrease narr and/or\n> save the last processed i? Why this will not call ReleaseResource()\n> endlessly on the same resource (array item)? Same question for the\n> following code that iterates over the hash table.\n\nReleaseResource() must call ResourceOwnerForget(), which removes the \nitem from the array or hash table. This works the same as the code in 'master':\n\n> \t\t/*\n> \t\t * Release buffer pins. Note that ReleaseBuffer will remove the\n> \t\t * buffer entry from our array, so we just have to iterate till there\n> \t\t * are none.\n> \t\t *\n> \t\t * During a commit, there shouldn't be any remaining pins --- that\n> \t\t * would indicate failure to clean up the executor correctly --- so\n> \t\t * issue warnings. 
In the abort case, just clean up quietly.\n> \t\t */\n> \t\twhile (ResourceArrayGetAny(&(owner->bufferarr), &foundres))\n> \t\t{\n> \t\t\tBuffer\t\tres = DatumGetBuffer(foundres);\n> \n> \t\t\tif (isCommit)\n> \t\t\t\tPrintBufferLeakWarning(res);\n> \t\t\tReleaseBuffer(res);\n> \t\t}\n\nThat comment explains how it works. I added a comment like that in this \npatch, too.\n\n> 2. Just an idea/observation. It's possible that the performance of\n> ResourceOwnerEnlarge() can be slightly improved. Since the size of the\n> hash table is always a power of 2 and the code always doubles the size\n> of the hash table, (idx & mask) value will get one extra bit, which\n> can be 0 or 1. If it's 0, the value is already in its place,\n> otherwise, it should be moved on the known distance. In other words,\n> it's possible to copy `oldhash` to `newhash` and then move only half\n> of the items. I don't claim that this code necessarily should be\n> faster, or that this should be checked in the scope of this work.\n\nHmm, the hash table uses open addressing, I think we want to also \nrearrange any existing entries that might not be at their right place \nbecause of collisions. We don't actually do that currently when an \nelement is removed (even before this patch), but enlarging the array is \na good opportunity for it. In any case, I haven't seen \nResourceOwnerEnlarge() show up while profiling, so I'm going to leave it \nas it is.\n\n- Heikki",
"msg_date": "Mon, 18 Oct 2021 14:17:30 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi Heikki,\n\n> Attached is a newly rebased version. It includes your tweaks, and a few\n> more comment and indentation tweaks.\n\nv10-0002 rotted a little so I rebased it. The new patch set passed `make\ninstallcheck-world` locally, but let's see if cfbot has a second opinion.\n\nI'm a bit busy with various things at the moment, so (hopefully) I will\nsubmit the actual code review in the follow-up email.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 23 Nov 2021 18:37:03 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi Heikki,\n\n> I will submit the actual code review in the follow-up email\n\nThe patchset is in good shape. I'm changing the status to \"Ready for\nCommitter\".\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 26 Nov 2021 15:40:57 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn Fri, Nov 26, 2021 at 8:41 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > I will submit the actual code review in the follow-up email\n>\n> The patchset is in a good shape. I'm changing the status to \"Ready for\n> Committer\".\n\nThe 2nd patch doesn't apply anymore due to a conflict on\nresowner_private.h: http://cfbot.cputube.org/patch_36_3364.log.\n\nI'm switching the entry to Waiting on Author.\n\n\n",
"msg_date": "Wed, 12 Jan 2022 13:57:14 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 12/01/2022 07:57, Julien Rouhaud wrote:\n> On Fri, Nov 26, 2021 at 8:41 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n>>\n>> The patchset is in a good shape. I'm changing the status to \"Ready for\n>> Committer\".\n> \n> The 2nd patch doesn't apply anymore due to a conflict on\n> resowner_private.h: http://cfbot.cputube.org/patch_36_3364.log.\n\nRebased version attached. Given that Aleksander marked this as Ready for \nCommitter earlier, I'll add this to the next commitfest in that state, \nand will commit in the next few days, barring any new objections.\n\n- Heikki",
"msg_date": "Mon, 31 Oct 2022 10:51:36 +0100",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi Heikki,\n\n> Rebased version attached. Given that Aleksander marked this as Ready for\n> Committer earlier, I'll add this to the next commitfest in that state,\n> and will commit in the next few days, barring any new objections.\n\nThanks for resurrecting this patch.\n\nWhile taking a fresh look at the code I noticed a few things.\n\nIn 0002 we have:\n\n```\n+ .name = \"buffer\"\n...\n+ .name = \"File\",\n```\n\nNot sure why \"File\" starts with an uppercase letter while \"buffer\"\nstarts with a lowercase one. This is naturally not a big deal but\ncould be worth changing for consistency.\n\nIn 0003:\n\n```\n+#if SIZEOF_DATUM == 8\n+ return hash_combine64(murmurhash64((uint64) value), (uint64) kind);\n+#else\n+ return hash_combine(murmurhash32((uint32) value), (uint32) kind);\n+#endif\n```\n\nMaybe it's worth using PointerGetDatum() + DatumGetInt32() /\nDatumGetInt64() inline functions instead of casting Datums and\npointers directly.\n\nThese are arguably nitpicks though and shouldn't stop you from merging\nthe patches as is.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 31 Oct 2022 15:49:37 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi hackers,\n\n> > Rebased version attached. Given that Aleksander marked this as Ready for\n> > Committer earlier, I'll add this to the next commitfest in that state,\n> > and will commit in the next few days, barring any new objections.\n>\n> Thanks for resurrecting this patch.\n\nAdditionally I decided to run `make installcheck-world` on Raspberry\nPi 3. It just terminated successfully.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 31 Oct 2022 18:51:41 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 10:51:36 +0100, Heikki Linnakangas wrote:\n> These are functions where quite a lot of things happen between the\n> ResourceOwnerEnlarge and ResourceOwnerRemember calls. It's important that\n> there are no unrelated ResourceOwnerRemember() calls in the code in\n> between, otherwise the entry reserved by the ResourceOwnerEnlarge() call\n> might be used up by the intervening ResourceOwnerRemember() and not be\n> available at the intended ResourceOwnerRemember() call anymore. The longer\n> the code path between them is, the harder it is to verify that.\n\nThis seems to work towards a future where only one kind of resource can be\nreserved ahead of time. That doesn't strike me as great.\n\n\n> Instead of having a separate array/hash for each resource kind, use a\n> single array and hash to hold all kinds of resources. This makes it\n> possible to introduce new resource \"kinds\" without having to modify the\n> ResourceOwnerData struct. In particular, this makes it possible for\n> extensions to register custom resource kinds.\n\nAs a goal I like this.\n\nHowever, I'm not quite sold on the implementation. Two main worries:\n\n1) As far as I can tell, the way ResourceOwnerReleaseAll() now works seems to\n assume that within a phase the reset order does not matter. I don't think\n that's a good assumption. I e.g. have a patch to replace InProgressBuf with\n resowner handling, and in-progress IO errors should be processed before\n before pins are released.\n\n2) There's quite a few resource types where we actually don't need an entry in\n an array, because we can instead add a dlist_node to the resource -\n avoiding memory overhead and making removal cheap. 
I have a few pending\n patches that use that approach, and this doesn't really provide a path for\n that anymore.\n\n\nI did try out the benchmark from\nhttps://postgr.es/m/20221029200025.w7bvlgvamjfo6z44%40awork3.anarazel.de and\nthe patches performed well, slightly better than my approach of allocating\nsome initial memory for each resarray.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:15:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 01/11/2022 02:15, Andres Freund wrote:\n> On 2022-10-31 10:51:36 +0100, Heikki Linnakangas wrote:\n>> These are functions where quite a lot of things happen between the\n>> ResourceOwnerEnlarge and ResourceOwnerRemember calls. It's important that\n>> there are no unrelated ResourceOwnerRemember() calls in the code in\n>> between, otherwise the entry reserved by the ResourceOwnerEnlarge() call\n>> might be used up by the intervening ResourceOwnerRemember() and not be\n>> available at the intended ResourceOwnerRemember() call anymore. The longer\n>> the code path between them is, the harder it is to verify that.\n> \n> This seems to work towards a future where only one kind of resource can be\n> reserved ahead of time. That doesn't strike me as great.\n\nTrue. It is enough for all the current callers AFAICS, though. I don't \nremember wanting to reserve multiple resources at the same time. \nUsually, the distance between the Enlarge and Remember calls is very \nshort. If it's not, it's error-prone anyway, so we should try to keep \nthe distance short.\n\nWhile we're at it, who says that it's enough to reserve space for only \none resource of a kind either? For example, what if you want to pin two \nbuffers?\n\nIf we really need to support that, I propose something like this:\n\n/*\n * Reserve a slot in the resource owner.\n *\n * This is separate from actually inserting an entry because if we run out\n * of memory, it's critical to do so *before* acquiring the resource.\n */\nResOwnerReservation *\nResourceOwnerReserve(ResourceOwner owner)\n\n/*\n * Remember that an object is owner by a ResourceOwner\n *\n * 'reservation' is a slot in the resource owner that was pre-reserved\n * by ResOwnerReservation.\n */\nvoid\nResOwnerRemember(ResOwnerReservaton *reservation, Datum value, \nResourceOwnerFuncs *kind)\n\n\n>> Instead of having a separate array/hash for each resource kind, use a\n>> single array and hash to hold all kinds of resources. 
This makes it\n>> possible to introduce new resource \"kinds\" without having to modify the\n>> ResourceOwnerData struct. In particular, this makes it possible for\n>> extensions to register custom resource kinds.\n> \n> As a goal I like this.\n> \n> However, I'm not quite sold on the implementation. Two main worries:\n> \n> 1) As far as I can tell, the way ResourceOwnerReleaseAll() now works seems to\n> assume that within a phase the reset order does not matter. I don't think\n> that's a good assumption. I e.g. have a patch to replace InProgressBuf with\n> resowner handling, and in-progress IO errors should be processed before\n> before pins are released.\n\nHmm. Currently, you're not supposed to hold any resources at commit. You \nget warnings about resource leaks if a resource owner is not empty on \nResourceOwnerReleaseAll(). On abort, does the order matter? I'm not \nfamiliar with your InProgressBuf patch, but I guess you could handle the \nin-progress IO errors in ReleaseBuffer().\n\nIf we do need to worry about release order, perhaps add a \"priority\" or \n\"phase\" to each resource kind, and release them in priority order. We \nalready have before- and after-locks as phases, but we could generalize \nthat.\n\nHowever, I feel that trying to enforce a particular order moves the \ngoalposts. If we need that, let's add it as a separate patch later.\n\n> 2) There's quite a few resource types where we actually don't need an entry in\n> an array, because we can instead add a dlist_node to the resource -\n> avoiding memory overhead and making removal cheap. I have a few pending\n> patches that use that approach, and this doesn't really provide a path for\n> that anymore.\n\nIs that materially better than using the array? The fast path with an \narray is very fast. 
If it is better, perhaps we should bite the bullet \nand require a dlist node and use that mechanism for all resource types?\n\n> I did try out the benchmark from\n> https://postgr.es/m/20221029200025.w7bvlgvamjfo6z44%40awork3.anarazel.de and\n> the patches performed well, slightly better than my approach of allocating\n> some initial memory for each resarray.\n\nThank you, glad to hear that!\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 1 Nov 2022 12:39:39 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 6:39 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> However, I feel that trying to enforce a particular order moves the\n> goalposts. If we need that, let's add it as a separate patch later.\n\nI don't really see it that way, because with the current\nimplementation, we do all resources of a particular type together,\nbefore moving on to the next type. That seems like a valuable property\nto preserve, and I think we should.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Nov 2022 08:43:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-01 12:39:39 +0200, Heikki Linnakangas wrote:\n> > 1) As far as I can tell, the way ResourceOwnerReleaseAll() now works seems to\n> > assume that within a phase the reset order does not matter. I don't think\n> > that's a good assumption. I e.g. have a patch to replace InProgressBuf with\n> > resowner handling, and in-progress IO errors should be processed before\n> > before pins are released.\n> \n> Hmm. Currently, you're not supposed to hold any resources at commit. You get\n> warnings about resource leaks if a resource owner is not empty on\n> ResourceOwnerReleaseAll(). On abort, does the order matter? I'm not familiar\n> with your InProgressBuf patch, but I guess you could handle the in-progress\n> IO errors in ReleaseBuffer().\n\nI was thinking about doing that as well, but it's not really trivial to know\nabout the in-progress IO at that time, without additional tracking (which\nisn't free).\n\n\n> If we do need to worry about release order, perhaps add a \"priority\" or\n> \"phase\" to each resource kind, and release them in priority order. We\n> already have before- and after-locks as phases, but we could generalize\n> that.\n> \n> However, I feel that trying to enforce a particular order moves the\n> goalposts. If we need that, let's add it as a separate patch later.\n\nLike Robert, I think that the patch is moving the goalpost...\n\n\n> > 2) There's quite a few resource types where we actually don't need an entry in\n> > an array, because we can instead add a dlist_node to the resource -\n> > avoiding memory overhead and making removal cheap. I have a few pending\n> > patches that use that approach, and this doesn't really provide a path for\n> > that anymore.\n> \n> Is that materially better than using the array?\n\nIt's safe in critical sections. I have a, not really baked but promising,\npatch to make WAL writes use AIO. 
There's no way to know the number of\n\"asynchronous IOs\" needed before entering the critical section.\n\n\n> The fast path with an array is very fast. If it is better, perhaps we should\n> bite the bullet and require a dlist node and use that mechanism for all\n> resource types?\n\nI don't think it's suitable for all - you need an exclusively owned region of\nmemory to embed a list in. That works nicely for some things, but not others\n(e.g. buffer pins).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Nov 2022 08:42:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 01/11/2022 17:42, Andres Freund wrote:\n> On 2022-11-01 12:39:39 +0200, Heikki Linnakangas wrote:\n>> If we do need to worry about release order, perhaps add a \"priority\" or\n>> \"phase\" to each resource kind, and release them in priority order. We\n>> already have before- and after-locks as phases, but we could generalize\n>> that.\n>>\n>> However, I feel that trying to enforce a particular order moves the\n>> goalposts. If we need that, let's add it as a separate patch later.\n> \n> Like Robert, I think that the patch is moving the goalpost...\n\nOk, I added a release-priority mechanism to this. New patchset attached. \nI added it as a separate patch on top of the previous patches, as \nv13-0003-Release-resources-in-priority-order.patch, to make it easier to \nsee what changed after the previous patchset version. But it should be \nsquashed with patch 0002 before committing.\n\nIn this implementation, when you call ResourceOwnerRelease(), the \narray/hash table is sorted by phase and priority, and the callbacks are \ncalled in that order. To make that feasible, I had to add one limitation:\n\nAfter you call ResourceOwnerRelease(RESOURCE_RELEASE_BEFORE_LOCKS), you \ncannot continue using the resource owner. You now get an error if you \ntry to call ResourceOwnerRemember() or ResourceOwnerForget() after \nResourceOwnerRelease(). Previously, it was allowed, but if you \nremembered any more before-locks resources after calling \nResourceOwnerRelease(RESOURCE_RELEASE_BEFORE_LOCKS), you had to be \ncareful to release them manually, or you hit an assertion failure when \ndeleting the ResourceOwner.\n\nWe did that sometimes with relcache references in AbortSubTransaction(), \nin AtEOSubXact_Inval(). Regression tests caught that case. I worked \naround that in AbortSubTransaction() by switching to the parent \nsubxact's resource owner between the phases. 
Other end-of-transaction \nroutines also perform some tasks between the phases, but AFAICS they \ndon't try to remember any extra resources. It's a bit hard to tell, \nthough, there's a lot going on in AtEOXact_Inval().\n\nA more general fix would be to have special ResourceOwner for use at \nend-of-transaction, similar to the TransactionAbortContext memory \ncontext. But in general, end-of-transaction activities should be kept \nsimple, especially between the release phases, so I feel that having to \nremember extra resources there is a bad code smell and we shouldn't \nencourage that. Perhaps there is a better fix or workaround for the \nknown case in AbortSubTransaction(), too, that would avoid having to \nremember any resources there.\n\nI added a test module in src/test/modules/test_resowner to test those cases.\n\nAlso, I changed the ReleaseResource callback's contract so that the \ncallback no longer needs to call ResourceOwnerForget. That required some \nchanges in bufmgr.c and some other files, to avoid calling \nResourceOwnerForget when the resource is released by the callback.\n\nThere are no changes to the performance-critical Remember/Forget \ncodepaths, the performance is the same as before.\n\n- Heikki",
"msg_date": "Tue, 21 Feb 2023 13:40:57 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nI noticed that the patchset didn't make much progress since February\nand decided to give it another round of code review.\n\n> [...]. But in general, end-of-transaction activities should be kept\n> simple, especially between the release phases, so I feel that having to\n> remember extra resources there is a bad code smell and we shouldn't\n> encourage that.\n\n+1\n\n> I added a test module in src/test/modules/test_resowner to test those cases.\n\nThis is much appreciated, as well as extensive changes made to READMEs\nand the comments.\n\n> [...] New patchset attached.\n> I added it as a separate patch on top of the previous patches, as\n> v13-0003-Release-resources-in-\n> priority-order.patch, to make it easier to\n> see what changed after the previous patchset version. But it should be\n> squashed with patch 0002 before committing.\n\nMy examination, which besides reading the code included running it\nunder sanitizer and checking the code coverage, didn't reveal any\nmajor problems with the patchset.\n\nCertain \"should never happen in practice\" scenarios seem not to be\ntest-covered in resowner.c, particularly:\n\n```\nelog(ERROR, \"ResourceOwnerEnlarge called after release started\");\nelog(ERROR, \"ResourceOwnerRemember called but array was full\");\nelog(ERROR, \"ResourceOwnerForget called for %s after release started\",\nkind->name);\nelog(ERROR, \"%s %p is not owned by resource owner %s\",\nelog(ERROR, \"ResourceOwnerForget called for %s after release started\",\nkind->name);\nelog(ERROR, \"lock reference %p is not owned by resource owner %s\"\n```\n\nI didn't check whether these or similar code paths were tested before\nthe patch and I don't have a strong opinion on whether we should test\nthese scenarios. 
Personally I'm OK with the fact that these few lines\nare not covered with tests.\n\nThe following procedures are never executed:\n\n* RegisterResourceReleaseCallback()\n* UnregisterResourceReleaseCallback()\n\nAnd are never actually called anymore due to changes in 0005.\nPreviously they were used by contrib/pgcrypto. I suggest dropping this\npart of the API since it seems to be redundant now. This will simplify\nthe implementation even further.\n\nThis patch, although moderately complicated, was moved between several\ncommitfests. I think it would be great if it made it to PG16. I'm\ninclined to change the status of the patchset to RfC in a bit, unless\nanyone has a second opinion.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 22 Mar 2023 17:23:40 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\n> This patch, although moderately complicated, was moved between several\n> commitfests. I think it would be great if it made it to PG16. I'm\n> inclined to change the status of the patchset to RfC in a bit, unless\n> anyone has a second opinion.\n\n> I added a test module in src/test/modules/test_resowner to test those cases.\n\nHmm... it looks like a file is missing in the patchset. When building\nwith Autotools:\n\n```\n============== running regression test queries ==============\ntest test_resowner ... /bin/sh: 1: cannot open\n/home/eax/projects/postgresql/src/test/modules/test_resowner/sql/test_resowner.sql:\nNo such file\n```\n\nI wonder why the tests pass when using Meson.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 30 Mar 2023 22:17:07 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Here's another version of this patchset:\n\n- I squashed the resource release priority changes with the main patch.\n\n- In 0001-Move-a-few-ResourceOwnerEnlarge-calls-for-safety.patch, I \nmoved things around a little differently. In previous version, I changed \nPinBuffer() so that it does ResourceOwnerEnlargeBuffers, so that the \ncallers don't need to do it. In this version, I went the other \ndirection, and made the caller responsible for calling both \nResourceOwnerEnlargeBuffers and ReservePrivateRefCountEntry. There were \nsome callers that had to call those functions before calling PinBuffer() \nanyway, because they wanted to avoid the possibility of elog(ERROR), so \nit seems better to always make it the caller's responsibility.\n\nMore comments below:\n\nOn 22/03/2023 16:23, Aleksander Alekseev wrote:\n> Certain \"should never happen in practice\" scenarios seem not to be\n> test-covered in resowner.c, particularly:\n> \n> ```\n> elog(ERROR, \"ResourceOwnerEnlarge called after release started\");\n> elog(ERROR, \"ResourceOwnerRemember called but array was full\");\n> elog(ERROR, \"ResourceOwnerForget called for %s after release started\",\n> kind->name);\n> elog(ERROR, \"%s %p is not owned by resource owner %s\",\n> elog(ERROR, \"ResourceOwnerForget called for %s after release started\",\n> kind->name);\n> elog(ERROR, \"lock reference %p is not owned by resource owner %s\"\n> ```\n> \n> I didn't check whether these or similar code paths were tested before\n> the patch and I don't have a strong opinion on whether we should test\n> these scenarios. Personally I'm OK with the fact that these few lines\n> are not covered with tests.\n\nThey were not covered before. 
Might make sense to add coverage, I'll \nlook into that.\n\n> The following procedures are never executed:\n> \n> * RegisterResourceReleaseCallback()\n> * UnregisterResourceReleaseCallback()\n> \n> And are never actually called anymore due to changes in 0005.\n> Previously they were used by contrib/pgcrypto. I suggest dropping this\n> part of the API since it seems to be redundant now. This will simplify\n> the implementation even further.\n\nThere might be extensions out there that use those callbacks, that's why \nI left them in place. I'll add a test for those functions in \ntest_resowner now, so that we still have some coverage for them in core.\n\n\n> Hmm... it looks like a file is missing in the patchset. When building\n> with Autotools:\n> \n> ```\n> ============== running regression test queries ==============\n> test test_resowner ... /bin/sh: 1: cannot open\n> /home/eax/projects/postgresql/src/test/modules/test_resowner/sql/test_resowner.sql:\n> No such file\n> ```\n\nOops, added.\n\n> I wonder why the tests pass when using Meson.\n\nA-ha, it's because I didn't add the new test_resowner directory to the \nsrc/test/modules/meson.build file. Fixed.\n\nThanks for the review!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 22 May 2023 12:40:23 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\n> [...]\n> A-ha, it's because I didn't add the new test_resowner directory to the\n> src/test/modules/meson.build file. Fixed.\n>\n> Thanks for the review!\n\nYou probably already noticed, but for the record: cfbot seems to be\nextremely unhappy with the patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 23 May 2023 16:41:55 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 23/05/2023 16:41, Aleksander Alekseev wrote:\n> Hi,\n> \n>> [...]\n>> A-ha, it's because I didn't add the new test_resowner directory to the\n>> src/test/modules/meson.build file. Fixed.\n>>\n>> Thanks for the review!\n> \n> You probably already noticed, but for the record: cfbot seems to be\n> extremely unhappy with the patch.\n\nThanks, here's a fixed version. (ResourceOwner resource release \ncallbacks mustn't call ResourceOwnerForget anymore, but AbortBufferIO \nwas still doing that)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 25 May 2023 20:14:23 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\n> Thanks, here's a fixed version. (ResourceOwner resource release\n> callbacks mustn't call ResourceOwnerForget anymore, but AbortBufferIO\n> was still doing that)\n\nI believe v15 is ready to be merged. I suggest we merge it early in\nthe PG17 release cycle.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 4 Jul 2023 17:18:24 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "A few suggestions on the API:\n\n > +static ResourceOwnerFuncs tupdesc_resowner_funcs =\n\nThese aren't all \"functions\", so maybe another word like \"info\" or \n\"description\" would be more appropriate?\n\n\n > + .release_phase = RESOURCE_RELEASE_AFTER_LOCKS,\n > + .release_priority = RELEASE_PRIO_TUPDESC_REFS,\n\nI suggest assigning explicit non-zero numbers to the RESOURCE_RELEASE_* \nconstants, as well as changing RELEASE_PRIO_FIRST to 1 and adding an \nassertion that the priority is not zero. Otherwise, someone might \nforget to assign these fields and would implicitly get phase 0 and \npriority 0, which would probably work in most cases, but wouldn't be the \nexplicit intention.\n\nThen again, you are using RELEASE_PRIO_FIRST for the pgcrypto patch. Is \nit the recommendation that if there are no other requirements, external \nusers should use that? If we are going to open this up to external \nusers, we should probably have some documented guidance around this.\n\n\n > + .PrintLeakWarning = ResOwnerPrintTupleDescLeakWarning\n\nI think this can be refactored further. We really just need a function \nto describe a resource, like maybe\n\nstatic char *\nResOwnerDescribeTupleDesc(Datum res)\n{\n TupleDesc tupdesc = (TupleDesc) DatumGetPointer(res);\n\n return psprintf(\"TupleDesc %p (%u,%d)\",\n tupdesc, tupdesc->tdtypeid, tupdesc->tdtypmod);\n}\n\nand then the common code can generate the warnings from that like\n\n elog(WARNING,\n \"%s leak: %s still referenced\",\n kind->name, kind->describe(value));\n\nThat way, we could more easily develop further options, such as turning \nthis warning into an error (useful for TAP tests).\n\nAlso, maybe, you could supply a default description that just prints \n\"%p\", which appears to be applicable in about half the places.\n\nPossible bonus project: I'm not sure these descriptions are so useful \nanyway. It doesn't help me much in debugging if \"File 55 was not \nreleased\" or some such. 
It would be cool if ResourceOwnerRemember() in \nsome debug mode could remember the source code location, and then print \nthat together with the leak warning.\n\n > +#define ResourceOwnerRememberTupleDesc(owner, tupdesc) \\\n > + ResourceOwnerRemember(owner, PointerGetDatum(tupdesc), \n&tupdesc_resowner_funcs)\n > +#define ResourceOwnerForgetTupleDesc(owner, tupdesc) \\\n > + ResourceOwnerForget(owner, PointerGetDatum(tupdesc), \n&tupdesc_resowner_funcs)\n\nI would prefer that these kinds of things be made inline functions, so \nthat we maintain the type safety of the previous interface. And it's \nalso easier when reading code to see type decorations.\n\n(But I suspect the above bonus project would require these to be macros?)\n\n\n > + if (context->resowner)\n > + ResourceOwnerForgetJIT(context->resowner, context);\n\nIt seems most ResourceOwnerForget*() calls have this kind of check \nbefore them. Could this be moved inside the function?\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 14:37:23 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-25 20:14:23 +0300, Heikki Linnakangas wrote:\n> @@ -2491,9 +2495,6 @@ BufferSync(int flags)\n> \tint\t\t\tmask = BM_DIRTY;\n> \tWritebackContext wb_context;\n>\n> -\t/* Make sure we can handle the pin inside SyncOneBuffer */\n> -\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n> -\n> \t/*\n> \t * Unless this is a shutdown checkpoint or we have been explicitly told,\n> \t * we write only permanent, dirty buffers. But at shutdown or end of\n> @@ -2970,9 +2971,6 @@ BgBufferSync(WritebackContext *wb_context)\n> \t * requirements, or hit the bgwriter_lru_maxpages limit.\n> \t */\n>\n> -\t/* Make sure we can handle the pin inside SyncOneBuffer */\n> -\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n> -\n> \tnum_to_scan = bufs_to_lap;\n> \tnum_written = 0;\n> \treusable_buffers = reusable_buffers_est;\n> @@ -3054,8 +3052,6 @@ BgBufferSync(WritebackContext *wb_context)\n> *\n> * (BUF_WRITTEN could be set in error if FlushBuffer finds the buffer clean\n> * after locking it, but we don't care all that much.)\n> - *\n> - * Note: caller must have done ResourceOwnerEnlargeBuffers.\n> */\n> static int\n> SyncOneBuffer(int buf_id, bool skip_recently_used, WritebackContext *wb_context)\n\n> @@ -3065,7 +3061,9 @@ SyncOneBuffer(int buf_id, bool skip_recently_used, WritebackContext *wb_context)\n> \tuint32\t\tbuf_state;\n> \tBufferTag\ttag;\n>\n> +\t/* Make sure we can handle the pin */\n> \tReservePrivateRefCountEntry();\n> +\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n>\n> \t/*\n> \t * Check whether buffer needs writing.\n> @@ -4110,9 +4108,6 @@ FlushRelationBuffers(Relation rel)\n> \t\treturn;\n> \t}\n>\n> -\t/* Make sure we can handle the pin inside the loop */\n> -\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n> -\n> \tfor (i = 0; i < NBuffers; i++)\n> \t{\n> \t\tuint32\t\tbuf_state;\n> @@ -4126,7 +4121,9 @@ FlushRelationBuffers(Relation rel)\n> \t\tif (!BufTagMatchesRelFileLocator(&bufHdr->tag, &rel->rd_locator))\n> 
\t\t\tcontinue;\n>\n> +\t\t/* Make sure we can handle the pin */\n> \t\tReservePrivateRefCountEntry();\n> +\t\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n>\n> \t\tbuf_state = LockBufHdr(bufHdr);\n> \t\tif (BufTagMatchesRelFileLocator(&bufHdr->tag, &rel->rd_locator) &&\n> @@ -4183,9 +4180,6 @@ FlushRelationsAllBuffers(SMgrRelation *smgrs, int nrels)\n> \tif (use_bsearch)\n> \t\tpg_qsort(srels, nrels, sizeof(SMgrSortArray), rlocator_comparator);\n>\n> -\t/* Make sure we can handle the pin inside the loop */\n> -\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n> -\n> \tfor (i = 0; i < NBuffers; i++)\n> \t{\n> \t\tSMgrSortArray *srelent = NULL;\n\nHm, a tad worried that increasing the number of ResourceOwnerEnlargeBuffers()\nthat drastically could have a visible overhead.\n\n\n\n> +static ResourceOwnerFuncs tupdesc_resowner_funcs =\n> +{\n> +\t.name = \"tupdesc reference\",\n> +\t.release_phase = RESOURCE_RELEASE_AFTER_LOCKS,\n> +\t.release_priority = RELEASE_PRIO_TUPDESC_REFS,\n> +\t.ReleaseResource = ResOwnerReleaseTupleDesc,\n> +\t.PrintLeakWarning = ResOwnerPrintTupleDescLeakWarning\n> +};\n\nI think these should all be marked const, there's no need to have them be in a\nmutable section of the binary. Of course that will require adjusting a bunch\nof the signatures too, but that seems fine.\n\n> --- a/src/backend/access/transam/xact.c\n> +++ b/src/backend/access/transam/xact.c\n> @@ -5171,9 +5171,24 @@ AbortSubTransaction(void)\n> \t\tResourceOwnerRelease(s->curTransactionOwner,\n> \t\t\t\t\t\t\t RESOURCE_RELEASE_BEFORE_LOCKS,\n> \t\t\t\t\t\t\t false, false);\n> +\n> \t\tAtEOSubXact_RelationCache(false, s->subTransactionId,\n> \t\t\t\t\t\t\t\t s->parent->subTransactionId);\n> +\n> +\n> +\t\t/*\n> +\t\t * AtEOSubXact_Inval sometimes needs to temporarily\n> +\t\t * bump the refcount on the relcache entries that it processes.\n> +\t\t * We cannot use the subtransaction's resource owner anymore,\n> +\t\t * because we've already started releasing it. 
But we can use\n> +\t\t * the parent resource owner.\n> +\t\t */\n> +\t\tCurrentResourceOwner = s->parent->curTransactionOwner;\n> +\n> \t\tAtEOSubXact_Inval(false);\n> +\n> +\t\tCurrentResourceOwner = s->curTransactionOwner;\n> +\n> \t\tResourceOwnerRelease(s->curTransactionOwner,\n> \t\t\t\t\t\t\t RESOURCE_RELEASE_LOCKS,\n> \t\t\t\t\t\t\t false, false);\n\nI might be a bit slow this morning, but why did this need to change as part of\nthis?\n\n\n> @@ -5182,8 +5221,8 @@ TerminateBufferIO(BufferDesc *buf, bool clear_dirty, uint32 set_flag_bits)\n> *\tIf I/O was in progress, we always set BM_IO_ERROR, even though it's\n> *\tpossible the error condition wasn't related to the I/O.\n> */\n> -void\n> -AbortBufferIO(Buffer buffer)\n> +static void\n> +AbortBufferIO(Buffer buffer, bool forget_owner)\n\nWhat is the forget_owner argument for? It's always false, afaics?\n\n\n\n> +/* Convenience wrappers over ResourceOwnerRemember/Forget */\n> +#define ResourceOwnerRememberCatCacheRef(owner, tuple) \\\n> +\tResourceOwnerRemember(owner, PointerGetDatum(tuple), &catcache_resowner_funcs)\n> +#define ResourceOwnerForgetCatCacheRef(owner, tuple) \\\n> +\tResourceOwnerForget(owner, PointerGetDatum(tuple), &catcache_resowner_funcs)\n> +#define ResourceOwnerRememberCatCacheListRef(owner, list) \\\n> +\tResourceOwnerRemember(owner, PointerGetDatum(list), &catlistref_resowner_funcs)\n> +#define ResourceOwnerForgetCatCacheListRef(owner, list) \\\n> +\tResourceOwnerForget(owner, PointerGetDatum(list), &catlistref_resowner_funcs)\n\nNot specific to this type of resource owner, but I wonder if it'd not be\nbetter to use inline functions for these - the PointerGetDatum() cast removes\nnearly all type safety.\n\n\n\n> +Releasing\n> +---------\n> +\n> +Releasing the resources of a ResourceOwner happens in three phases:\n> +\n> +1. \"Before-locks\" resources\n> +\n> +2. Locks\n> +\n> +3. 
\"After-locks\" resources\n\nGiven that we now have priorites, I wonder if we shouldn't merge the phase and\nthe priorities? We have code like this:\n\n\n\tResourceOwnerRelease(TopTransactionResourceOwner,\n\t\t\t\t\t\t RESOURCE_RELEASE_BEFORE_LOCKS,\n\t\t\t\t\t\t true, true);\n...\n\tResourceOwnerRelease(TopTransactionResourceOwner,\n\t\t\t\t\t\t RESOURCE_RELEASE_LOCKS,\n\t\t\t\t\t\t true, true);\n\tResourceOwnerRelease(TopTransactionResourceOwner,\n\t\t\t\t\t\t RESOURCE_RELEASE_AFTER_LOCKS,\n\t\t\t\t\t\t true, true);\n\nNow, admittedly locks currently are \"special\" anyway, but we have a plan to\nget rid of that. If we did, then we could do the last two as one as\nResourceOwnerRelease(from = RELEASE_PRIO_LOCKS, to: RELEASE_PRIO_LAST, ...)\nor such.\n\n\n> +Normally, you are expected to call ResourceOwnerForget on every resource so\n> +that at commit, the ResourceOwner is empty (locks are an exception). If there\n> +are any resources still held at commit, ResourceOwnerRelease will call the\n> +PrintLeakWarning callback on each such resource. At abort, however, we truly\n> +rely on the ResourceOwner mechanism and it is normal that there are resources\n> +to be released.\n\nI am not sure it's a good idea to encode that all kinds of resources ought to\nhave been released prior on commit. I can see that not always making sense -\nin fact we already don't do it for locks, no? Perhaps just add a \"commonly\"?\n\n\n> +typedef struct ResourceElem\n> {\n> ...\n> +\tDatum\t\titem;\n> +\tResourceOwnerFuncs *kind;\t/* NULL indicates a free hash table slot */\n> +} ResourceElem;\n\n> /*\n> - * Initially allocated size of a ResourceArray. Must be power of two since\n> - * we'll use (arraysize - 1) as mask for hashing.\n> + * Size of the fixed-size array to hold most-recently remembered resources.\n> */\n> -#define RESARRAY_INIT_SIZE 16\n> +#define RESOWNER_ARRAY_SIZE 32\n\nThat's 512 bytes, pretty large to be searched sequentially. 
It's somewhat sad\nthat the array needs to include 8 byte of ResourceOwnerFuncs...\n\n\n> +\tbool\t\treleasing;\t\t/* has ResourceOwnerRelease been called? */\n> +\n> +\t/*\n> +\t * The fixed-size array for recent resources.\n> +\t *\n> +\t * If 'releasing' is set, the contents are sorted by release priority.\n> +\t */\n> +\tuint32\t\tnarr;\t\t\t/* how many items are stored in the array */\n> +\tResourceElem arr[RESOWNER_ARRAY_SIZE];\n> +\n> +\t/*\n> +\t * The hash table. Uses open-addressing. 'nhash' is the number of items\n> +\t * present; if it would exceed 'grow_at', we enlarge it and re-hash.\n> +\t * 'grow_at' should be rather less than 'capacity' so that we don't waste\n> +\t * too much time searching for empty slots.\n> +\t *\n> +\t * If 'releasing' is set, the contents are no longer hashed, but sorted by\n> +\t * release priority. The first 'nhash' elements are occupied, the rest\n> +\t * are empty.\n> +\t */\n> +\tResourceElem *hash;\n> +\tuint32\t\tnhash;\t\t\t/* how many items are stored in the hash */\n> +\tuint32\t\tcapacity;\t\t/* allocated length of hash[] */\n> +\tuint32\t\tgrow_at;\t\t/* grow hash when reach this */\n> +\n> +\t/* The local locks cache. */\n> +\tuint32\t\tnlocks;\t\t\t/* number of owned locks */\n> \tLOCALLOCK *locks[MAX_RESOWNER_LOCKS];\t/* list of owned locks */\n\nDue to 'narr' being before the large 'arr', 'nhash' is always on a separate\ncacheline. I.e. 
the number of cachelines accessed in happy paths is higher\nthan necessary, also relevant because there's more dependencies on narr, nhash\nthan on the contents of those.\n\nI'd probably order it so that both of the large elements (arr, locks) are at\nthe end.\n\nIt's hard to know with changes this small, but it does seem to yield a small\nand repeatable performance benefit (pipelined pgbench -S).\n\n\n> -}\t\t\tResourceOwnerData;\n> +} ResourceOwnerData;\n\nThat one pgindent will likely undo again ;)\n\n\n> +/*\n> + * Forget that an object is owned by a ResourceOwner\n> + *\n> + * Note: if same resource ID is associated with the ResourceOwner more than\n> + * once, one instance is removed.\n> + */\n> +void\n> +ResourceOwnerForget(ResourceOwner owner, Datum value, ResourceOwnerFuncs *kind)\n> +{\n> +#ifdef RESOWNER_TRACE\n> +\telog(LOG, \"FORGET %d: owner %p value \" UINT64_FORMAT \", kind: %s\",\n> +\t\t resowner_trace_counter++, owner, DatumGetUInt64(value), kind->name);\n> +#endif\n> +\n> +\t/*\n> +\t * Mustn't call this after we have already started releasing resources.\n> +\t * (Release callback functions are not allowed to release additional\n> +\t * resources.)\n> +\t */\n> +\tif (owner->releasing)\n> +\t\telog(ERROR, \"ResourceOwnerForget called for %s after release started\", kind->name);\n> +\n> +\t/* Search through all items in the array first. */\n> +\tfor (uint32 i = 0; i < owner->narr; i++)\n> +\t{\n\nHm, I think we often end up releasing resources in a lifo order. 
Would it\npossibly make sense to search in reverse order?\n\n\nSeparately, I wonder if it'd be worth to check if there are any other matches\nwhen assertions are enabled.\n\n\n> ResourceOwnerRelease(ResourceOwner owner,\n> @@ -492,6 +631,15 @@ ResourceOwnerRelease(ResourceOwner owner,\n> {\n> \t/* There's not currently any setup needed before recursing */\n> \tResourceOwnerReleaseInternal(owner, phase, isCommit, isTopLevel);\n> +\n> +#ifdef RESOWNER_STATS\n> +\tif (isTopLevel)\n> +\t{\n> +\t\telog(LOG, \"RESOWNER STATS: lookups: array %d, hash %d\", narray_lookups, nhash_lookups);\n> +\t\tnarray_lookups = 0;\n> +\t\tnhash_lookups = 0;\n> +\t}\n> +#endif\n> }\n\nWhy do we have ResourceOwnerRelease() vs ResourceOwnerReleaseInternal()? Looks\nlike that's older than your patch, but still confused.\n\n\n> +\t/* Let add-on modules get a chance too */\n> +\tfor (item = ResourceRelease_callbacks; item; item = next)\n> +\t{\n> +\t\t/* allow callbacks to unregister themselves when called */\n> +\t\tnext = item->next;\n> +\t\titem->callback(phase, isCommit, isTopLevel, item->arg);\n> +\t}\n\nI wonder if it's really a good idea to continue having this API. What you are\nallowed to do in a resowner changed\n\n\n> +/*\n> + * ResourceOwnerReleaseAllOfKind\n> + *\t\tRelease all resources of a certain type held by this owner.\n> + */\n> +void\n> +ResourceOwnerReleaseAllOfKind(ResourceOwner owner, ResourceOwnerFuncs *kind)\n> +{\n> +\t/* Mustn't call this after we have already started releasing resources. */\n> +\tif (owner->releasing)\n> +\t\telog(ERROR, \"ResourceOwnerForget called for %s after release started\", kind->name);\n>\n> -\t\t/* Ditto for plancache references */\n> -\t\twhile (ResourceArrayGetAny(&(owner->planrefarr), &foundres))\n> +\t/*\n> +\t * Temporarily set 'releasing', to prevent calls to ResourceOwnerRemember\n> +\t * while we're scanning the owner. 
Enlarging the hash would cause us to\n> +\t * lose track of the point we're scanning.\n> +\t */\n> +\towner->releasing = true;\n\nIs it a problem that a possible error would lead to ->releasing = true to be\n\"leaked\"? I think it might be, because we haven't called ResourceOwnerSort(),\nwhich means we'd not process resources in the right order during abort\nprocessing.\n\n\n>\n\n> +/*\n> + * Hash function for value+kind combination.\n> + */\n> static inline uint32\n> hash_resource_elem(Datum value, ResourceOwnerFuncs *kind)\n> {\n> -\tDatum\t\tdata[2];\n> -\n> -\tdata[0] = value;\n> -\tdata[1] = PointerGetDatum(kind);\n> -\n> -\treturn hash_bytes((unsigned char *) &data, 2 * SIZEOF_DATUM);\n> +\t/*\n> +\t * Most resource kinds store a pointer in 'value', and pointers are unique\n> +\t * all on their own. But some resources store plain integers (Files and\n> +\t * Buffers as of this writing), so we want to incorporate the 'kind' in\n> +\t * the hash too, otherwise those resources will collide a lot. But\n> +\t * because there are only a few resource kinds like that - and only a few\n> +\t * resource kinds to begin with - we don't need to work too hard to mix\n> +\t * 'kind' into the hash. Just add it with hash_combine(), it perturbs the\n> +\t * result enough for our purposes.\n> +\t */\n> +#if SIZEOF_DATUM == 8\n> +\treturn hash_combine64(murmurhash64((uint64) value), (uint64) kind);\n> +#else\n> +\treturn hash_combine(murmurhash32((uint32) value), (uint32) kind);\n> +#endif\n> }\n\nWhy are we using a 64bit hash to then just throw out the high bits?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jul 2023 12:14:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-10 12:14:46 -0700, Andres Freund wrote:\n> > /*\n> > - * Initially allocated size of a ResourceArray. Must be power of two since\n> > - * we'll use (arraysize - 1) as mask for hashing.\n> > + * Size of the fixed-size array to hold most-recently remembered resources.\n> > */\n> > -#define RESARRAY_INIT_SIZE 16\n> > +#define RESOWNER_ARRAY_SIZE 32\n>\n> That's 512 bytes, pretty large to be searched sequentially. It's somewhat sad\n> that the array needs to include 8 byte of ResourceOwnerFuncs...\n\nOn that note:\n\nIt's perhaps worth noting that this change actually *increases* the size of\nResourceOwnerData. Previously:\n\n /* size: 576, cachelines: 9, members: 19 */\n /* sum members: 572, holes: 1, sum holes: 4 */\n\nnow:\n /* size: 696, cachelines: 11, members: 13 */\n /* sum members: 693, holes: 1, sum holes: 3 */\n\nIt won't make a difference memory-usage wise, given aset.c's rounding\nbehaviour. But it does mean that more memory needs to be zeroed, as\nResourceOwnerCreate() uses MemoryContextAllocZero.\n\nAs there's really no need to initialize the long ->arr, ->locks, it seems\nworth to just initialize explicitly instead of using MemoryContextAllocZero().\n\n\nWith the new representation, is there any point in having ->locks? We still\nneed ->nlocks to be able to switch to the lossy behaviour, but there doesn't\nseem to be much point in having the separate array.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jul 2023 12:40:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 10/07/2023 15:37, Peter Eisentraut wrote:\n> A few suggestions on the API:\n> \n> > +static ResourceOwnerFuncs tupdesc_resowner_funcs =\n> \n> These aren't all \"functions\", so maybe another word like \"info\" or\n> \"description\" would be more appropriate?\n> \n> \n> > + .release_phase = RESOURCE_RELEASE_AFTER_LOCKS,\n> > + .release_priority = RELEASE_PRIO_TUPDESC_REFS,\n> \n> I suggest assigning explicit non-zero numbers to the RESOURCE_RELEASE_*\n> constants, as well as changing RELEASE_PRIO_FIRST to 1 and adding an\n> assertion that the priority is not zero. Otherwise, someone might\n> forget to assign these fields and would implicitly get phase 0 and\n> priority 0, which would probably work in most cases, but wouldn't be the\n> explicit intention.\n\nDone.\n\n> Then again, you are using RELEASE_PRIO_FIRST for the pgcrypto patch. Is\n> it the recommendation that if there are no other requirements, external\n> users should use that? If we are going to open this up to external\n> users, we should probably have some documented guidance around this.\n\nI added a brief comment about that in the ResourceReleasePhase typedef.\n\nI also added a section to the README on how to define your own resource \nkind. (The README doesn't go into any detail on how to choose the \nrelease priority though).\n\n> > + .PrintLeakWarning = ResOwnerPrintTupleDescLeakWarning\n> \n> I think this can be refactored further. 
We really just need a function\n> to describe a resource, like maybe\n> \n> static char *\n> ResOwnerDescribeTupleDesc(Datum res)\n> {\n>      TupleDesc tupdesc = (TupleDesc) DatumGetPointer(res);\n> \n>      return psprintf(\"TupleDesc %p (%u,%d)\",\n>                      tupdesc, tupdesc->tdtypeid, tupdesc->tdtypmod);\n> }\n> \n> and then the common code can generate the warnings from that like\n> \n> elog(WARNING,\n>      \"%s leak: %s still referenced\",\n>      kind->name, kind->describe(value));\n> \n> That way, we could more easily develop further options, such as turning\n> this warning into an error (useful for TAP tests).\n> \n> Also, maybe, you could supply a default description that just prints\n> \"%p\", which appears to be applicable in about half the places.\n\nRefactored it that way.\n\n> Possible bonus project: I'm not sure these descriptions are so useful\n> anyway. It doesn't help me much in debugging if \"File 55 was not\n> released\" or some such. It would be cool if ResourceOwnerRemember() in\n> some debug mode could remember the source code location, and then print\n> that together with the leak warning.\n\nYeah, that would be useful. I remember I've hacked something like that \nas a one-off thing in the past, when debugging a leak.\n\n> > +#define ResourceOwnerRememberTupleDesc(owner, tupdesc) \\\n> > + ResourceOwnerRemember(owner, PointerGetDatum(tupdesc),\n> &tupdesc_resowner_funcs)\n> > +#define ResourceOwnerForgetTupleDesc(owner, tupdesc) \\\n> > + ResourceOwnerForget(owner, PointerGetDatum(tupdesc),\n> &tupdesc_resowner_funcs)\n> \n> I would prefer that these kinds of things be made inline functions, so\n> that we maintain the type safety of the previous interface. 
And it's\n> also easier when reading code to see type decorations.\n> \n> (But I suspect the above bonus project would require these to be macros?)\n> \n> \n> > + if (context->resowner)\n> > + ResourceOwnerForgetJIT(context->resowner, context);\n> \n> It seems most ResourceOwnerForget*() calls have this kind of check\n> before them. Could this be moved inside the function?\n\nMany do, but I don't know if it's the majority. We could make \nResourceOwnerForget(NULL) do nothing, but I think it's better to have the \ncheck in the callers. You wouldn't want to silently do nothing when the \nresource owner is not expected to be NULL.\n\n(I'm attaching new patch version in my reply to Andres shortly)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 25 Oct 2023 14:34:39 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 10/07/2023 22:14, Andres Freund wrote:\n> On 2023-05-25 20:14:23 +0300, Heikki Linnakangas wrote:\n>> @@ -4183,9 +4180,6 @@ FlushRelationsAllBuffers(SMgrRelation *smgrs, int nrels)\n>> \tif (use_bsearch)\n>> \t\tpg_qsort(srels, nrels, sizeof(SMgrSortArray), rlocator_comparator);\n>>\n>> -\t/* Make sure we can handle the pin inside the loop */\n>> -\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n>> -\n>> \tfor (i = 0; i < NBuffers; i++)\n>> \t{\n>> \t\tSMgrSortArray *srelent = NULL;\n> \n> Hm, a tad worried that increasing the number of ResourceOwnerEnlargeBuffers()\n> that drastically could have a visible overhead.\n\nYou mean in FlushRelationsAllBuffers() in particular? Seems pretty cheap \ncompared to all the other work involved, and it's not that performance \ncritical anyway.\n\nExtendBufferedRelShared() gave me a pause, as it is in the critical \npath. But there too, the ResourceOwnerEnlarge() call pales in comparison \nto all the other work that it does for each buffer.\n\nOther than that, I don't think I'm introducing new \nResourceOwnerEnlarge() calls to hot codepaths, just moving them around.\n\n>> +static ResourceOwnerFuncs tupdesc_resowner_funcs =\n>> +{\n>> +\t.name = \"tupdesc reference\",\n>> +\t.release_phase = RESOURCE_RELEASE_AFTER_LOCKS,\n>> +\t.release_priority = RELEASE_PRIO_TUPDESC_REFS,\n>> +\t.ReleaseResource = ResOwnerReleaseTupleDesc,\n>> +\t.PrintLeakWarning = ResOwnerPrintTupleDescLeakWarning\n>> +};\n> \n> I think these should all be marked const, there's no need to have them be in a\n> mutable section of the binary. 
Of course that will require adjusting a bunch\n> of the signatures too, but that seems fine.\n\nDone.\n\n>> --- a/src/backend/access/transam/xact.c\n>> +++ b/src/backend/access/transam/xact.c\n>> @@ -5171,9 +5171,24 @@ AbortSubTransaction(void)\n>> \t\tResourceOwnerRelease(s->curTransactionOwner,\n>> \t\t\t\t\t\t\t RESOURCE_RELEASE_BEFORE_LOCKS,\n>> \t\t\t\t\t\t\t false, false);\n>> +\n>> \t\tAtEOSubXact_RelationCache(false, s->subTransactionId,\n>> \t\t\t\t\t\t\t\t s->parent->subTransactionId);\n>> +\n>> +\n>> +\t\t/*\n>> +\t\t * AtEOSubXact_Inval sometimes needs to temporarily\n>> +\t\t * bump the refcount on the relcache entries that it processes.\n>> +\t\t * We cannot use the subtransaction's resource owner anymore,\n>> +\t\t * because we've already started releasing it. But we can use\n>> +\t\t * the parent resource owner.\n>> +\t\t */\n>> +\t\tCurrentResourceOwner = s->parent->curTransactionOwner;\n>> +\n>> \t\tAtEOSubXact_Inval(false);\n>> +\n>> +\t\tCurrentResourceOwner = s->curTransactionOwner;\n>> +\n>> \t\tResourceOwnerRelease(s->curTransactionOwner,\n>> \t\t\t\t\t\t\t RESOURCE_RELEASE_LOCKS,\n>> \t\t\t\t\t\t\t false, false);\n> \n> I might be a bit slow this morning, but why did this need to change as part of\n> this?\n\nI tried to explain it in the comment. AtEOSubXact_Inval() sometimes \nneeds to bump the refcount on a relcache entry, which is tracked in the \ncurrent resource owner. But we have already started to release it. On \nmaster, you can allocate new resources in a ResourceOwner after you have \nalready called ResourceOwnerRelease(RESOURCE_RELEASE_BEFORE_LOCKS), but \nwith this patch, that's no longer allowed and you get an assertion \nfailure. I think it was always a bit questionable; it's not clear that \nthe resource would've been released correctly if an error occurred. 
In \npractice, I don't think an error could actually occur while \nAtEOSubXact_Inval() briefly holds the relcache entry.\n\n>> @@ -5182,8 +5221,8 @@ TerminateBufferIO(BufferDesc *buf, bool clear_dirty, uint32 set_flag_bits)\n>> *\tIf I/O was in progress, we always set BM_IO_ERROR, even though it's\n>> *\tpossible the error condition wasn't related to the I/O.\n>> */\n>> -void\n>> -AbortBufferIO(Buffer buffer)\n>> +static void\n>> +AbortBufferIO(Buffer buffer, bool forget_owner)\n> \n> What is the forget_owner argument for? It's always false, afaics?\n\nRemoved. There used to be more callers of AbortBufferIO() which needed \nthat, but not anymore.\n\n>> +/* Convenience wrappers over ResourceOwnerRemember/Forget */\n>> +#define ResourceOwnerRememberCatCacheRef(owner, tuple) \\\n>> +\tResourceOwnerRemember(owner, PointerGetDatum(tuple), &catcache_resowner_funcs)\n>> +#define ResourceOwnerForgetCatCacheRef(owner, tuple) \\\n>> +\tResourceOwnerForget(owner, PointerGetDatum(tuple), &catcache_resowner_funcs)\n>> +#define ResourceOwnerRememberCatCacheListRef(owner, list) \\\n>> +\tResourceOwnerRemember(owner, PointerGetDatum(list), &catlistref_resowner_funcs)\n>> +#define ResourceOwnerForgetCatCacheListRef(owner, list) \\\n>> +\tResourceOwnerForget(owner, PointerGetDatum(list), &catlistref_resowner_funcs)\n> \n> Not specific to this type of resource owner, but I wonder if it'd not be\n> better to use inline functions for these - the PointerGetDatum() cast removes\n> nearly all type safety.\n\nPeter just said the same thing. I guess you're right, changed.\n\n>> +Releasing\n>> +---------\n>> +\n>> +Releasing the resources of a ResourceOwner happens in three phases:\n>> +\n>> +1. \"Before-locks\" resources\n>> +\n>> +2. Locks\n>> +\n>> +3. \"After-locks\" resources\n> \n> Given that we now have priorities, I wonder if we shouldn't merge the phase and\n> the priorities? 
We have code like this:\n> \n> \n> \tResourceOwnerRelease(TopTransactionResourceOwner,\n> \t\t\t\t\t\t RESOURCE_RELEASE_BEFORE_LOCKS,\n> \t\t\t\t\t\t true, true);\n> ...\n> \tResourceOwnerRelease(TopTransactionResourceOwner,\n> \t\t\t\t\t\t RESOURCE_RELEASE_LOCKS,\n> \t\t\t\t\t\t true, true);\n> \tResourceOwnerRelease(TopTransactionResourceOwner,\n> \t\t\t\t\t\t RESOURCE_RELEASE_AFTER_LOCKS,\n> \t\t\t\t\t\t true, true);\n> \n> Now, admittedly locks currently are \"special\" anyway, but we have a plan to\n> get rid of that. If we did, then we could do the last two as one as\n> ResourceOwnerRelease(from = RELEASE_PRIO_LOCKS, to: RELEASE_PRIO_LAST, ...)\n> or such.\n\nCurrently, we always release parent's resources before a child's, within \neach phase. I'm not sure if we rely on that parent-child ordering \nanywhere though. I'd like to leave it as it is for now, to limit the \nscope of this, and maybe revisit later.\n\n>> +Normally, you are expected to call ResourceOwnerForget on every resource so\n>> +that at commit, the ResourceOwner is empty (locks are an exception). If there\n>> +are any resources still held at commit, ResourceOwnerRelease will call the\n>> +PrintLeakWarning callback on each such resource. At abort, however, we truly\n>> +rely on the ResourceOwner mechanism and it is normal that there are resources\n>> +to be released.\n> \n> I am not sure it's a good idea to encode that all kinds of resources ought to\n> have been released prior on commit. I can see that not always making sense -\n> in fact we already don't do it for locks, no? Perhaps just add a \"commonly\"?\n\nHmm, I'd still like to print the warnings for resources that we expect \nto be released before commit, though. 
We could have a flag in the \nResourceOwnerDesc to give flexibility, but I'm inclined to wait until we \nget the first case of someone actually needing it.\n\n>> +typedef struct ResourceElem\n>> {\n>> ...\n>> +\tDatum\t\titem;\n>> +\tResourceOwnerFuncs *kind;\t/* NULL indicates a free hash table slot */\n>> +} ResourceElem;\n> \n>> /*\n>> - * Initially allocated size of a ResourceArray. Must be power of two since\n>> - * we'll use (arraysize - 1) as mask for hashing.\n>> + * Size of the fixed-size array to hold most-recently remembered resources.\n>> */\n>> -#define RESARRAY_INIT_SIZE 16\n>> +#define RESOWNER_ARRAY_SIZE 32\n> \n> That's 512 bytes, pretty large to be searched sequentially. It's somewhat sad\n> that the array needs to include 8 byte of ResourceOwnerFuncs...\n\nYeah. Early on I considered having a RegisterResourceKind() function \nthat you need to call first, and then each resource kind could be \nassigned a smaller ID. If we wanted to keep the old mechanism of \nseparate arrays for each different resource kind, that'd be the way to \ngo. But overall this seems simpler, and the performance is decent \ndespite that linear scan.\n\nNote that the current code in master also uses a plain array, until it \ngrows to 512 bytes, when it switches to a hash table. Lot of the details \nwith the array/hash combo are different, but that threshold was the same.\n\n>> +\tbool\t\treleasing;\t\t/* has ResourceOwnerRelease been called? */\n>> +\n>> +\t/*\n>> +\t * The fixed-size array for recent resources.\n>> +\t *\n>> +\t * If 'releasing' is set, the contents are sorted by release priority.\n>> +\t */\n>> +\tuint32\t\tnarr;\t\t\t/* how many items are stored in the array */\n>> +\tResourceElem arr[RESOWNER_ARRAY_SIZE];\n>> +\n>> +\t/*\n>> +\t * The hash table. Uses open-addressing. 
'nhash' is the number of items\n>> +\t * present; if it would exceed 'grow_at', we enlarge it and re-hash.\n>> +\t * 'grow_at' should be rather less than 'capacity' so that we don't waste\n>> +\t * too much time searching for empty slots.\n>> +\t *\n>> +\t * If 'releasing' is set, the contents are no longer hashed, but sorted by\n>> +\t * release priority. The first 'nhash' elements are occupied, the rest\n>> +\t * are empty.\n>> +\t */\n>> +\tResourceElem *hash;\n>> +\tuint32\t\tnhash;\t\t\t/* how many items are stored in the hash */\n>> +\tuint32\t\tcapacity;\t\t/* allocated length of hash[] */\n>> +\tuint32\t\tgrow_at;\t\t/* grow hash when reach this */\n>> +\n>> +\t/* The local locks cache. */\n>> +\tuint32\t\tnlocks;\t\t\t/* number of owned locks */\n>> \tLOCALLOCK *locks[MAX_RESOWNER_LOCKS];\t/* list of owned locks */\n> \n> Due to 'narr' being before the large 'arr', 'nhash' is always on a separate\n> cacheline. I.e. the number of cachelines accessed in happy paths is higher\n> than necessary, also relevant because there's more dependencies on narr, nhash\n> than on the contents of those.\n> \n> I'd probably order it so that both of the large elements (arr, locks) are at\n> the end.\n> \n> It's hard to know with changes this small, but it does seem to yield a small\n> and repeatable performance benefit (pipelined pgbench -S).\n\nSwapped the 'hash' and 'arr' parts of the struct, so that 'arr' and \n'locks' are at the end.\n\n>> +/*\n>> + * Forget that an object is owned by a ResourceOwner\n>> + *\n>> + * Note: if same resource ID is associated with the ResourceOwner more than\n>> + * once, one instance is removed.\n>> + */\n>> +void\n>> +ResourceOwnerForget(ResourceOwner owner, Datum value, ResourceOwnerFuncs *kind)\n>> +{\n>> +#ifdef RESOWNER_TRACE\n>> +\telog(LOG, \"FORGET %d: owner %p value \" UINT64_FORMAT \", kind: %s\",\n>> +\t\t resowner_trace_counter++, owner, DatumGetUInt64(value), kind->name);\n>> +#endif\n>> +\n>> +\t/*\n>> +\t * Mustn't call this 
after we have already started releasing resources.\n>> +\t * (Release callback functions are not allowed to release additional\n>> +\t * resources.)\n>> +\t */\n>> +\tif (owner->releasing)\n>> +\t\telog(ERROR, \"ResourceOwnerForget called for %s after release started\", kind->name);\n>> +\n>> +\t/* Search through all items in the array first. */\n>> +\tfor (uint32 i = 0; i < owner->narr; i++)\n>> +\t{\n> \n> Hm, I think we often end up releasing resources in a lifo order. Would it\n> possibly make sense to search in reverse order?\n\nChanged. I can see a small speedup in a micro benchmark, when the array \nis almost full and you repeatedly remember / forget one more resource.\n\n> Separately, I wonder if it'd be worth to check if there are any other matches\n> when assertions are enabled.\n\nIt is actually allowed to remember the same resource twice. There's a \ncomment in ResourceOwnerForget (previously in ResourceArrayRemove) that \neach call releases one instance in that case. I'm not sure if we rely on \nthat anywhere currently.\n\n>> ResourceOwnerRelease(ResourceOwner owner,\n>> @@ -492,6 +631,15 @@ ResourceOwnerRelease(ResourceOwner owner,\n>> {\n>> \t/* There's not currently any setup needed before recursing */\n>> \tResourceOwnerReleaseInternal(owner, phase, isCommit, isTopLevel);\n>> +\n>> +#ifdef RESOWNER_STATS\n>> +\tif (isTopLevel)\n>> +\t{\n>> +\t\telog(LOG, \"RESOWNER STATS: lookups: array %d, hash %d\", narray_lookups, nhash_lookups);\n>> +\t\tnarray_lookups = 0;\n>> +\t\tnhash_lookups = 0;\n>> +\t}\n>> +#endif\n>> }\n> \n> Why do we have ResourceOwnerRelease() vs ResourceOwnerReleaseInternal()? Looks\n> like that's older than your patch, but still confused.\n\nDunno. It provides a place to do things on the top-level resource owner \nthat you don't want to do when you recurse to the children, as hinted by \nthe comment, but I don't know if we've ever had such things. 
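In toy form, the split is roughly this (a made-up sketch, not the actual code):

```c
#include <assert.h>
#include <stddef.h>

typedef struct ToyResOwner
{
	struct ToyResOwner *firstchild;	/* head of the list of children */
	struct ToyResOwner *nextchild;	/* next child of the same parent */
	int			released;
} ToyResOwner;

static int	toy_release_stats;	/* stand-in for the RESOWNER_STATS counters */

/* The recursive part: visit children first, then this owner itself */
static void
toy_release_internal(ToyResOwner *owner)
{
	for (ToyResOwner *child = owner->firstchild; child != NULL; child = child->nextchild)
		toy_release_internal(child);
	owner->released = 1;
	toy_release_stats++;
}

/* The non-recursive wrapper: a place for work done once per top-level call */
static int
toy_release(ToyResOwner *owner)
{
	toy_release_stats = 0;
	toy_release_internal(owner);
	/* e.g. print the accumulated statistics here, exactly once */
	return toy_release_stats;
}
```

The wrapper is the natural place for anything that must happen exactly once per top-level release, which is how the stats printing uses it.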
It was \nhandy for printing the RESOWNER_STATS now, though :-).\n\n>> +\t/* Let add-on modules get a chance too */\n>> +\tfor (item = ResourceRelease_callbacks; item; item = next)\n>> +\t{\n>> +\t\t/* allow callbacks to unregister themselves when called */\n>> +\t\tnext = item->next;\n>> +\t\titem->callback(phase, isCommit, isTopLevel, item->arg);\n>> +\t}\n> \n> I wonder if it's really a good idea to continue having this API. What you are\n> allowed to do in a resowner changed\n\nHmm, I don't think anything changed for users of this API. I'm not in a \nhurry to remove this and force extensions to adapt, as it's not hard to \nmaintain this.\n\n>> +/*\n>> + * ResourceOwnerReleaseAllOfKind\n>> + *\t\tRelease all resources of a certain type held by this owner.\n>> + */\n>> +void\n>> +ResourceOwnerReleaseAllOfKind(ResourceOwner owner, ResourceOwnerFuncs *kind)\n>> +{\n>> +\t/* Mustn't call this after we have already started releasing resources. */\n>> +\tif (owner->releasing)\n>> +\t\telog(ERROR, \"ResourceOwnerForget called for %s after release started\", kind->name);\n>>\n>> -\t\t/* Ditto for plancache references */\n>> -\t\twhile (ResourceArrayGetAny(&(owner->planrefarr), &foundres))\n>> +\t/*\n>> +\t * Temporarily set 'releasing', to prevent calls to ResourceOwnerRemember\n>> +\t * while we're scanning the owner. Enlarging the hash would cause us to\n>> +\t * lose track of the point we're scanning.\n>> +\t */\n>> +\towner->releasing = true;\n> \n> Is it a problem that a possible error would lead to ->releasing = true to be\n> \"leaked\"? I think it might be, because we haven't called ResourceOwnerSort(),\n> which means we'd not process resources in the right order during abort\n> processing.\n\nGood point. 
I added a separate 'sorted' flag to handle that gracefully.\n\n>> +/*\n>> + * Hash function for value+kind combination.\n>> + */\n>> static inline uint32\n>> hash_resource_elem(Datum value, ResourceOwnerFuncs *kind)\n>> {\n>> -\tDatum\t\tdata[2];\n>> -\n>> -\tdata[0] = value;\n>> -\tdata[1] = PointerGetDatum(kind);\n>> -\n>> -\treturn hash_bytes((unsigned char *) &data, 2 * SIZEOF_DATUM);\n>> +\t/*\n>> +\t * Most resource kinds store a pointer in 'value', and pointers are unique\n>> +\t * all on their own. But some resources store plain integers (Files and\n>> +\t * Buffers as of this writing), so we want to incorporate the 'kind' in\n>> +\t * the hash too, otherwise those resources will collide a lot. But\n>> +\t * because there are only a few resource kinds like that - and only a few\n>> +\t * resource kinds to begin with - we don't need to work too hard to mix\n>> +\t * 'kind' into the hash. Just add it with hash_combine(), it perturbs the\n>> +\t * result enough for our purposes.\n>> +\t */\n>> +#if SIZEOF_DATUM == 8\n>> +\treturn hash_combine64(murmurhash64((uint64) value), (uint64) kind);\n>> +#else\n>> +\treturn hash_combine(murmurhash32((uint32) value), (uint32) kind);\n>> +#endif\n>> }\n> \n> Why are we using a 64bit hash to then just throw out the high bits?\n\nIt was just expedient when the inputs are 64-bit. Implementation of \nmurmurhash64 is tiny, and we already use murmurhash32 elsewhere, so it \nseemed like an OK choice. I'm sure there are better algorithms out \nthere, though.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 25 Oct 2023 15:43:36 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-25 15:43:36 +0300, Heikki Linnakangas wrote:\n> On 10/07/2023 22:14, Andres Freund wrote:\n> > > /*\n> > > - * Initially allocated size of a ResourceArray. Must be power of two since\n> > > - * we'll use (arraysize - 1) as mask for hashing.\n> > > + * Size of the fixed-size array to hold most-recently remembered resources.\n> > > */\n> > > -#define RESARRAY_INIT_SIZE 16\n> > > +#define RESOWNER_ARRAY_SIZE 32\n> >\n> > That's 512 bytes, pretty large to be searched sequentially. It's somewhat sad\n> > that the array needs to include 8 byte of ResourceOwnerFuncs...\n>\n> Yeah. Early on I considered having a RegisterResourceKind() function that\n> you need to call first, and then each resource kind could be assigned a\n> smaller ID. If we wanted to keep the old mechanism of separate arrays for\n> each different resource kind, that'd be the way to go. But overall this\n> seems simpler, and the performance is decent despite that linear scan.\n\nRealistically, on a 64bit platform, the compiler / we want 64bit alignment for\nResourceElem->item, so making \"kind\" smaller isn't going to do much...\n\n\n> > Due to 'narr' being before the large 'arr', 'nhash' is always on a separate\n> > cacheline. I.e. 
the number of cachelines accessed in happy paths is higher\n> > than necessary, also relevant because there's more dependencies on narr, nhash\n> > than on the contents of those.\n> >\n> > I'd probably order it so that both of the large elements (arr, locks) are at\n> > the end.\n> >\n> > It's hard to know with changes this small, but it does seem to yield a small\n> > and repeatable performance benefit (pipelined pgbench -S).\n>\n> Swapped the 'hash' and 'arr' parts of the struct, so that 'arr' and 'locks'\n> are at the end.\n\nI don't remember what the layout was then, but now there are 10 bytes of holes\non x86-64 (and I think most other 64bit architectures).\n\n\nstruct ResourceOwnerData {\n ResourceOwner parent; /* 0 8 */\n ResourceOwner firstchild; /* 8 8 */\n ResourceOwner nextchild; /* 16 8 */\n const char * name; /* 24 8 */\n _Bool releasing; /* 32 1 */\n _Bool sorted; /* 33 1 */\n\n /* XXX 6 bytes hole, try to pack */\n\n ResourceElem * hash; /* 40 8 */\n uint32 nhash; /* 48 4 */\n uint32 capacity; /* 52 4 */\n uint32 grow_at; /* 56 4 */\n uint32 narr; /* 60 4 */\n /* --- cacheline 1 boundary (64 bytes) --- */\n ResourceElem arr[32]; /* 64 512 */\n /* --- cacheline 9 boundary (576 bytes) --- */\n uint32 nlocks; /* 576 4 */\n\n /* XXX 4 bytes hole, try to pack */\n\n LOCALLOCK * locks[15]; /* 584 120 */\n\n /* size: 704, cachelines: 11, members: 14 */\n /* sum members: 694, holes: 2, sum holes: 10 */\n};\n\nWhile somewhat annoying from a human pov, it seems like it would make sense to\nrearrange a bit? At least moving nlocks into the first cacheline would be good\n(not that we guarantee cacheline alignment rn, though it sometimes works out\nto be aligned).\n\nWe also can make narr, nlocks uint8s.\n\nWith that narrowing and reordering, we end up with 688 bytes. 
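Those numbers can be cross-checked with a mirror struct that stubs out the pointer types (this assumes an LP64 ABI such as x86-64; the offsets won't hold on 32-bit):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins with the same size and alignment as the real types on LP64 */
typedef struct
{
	uintptr_t	item;			/* Datum */
	void	   *kind;			/* pointer to the resource kind */
} MirrorElem;					/* 16 bytes */

typedef struct MirrorOwner
{
	struct MirrorOwner *parent;		/* offset 0 */
	struct MirrorOwner *firstchild;	/* offset 8 */
	struct MirrorOwner *nextchild;	/* offset 16 */
	const char *name;			/* offset 24 */
	bool		releasing;		/* offset 32 */
	bool		sorted;			/* offset 33 */
	uint8_t		narr;			/* offset 34 */
	uint8_t		nlocks;			/* offset 35 */
	uint32_t	nhash;			/* offset 36 */
	uint32_t	capacity;		/* offset 40 */
	uint32_t	grow_at;		/* offset 44 */
	MirrorElem	arr[32];		/* offset 48, 512 bytes */
	MirrorElem *hash;			/* offset 560 */
	void	   *locks[15];		/* offset 568, 120 bytes */
} MirrorOwner;					/* 688 bytes, no padding holes */
```

sizeof comes out to 688 with no holes, matching the 688 bytes above.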
Not worth for\nmost structs, but here perhaps worth doing?\n\nMoving the lock length to the start of the struct would make sense to keep\nthings that are likely to be used in a \"stalling\" manner at the start of the\nstruct / in one cacheline.\n\nIt's not too awful to have it be in this order:\n\nstruct ResourceOwnerData {\n        ResourceOwner              parent;               /*     0     8 */\n        ResourceOwner              firstchild;           /*     8     8 */\n        ResourceOwner              nextchild;            /*    16     8 */\n        const char  *              name;                 /*    24     8 */\n        _Bool                      releasing;            /*    32     1 */\n        _Bool                      sorted;               /*    33     1 */\n        uint8                      narr;                 /*    34     1 */\n        uint8                      nlocks;               /*    35     1 */\n        uint32                     nhash;                /*    36     4 */\n        uint32                     capacity;             /*    40     4 */\n        uint32                     grow_at;              /*    44     4 */\n        ResourceElem               arr[32];              /*    48   512 */\n        /* --- cacheline 8 boundary (512 bytes) was 48 bytes ago --- */\n        ResourceElem *             hash;                 /*   560     8 */\n        LOCALLOCK *                locks[15];            /*   568   120 */\n\n        /* size: 688, cachelines: 11, members: 14 */\n        /* last cacheline: 48 bytes */\n};\n\nRequires rephrasing a few comments to document that the lengths are separate\nfrom the array / hashtable / locks, but otherwise...\n\nThis reliably shows a decent speed improvement in my stress test [1], on the\norder of 5%.\n\n\nAt that point, the first array entry fits into the first cacheline. If we were\nto move parent, firstchild, nextchild, name further down the struct, three\narray entries would be on the first line. Just using the array presumably is a\nvery common case, so that might be worth it?\n\nI got less clear performance results with this one, and it's also quite\npossible it could hurt, if resowners aren't actually \"used\", just\nreleased. Therefore it's probably not worth it for now.\n\n\nGreetings,\n\nAndres Freund\n\n[1] pgbench running\n DO $$ BEGIN FOR i IN 1..5000 LOOP PERFORM abalance FROM pgbench_accounts WHERE aid = 17;END LOOP;END;$$;\n\n\n",
"msg_date": "Wed, 25 Oct 2023 11:07:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "It looks like this patch set needs a bit of surgery to adapt to the LLVM \nchanges in 9dce22033d. The cfbot is reporting compiler warnings about \nthis, and also some crashes, which might also be caused by this.\n\nI do like the updated APIs. (Maybe the repeated \".DebugPrint = NULL, \n /* default message is fine */\" lines could be omitted?)\n\nI like that one can now easily change the elog(WARNING) in \nResourceOwnerReleaseAll() to a PANIC or something to get automatic \nverification during testing. I wonder if we should make this the \ndefault if assertions are on? This would need some adjustments to \nsrc/test/modules/test_resowner because it would then fail.\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 11:43:32 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 25/10/2023 21:07, Andres Freund wrote:\n> It's not too awful to have it be in this order:\n> \n> struct ResourceOwnerData {\n> ResourceOwner parent; /* 0 8 */\n> ResourceOwner firstchild; /* 8 8 */\n> ResourceOwner nextchild; /* 16 8 */\n> const char * name; /* 24 8 */\n> _Bool releasing; /* 32 1 */\n> _Bool sorted; /* 33 1 */\n> uint8 narr; /* 34 1 */\n> uint8 nlocks; /* 35 1 */\n> uint32 nhash; /* 36 4 */\n> uint32 capacity; /* 40 4 */\n> uint32 grow_at; /* 44 4 */\n> ResourceElem arr[32]; /* 48 512 */\n> /* --- cacheline 8 boundary (512 bytes) was 48 bytes ago --- */\n> ResourceElem * hash; /* 560 8 */\n> LOCALLOCK * locks[15]; /* 568 120 */\n> \n> /* size: 688, cachelines: 11, members: 14 */\n> /* last cacheline: 48 bytes */\n> };\n> \n> Requires rephrasing a few comments to document that the lengths are separate\n> from the array / hashtable / locks, but otherwise...\n\nHmm, let's move 'capacity' and 'grow_at' after 'arr', too. They are only \nneeded together with 'hash'.\n\n> This reliably shows a decent speed improvement in my stress test [1], on the\n> order of 5%.\n> \n> [1] pgbench running\n> DO $$ BEGIN FOR i IN 1..5000 LOOP PERFORM abalance FROM pgbench_accounts WHERE aid = 17;END LOOP;END;$$;\n\nI'm seeing similar results, although there's enough noise in the test \nthat I'm not sure how real they would be across different tests.\n\n> At that point, the first array entry fits into the first cacheline. If we were\n> to move parent, firstchild, nextchild, name further down the struct, three\n> array entries would be on the first line. Just using the array presumably is a\n> very common case, so that might be worth it?\n> \n> I got less clear performance results with this one, and it's also quite\n> possible it could hurt, if resowners aren't actually \"used\", just\n> released. Therefore it's probably not worth it for now.\n\nYou're assuming that the ResourceOwner struct begins at a cacheline \nboundary. 
That's not usually true, we don't try to cacheline-align it. \nSo while it's helpful to avoid padding and to keep frequently-accessed \nfields close to each other, there's no benefit in keeping them at the \nbeginning of the struct.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 7 Nov 2023 13:24:48 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 06/11/2023 12:43, Peter Eisentraut wrote:\n> It looks like this patch set needs a bit of surgery to adapt to the LLVM\n> changes in 9dce22033d. The cfbot is reporting compiler warnings about\n> this, and also some crashes, which might also be caused by this.\n\nUpdated patch set attached. I fixed those LLVM crashes, and reordered \nthe fields in the ResourceOwner struct per Andres' suggestion.\n\n> I do like the updated APIs. (Maybe the repeated \".DebugPrint = NULL,\n> /* default message is fine */\" lines could be omitted?)\n> \n> I like that one can now easily change the elog(WARNING) in\n> ResourceOwnerReleaseAll() to a PANIC or something to get automatic\n> verification during testing. I wonder if we should make this the\n> default if assertions are on? This would need some adjustments to\n> src/test/modules/test_resowner because it would then fail.\n\nYeah, perhaps, but I'll leave that to a possible follow-up patch.\n\nI feel pretty good about this overall. Barring objections or new cfbot \nfailures, I will commit this in the next few days.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 7 Nov 2023 13:28:28 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hello Heikki,\n\n07.11.2023 14:28, Heikki Linnakangas wrote:\n>\n> I feel pretty good about this overall. Barring objections or new cfbot failures, I will commit this in the next few days.\n>\n\nA script that I published in [1] detects several typos in the patches:\nbetwen\nther\nReourceOwner\nResouceOwnerSort\nReleaseOwnerRelease\n\nMaybe it's worth fixing them now, or just accumulate and clean all such\ndefects once a year...\n\n[1] https://www.postgresql.org/message-id/27a7998b-d67f-e32a-a28d-e659a2e390c6%40gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 7 Nov 2023 17:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 07/11/2023 16:00, Alexander Lakhin wrote:\n> 07.11.2023 14:28, Heikki Linnakangas wrote:\n>>\n>> I feel pretty good about this overall. Barring objections or new cfbot failures, I will commit this in the next few days.\n> \n> A script, that I published in [1], detects several typos in the patches:\n> betwen\n> ther\n> ReourceOwner\n> ResouceOwnerSort\n> ReleaseOwnerRelease\n> \n> Maybe it's worth to fix them now, or just accumulate and clean all such\n> defects once a year...\n\nFixed these, and pushed. Thanks everyone for reviewing!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:37:42 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hello Heikki,\n\n08.11.2023 14:37, Heikki Linnakangas wrote:\n>\n> Fixed these, and pushed. Thanks everyone for reviewing!\n>\n\nPlease look at a new assertion failure, I've managed to trigger with this script:\nCREATE TABLE t(\ni01 int, i02 int, i03 int, i04 int, i05 int, i06 int, i07 int, i08 int, i09 int, i10 int,\ni11 int, i12 int, i13 int, i14 int, i15 int, i16 int, i17 int, i18 int, i19 int, i20 int,\ni21 int, i22 int, i23 int, i24 int, i25 int, i26 int\n);\nCREATE TABLE tp PARTITION OF t FOR VALUES IN (1);\n\n(gdb) bt\n...\n#5 0x0000560dd4e42f77 in ExceptionalCondition (conditionName=0x560dd5059fbc \"owner->narr == 0\", fileName=0x560dd5059e31 \n\"resowner.c\", lineNumber=362) at assert.c:66\n#6 0x0000560dd4e93cbd in ResourceOwnerReleaseAll (owner=0x560dd69cebd0, phase=RESOURCE_RELEASE_BEFORE_LOCKS, \nprintLeakWarnings=false) at resowner.c:362\n#7 0x0000560dd4e92e22 in ResourceOwnerReleaseInternal (owner=0x560dd69cebd0, phase=RESOURCE_RELEASE_BEFORE_LOCKS, \nisCommit=false, isTopLevel=true) at resowner.c:725\n#8 0x0000560dd4e92d42 in ResourceOwnerReleaseInternal (owner=0x560dd69ce3f8, phase=RESOURCE_RELEASE_BEFORE_LOCKS, \nisCommit=false, isTopLevel=true) at resowner.c:678\n#9 0x0000560dd4e92cdb in ResourceOwnerRelease (owner=0x560dd69ce3f8, phase=RESOURCE_RELEASE_BEFORE_LOCKS, \nisCommit=false, isTopLevel=true) at resowner.c:652\n#10 0x0000560dd47316ef in AbortTransaction () at xact.c:2848\n#11 0x0000560dd47329ac in AbortCurrentTransaction () at xact.c:3339\n#12 0x0000560dd4c37284 in PostgresMain (dbname=0x560dd69779f0 \"regression\", username=0x560dd69779d8 \"law\") at \npostgres.c:4370\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 8 Nov 2023 21:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 08/11/2023 20:00, Alexander Lakhin wrote:\n> Please look at a new assertion failure, I've managed to trigger with this script:\n> CREATE TABLE t(\n> i01 int, i02 int, i03 int, i04 int, i05 int, i06 int, i07 int, i08 int, i09 int, i10 int,\n> i11 int, i12 int, i13 int, i14 int, i15 int, i16 int, i17 int, i18 int, i19 int, i20 int,\n> i21 int, i22 int, i23 int, i24 int, i25 int, i26 int\n> );\n> CREATE TABLE tp PARTITION OF t FOR VALUES IN (1);\n> \n> (gdb) bt\n> ...\n> #5 0x0000560dd4e42f77 in ExceptionalCondition (conditionName=0x560dd5059fbc \"owner->narr == 0\", fileName=0x560dd5059e31\n> \"resowner.c\", lineNumber=362) at assert.c:66\n> #6 0x0000560dd4e93cbd in ResourceOwnerReleaseAll (owner=0x560dd69cebd0, phase=RESOURCE_RELEASE_BEFORE_LOCKS,\n> printLeakWarnings=false) at resowner.c:362\n> #7 0x0000560dd4e92e22 in ResourceOwnerReleaseInternal (owner=0x560dd69cebd0, phase=RESOURCE_RELEASE_BEFORE_LOCKS,\n> isCommit=false, isTopLevel=true) at resowner.c:725\n> #8 0x0000560dd4e92d42 in ResourceOwnerReleaseInternal (owner=0x560dd69ce3f8, phase=RESOURCE_RELEASE_BEFORE_LOCKS,\n> isCommit=false, isTopLevel=true) at resowner.c:678\n> #9 0x0000560dd4e92cdb in ResourceOwnerRelease (owner=0x560dd69ce3f8, phase=RESOURCE_RELEASE_BEFORE_LOCKS,\n> isCommit=false, isTopLevel=true) at resowner.c:652\n> #10 0x0000560dd47316ef in AbortTransaction () at xact.c:2848\n> #11 0x0000560dd47329ac in AbortCurrentTransaction () at xact.c:3339\n> #12 0x0000560dd4c37284 in PostgresMain (dbname=0x560dd69779f0 \"regression\", username=0x560dd69779d8 \"law\") at\n> postgres.c:4370\n> ...\n\nThanks for the testing! Fixed. I introduced that bug in one of the last \nrevisions of the patch: I changed ResourceOwnerSort() to not bother \nmoving all the elements from the array to the hash table if the hash \ntable is allocated but empty. 
However, ResourceOwnerRelease() didn't get \nthe memo, and still checked \"owner->hash != NULL\" rather than \n\"owner->nhash == 0\".\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 9 Nov 2023 01:48:59 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hello Heikki,\n\n09.11.2023 02:48, Heikki Linnakangas wrote:\n>\n> Thanks for the testing! Fixed. ...\n\nThank you for the fix!\n\nPlease look at one more failure caused by the new implementation of\nResourceOwners:\nnumdbs=80\nfor ((i=1;i<=10;i++)); do\necho \"ITERATION $i\"\n\nfor ((d=1;d<=$numdbs;d++)); do createdb db$d; done\n\nfor ((d=1;d<=$numdbs;d++)); do\necho \"\ncreate table t(t1 text);\ndrop table t;\n\" | psql -d db$d >psql-$d.log 2>&1 &\ndone\nwait\ngrep 'PANIC' server.log && break;\n\nfor ((d=1;d<=$numdbs;d++)); do dropdb db$d; done\ngrep 'PANIC' server.log && break;\ndone\n\nI could see two failure modes:\n2023-11-10 08:42:28.870 UTC [1163274] ERROR: ResourceOwnerEnlarge called after release started\n2023-11-10 08:42:28.870 UTC [1163274] STATEMENT: drop table t;\n2023-11-10 08:42:28.870 UTC [1163274] WARNING: AbortTransaction while in COMMIT state\n2023-11-10 08:42:28.870 UTC [1163274] PANIC: cannot abort transaction 906, it was already committed\n\n2023-11-10 08:43:27.897 UTC [1164148] ERROR: ResourceOwnerEnlarge called after release started\n2023-11-10 08:43:27.897 UTC [1164148] STATEMENT: DROP DATABASE db69;\n2023-11-10 08:43:27.897 UTC [1164148] WARNING: AbortTransaction while in COMMIT state\n2023-11-10 08:43:27.897 UTC [1164148] PANIC: cannot abort transaction 1043, it was already committed\n\nThe stack trace for the second ERROR (ResourceOwnerEnlarge called ...) 
is:\n...\n#6 0x0000558af5b2f35c in ResourceOwnerEnlarge (owner=0x558af716f3c8) at resowner.c:455\n#7 0x0000558af5888f18 in dsm_create_descriptor () at dsm.c:1207\n#8 0x0000558af5889205 in dsm_attach (h=3172038420) at dsm.c:697\n#9 0x0000558af5b1ebed in get_segment_by_index (area=0x558af711da18, index=2) at dsa.c:1764\n#10 0x0000558af5b1ea4b in dsa_get_address (area=0x558af711da18, dp=2199023329568) at dsa.c:970\n#11 0x0000558af5669366 in dshash_seq_next (status=0x7ffdd5912fd0) at dshash.c:687\n#12 0x0000558af5901998 in pgstat_drop_database_and_contents (dboid=16444) at pgstat_shmem.c:830\n#13 0x0000558af59016f0 in pgstat_drop_entry (kind=PGSTAT_KIND_DATABASE, dboid=16444, objoid=0) at pgstat_shmem.c:888\n#14 0x0000558af59044eb in AtEOXact_PgStat_DroppedStats (xact_state=0x558af7111ee0, isCommit=true) at pgstat_xact.c:88\n#15 0x0000558af59043c7 in AtEOXact_PgStat (isCommit=true, parallel=false) at pgstat_xact.c:55\n#16 0x0000558af53c782e in CommitTransaction () at xact.c:2371\n#17 0x0000558af53c709e in CommitTransactionCommand () at xact.c:306\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 10 Nov 2023 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Thanks for the testing again!\n\nOn 10/11/2023 11:00, Alexander Lakhin wrote:\n> I could see two failure modes:\n> 2023-11-10 08:42:28.870 UTC [1163274] ERROR: ResourceOwnerEnlarge called after release started\n> 2023-11-10 08:42:28.870 UTC [1163274] STATEMENT: drop table t;\n> 2023-11-10 08:42:28.870 UTC [1163274] WARNING: AbortTransaction while in COMMIT state\n> 2023-11-10 08:42:28.870 UTC [1163274] PANIC: cannot abort transaction 906, it was already committed\n> \n> 2023-11-10 08:43:27.897 UTC [1164148] ERROR: ResourceOwnerEnlarge called after release started\n> 2023-11-10 08:43:27.897 UTC [1164148] STATEMENT: DROP DATABASE db69;\n> 2023-11-10 08:43:27.897 UTC [1164148] WARNING: AbortTransaction while in COMMIT state\n> 2023-11-10 08:43:27.897 UTC [1164148] PANIC: cannot abort transaction 1043, it was already committed\n> \n> The stack trace for the second ERROR (ResourceOwnerEnlarge called ...) is:\n> ...\n> #6 0x0000558af5b2f35c in ResourceOwnerEnlarge (owner=0x558af716f3c8) at resowner.c:455\n> #7 0x0000558af5888f18 in dsm_create_descriptor () at dsm.c:1207\n> #8 0x0000558af5889205 in dsm_attach (h=3172038420) at dsm.c:697\n> #9 0x0000558af5b1ebed in get_segment_by_index (area=0x558af711da18, index=2) at dsa.c:1764\n> #10 0x0000558af5b1ea4b in dsa_get_address (area=0x558af711da18, dp=2199023329568) at dsa.c:970\n> #11 0x0000558af5669366 in dshash_seq_next (status=0x7ffdd5912fd0) at dshash.c:687\n> #12 0x0000558af5901998 in pgstat_drop_database_and_contents (dboid=16444) at pgstat_shmem.c:830\n> #13 0x0000558af59016f0 in pgstat_drop_entry (kind=PGSTAT_KIND_DATABASE, dboid=16444, objoid=0) at pgstat_shmem.c:888\n> #14 0x0000558af59044eb in AtEOXact_PgStat_DroppedStats (xact_state=0x558af7111ee0, isCommit=true) at pgstat_xact.c:88\n> #15 0x0000558af59043c7 in AtEOXact_PgStat (isCommit=true, parallel=false) at pgstat_xact.c:55\n> #16 0x0000558af53c782e in CommitTransaction () at xact.c:2371\n> #17 0x0000558af53c709e in CommitTransactionCommand 
() at xact.c:306\n> ...\n\nThe quick, straightforward fix is to move the \"CurrentResourceOwner = \nNULL\" line earlier in CommitTransaction, per attached \n0003-Clear-CurrentResourceOwner-earlier-in-CommitTransact.patch. You're \nnot allowed to use the resource owner after you start to release it; it \nwas a bit iffy even before the ResourceOwner rewrite but now it's \nexplicitly forbidden. By clearing CurrentResourceOwner as soon as we \nstart releasing it, we can prevent any accidental use.\n\nWhen CurrentResourceOwner == NULL, dsm_attach() returns a handle that's \nnot associated with any ResourceOwner. That's appropriate for the pgstat \ncase. The DSA is \"pinned\", so the handle is forgotten from the \nResourceOwner right after calling dsm_attach() anyway.\n\nLooking closer at dsa.c, I think this is a wider problem though. The \ncomments don't make it very clear how it's supposed to interact with \nResourceOwners. There's just this brief comment in dsa_pin_mapping():\n\n> * By default, areas are owned by the current resource owner, which means they\n> * are detached automatically when that scope ends.\n\nThe dsa_area struct isn't directly owned by any ResourceOwner though. \nThe DSM segments created by dsa_create() or dsa_attach() are.\n\nBut the functions dsa_allocate() and dsa_get_address() can create or \nattach more DSM segments to the area, and they will be owned by \nthe current resource owner *at the time of the call*. So if you call \ndsa_get_address() while in a different resource owner, things get very \nconfusing. 
The attached new test module demonstrates that \n(0001-Add-test_dsa-module.patch), here's a shortened version:\n\n\ta = dsa_create(tranche_id);\n\n\t/* Switch to new resource owner */\n\toldowner = CurrentResourceOwner;\n\tchildowner = ResourceOwnerCreate(oldowner, \"temp owner\");\n\tCurrentResourceOwner = childowner;\n\n\t/* make a bunch of allocations */\n\tfor (int i = 0; i < 10000; i++)\n\t\tp[i] = dsa_allocate(a, 1000);\n\n\t/* Release the child resource owner */\n\tCurrentResourceOwner = oldowner;\n\tResourceOwnerRelease(childowner,\n\t\t\t\t\t\t RESOURCE_RELEASE_BEFORE_LOCKS,\n\t\t\t\t\t\t true, false);\n\tResourceOwnerRelease(childowner,\n\t\t\t\t\t\t RESOURCE_RELEASE_LOCKS,\n\t\t\t\t\t\t true, false);\n\tResourceOwnerRelease(childowner,\n\t\t\t\t\t\t RESOURCE_RELEASE_AFTER_LOCKS,\n\t\t\t\t\t\t true, false);\n\tResourceOwnerDelete(childowner);\n\n\n\tdsa_detach(a);\n\nThis first prints warnings on The ResourceOwnerRelease() calls:\n\n2023-11-10 13:57:21.475 EET [745346] WARNING: resource was not closed: \ndynamic shared memory segment 2395813396\n2023-11-10 13:57:21.475 EET [745346] WARNING: resource was not closed: \ndynamic shared memory segment 3922992700\n2023-11-10 13:57:21.475 EET [745346] WARNING: resource was not closed: \ndynamic shared memory segment 1155452762\n2023-11-10 13:57:21.475 EET [745346] WARNING: resource was not closed: \ndynamic shared memory segment 4045183168\n2023-11-10 13:57:21.476 EET [745346] WARNING: resource was not closed: \ndynamic shared memory segment 1529990480\n\nAnd a segfault at the dsm_detach() call:\n\n2023-11-10 13:57:21.480 EET [745246] LOG: server process (PID 745346) \nwas terminated by signal 11: Segmentation fault\n\n#0 0x000055a5148f64ee in slist_pop_head_node (head=0x55a516d06bf8) at \n../../../../src/include/lib/ilist.h:1034\n#1 0x000055a5148f5eba in dsm_detach (seg=0x55a516d06bc0) at dsm.c:822\n#2 0x000055a514b86db0 in dsa_detach (area=0x55a516d64dc8) at dsa.c:1939\n#3 0x00007fd5dcaee5e0 in 
test_dsa_resowners (fcinfo=0x55a516d56a28) at \ntest_dsa.c:112\n\nI think that is surprising behavior from the DSA facility. When you make \nallocations with dsa_allocate() or just call dsa_get_address() on an \nexisting dsa_pointer, you wouldn't expect the current resource owner to \nmatter. I think dsa_create/attach() should store the current resource \nowner in the dsa_area, for use in subsequent operations on the DSA, per \nattached patch (0002-Fix-dsa.c-with-different-resource-owners.patch).\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 10 Nov 2023 16:26:51 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-10 16:26:51 +0200, Heikki Linnakangas wrote:\n> The quick, straightforward fix is to move the \"CurrentResourceOwner = NULL\"\n> line earlier in CommitTransaction, per attached\n> 0003-Clear-CurrentResourceOwner-earlier-in-CommitTransact.patch. You're not\n> allowed to use the resource owner after you start to release it; it was a\n> bit iffy even before the ResourceOwner rewrite but now it's explicitly\n> forbidden. By clearing CurrentResourceOwner as soon as we start releasing\n> it, we can prevent any accidental use.\n> \n> When CurrentResourceOwner == NULL, dsm_attach() returns a handle that's not\n> associated with any ResourceOwner. That's appropriate for the pgstat case.\n> The DSA is \"pinned\", so the handle is forgotten from the ResourceOwner right\n> after calling dsm_attach() anyway.\n\nYea - I wonder if we should have a version of dsm_attach() that pins at the\nsame time. It's pretty silly to first attach (remembering the dsm) and then\nimmediately pin (forgetting the dsm).\n\n\n> Looking closer at dsa.c, I think this is a wider problem though. The\n> comments don't make it very clear how it's supposed to interact with\n> ResourceOwners. There's just this brief comment in dsa_pin_mapping():\n> \n> > * By default, areas are owned by the current resource owner, which means they\n> > * are detached automatically when that scope ends.\n> \n> The dsa_area struct isn't directly owned by any ResourceOwner though. The\n> DSM segments created by dsa_create() or dsa_attach() are.\n\nBut there's a relationship from the dsa too, via on_dsm_detach().\n\n\n> But the functions dsa_allocate() and dsa_get_address() can create or attach\n> more DSM segments to the area, and they will be owned by the current\n> resource owner *at the time of the call*.\n\nYea, that doesn't seem great. It shouldn't affect the stats code though, due to\nthe dsa being pinned.\n\n> I think that is surprising behavior from the DSA facility. 
When you make\n> allocations with dsa_allocate() or just call dsa_get_address() on an\n> existing dsa_pointer, you wouldn't expect the current resource owner to\n> matter. I think dsa_create/attach() should store the current resource owner\n> in the dsa_area, for use in subsequent operations on the DSA, per attached\n> patch (0002-Fix-dsa.c-with-different-resource-owners.patch).\n\nYea, that seems the right direction from here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Nov 2023 15:34:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hello Heikki,\n\n10.11.2023 17:26, Heikki Linnakangas wrote:\n>\n> I think that is surprising behavior from the DSA facility. When you make allocations with dsa_allocate() or just call \n> dsa_get_address() on an existing dsa_pointer, you wouldn't expect the current resource owner to matter. I think \n> dsa_create/attach() should store the current resource owner in the dsa_area, for use in subsequent operations on the \n> DSA, per attached patch (0002-Fix-dsa.c-with-different-resource-owners.patch).\n>\n\nWith the patch 0002 applied, I'm observing another anomaly:\nCREATE TABLE t(t1 text, t2 text);\nINSERT INTO t SELECT md5(g::text), '12345678901234567890' FROM generate_series(1, 100000) g;\nCREATE INDEX tidx ON t(t1);\n\nCREATE FUNCTION f() RETURNS TABLE (t1 text, t2 text) AS 'BEGIN END' LANGUAGE plpgsql;\n\nSELECT * FROM f();\n\ngives me under Valgrind:\n2023-11-11 11:54:18.964 UTC|law|regression|654f5d2e.3c7a92|LOG: statement: SELECT * FROM f();\n==00:00:00:49.589 3963538== Invalid read of size 1\n==00:00:00:49.589 3963538== at 0x9C8785: ResourceOwnerEnlarge (resowner.c:454)\n==00:00:00:49.589 3963538== by 0x7507C4: dsm_create_descriptor (dsm.c:1207)\n==00:00:00:49.589 3963538== by 0x74EF71: dsm_create (dsm.c:538)\n==00:00:00:49.589 3963538== by 0x9B7BEA: make_new_segment (dsa.c:2171)\n==00:00:00:49.589 3963538== by 0x9B6E28: ensure_active_superblock (dsa.c:1696)\n==00:00:00:49.589 3963538== by 0x9B67DD: alloc_object (dsa.c:1487)\n==00:00:00:49.589 3963538== by 0x9B5064: dsa_allocate_extended (dsa.c:816)\n==00:00:00:49.589 3963538== by 0x978A4C: share_tupledesc (typcache.c:2742)\n==00:00:00:49.589 3963538== by 0x978C07: find_or_make_matching_shared_tupledesc (typcache.c:2796)\n==00:00:00:49.589 3963538== by 0x977652: assign_record_type_typmod (typcache.c:1995)\n==00:00:00:49.589 3963538== by 0x9885A2: internal_get_result_type (funcapi.c:462)\n==00:00:00:49.589 3963538== by 0x98800D: get_expr_result_type (funcapi.c:299)\n==00:00:00:49.589 
3963538== Address 0x72f9bf0 is 2,096 bytes inside a recently re-allocated block of size 16,384 alloc'd\n==00:00:00:49.589 3963538== at 0x4848899: malloc (vg_replace_malloc.c:381)\n==00:00:00:49.589 3963538== by 0x9B1F94: AllocSetAlloc (aset.c:928)\n==00:00:00:49.589 3963538== by 0x9C1162: MemoryContextAllocZero (mcxt.c:1076)\n==00:00:00:49.589 3963538== by 0x9C872C: ResourceOwnerCreate (resowner.c:423)\n==00:00:00:49.589 3963538== by 0x2CC522: AtStart_ResourceOwner (xact.c:1211)\n==00:00:00:49.589 3963538== by 0x2CD52A: StartTransaction (xact.c:2084)\n==00:00:00:49.589 3963538== by 0x2CE442: StartTransactionCommand (xact.c:2948)\n==00:00:00:49.589 3963538== by 0x992D69: InitPostgres (postinit.c:860)\n==00:00:00:49.589 3963538== by 0x791E6E: PostgresMain (postgres.c:4209)\n==00:00:00:49.589 3963538== by 0x6B13C4: BackendRun (postmaster.c:4423)\n==00:00:00:49.589 3963538== by 0x6B09EB: BackendStartup (postmaster.c:4108)\n==00:00:00:49.589 3963538== by 0x6AD0A6: ServerLoop (postmaster.c:1767)\n\nwithout Valgrind I get:\n2023-11-11 11:08:38.511 UTC|law|regression|654f60b6.3ca7eb|ERROR: ResourceOwnerEnlarge called after release started\n2023-11-11 11:08:38.511 UTC|law|regression|654f60b6.3ca7eb|STATEMENT: SELECT * FROM f();\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 11 Nov 2023 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 11/11/2023 14:00, Alexander Lakhin wrote:\n> 10.11.2023 17:26, Heikki Linnakangas wrote:\n>>\n>> I think that is surprising behavior from the DSA facility. When you make allocations with dsa_allocate() or just call\n>> dsa_get_address() on an existing dsa_pointer, you wouldn't expect the current resource owner to matter. I think\n>> dsa_create/attach() should store the current resource owner in the dsa_area, for use in subsequent operations on the\n>> DSA, per attached patch (0002-Fix-dsa.c-with-different-resource-owners.patch).\n>>\n> \n> With the patch 0002 applied, I'm observing another anomaly:\n\nOk, so the ownership of a dsa was still muddled :-(. Attached is a new \nversion that goes a little further. It replaces the whole \n'mapping_pinned' flag in dsa_area with the 'resowner'. When a mapping is \npinned, it means that 'resowner == NULL'. This is now similar to how the \nresowner field and pinning works in dsm.c.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Sun, 12 Nov 2023 23:16:50 +0100",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 11:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 11/11/2023 14:00, Alexander Lakhin wrote:\n> > 10.11.2023 17:26, Heikki Linnakangas wrote:\n> >> I think that is surprising behavior from the DSA facility. When you make allocations with dsa_allocate() or just call\n> >> dsa_get_address() on an existing dsa_pointer, you wouldn't expect the current resource owner to matter. I think\n> >> dsa_create/attach() should store the current resource owner in the dsa_area, for use in subsequent operations on the\n> >> DSA, per attached patch (0002-Fix-dsa.c-with-different-resource-owners.patch).\n\nYeah, I agree that it is surprising and dangerous that the DSM\nsegments can have different owners if you're not careful, and it's not\nclear that you have to be. Interesting that no one ever reported\nbreakage in parallel query due to this thinko, presumably because the\ncurrent resource owner always turns out to be either the same one or\nsomething with longer life.\n\n> > With the patch 0002 applied, I'm observing another anomaly:\n>\n> Ok, so the ownership of a dsa was still muddled :-(. Attached is a new\n> version that goes a little further. It replaces the whole\n> 'mapping_pinned' flag in dsa_area with the 'resowner'. When a mapping is\n> pinned, it means that 'resowner == NULL'. This is now similar to how the\n> resowner field and pinning works in dsm.c.\n\nThis patch makes sense to me.\n\nIt might be tempting to delete the dsa_pin_mapping() interface\ncompletely and say that if CurrentResourceOwner == NULL, then it's\neffectively (what we used to call) pinned, but I think it's useful for\nexception-safe construction of multiple objects to be able to start\nout with a resource owner and then later 'commit' by clearing it. 
As\nseen in GetSessionDsmHandle().\n\nI don't love the way this stuff is not very explicit, and if we're\ngoing to keep doing this 'dynamic scope' stuff then I wish we could\nfind a scoping notation that looks more like the stuff one sees in\nother languages that say something like\n\"with-resource-owner(area->resowner) { block of code }\".\n\n\n",
"msg_date": "Mon, 13 Nov 2023 12:08:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 13/11/2023 01:08, Thomas Munro wrote:\n> On Mon, Nov 13, 2023 at 11:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> On 11/11/2023 14:00, Alexander Lakhin wrote:\n>>> 10.11.2023 17:26, Heikki Linnakangas wrote:\n>>>> I think that is surprising behavior from the DSA facility. When you make allocations with dsa_allocate() or just call\n>>>> dsa_get_address() on an existing dsa_pointer, you wouldn't expect the current resource owner to matter. I think\n>>>> dsa_create/attach() should store the current resource owner in the dsa_area, for use in subsequent operations on the\n>>>> DSA, per attached patch (0002-Fix-dsa.c-with-different-resource-owners.patch).\n> \n> Yeah, I agree that it is surprising and dangerous that the DSM\n> segments can have different owners if you're not careful, and it's not\n> clear that you have to be. Interesting that no one ever reported\n> breakage in parallel query due to this thinko, presumably because the\n> current resource owner always turns out to be either the same one or\n> something with longer life.\n\nYep. I ran check-world with an extra assertion that \n\"CurrentResourceOwner == area->resowner || area->resowner == NULL\" to \nverify that. No failures.\n\n>>> With the patch 0002 applied, I'm observing another anomaly:\n>>\n>> Ok, so the ownership of a dsa was still muddled :-(. Attached is a new\n>> version that goes a little further. It replaces the whole\n>> 'mapping_pinned' flag in dsa_area with the 'resowner'. When a mapping is\n>> pinned, it means that 'resowner == NULL'. 
This is now similar to how the\n>> resowner field and pinning works in dsm.c.\n> \n> This patch makes sense to me.\n> \n> It might be tempting to delete the dsa_pin_mapping() interface\n> completely and say that if CurrentResourceOwner == NULL, then it's\n> effectively (what we used to call) pinned, but I think it's useful for\n> exception-safe construction of multiple objects to be able to start\n> out with a resource owner and then later 'commit' by clearing it. As\n> seen in GetSessionDsmHandle().\n\nYeah that's useful. I don't like the \"pinned mapping\" term here. It's \nconfusing because we also have dsa_pin(), which means something \ndifferent: the area will continue to exist even after all backends have \ndetached from it. I think we should rename 'dsa_pin_mapping()' to \n'dsa_forget_resowner()' or something like that.\n\nI pushed these fixes, without that renaming. Let's do that as a separate \npatch if we like that.\n\nI didn't backpatch this for now, because we don't seem to have a live \nbug in back branches. You could argue for backpatching because this \ncould bite us in the future if we backpatch something else that uses \ndsa's, or if there are extensions using it. We can do that later after \nwe've had this in 'master' for a while.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 15 Nov 2023 11:11:40 +0100",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-07 13:28:28 +0200, Heikki Linnakangas wrote:\n> I feel pretty good about this overall. Barring objections or new cfbot\n> failures, I will commit this in the next few days.\n\nI am working on rebasing the AIO patch over this. I think I found a crash\nthat's unrelated to AIO.\n\n#4 0x0000000000ea7631 in ExceptionalCondition (conditionName=0x4ba6f1 \"owner->narr == 0\",\n fileName=0x4ba628 \"../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c\", lineNumber=354)\n at ../../../../../home/andres/src/postgresql/src/backend/utils/error/assert.c:66\n#5 0x0000000000ef13b5 in ResourceOwnerReleaseAll (owner=0x3367d48, phase=RESOURCE_RELEASE_BEFORE_LOCKS, printLeakWarnings=false)\n at ../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c:354\n#6 0x0000000000ef1d9c in ResourceOwnerReleaseInternal (owner=0x3367d48, phase=RESOURCE_RELEASE_BEFORE_LOCKS, isCommit=false, isTopLevel=true)\n at ../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c:717\n#7 0x0000000000ef1c89 in ResourceOwnerRelease (owner=0x3367d48, phase=RESOURCE_RELEASE_BEFORE_LOCKS, isCommit=false, isTopLevel=true)\n at ../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c:644\n#8 0x00000000008c1f87 in AbortTransaction () at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xact.c:2851\n#9 0x00000000008c4ae0 in AbortOutOfAnyTransaction () at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xact.c:4761\n#10 0x0000000000ec1502 in ShutdownPostgres (code=1, arg=0) at ../../../../../home/andres/src/postgresql/src/backend/utils/init/postinit.c:1357\n#11 0x0000000000c942e5 in shmem_exit (code=1) at ../../../../../home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:243\n\nI think the problem is that ResourceOwnerSort() looks at owner->nhash == 0,\nwhereas ResourceOwnerReleaseAll() looks at !owner->hash. 
Therefore it's\npossible to reach ResourceOwnerReleaseAll() with owner->hash != NULL while\nalso having owner->narr > 0.\n\nI think both checks for !owner->hash in ResourceOwnerReleaseAll() need to\ninstead check owner->nhash == 0.\n\n\nI'm somewhat surprised that this is only hit with the AIO branch. I guess one\nneeds to be somewhat unlucky to end up with a hashtable but 0 elements in it\nat the time of resowner release. But still somewhat surprising.\n\n\nSeparately, I see that resowner_private.h still exists in the repo, I think\nthat should be deleted, as many of the functions don't exist anymore?\n\n\nLastly, I think it'd be good to assert that we're not releasing the resowner\nin ResourceOwnerRememberLock(). It's currently not strictly required, but that\nseems like it's just leaking an implementation detail out?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Nov 2023 12:44:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 12:44:41 -0800, Andres Freund wrote:\n> On 2023-11-07 13:28:28 +0200, Heikki Linnakangas wrote:\n> > I feel pretty good about this overall. Barring objections or new cfbot\n> > failures, I will commit this in the next few days.\n> \n> I am working on rebasing the AIO patch over this. I think I found a crash\n> that's unrelated to AIO.\n> \n> #4 0x0000000000ea7631 in ExceptionalCondition (conditionName=0x4ba6f1 \"owner->narr == 0\",\n> fileName=0x4ba628 \"../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c\", lineNumber=354)\n> at ../../../../../home/andres/src/postgresql/src/backend/utils/error/assert.c:66\n> #5 0x0000000000ef13b5 in ResourceOwnerReleaseAll (owner=0x3367d48, phase=RESOURCE_RELEASE_BEFORE_LOCKS, printLeakWarnings=false)\n> at ../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c:354\n> #6 0x0000000000ef1d9c in ResourceOwnerReleaseInternal (owner=0x3367d48, phase=RESOURCE_RELEASE_BEFORE_LOCKS, isCommit=false, isTopLevel=true)\n> at ../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c:717\n> #7 0x0000000000ef1c89 in ResourceOwnerRelease (owner=0x3367d48, phase=RESOURCE_RELEASE_BEFORE_LOCKS, isCommit=false, isTopLevel=true)\n> at ../../../../../home/andres/src/postgresql/src/backend/utils/resowner/resowner.c:644\n> #8 0x00000000008c1f87 in AbortTransaction () at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xact.c:2851\n> #9 0x00000000008c4ae0 in AbortOutOfAnyTransaction () at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xact.c:4761\n> #10 0x0000000000ec1502 in ShutdownPostgres (code=1, arg=0) at ../../../../../home/andres/src/postgresql/src/backend/utils/init/postinit.c:1357\n> #11 0x0000000000c942e5 in shmem_exit (code=1) at ../../../../../home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:243\n> \n> I think the problem is that ResourceOwnerSort() looks at owner->nhash == 0,\n> 
whereas ResourceOwnerReleaseAll() looks at !owner->hash. Therefore it's\n> possible to reach ResourceOwnerReleaseAll() with owner->hash != NULL while\n> also having owner->narr > 0.\n> \n> I think both checks for !owner->hash in ResourceOwnerReleaseAll() need to\n> instead check owner->nhash == 0.\n> \n> I'm somewhat surprised that this is only hit with the AIO branch. I guess one\n> needs to be somewhat unlucky to end up with a hashtable but 0 elements in it\n> at the time of resowner release. But still somewhat surprising.\n\nOops - I hadn't rebased far enough at this point... That was already fixed in\n8f4a1ab471e.\n\nSeems like it'd be good to cover that path in one of the tests?\n\n- Andres\n\n\n",
"msg_date": "Fri, 17 Nov 2023 12:52:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hello!\n\nFound possibly missing PGDLLIMPORT in the definition of the\n\nextern const ResourceOwnerDesc buffer_io_resowner_desc;\nand\nextern const ResourceOwnerDesc buffer_pin_resowner_desc;\n\ncallbacks in the src/include/storage/buf_internals.h\n\nPlease take a look on it.\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 16 Jan 2024 14:23:56 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 16/01/2024 13:23, Anton A. Melnikov wrote:\n> Hello!\n> \n> Found possibly missing PGDLLIMPORT in the definition of the\n> \n> extern const ResourceOwnerDesc buffer_io_resowner_desc;\n> and\n> extern const ResourceOwnerDesc buffer_pin_resowner_desc;\n> \n> callbacks in the src/include/storage/buf_internals.h\n> \n> Please take a look on it.\n\nFixed, thanks. mark_pgdllimport.pl also highlighted two new variables in \nwalsummarizer.h, fixed those too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 16 Jan 2024 13:54:53 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "\nOn 16.01.2024 14:54, Heikki Linnakangas wrote:\n> Fixed, thanks. mark_pgdllimport.pl also highlighted two new variables in walsummarizer.h, fixed those too.\n> \n\nThanks!\n\nHave a nice day!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 16 Jan 2024 15:07:13 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hello Heikki,\n\n08.11.2023 14:37, Heikki Linnakangas wrote:\n>\n> Fixed these, and pushed. Thanks everyone for reviewing!\n>\n\nPlease try the following script:\nmkdir /tmp/50m\nsudo mount -t tmpfs -o size=50M tmpfs /tmp/50m\nexport PGDATA=/tmp/50m/tmpdb\n\ninitdb\npg_ctl -l server.log start\n\ncat << 'EOF' | psql\nCREATE TEMP TABLE t (a name, b name, c name, d name);\nINSERT INTO t SELECT 'a', 'b', 'c', 'd' FROM generate_series(1, 1000) g;\n\nCOPY t TO '/tmp/t.data';\nSELECT 'COPY t FROM ''/tmp/t.data''' FROM generate_series(1, 100)\n\\gexec\nEOF\n\nwhich produces an unexpected error, a warning, and an assertion failure,\nstarting from b8bff07da:\n..\nCOPY 1000\nERROR: could not extend file \"base/5/t2_16386\" with FileFallocate(): No space left on device\nHINT: Check free disk space.\nCONTEXT: COPY t, line 1000\nERROR: ResourceOwnerRemember called but array was full\nCONTEXT: COPY t, line 1000\nWARNING: local buffer refcount leak: [-499] (rel=base/5/t2_16386, blockNum=1483, flags=0x2000000, refcount=0 1)\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n2024-02-02 08:29:24.434 UTC [2052831] LOG: server process (PID 2052839) was terminated by signal 6: Aborted\n\nserver.log contains:\nTRAP: failed Assert(\"RefCountErrors == 0\"), File: \"localbuf.c\", Line: 804, PID: 2052839\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 2 Feb 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 02/02/2024 11:00, Alexander Lakhin wrote:\n> Please try the following script:\n> mkdir /tmp/50m\n> sudo mount -t tmpfs -o size=50M tmpfs /tmp/50m\n> export PGDATA=/tmp/50m/tmpdb\n> \n> initdb\n> pg_ctl -l server.log start\n> \n> cat << 'EOF' | psql\n> CREATE TEMP TABLE t (a name, b name, c name, d name);\n> INSERT INTO t SELECT 'a', 'b', 'c', 'd' FROM generate_series(1, 1000) g;\n> \n> COPY t TO '/tmp/t.data';\n> SELECT 'COPY t FROM ''/tmp/t.data''' FROM generate_series(1, 100)\n> \\gexec\n> EOF\n> \n> which produces an unexpected error, a warning, and an assertion failure,\n> starting from b8bff07da:\n\nFixed, thanks for the report!\n\nComparing ExtendBufferedRelLocal() and ExtendBufferedRelShared(), it's \neasy to see that ExtendBufferedRelLocal() was missing a \nResourceOwnerEnlarge() call in the loop. But it's actually a bit more \nsubtle: it was correct without the ResourceOwnerEnlarge() call until \ncommit b8bff07da, because ExtendBufferedRelLocal() unpins the old buffer \npinning the new one, while ExtendBufferedRelShared() does it the other \nway 'round. The implicit assumption was that unpinning the old buffer \nensures that you can pin a new one. That no longer holds with commit \nb8bff07da. Remembering a new resource expects there to be a free slot in \nthe fixed-size array, but if the forgotten resource was in the hash, \nrather than the array, forgetting it doesn't make space in the array.\n\nWe also make that assumption here in BufferAlloc:\n\n> \t\t/*\n> \t\t * Got a collision. Someone has already done what we were about to do.\n> \t\t * We'll just handle this as if it were found in the buffer pool in\n> \t\t * the first place. 
First, give up the buffer we were planning to\n> \t\t * use.\n> \t\t *\n> \t\t * We could do this after releasing the partition lock, but then we'd\n> \t\t * have to call ResourceOwnerEnlarge() & ReservePrivateRefCountEntry()\n> \t\t * before acquiring the lock, for the rare case of such a collision.\n> \t\t */\n> \t\tUnpinBuffer(victim_buf_hdr);\n\nIt turns out to be OK in that case, because it unpins the buffer that \nwas the last one pinned. That does ensure that you have one free slot in \nthe array, but forgetting anything other than the most recently \nremembered resource does not.\n\nI've added a note to that in ResourceOwnerForget. I read through the \nother callers of ResourceOwnerRemember and PinBuffer, but didn't find \nany other unsafe uses. I'm not too happy with this subtlety, but at \nleast it's documented now.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 2 Feb 2024 21:32:36 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Hello Heikki,\n\nI've stumbled upon yet another anomaly introduced with b8bff07da.\n\nPlease try the following script:\nSELECT 'init' FROM pg_create_logical_replication_slot('slot',\n 'test_decoding');\nCREATE TABLE tbl(t text);\nBEGIN;\nTRUNCATE table tbl;\n\nSELECT data FROM pg_logical_slot_get_changes('slot', NULL, NULL,\n 'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');\n\nOn b8bff07da (and the current HEAD), it ends up with:\nERROR: ResourceOwnerEnlarge called after release started\n\nBut on the previous commit, I get no error:\n data\n------\n(0 rows)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 1 Jun 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "Thanks for the testing!\n\nOn 01/06/2024 11:00, Alexander Lakhin wrote:\n> Hello Heikki,\n> \n> I've stumbled upon yet another anomaly introduced with b8bff07da.\n> \n> Please try the following script:\n> SELECT 'init' FROM pg_create_logical_replication_slot('slot',\n> 'test_decoding');\n> CREATE TABLE tbl(t text);\n> BEGIN;\n> TRUNCATE table tbl;\n> \n> SELECT data FROM pg_logical_slot_get_changes('slot', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');\n> \n> On b8bff07da (and the current HEAD), it ends up with:\n> ERROR: ResourceOwnerEnlarge called after release started\n> \n> But on the previous commit, I get no error:\n> data\n> ------\n> (0 rows)\n\nThis is another case of the issue I tried to fix in commit b8bff07da \nwith this:\n\n> --- b/src/backend/access/transam/xact.c\n> +++ a/src/backend/access/transam/xact.c\n> @@ -5279,7 +5279,20 @@\n> \n> \t\tAtEOSubXact_RelationCache(false, s->subTransactionId,\n> \t\t\t\t\t\t\t\t s->parent->subTransactionId);\n> +\n> +\n> +\t\t/*\n> +\t\t * AtEOSubXact_Inval sometimes needs to temporarily bump the refcount\n> +\t\t * on the relcache entries that it processes. We cannot use the\n> +\t\t * subtransaction's resource owner anymore, because we've already\n> +\t\t * started releasing it. But we can use the parent resource owner.\n> +\t\t */\n> +\t\tCurrentResourceOwner = s->parent->curTransactionOwner;\n> +\n> \t\tAtEOSubXact_Inval(false);\n> +\n> +\t\tCurrentResourceOwner = s->curTransactionOwner;\n> +\n> \t\tResourceOwnerRelease(s->curTransactionOwner,\n> \t\t\t\t\t\t\t RESOURCE_RELEASE_LOCKS,\n> \t\t\t\t\t\t\t false, false);\n\nAtEOSubXact_Inval calls LocalExecuteInvalidationMessage(), which \nultimately calls RelationFlushRelation(). 
Your test case reaches \nReorderBufferExecuteInvalidations(), which also calls \nLocalExecuteInvalidationMessage(), in an already aborted subtransaction.\n\nLooking closer at relcache.c, I think we need to make \nRelationFlushRelation() safe to call without a valid resource owner. \nThere are already IsTransactionState() checks in \nRelationClearRelation(), and a comment noting that it does not touch \nCurrentResourceOwner. So we should make sure RelationFlushRelation() \ndoesn't touch it either, and revert the above change to \nAbortTransaction(). I didn't feel very good about it in the first place, \nand now I see what the proper fix is.\n\nA straightforward fix is to modify RelationFlushRelation() so that if \n!IsTransactionState(), it just marks the entry as invalid instead of \ncalling RelationClearRelation(). That's what RelationClearRelation() \nwould do anyway, if it didn't hit the assertion first.\n\nRelationClearRelation() is complicated. Depending on the circumstances, \nit removes the relcache entry, rebuilds it, marks it as invalid, or \nperforms a partial reload of a nailed relation. So before going for the \nstraightforward fix, I'm going to see if I can refactor \nRelationClearRelation() to be less complicated.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 4 Jun 2024 01:49:52 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 04/06/2024 01:49, Heikki Linnakangas wrote:\n> A straightforward fix is to modify RelationFlushRelation() so that if\n> !IsTransactionState(), it just marks the entry as invalid instead of\n> calling RelationClearRelation(). That's what RelationClearRelation()\n> would do anyway, if it didn't hit the assertion first.\n\nHere's a patch with that straightforward fix. Your test case hit the \n\"rd_createSubid != InvalidSubTransactionId\" case, I expanded it to also \ncover the \"rd_firstRelfilelocatorSubid != InvalidSubTransactionId\" case. \nBarring objections, I'll commit this later today or tomorrow. Thanks for \nthe report!\n\nI started a new thread with the bigger refactorings I had in mind: \nhttps://www.postgresql.org/message-id/9c9e8908-7b3e-4ce7-85a8-00c0e165a3d6%40iki.fi\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 5 Jun 2024 16:58:45 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 05/06/2024 16:58, Heikki Linnakangas wrote:\n> On 04/06/2024 01:49, Heikki Linnakangas wrote:\n>> A straightforward fix is to modify RelationFlushRelation() so that if\n>> !IsTransactionState(), it just marks the entry as invalid instead of\n>> calling RelationClearRelation(). That's what RelationClearRelation()\n>> would do anyway, if it didn't hit the assertion first.\n> \n> Here's a patch with that straightforward fix. Your test case hit the\n> \"rd_createSubid != InvalidSubTransactionId\" case, I expanded it to also\n> cover the \"rd_firstRelfilelocatorSubid != InvalidSubTransactionId\" case.\n\nFor the record, I got the above backwards: your test case covered the \nrd_firstRelfilelocatorSubid case and I expanded it to also cover the \nrd_createSubid case.\n\n> Barring objections, I'll commit this later today or tomorrow. Thanks for\n> the report!\n\nCommitted.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 6 Jun 2024 14:32:38 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On Thu, Jun 6, 2024 at 7:32 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > Barring objections, I'll commit this later today or tomorrow. Thanks for\n> > the report!\n>\n> Committed.\n\nI think you may have forgotten to push.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jun 2024 11:27:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ResourceOwner refactoring"
},
{
"msg_contents": "On 06/06/2024 18:27, Robert Haas wrote:\n> On Thu, Jun 6, 2024 at 7:32 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> Barring objections, I'll commit this later today or tomorrow. Thanks for\n>>> the report!\n>>\n>> Committed.\n> \n> I think you may have forgotten to push.\n\nHuh, you're right. I could swear I did...\n\nPushed now thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 6 Jun 2024 18:57:21 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ResourceOwner refactoring"
}
]
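The array-versus-hash subtlety Heikki documents at the end of the thread above — forgetting a resource only frees a slot in the fixed array if that resource was the most recently remembered one — can be sketched with a toy model. Everything below is illustrative: the class name, the tiny capacity, and the spill policy are made up for the sketch and do not match the real `resowner.c` data structures.

```python
# Toy model of the refactored ResourceOwner: a small fixed array for the
# most recent entries plus a hash for overflow. Simplified sketch only.

ARRAY_CAPACITY = 4  # the real code uses a larger compile-time constant

class ToyResourceOwner:
    def __init__(self):
        self.array = []      # fast-path storage, bounded by ARRAY_CAPACITY
        self.hash = set()    # overflow storage

    def enlarge(self):
        # Like ResourceOwnerEnlarge(): guarantee room for one remember()
        # by spilling the array into the hash when it is full.
        if len(self.array) >= ARRAY_CAPACITY:
            self.hash.update(self.array)
            self.array.clear()

    def remember(self, res):
        # Like ResourceOwnerRemember(): assumes a free array slot exists,
        # i.e. that enlarge() was called or space is otherwise available.
        if len(self.array) >= ARRAY_CAPACITY:
            raise RuntimeError("no free array slot; enlarge() was required")
        self.array.append(res)

    def forget(self, res):
        # Like ResourceOwnerForget(): the entry may live in either
        # structure. Forgetting a hash-resident entry frees no array
        # slot, so it does NOT license a remember() without enlarge().
        if res in self.array:
            self.array.remove(res)
        else:
            self.hash.remove(res)

owner = ToyResourceOwner()
for i in range(8):
    owner.enlarge()
    owner.remember(i)
# Entries 0-3 spilled into the hash; 4-7 fill the array.
owner.forget(0)    # hash-resident: the array stays full, so a
                   # remember() without a fresh enlarge() would fail
owner.forget(7)    # most recently remembered: this does free a slot...
owner.remember(8)  # ...mirroring the UnpinBuffer()/PinBuffer() pattern
                   # that BufferAlloc relies on in the quoted comment
```

This is why the localbuf fix needed an extra `ResourceOwnerEnlarge()` in the loop: the unpinned buffer's refcount entry could live in the hash, so unpinning it guaranteed nothing about array space.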
[
{
"msg_contents": "Hello,\n\nMy database is experiencing a 'ERROR: too many dynamic shared memory\nsegment' error from time to time. It seems to happen most when traffic\nis high, and it happens with semi-simple select statements that run a\nparallel query plan with a parallel bitmap heap scan. It is currently\non version 12.4.\n\nHere is an example query: https://explain.depesz.com/s/aatA\n\nI did find a message in the archive on this topic and it seems the\nresolution was to increase the max connection configuration. Since I\nam running my database on heroku I do not have access to this\nconfiguration. It also seems like a bit of an unrelated configuration\nchange in order to not run into this issue.\n\nLink to archived discussion:\nhttps://www.postgresql.org/message-id/CAEepm%3D2RcEWgES-f%2BHyg4931bOa0mbJ2AwrmTrabz6BKiAp%3DsQ%40mail.gmail.com\n\nI think this block of code is determining the size for the control segment:\nhttps://github.com/postgres/postgres/blob/REL_13_1/src/backend/storage/ipc/dsm.c#L157-L159\n\nmaxitems = 64 + 2 * MaxBackends\n\nIt seems like the reason increasing the max connections helps is\nbecause that number is part of the equation for determining the max\nsegment size.\nI noticed in version 13.1 there is a commit that changes the\nmultiplier from 2 to 5:\nhttps://github.com/postgres/postgres/commit/d061ea21fc1cc1c657bb5c742f5c4a1564e82ee2\n\nmaxitems = 64 + 5 * MaxBackends\n\nShould this commit be back-ported to earlier versions of postgres to\nprevent this error in other versions?\n\nThank you,\nBen\n\n\n",
"msg_date": "Tue, 17 Nov 2020 10:21:35 -0500",
"msg_from": "Ben Kanouse <kanobt61@gmail.com>",
"msg_from_op": true,
"msg_subject": "ERROR: too many dynamic shared memory segment"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 4:21 AM Ben Kanouse <kanobt61@gmail.com> wrote:\n> My database is experiencing a 'ERROR: too many dynamic shared memory\n> segment' error from time to time. It seems to happen most when traffic\n> is high, and it happens with semi-simple select statements that run a\n> parallel query plan with a parallel bitmap heap scan. It is currently\n> on version 12.4.\n>\n> Here is an example query: https://explain.depesz.com/s/aatA\n\nHmm, not sure how this plan could cause the problem, even if running\nmany copies of it. In the past, this problem has been reported from\nsingle queries that had a very high number of separate Gather nodes,\nor very very large parallel hash joins, or, once you've hit the error\na few times those ways, also due to an ancient DSM leak that was fixed\nin 93745f1e (fix present in 12.4).\n\n> I noticed in version 13.1 there is a commit that changes the\n> multiplier from 2 to 5:\n> https://github.com/postgres/postgres/commit/d061ea21fc1cc1c657bb5c742f5c4a1564e82ee2\n>\n> maxitems = 64 + 5 * MaxBackends\n>\n> Should this commit be back-ported to earlier versions of postgres to\n> prevent this error in other versions?\n\nYeah, that seems like a good idea anyway. I will do that tomorrow,\nbarring objections.\n\n\n",
"msg_date": "Wed, 18 Nov 2020 10:55:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: too many dynamic shared memory segment"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 10:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> a few times those ways, also due to an ancient DSM leak that was fixed\n> in 93745f1e (fix present in 12.4).\n\nI take that bit back -- I misremembered -- that was a leak of the\nactual memory once slots were exhausted, not a leak of the slot.\n\n\n",
"msg_date": "Wed, 18 Nov 2020 11:00:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: too many dynamic shared memory segment"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 10:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Should this commit be back-ported to earlier versions of postgres to\n> > prevent this error in other versions?\n>\n> Yeah, that seems like a good idea anyway. I will do that tomorrow,\n> barring objections.\n\nDone, for 10, 11 and 12.\n\n\n",
"msg_date": "Fri, 20 Nov 2020 11:00:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: too many dynamic shared memory segment"
}
]
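The slot arithmetic discussed in this thread is simple enough to restate directly. The helper below just encodes the two formulas quoted from `dsm.c` (before and after commit d061ea21); the MaxBackends value is an illustrative round number, since the real value is derived from max_connections plus autovacuum, background-worker, and WAL-sender slots.

```python
# The dsm.c control segment holds a fixed number of slots; once they are
# all in use, creating another DSM segment fails with the "too many
# dynamic shared memory segment" error seen in this thread.

def dsm_control_slots(max_backends, multiplier):
    # maxitems = 64 + multiplier * MaxBackends, per the quoted source.
    return 64 + multiplier * max_backends

max_backends = 100  # illustrative; real MaxBackends is somewhat higher
                    # than max_connections once aux processes are counted

before = dsm_control_slots(max_backends, 2)  # pre-13.1 formula -> 264
after = dsm_control_slots(max_backends, 5)   # d061ea21 formula  -> 564
```

Since MaxBackends feeds the formula, raising max_connections on a managed platform was only an indirect workaround: it grew the control segment rather than reducing the number of segments parallel queries hold concurrently.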
[
{
"msg_contents": "Hello dear pgsql hackers\n\n\nI think Postgres had fix 2038 problems since 8.4, when I read in\nstackexchange:\n\n\nhttps://dba.stackexchange.com/a/233084/102852\n\n\nSo I test on my PC by simply change system date to `2040-09-25`:\n\n\n\n - Windows 10 Home edition\n - CPU: AMD 64 bit\n - Postgres version 13.1 64-bit\n - Postgres version 10.14 64-bit\n\n\nBoth Postgres edition seems basically work:\n\n - Postgres can start the process, and connect with psql/pgAdmin\n successfully. (but not show as “started” in services.msc)\n - Table with the Timestamp column, test with basic CRUD sql is work.\n\n\nBut there are many strange logs, like:\n\n\ncat postgresql-2040-09-25_123738.log\n\n> 1904-08-20 06:09:23.083 JST [7520] LOG: stats_timestamp 2000-01-01\n09:00:00+09 is later than collector's time 1904-08-20 06:09:23.083102+09\nfor database 0\n\n> 1904-08-20 06:10:23.226 JST [13904] LOG: stats collector's time\n2000-01-01 09:00:00+09 is later than backend local time 1904-08-20\n06:10:23.22678+09\n\n> 1904-08-20 06:10:23.227 JST [7520] LOG: stats_timestamp 2000-01-01\n09:00:00+09 is later than collector's time 1904-08-20 06:10:23.227854+09\nfor database 12938\n\n\nNote that log file name is OK, but logs content mix many time stamps (e.g\n1904-08-20, 2000-01-01).\n\n\n\nAnd I also found someone says in\n\nhttps://www.postgresql-archive.org/Year-2038-Bug-td1990821.html\n\n\n> PostgreSQL 8.4 uses 64bit data type for time. 
But if you use system\ntimezone\n\n> then you can get in trouble if system does not support 64bit zic files.\n\n\nBut IMO output content in logs is not OS problems, it should be Postgres’s\n2038 bug.\n\n\nAs our products has 15 years support period, we need to think about 2038\nseriously from now,\n\nIs there any road map for 2038 problems in Postgres?\n\n\nBest regards,\n\nFang",
"msg_date": "Wed, 18 Nov 2020 00:39:33 +0900",
"msg_from": "=?UTF-8?B?5pa55b6z6Lyd?= <javaeecoding@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is postgres ready for 2038?"
},
{
"msg_contents": "=?UTF-8?B?5pa55b6z6Lyd?= <javaeecoding@gmail.com> writes:\n> Is there any road map for 2038 problems in Postgres?\n\nPostgres has no problem with post-2038 dates as long as you are using a\nsystem with 64-bit time_t. In other words, the road map is \"get off\nWindows, or press Microsoft to fix their problems\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Nov 2020 11:04:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "\nOn 11/17/20 11:04 AM, Tom Lane wrote:\n> =?UTF-8?B?5pa55b6z6Lyd?= <javaeecoding@gmail.com> writes:\n>> Is there any road map for 2038 problems in Postgres?\n> Postgres has no problem with post-2038 dates as long as you are using a\n> system with 64-bit time_t. In other words, the road map is \"get off\n> Windows, or press Microsoft to fix their problems\".\n>\n> \t\t\n\n\nBut it does: \"time_t is, by default, equivalent to __time64_t.\" See\n\n<https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/time-time32-time64?view=msvc-160>\n\n\nMaybe we need to dig a little more to see what's going on here.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 18 Nov 2020 07:56:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": ">\n> But it does: \"time_t is, by default, equivalent to __time64_t.\" See\n>\n> <\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/time-time32-time64?view=msvc-160\n> >\n>\n>\n> Maybe we need to dig a little more to see what's going on here.\n>\n\nHow about just a mention in the future documentation to never ever define\n_USE_32BIT_TIME_T when compiling PG under Windows? Should be enough, I\nsuppose.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 18 Nov 2020 17:32:57 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n>> Maybe we need to dig a little more to see what's going on here.\n\n> How about just a mention in the future documentation to never ever define\n> _USE_32BIT_TIME_T when compiling PG under Windows? Should be enough, I\n> suppose.\n\nHmm. Digging around, I see that Mkvcbuild.pm intentionally absorbs\n_USE_32BIT_TIME_T when building with a Perl that defines that.\nI don't know what the state of play is in terms of Windows Perl\ndistributions getting off of that, but maybe we should press people\nto not be using such Perl builds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Nov 2020 09:44:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "Yes, I agree.\n\nWed, 18 Nov 2020 at 18:44, Tom Lane <tgl@sss.pgh.pa.us>:\n\n> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> >> Maybe we need to dig a little more to see what's going on here.\n>\n> > How about just a mention in the future documentation to never ever define\n> > _USE_32BIT_TIME_T when compiling PG under Windows? Should be enough, I\n> > suppose.\n>\n> Hmm. Digging around, I see that Mkvcbuild.pm intentionally absorbs\n> _USE_32BIT_TIME_T when building with a Perl that defines that.\n> I don't know what the state of play is in terms of Windows Perl\n> distributions getting off of that, but maybe we should press people\n> to not be using such Perl builds.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 18 Nov 2020 19:27:20 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "\nOn 11/18/20 9:44 AM, Tom Lane wrote:\n> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n>>> Maybe we need to dig a little more to see what's going on here.\n>> How about just a mention in the future documentation to never ever define\n>> _USE_32BIT_TIME_T when compiling PG under Windows? Should be enough, I\n>> suppose.\n> Hmm. Digging around, I see that Mkvcbuild.pm intentionally absorbs\n> _USE_32BIT_TIME_T when building with a Perl that defines that.\n> I don't know what the state of play is in terms of Windows Perl\n> distributions getting off of that, but maybe we should press people\n> to not be using such Perl builds.\n>\n> \t\t\t\n\n\nI think there's a good argument to ban it if we're doing a 64 bit build\n(and why would we do anything else?)\n\n\nNote that drongo appears not to need it - it's building against a 64 bit\nperl.\n\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2020-11-16%2012%3A59%3A17&stg=make\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 18 Nov 2020 11:52:16 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 11/18/20 9:44 AM, Tom Lane wrote:\n>> Hmm. Digging around, I see that Mkvcbuild.pm intentionally absorbs\n>> _USE_32BIT_TIME_T when building with a Perl that defines that.\n>> I don't know what the state of play is in terms of Windows Perl\n>> distributions getting off of that, but maybe we should press people\n>> to not be using such Perl builds.\n\n> I think there's a good argument to ban it if we're doing a 64 bit build\n> (and why would we do anything else?)\n\nI'm not really eager to ban it. If somebody is building with an old\nPerl distribution, and doesn't particularly care that the installation\nwill break in 2038, why should we force them to care?\n\nWhat I had in mind was more along the lines of making sure that\npopular PG-on-Windows installers (EDB for instance) are not still\nusing 32-bit-time_t Perl.\n\nBTW, just to clarify: AFAIK we *only* use the platform's time_t\nfor the result of time(2) and calculations involving relatively\nsmall offsets from that, such as timeout expiration points.\nAll our stored data has been Y2038-safe since we got rid of the\nabstime type.\n\nThus, I pretty much reject the OP's position that this is something\npeople really need to worry about years in advance. By the time\nit breaks for real, everything else on the platform will be broken\ntoo, unless the platform has done something about redefining time_t.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Nov 2020 12:16:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 12:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 11/18/20 9:44 AM, Tom Lane wrote:\n> >> Hmm. Digging around, I see that Mkvcbuild.pm intentionally absorbs\n> >> _USE_32BIT_TIME_T when building with a Perl that defines that.\n> >> I don't know what the state of play is in terms of Windows Perl\n> >> distributions getting off of that, but maybe we should press people\n> >> to not be using such Perl builds.\n>\n> > I think there's a good argument to ban it if we're doing a 64 bit build\n> > (and why would we do anything else?)\n>\n> I'm not really eager to ban it. If somebody is building with an old\n> Perl distribution, and doesn't particularly care that the installation\n> will break in 2038, why should we force them to care?\n\nWait, is configuring with a Perl that has 32-bit time_t driving the\nrest of Postgres to use 32-bit timestamps? That seems like the tail\nwagging the dog.\n\nIt seems like a sensible compromise would be to have Postgres's\nconfigure default to 64-bit time_t and have a flag to choose 32-bit\ntime_t and then have a configure check that errors out if the time_t\nin Perl doesn't match with a hint to either find a newer Perl\ndistribution or configure with the flag to choose 32-bit. Ie, don't\nsilently assume users want 32-bit time_t but leave them the option to\nchoose it explicitly.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 19 Nov 2020 00:28:27 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "чт, 19 нояб. 2020 г. в 09:29, Greg Stark <stark@mit.edu>:\n\n> On Wed, 18 Nov 2020 at 12:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > On 11/18/20 9:44 AM, Tom Lane wrote:\n> > >> Hmm. Digging around, I see that Mkvcbuild.pm intentionally absorbs\n> > >> _USE_32BIT_TIME_T when building with a Perl that defines that.\n> > >> I don't know what the state of play is in terms of Windows Perl\n> > >> distributions getting off of that, but maybe we should press people\n> > >> to not be using such Perl builds.\n> >\n> > > I think there's a good argument to ban it if we're doing a 64 bit build\n> > > (and why would we do anything else?)\n> >\n> > I'm not really eager to ban it. If somebody is building with an old\n> > Perl distribution, and doesn't particularly care that the installation\n> > will break in 2038, why should we force them to care?\n>\n> Wait, is configuring with a Perl that has 32-bit time_t driving the\n> rest of Postgres to use 32-bit timestamps? That seems like the tail\n> wagging the dog.\n>\n> It seems like a sensible compromise would be to have Postgres's\n> configure default to 64-bit time_t and have a flag to choose 32-bit\n> time_t and then have a configure check that errors out if the time_t\n> in Perl doesn't match with a hint to either find a newer Perl\n> distribution or configure with the flag to choose 32-bit. Ie, don't\n> silently assume users want 32-bit time_t but leave them the option to\n> choose it explicitly.\n>\n\n _USE_32BIT_TIME_T is available only on 32-bit platforms, so the proposed\nflag will not be able to force 32-bit time_t, only allow it. On 64-bit\nplatforms, we simply do not have a choice.\n\nI suppose that some 10+ years from now the number of users willing to compile\non 32-bit with a dinosaur-aged Perl distribution will be nearly zero. 
So I\nsuppose just mention this would be a funny fact in the documentation.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nчт, 19 нояб. 2020 г. в 09:29, Greg Stark <stark@mit.edu>:On Wed, 18 Nov 2020 at 12:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 11/18/20 9:44 AM, Tom Lane wrote:\n> >> Hmm. Digging around, I see that Mkvcbuild.pm intentionally absorbs\n> >> _USE_32BIT_TIME_T when building with a Perl that defines that.\n> >> I don't know what the state of play is in terms of Windows Perl\n> >> distributions getting off of that, but maybe we should press people\n> >> to not be using such Perl builds.\n>\n> > I think there's a good argument to ban it if we're doing a 64 bit build\n> > (and why would we do anything else?)\n>\n> I'm not really eager to ban it. If somebody is building with an old\n> Perl distribution, and doesn't particularly care that the installation\n> will break in 2038, why should we force them to care?\n\nWait, is configuring with a Perl that has 32-bit time_t driving the\nrest of Postgres to use 32-bit timestamps? That seems like the tail\nwagging the dog.\n\nIt seems like a sensible compromise would be to have Postgres's\nconfigure default to 64-bit time_t and have a flag to choose 32-bit\ntime_t and then have a configure check that errors out if the time_t\nin Perl doesn't match with a hint to either find a newer Perl\ndistribution or configure with the flag to choose 32-bit. Ie, don't\nsilently assume users want 32-bit time_t but leave them the option to\nchoose it explicitly. _USE_32BIT_TIME_T is available only on 32-bit platforms so the proposed flag will not be able to force 32-bit time_t, only allow it. On 64-bit platforms, we simply do not have a choice.I suppose that some 10+ years later the number of users willing to compile on 32-bit with dinosaur-aged Perl distribution will be nearly zero. 
So I suppose just mention this would be a funny fact in the documentation.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 19 Nov 2020 11:22:08 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> чт, 19 нояб. 2020 г. в 09:29, Greg Stark <stark@mit.edu>:\n>> Wait, is configuring with a Perl that has 32-bit time_t driving the\n>> rest of Postgres to use 32-bit timestamps? That seems like the tail\n>> wagging the dog.\n>> It seems like a sensible compromise would be to have Postgres's\n>> configure default to 64-bit time_t and have a flag to choose 32-bit\n>> time_t and then have a configure check that errors out if the time_t\n>> in Perl doesn't match with a hint to either find a newer Perl\n>> distribution or configure with the flag to choose 32-bit. Ie, don't\n>> silently assume users want 32-bit time_t but leave them the option to\n>> choose it explicitly.\n\n> _USE_32BIT_TIME_T is available only on 32-bit platforms so the proposed\n> flag will not be able to force 32-bit time_t, only allow it. On 64-bit\n> platforms, we simply do not have a choice.\n\n> I suppose that some 10+ years later the number of users willing to compile\n> on 32-bit with dinosaur-aged Perl distribution will be nearly zero. So I\n> suppose just mention this would be a funny fact in the documentation.\n\nYeah. I can't get excited about putting additional effort, and\nuser-visible complexity, into this issue. The only way it could matter\nto people building Postgres today is if you suppose that the executables\nthey are building today will still be in use in 2038. That seems a bit\nhard to credit. Ten years from now, that'd be a legitimate worry ...\nbut it's really hard to believe that these toolchains will still be\nin use then.\n\n(I would not be too surprised if we've dropped support for 32-bit\nbuilds altogether by 2030. Certainly, any platform that still\nhas 32-bit time_t by then is going to be in a world of hurt.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 02:33:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "Hi dear pgsql hackers\n\n\nThanks for the replies.\n\nThere are no 32bit Windows version builds since Postgres 11, see:\n\nhttps://www.postgresql.org/download/windows/\n\nbut Postgres 13 still has the same 2038 problems.\n\n\n\nAs @Pavel Borisov hints, I can find the `_USE_32BIT_TIME_T` code here:\n\nhttps://github.com/postgres/postgres/search?q=_USE_32BIT_TIME_T\n\n\nIs it a good idea to remove the `_USE_32BIT_TIME_T` code and build with 64-bit\nPerl?\n\nMight that solve the 2038 problem?\n\n\nBest regards,\n\nFang\n\nOn Thu, Nov 19, 2020 at 4:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> > чт, 19 нояб. 2020 г. в 09:29, Greg Stark <stark@mit.edu>:\n> >> Wait, is configuring with a Perl that has 32-bit time_t driving the\n> >> rest of Postgres to use 32-bit timestamps? That seems like the tail\n> >> wagging the dog.\n> >> It seems like a sensible compromise would be to have Postgres's\n> >> configure default to 64-bit time_t and have a flag to choose 32-bit\n> >> time_t and then have a configure check that errors out if the time_t\n> >> in Perl doesn't match with a hint to either find a newer Perl\n> >> distribution or configure with the flag to choose 32-bit. Ie, don't\n> >> silently assume users want 32-bit time_t but leave them the option to\n> >> choose it explicitly.\n>\n> > _USE_32BIT_TIME_T is available only on 32-bit platforms so the proposed\n> > flag will not be able to force 32-bit time_t, only allow it. On 64-bit\n> > platforms, we simply do not have a choice.\n>\n> > I suppose that some 10+ years later the number of users willing to\n> compile\n> > on 32-bit with dinosaur-aged Perl distribution will be nearly zero. So I\n> > suppose just mention this would be a funny fact in the documentation.\n>\n> Yeah. I can't get excited about putting additional effort, and\n> user-visible complexity, into this issue. 
The only way it could matter\n> to people building Postgres today is if you suppose that the executables\n> they are building today will still be in use in 2038. That seems a bit\n> hard to credit. Ten years from now, that'd be a legitimate worry ...\n> but it's really hard to believe that these toolchains will still be\n> in use then.\n>\n> (I would not be too surprised if we've dropped support for 32-bit\n> builds altogether by 2030. Certainly, any platform that still\n> has 32-bit time_t by then is going to be in a world of hurt.)\n>\n> regards, tom lane\n>\n\n\nHi dear pgsql hackersThanks for the replies.\nThere are no 32bit Windows version builds since Postgres 11, see:https://www.postgresql.org/download/windows/but Postgres 13 still has the same 2038 problems. As @Pavel Borisov hints, I can find the `_USE_32BIT_TIME_T` code here:https://github.com/postgres/postgres/search?q=_USE_32BIT_TIME_TIs it a good idea to remove the `_USE_32BIT_TIME_T` code and build with 64-bit Perl? Might that solve the 2038 problem?Best regards,\nFang On Thu, Nov 19, 2020 at 4:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> чт, 19 нояб. 2020 г. в 09:29, Greg Stark <stark@mit.edu>:\n>> Wait, is configuring with a Perl that has 32-bit time_t driving the\n>> rest of Postgres to use 32-bit timestamps? That seems like the tail\n>> wagging the dog.\n>> It seems like a sensible compromise would be to have Postgres's\n>> configure default to 64-bit time_t and have a flag to choose 32-bit\n>> time_t and then have a configure check that errors out if the time_t\n>> in Perl doesn't match with a hint to either find a newer Perl\n>> distribution or configure with the flag to choose 32-bit. Ie, don't\n>> silently assume users want 32-bit time_t but leave them the option to\n>> choose it explicitly.\n\n> _USE_32BIT_TIME_T is available only on 32-bit platforms so the proposed\n> flag will not be able to force 32-bit time_t, only allow it. 
On 64-bit\n> platforms, we simply do not have a choice.\n\n> I suppose that some 10+ years later the number of users willing to compile\n> on 32-bit with dinosaur-aged Perl distribution will be nearly zero. So I\n> suppose just mention this would be a funny fact in the documentation.\n\nYeah. I can't get excited about putting additional effort, and\nuser-visible complexity, into this issue. The only way it could matter\nto people building Postgres today is if you suppose that the executables\nthey are building today will still be in use in 2038. That seems a bit\nhard to credit. Ten years from now, that'd be a legitimate worry ...\nbut it's really hard to believe that these toolchains will still be\nin use then.\n\n(I would not be too surprised if we've dropped support for 32-bit\nbuilds altogether by 2030. Certainly, any platform that still\nhas 32-bit time_t by then is going to be in a world of hurt.)\n\n regards, tom lane",
"msg_date": "Wed, 25 Nov 2020 19:10:13 +0900",
"msg_from": "=?UTF-8?B?5pa55b6z6Lyd?= <javaeecoding@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is postgres ready for 2038?"
},
{
"msg_contents": "> There are no 32bit Windows version builds since Postgres 11, see:\n>\n> https://www.postgresql.org/download/windows/\n>\n> but Postgres 13 still has the same 2038 problems.\n>\n>\n>\n> As @Pavel Borisov hints , I can find `_USE_32BIT_TIME_T` code here:\n>\n> https://github.com/postgres/postgres/search?q=_USE_32BIT_TIME_T\n>\n>\n> Is it a good idea to remove `_USE_32BIT_TIME_T` code and build with 64bit\n> Perl\n>\n> might solve 2038 problem?\n>\n\nAs mentioned above, `_USE_32BIT_TIME_T` is not a default flag for MSVC\nbuilds, but Perl can set this on purpose (or because it is very\nancient). Postgres will not set `_USE_32BIT_TIME_T` itself and by\ndefault compiles with 64-bit time, which is correct. But it respects\nPerl's choice in case it is made on purpose. If you face a problem with\nPG compiled with non-2038-compatible time, please first check your Perl;\nthat is the reason.\n\nRegards, Pavel Borisov.\n\n>\n\nThere are no 32bit Windows version builds since Postgres 11, see:https://www.postgresql.org/download/windows/but Postgres 13 still has the same 2038 problems. As @Pavel Borisov hints , I can find `_USE_32BIT_TIME_T` code here:https://github.com/postgres/postgres/search?q=_USE_32BIT_TIME_TIs it a good idea to remove `_USE_32BIT_TIME_T` code and build with 64bit Perl might solve 2038 problem?As mentioned above, `_USE_32BIT_TIME_T` is not a default flag for MSVC builds, but Perl can set this on purpose (or because it is very ancient). Postgres will not set `_USE_32BIT_TIME_T` itself and by default compiles with 64-bit time, which is correct. But it respects Perl's choice in case it is made on purpose. If you face a problem with PG compiled with non-2038-compatible time, please first check your Perl; that is the reason.Regards, Pavel Borisov.",
"msg_date": "Wed, 25 Nov 2020 18:22:41 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is postgres ready for 2038?"
}
] |
[
{
"msg_contents": "When arguments and other local variables in pl/pgsql functions have the\nsame name as columns referenced in queries it is necessary to disambiguate\nthe names. This can be done by prefixing with the function name (e.g.\nmy_func.name), using the argument number in the case of an argument (e.g.\n$1), or renaming the variable (e.g. _name). It is also possible to use a\nGUC to always use the variable or the column but that seems dangerous to me.\n\nPrefixing names with an underscore works well enough for local variables,\nbut when using named arguments I prefer the external name not require an\nunderscore. I would like to suggest a standard prefix such as $ to\nreference a local variable or argument. $ followed by an integer already\nreferences an argument by ordinal. What if $ followed by a name meant a\nlocal reference (essentially it would expand to \"my_func.\")?\n\nFor example, currently I have to do something like this:\n\ncreate function update_item(id int, foo int, bar text) returns void\nlanguage plpgsql as $$\nbegin\n update items\n set foo = update_item.foo,\n bar = update_item.bar\n where items.id = update_item.id;\nend;\n$$;\n\nI would like to be able to do something like:\n\ncreate function update_item(id int, foo int, bar text) returns void\nlanguage plpgsql as $$\nbegin\n update items\n set foo = $foo,\n bar = $bar\n where items.id = $id;\nend;\n$$;\n\nAny opinions on the desirability of this feature? My C skills are rather\natrophied, but from the outside it seems like a small enough change I might\nbe able to tackle it...\n\nJack\n\nWhen arguments and other local variables in pl/pgsql functions have the same name as columns referenced in queries it is necessary to disambiguate the names. This can be done by prefixing with the function name (e.g. my_func.name), using the argument number in the case of an argument (e.g. $1), or renaming the variable (e.g. _name). 
It is also possible to use a GUC to always use the variable or the column but that seems dangerous to me.Prefixing names with an underscore works well enough for local variables, but when using named arguments I prefer the external name not require an underscore. I would like to suggest a standard prefix such as $ to reference a local variable or argument. $ followed by an integer already references an argument by ordinal. What if $ followed by a name meant a local reference (essentially it would expand to \"my_func.\")?For example, currently I have to do something like this:create function update_item(id int, foo int, bar text) returns voidlanguage plpgsql as $$begin update items set foo = update_item.foo, bar = update_item.bar where items.id = update_item.id;end;$$;I would like to be able to do something like:create function update_item(id int, foo int, bar text) returns voidlanguage plpgsql as $$begin update items set foo = $foo, bar = $bar where items.id = $id;end;$$;Any opinions on the desirability of this feature? My C skills are rather atrophied, but from the outside it seems like a small enough change I might be able to tackle it...Jack",
"msg_date": "Tue, 17 Nov 2020 12:04:24 -0600",
"msg_from": "Jack Christensen <jack@jncsoftware.com>",
"msg_from_op": true,
"msg_subject": "pl/pgsql feature request: shorthand for argument and local variable\n references"
},
{
"msg_contents": "Hi\n\nút 17. 11. 2020 v 19:04 odesílatel Jack Christensen <jack@jncsoftware.com>\nnapsal:\n\n> When arguments and other local variables in pl/pgsql functions have the\n> same name as columns referenced in queries it is necessary to disambiguate\n> the names. This can be done by prefixing the function name (e.g.\n> my_func.name), using the argument number is the case of an argument (e.g.\n> $1), or renaming the variable (e.g. _name). It is also possible to use a\n> GUC to always use the variable or the column but that seems dangerous to me.\n>\n> Prefixing names with an underscore works well enough for local variables,\n> but when using named arguments I prefer the external name not require an\n> underscore. I would like to suggest a standard prefix such as $ to\n> reference a local variable or argument. $ followed by an integer already\n> references an argument by ordinal. What if $ followed by a name meant a\n> local reference (essentially it would expand to \"my_func.\")?\n>\n> For example, currently I have to do something like this:\n>\n> create function update_item(id int, foo int, bar text) returns void\n> language plpgsql as $$\n> begin\n> update items\n> set foo = update_item.foo,\n> bar = update_item.bar\n> where items.id = update_item.id;\n> end;\n> $$;\n>\n> I would like to be able to do something like:\n>\n> create function update_item(id int, foo int, bar text) returns void\n> language plpgsql as $$\n> begin\n> update items\n> set foo = $foo,\n> bar = $bar\n> where items.id = $id;\n> end;\n> $$;\n>\n> Any opinions on the desirability of this feature? My C skills are rather\n> atrophied, but from the outside it seems like a small enough change I might\n> be able to tackle it...\n>\n\nI don't like this proposal too much. Introducing the next different syntax\nfor writing local variables doesn't look like a good idea for me. More this\nsyntax is used by very different languages than is PL/pgSQL, and then it\ncan be messy. 
The behaviour of local variables in PHP or Perl or shell is\nreally very different.\n\nPersonally in your example I very much like notation \"update_item.id\",\nbecause there is a clear signal that \"id\" is the function's argument. When\nyou use \"$id\", then it is not clear whether \"id\" is a local variable or\nfunction's argument. So your proposal decreases safety :-/. Plus this\nsyntax reduces collision only on one side, you should use aliases for sql\nidentifiers and again it is not balanced - In MS SQL I can write predicate\nid = @id. But it is not possible in your proposal (and it is not possible\nfor compatibility reasons ever).\n\nMoreover, we already have the possibility to do ALIAS of any variable\nhttps://www.postgresql.org/docs/current/plpgsql-declarations.html#PLPGSQL-DECLARATION-ALIAS\n\nI understand that there can be problems with functions with very long\nnames.\n\nWe already can do\n\nCREATE OR REPLACE FUNCTION public.fx(par1 integer, par2 integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\n<<b>>\ndeclare\n p1 alias for par1;\n p2 alias for par2;\nbegin\n raise notice '% % % %', par1, par2, b.p1, b.p2;\nend;\n$function$\n\nor safer\n\nCREATE OR REPLACE FUNCTION public.fx(par1 integer, par2 integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\n<<b>>\ndeclare\n p1 alias for fx.par1;\n p2 alias for fx.par2;\nbegin\n raise notice '% % % %', par1, par2, b.p1, b.p2;\nend;\n$function$\n\nSo I think introducing new syntax is not necessary. The open question is a\npossibility to do aliasing more comfortably. ADA language has a possibility\nto rename function or procedure. But it is much stronger than what can be\nimplemented in plpgsql. Probably the easiest implementation can be a\npossibility to specify a new argument's label with already supported\n#option syntax.\n\nCREATE OR REPLACE FUNCTION very_long_name(par1 int)\nRETURNS int AS $$\n#routine_label lnm\nBEGIN\n RAISE NOTICE '%', lnm.par1;\n\nRegards\n\nPavel\n\n\n> Jack\n>\n\nHiút 17. 11. 
2020 v 19:04 odesílatel Jack Christensen <jack@jncsoftware.com> napsal:When arguments and other local variables in pl/pgsql functions have the same name as columns referenced in queries it is necessary to disambiguate the names. This can be done by prefixing the function name (e.g. my_func.name), using the argument number is the case of an argument (e.g. $1), or renaming the variable (e.g. _name). It is also possible to use a GUC to always use the variable or the column but that seems dangerous to me.Prefixing names with an underscore works well enough for local variables, but when using named arguments I prefer the external name not require an underscore. I would like to suggest a standard prefix such as $ to reference a local variable or argument. $ followed by an integer already references an argument by ordinal. What if $ followed by a name meant a local reference (essentially it would expand to \"my_func.\")?For example, currently I have to do something like this:create function update_item(id int, foo int, bar text) returns voidlanguage plpgsql as $$begin update items set foo = update_item.foo, bar = update_item.bar where items.id = update_item.id;end;$$;I would like to be able to do something like:create function update_item(id int, foo int, bar text) returns voidlanguage plpgsql as $$begin update items set foo = $foo, bar = $bar where items.id = $id;end;$$;Any opinions on the desirability of this feature? My C skills are rather atrophied, but from the outside it seems like a small enough change I might be able to tackle it...I don't like this proposal too much. Introducing the next different syntax for writing local variables doesn't look like a good idea for me. More this syntax is used by very different languages than is PL/pgSQL, and then it can be messy. The behaviour of local variables in PHP or Perl or shell is really very different. 
Personally in your example I very much like notation \"update_item.id\", because there is a clear signal that \"id\" is the function's argument. When you use \"$id\", then it is not clear whether \"id\" is a local variable or function's argument. So your proposal decreases safety :-/. Plus this syntax reduces collision only on one side, you should use aliases for sql identifiers and again it is not balanced - In MS SQL I can write predicate id = @id. But it is not possible in your proposal (and it is not possible for compatibility reasons ever).Moreover, we already have the possibility to do ALIAS of any variable https://www.postgresql.org/docs/current/plpgsql-declarations.html#PLPGSQL-DECLARATION-ALIASI understand that there can be problems with functions with very long names. We already can doCREATE OR REPLACE FUNCTION public.fx(par1 integer, par2 integer) RETURNS void LANGUAGE plpgsqlAS $function$<<b>>declare p1 alias for par1; p2 alias for par2;begin raise notice '% % % %', par1, par2, b.p1, b.p2;end;$function$or saferCREATE OR REPLACE FUNCTION public.fx(par1 integer, par2 integer) RETURNS void LANGUAGE plpgsqlAS $function$<<b>>declare p1 alias for fx.par1; p2 alias for fx.par2;begin raise notice '% % % %', par1, par2, b.p1, b.p2;end;$function$So I think introducing new syntax is not necessary. The open question is a possibility to do aliasing more comfortably. ADA language has a possibility to rename function or procedure. But it is much stronger than what can be implemented in plpgsql. Probably the easiest implementation can be a possibility to specify a new argument's label with already supported #option syntax.CREATE OR REPLACE FUNCTION very_long_name(par1 int)RETURNS int AS $$#routine_label lnmBEGIN RAISE NOTICE '%', lnm.par1;RegardsPavelJack",
"msg_date": "Tue, 17 Nov 2020 20:35:33 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 1:36 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n> Personally in your example I very much like notation \"update_item.id\",\n> because there is a clean signal so \"id\" is the function's argument. When\n> you use \"$id\", then it is not clean if \"id\" is a local variable or\n> function's argument. So your proposal decreases safety :-/. Plus this\n> syntax reduces collision only on one side, you should use aliases for sql\n> identifiers and again it is not balanced - In MS SQL I can write predicate\n> id = @id. But it is not possible in your proposal (and it is not possible\n> from compatibility reasons ever).\n>\n> More we already has a possibility to do ALIAS of any variable\n> https://www.postgresql.org/docs/current/plpgsql-declarations.html#PLPGSQL-DECLARATION-ALIAS\n>\n> I understand that there can be problems with functions with very long\n> names.\n>\n\nRight. The problem is most pronounced when the function name is long. And\nin particular when there are a lot of optional arguments. In this case the\ncaller will be using named arguments so I want to avoid prefixing the\nparameter names. And while alias is an option, that can also get quite\nverbose when there are a large number of parameters.\n\n\n> So I think introducing new syntax is not necessary. The open question is a\n> possibility to do aliasing more comfortably. ADA language has a possibility\n> to rename function or procedure. But it is much more stronger, than can be\n> implemented in plpgsql. Probably the most easy implementation can be a\n> possibility to specify a new argument's label with already supported\n> #option syntax.\n>\n> CREATE OR REPLACE FUNCTION very_long_name(par1 int)\n> RETURNS int AS $$\n> #routine_label lnm\n> BEGIN\n> RAISE NOTICE '%', lnm.par1;\n>\n>\nI concur. 
This would be an improvement.\n\nJack\n\nOn Tue, Nov 17, 2020 at 1:36 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:Personally in your example I very much like notation \"update_item.id\", because there is a clean signal so \"id\" is the function's argument. When you use \"$id\", then it is not clean if \"id\" is a local variable or function's argument. So your proposal decreases safety :-/. Plus this syntax reduces collision only on one side, you should use aliases for sql identifiers and again it is not balanced - In MS SQL I can write predicate id = @id. But it is not possible in your proposal (and it is not possible from compatibility reasons ever).More we already has a possibility to do ALIAS of any variable https://www.postgresql.org/docs/current/plpgsql-declarations.html#PLPGSQL-DECLARATION-ALIASI understand that there can be problems with functions with very long names. Right. The problem is most pronounced when the function name is long. And in particular when there are a lot of optional arguments. In this case the caller will be using named arguments so I want to avoid prefixing the parameter names. And while alias is an option, that can also get quite verbose when there are a large number of parameters.So I think introducing new syntax is not necessary. The open question is a possibility to do aliasing more comfortably. ADA language has a possibility to rename function or procedure. But it is much more stronger, than can be implemented in plpgsql. Probably the most easy implementation can be a possibility to specify a new argument's label with already supported #option syntax.CREATE OR REPLACE FUNCTION very_long_name(par1 int)RETURNS int AS $$#routine_label lnmBEGIN RAISE NOTICE '%', lnm.par1;I concur. This would be an improvement.Jack",
"msg_date": "Tue, 17 Nov 2020 14:18:51 -0600",
"msg_from": "Jack Christensen <jack@jncsoftware.com>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On 11/17/20 15:18, Jack Christensen wrote:\n>> CREATE OR REPLACE FUNCTION very_long_name(par1 int)\n>> RETURNS int AS $$\n>> #routine_label lnm\n>> BEGIN\n>> RAISE NOTICE '%', lnm.par1;\n\nCould that be somehow shoehorned into the existing ALIAS syntax, maybe as\n\nDECLARE\n lnm ALIAS FOR ALL very_long_name.*;\n\nor something?\n\n(And would it be cool if Table C.1 [1] had some sort of javascript-y\nfiltering on reservedness categories, for just such kinds of bikeshedding?)\n\nRegards,\n-Chap\n\n\n[1] https://www.postgresql.org/docs/13/sql-keywords-appendix.html#KEYWORDS-TABLE\n\n\n",
"msg_date": "Tue, 17 Nov 2020 15:45:11 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "út 17. 11. 2020 v 21:45 odesílatel Chapman Flack <chap@anastigmatix.net>\nnapsal:\n\n> On 11/17/20 15:18, Jack Christensen wrote:\n> >> CREATE OR REPLACE FUNCTION very_long_name(par1 int)\n> >> RETURNS int AS $$\n> >> #routine_label lnm\n> >> BEGIN\n> >> RAISE NOTICE '%', lnm.par1;\n>\n> Could that be somehow shoehorned into the existing ALIAS syntax, maybe as\n>\n> DECLARE\n> lnm ALIAS FOR ALL very_long_name.*;\n>\n> or something?\n>\n\nI thought about it - but I don't think so this syntax is correct - in your\ncase it should be\n\n lnm.* ALIAS FOR ALL very_long_name.*;\n\nbut it looks a little bit scary in ADA based language.\n\nMaybe\n\nDECLARE lnm LABEL ALIAS FOR very_long_name;\n\nor\n\nDECLARE lnm ALIAS FOR LABEL very_long_name;\n\nI think so implementation should not be hard. But there are advantages,\ndisadvantages - 1. label aliasing increases the complexity of variable\nsearching (instead comparing string with name of namespace, you should\ncompare list of names). Probably not too much. 2. The syntax is a little\nbit harder than #option. Implementation based on #option can just rename\ntop namespace, so there is not any overhead, and parsing #option syntax is\npretty simple (and the syntax is shorter). So from implementation reasons I\nprefer #option based syntax. There is clear zero impact on performance.\n\nRegards\n\nPavel\n\n\n>\n> (And would it be cool if Table C.1 [1] had some sort of javascript-y\n> filtering on reservedness categories, for just such kinds of bikeshedding?)\n>\n> Regards,\n> -Chap\n>\n>\n> [1]\n> https://www.postgresql.org/docs/13/sql-keywords-appendix.html#KEYWORDS-TABLE\n>",
"msg_date": "Wed, 18 Nov 2020 06:58:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 18. 11. 2020 v 6:58 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 17. 11. 2020 v 21:45 odesílatel Chapman Flack <chap@anastigmatix.net>\n> napsal:\n>\n>> On 11/17/20 15:18, Jack Christensen wrote:\n>> >> CREATE OR REPLACE FUNCTION very_long_name(par1 int)\n>> >> RETURNS int AS $$\n>> >> #routine_label lnm\n>> >> BEGIN\n>> >> RAISE NOTICE '%', lnm.par1;\n>>\n>> Could that be somehow shoehorned into the existing ALIAS syntax, maybe as\n>>\n>> DECLARE\n>> lnm ALIAS FOR ALL very_long_name.*;\n>>\n>> or something?\n>>\n>\n> I thought about it - but I don't think so this syntax is correct - in your\n> case it should be\n>\n> lnm.* ALIAS FOR ALL very_long_name.*;\n>\n> but it looks a little bit scary in ADA based language.\n>\n> Maybe\n>\n> DECLARE lnm LABEL ALIAS FOR very_long_name;\n>\n> or\n>\n> DECLARE lnm ALIAS FOR LABEL very_long_name;\n>\n> I think so implementation should not be hard. But there are advantages,\n> disadvantages - 1. label aliasing increases the complexity of variable\n> searching (instead comparing string with name of namespace, you should\n> compare list of names). Probably not too much. 2. The syntax is a little\n> bit harder than #option. Implementation based on #option can just rename\n> top namespace, so there is not any overhead, and parsing #option syntax is\n> pretty simple (and the syntax is shorter). So from implementation reasons I\n> prefer #option based syntax. There is clear zero impact on performance.\n>\n> Regards\n>\n\nI checked code - and I have to change my opinion - the current\nimplementation of namespaces in plpgsql doesn't allow renaming or realising\nlabels elegantly. The design has low memory requirements but it is not\nflexible. 
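For readers skimming the thread, here is how the proposed option is intended to be used (a hypothetical sketch based on the syntax under discussion; the table, function, and label names are invented for illustration):

```plpgsql
-- Hypothetical example of the proposed #routine_label option: the long
-- routine name gets a short alias, so parameters that collide with
-- column names can be qualified without repeating the full name.
create or replace function update_item_with_a_long_name(id int, price numeric)
returns void as $$
#routine_label f
begin
    -- f.id and f.price refer to the parameters; item.id to the column
    update item set price = f.price where item.id = f.id;
end;
$$ language plpgsql;
```

This mirrors the disambiguation problem raised at the start of the thread, where qualifying with the full routine name (update_item.id) was considered too verbose.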
I wrote a proof concept patch, and I had to hack the nsitem\nlittle bit.\n\npostgres=# create or replace function bubu(a int, b int)\nreturns void as $$\n#routine_label b\nbegin\n raise notice '% %', b.a, b.b;\nend;\n$$ language plpgsql;\nCREATE FUNCTION\npostgres=# select bubu(10,20);\nNOTICE: 10 20\n┌──────┐\n│ bubu │\n╞══════╡\n│ │\n└──────┘\n(1 row)\n\n\n\n> Pavel\n>\n>\n>>\n>> (And would it be cool if Table C.1 [1] had some sort of javascript-y\n>> filtering on reservedness categories, for just such kinds of\n>> bikeshedding?)\n>>\n>> Regards,\n>> -Chap\n>>\n>>\n>> [1]\n>> https://www.postgresql.org/docs/13/sql-keywords-appendix.html#KEYWORDS-TABLE\n>>\n>",
"msg_date": "Wed, 18 Nov 2020 21:21:06 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 18. 11. 2020 v 21:21 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 18. 11. 2020 v 6:58 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> út 17. 11. 2020 v 21:45 odesílatel Chapman Flack <chap@anastigmatix.net>\n>> napsal:\n>>\n>>> On 11/17/20 15:18, Jack Christensen wrote:\n>>> >> CREATE OR REPLACE FUNCTION very_long_name(par1 int)\n>>> >> RETURNS int AS $$\n>>> >> #routine_label lnm\n>>> >> BEGIN\n>>> >> RAISE NOTICE '%', lnm.par1;\n>>>\n>>> Could that be somehow shoehorned into the existing ALIAS syntax, maybe as\n>>>\n>>> DECLARE\n>>> lnm ALIAS FOR ALL very_long_name.*;\n>>>\n>>> or something?\n>>>\n>>\n>> I thought about it - but I don't think so this syntax is correct - in\n>> your case it should be\n>>\n>> lnm.* ALIAS FOR ALL very_long_name.*;\n>>\n>> but it looks a little bit scary in ADA based language.\n>>\n>> Maybe\n>>\n>> DECLARE lnm LABEL ALIAS FOR very_long_name;\n>>\n>> or\n>>\n>> DECLARE lnm ALIAS FOR LABEL very_long_name;\n>>\n>> I think so implementation should not be hard. But there are advantages,\n>> disadvantages - 1. label aliasing increases the complexity of variable\n>> searching (instead comparing string with name of namespace, you should\n>> compare list of names). Probably not too much. 2. The syntax is a little\n>> bit harder than #option. Implementation based on #option can just rename\n>> top namespace, so there is not any overhead, and parsing #option syntax is\n>> pretty simple (and the syntax is shorter). So from implementation reasons I\n>> prefer #option based syntax. There is clear zero impact on performance.\n>>\n>> Regards\n>>\n>\n> I checked code - and I have to change my opinion - the current\n> implementation of namespaces in plpgsql doesn't allow renaming or realising\n> labels elegantly. The design has low memory requirements but it is not\n> flexible. 
I wrote a proof concept patch, and I had to hack the nsitem\n> little bit.\n>\n> postgres=# create or replace function bubu(a int, b int)\n> returns void as $$\n> #routine_label b\n>\n\nmaybe a better name for this option can be \"routine_alias\".\n\n\n> begin\n> raise notice '% %', b.a, b.b;\n> end;\n> $$ language plpgsql;\n> CREATE FUNCTION\n> postgres=# select bubu(10,20);\n> NOTICE: 10 20\n> ┌──────┐\n> │ bubu │\n> ╞══════╡\n> │ │\n> └──────┘\n> (1 row)\n>\n>\n>\n>> Pavel\n>>\n>>\n>>>\n>>> (And would it be cool if Table C.1 [1] had some sort of javascript-y\n>>> filtering on reservedness categories, for just such kinds of\n>>> bikeshedding?)\n>>>\n>>> Regards,\n>>> -Chap\n>>>\n>>>\n>>> [1]\n>>> https://www.postgresql.org/docs/13/sql-keywords-appendix.html#KEYWORDS-TABLE\n>>>\n>>",
"msg_date": "Wed, 18 Nov 2020 21:24:48 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On 11/18/20 9:21 PM, Pavel Stehule wrote:\n> postgres=# create or replace function bubu(a int, b int)\n> returns void as $$\n> #routine_label b\n> begin\n> raise notice '% %', b.a, b.b;\n> end;\n> $$ language plpgsql;\n\nWhy not use the block labeling syntax we already have?\n\ncreate or replace function bubu(a int, b int)\nreturns void as $$\n<< b >>\nbegin\n raise notice '% %', b.a, b.b;\nend;\n$$ language plpgsql;\n\nThat doesn't currently work, but it could be made to.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 19 Nov 2020 01:53:48 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "Hi\n\nthis is less invasive, and probably more correct work with ns items patch.\n\nčt 19. 11. 2020 v 1:54 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 11/18/20 9:21 PM, Pavel Stehule wrote:\n> > postgres=# create or replace function bubu(a int, b int)\n> > returns void as $$\n> > #routine_label b\n> > begin\n> > raise notice '% %', b.a, b.b;\n> > end;\n> > $$ language plpgsql;\n>\n> Why not use the block labeling syntax we already have?\n>\n> create or replace function bubu(a int, b int)\n> returns void as $$\n> << b >>\n> begin\n> raise notice '% %', b.a, b.b;\n> end;\n> $$ language plpgsql;\n>\n> That doesn't currently work, but it could be made to.\n>\n\nI don't think it is correct - in your example you are trying to merge two\ndifferent namespaces - so minimally it should allow duplicate names in one\nnamespace, or it breaks compatibility.\n\nAnd I don't think so we want to lose information and separate access to\nfunction's arguments.\n\nRegards\n\nPavel\n\n> --\n> Vik Fearing\n>",
"msg_date": "Thu, 19 Nov 2020 05:32:30 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "Hi\n\nfix broken regress test\n\nRegards\n\nPavel",
"msg_date": "Tue, 5 Jan 2021 09:56:45 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "I think that this can be a useful feature in many occasions, especially since\nthe overhead is almost nonexistent.\n\nThe implementation is sensible and there are regression tests to cover the new\nfeature.\n\nThere's a problem however if you try to add multiple #routine_label commands in\nthe same function. At minimum the Assert on \"nse->itemno == (int)\nPLPGSQL_LABEL_BLOCK\" will fail.\n\nI don't think that it makes sense to have multiple occurrences of this command,\nand we should simply error out if plpgsql_curr_compile->root_ns->itemno is\nPLPGSQL_LABEL_REPLACED. In any case this should also be covered in the\nregression tests.\n\n\n",
"msg_date": "Sat, 13 Mar 2021 16:53:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "Hi\n\nso 13. 3. 2021 v 9:53 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> I think that this can be a useful feature in many occasions, especially\n> since\n> the overhead is almost inexistent.\n>\n> The implementation is sensible and there are regression tests to cover the\n> new\n> feature.\n>\n> There's a problem however if you try to add multiple #routine_label\n> commands in\n> the same function. At minimum the Assert on \"nse->itemno == (int)\n> PLPGSQL_LABEL_BLOCK\" will fail.\n>\n> I don't think that it makes sense to have multiple occurences of this\n> command,\n> and we should simply error out if plpgsql_curr_compile->root_ns->itemno is\n> PLPGSQL_LABEL_REPLACED. If any case this should also be covered in the\n> regression tests.\n>\n\nI did it. Thank you for check\n\nI wrote few sentences to documentation\n\nRegards\n\nPavel",
"msg_date": "Sat, 13 Mar 2021 12:10:29 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 12:10:29PM +0100, Pavel Stehule wrote:\n> \n> so 13. 3. 2021 v 9:53 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> >\n> > I don't think that it makes sense to have multiple occurences of this\n> > command,\n> > and we should simply error out if plpgsql_curr_compile->root_ns->itemno is\n> > PLPGSQL_LABEL_REPLACED. If any case this should also be covered in the\n> > regression tests.\n> >\n> \n> I did it. Thank you for check\n\nThanks, LGTM.\n\n> I wrote few sentences to documentation\n\nGreat!\n\nI just had a few cosmetic comments:\n\n+ block can be changed by inserting special command at the start of the function\n[...]\n+ arguments) is possible with option <literal>routine_label</literal>:\n[...]\n+\t\t\t\t errhint(\"The option \\\"routine_label\\\" can be used only once in rutine.\"),\n[...]\n+-- Check root namespace renaming (routine_label option)\n\nYou're sometimes using \"command\" and sometimes \"option\". Should we always use\nthe same term, maybe \"command\" as it's already used for #variable_conflict\ndocumentation?\n\nAlso\n\n+ variables can be qualified with the function's name. The name of this outer\n+ block can be changed by inserting special command at the start of the function\n+ <literal>#routine_label new_name</literal>.\n\nIt's missing a preposition before \"special command\". Maybe\n\n+ variables can be qualified with the function's name. The name of this outer\n+ block can be changed by inserting a special command\n+ <literal>#routine_label new_name</literal> at the start of the function.\n\n\n+ The function's argument can be qualified by function name:\n\nShould be \"qualified with the function name\"\n\n+ Sometimes the function name is too long and can be more practical to use\n+ some shorter label. An change of label of top namespace (with functions's\n+ arguments) is possible with option <literal>routine_label</literal>:\n\nFew similar issues. How about:\n\nSometimes the function name is too long and *it* can be more practical to use\nsome shorter label. *The top namespace label can be changed* (*along* with\n*the* functions' arguments) *using the option* <literal>routine_label</literal>:\n\nI'm not a native English speaker so the proposed changes may be wrong or not\nenough.\n\n\n",
"msg_date": "Sat, 13 Mar 2021 19:49:00 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "Hi\n\nso 13. 3. 2021 v 12:48 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sat, Mar 13, 2021 at 12:10:29PM +0100, Pavel Stehule wrote:\n> >\n> > so 13. 3. 2021 v 9:53 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> > >\n> > > I don't think that it makes sense to have multiple occurences of this\n> > > command,\n> > > and we should simply error out if\n> plpgsql_curr_compile->root_ns->itemno is\n> > > PLPGSQL_LABEL_REPLACED. If any case this should also be covered in the\n> > > regression tests.\n> > >\n> >\n> > I did it. Thank you for check\n>\n> Thanks, LGTM.\n>\n> > I wrote few sentences to documentation\n>\n> Great!\n>\n> I just had a few cosmetic comments:\n>\n> + block can be changed by inserting special command at the start of\n> the function\n> [...]\n> + arguments) is possible with option <literal>routine_label</literal>:\n> [...]\n> + errhint(\"The option \\\"routine_label\\\" can\n> be used only once in rutine.\"),\n> [...]\n> +-- Check root namespace renaming (routine_label option)\n>\n> You're sometimes using \"command\" and \"sometimes\" option. Should we always\n> use\n> the same term, maybe \"command\" as it's already used for #variable_conflict\n> documentation?\n>\n> Also\n>\n> + variables can be qualified with the function's name. The name of\n> this outer\n> + block can be changed by inserting special command at the start of\n> the function\n> + <literal>#routine_label new_name</literal>.\n>\n> It's missing a preposition before \"special command\". Maybe\n>\n> + variables can be qualified with the function's name. 
The name of\n> this outer\n> + block can be changed by inserting a special command\n> + <literal>#routine_label new_name</literal> at the start of the\n> function.\n>\n>\n> + The function's argument can be qualified by function name:\n>\n> Should be \"qualified with the function name\"\n>\n> + Sometimes the function name is too long and can be more practical to\n> use\n> + some shorter label. An change of label of top namespace (with\n> functions's\n> + arguments) is possible with option <literal>routine_label</literal>:\n>\n> Few similar issues. How about:\n>\n> Sometimes the function name is too long and *it* can be more practical to\n> use\n> some shorter label. *The top namespace label can be changed* (*along* with\n> *the* functions' arguments) *using the option*\n> <literal>routine_label</literal>:\n>\n> I'm not a native English speaker so the proposes changed may be wrong or\n> not\n> enough.\n>\n\nMy English is good enough for taking beer everywhere in the world :) . It\nis not good, but a lot of people who don't understand to English understand\nmy simplified fork of English language.\n\nThank you for check\n\nupdated patch attached\n\nPavel",
"msg_date": "Sun, 14 Mar 2021 20:41:15 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 08:41:15PM +0100, Pavel Stehule wrote:\n> \n> My English is good enough for taking beer everywhere in the world :) . Ti\n> is not good, but a lot of people who don't understand to English understand\n> my simplified fork of English language.\n\nI have the same problem. We can talk about it and other stuff while having a\nbeer sometimes :)\n\n> updated patch attached\n\nThanks, it looks good to me, I'm marking the patch as RFC!\n\n\n",
"msg_date": "Mon, 15 Mar 2021 11:54:55 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "po 15. 3. 2021 v 4:54 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Sun, Mar 14, 2021 at 08:41:15PM +0100, Pavel Stehule wrote:\n> >\n> > My English is good enough for taking beer everywhere in the world :) . Ti\n> > is not good, but a lot of people who don't understand to English\n> understand\n> > my simplified fork of English language.\n>\n> I have the same problem. We can talk about it and other stuff while\n> having a\n> beer sometimes :)\n>\n\n:)\n\n\n> > updated patch attached\n>\n> Thanks, it looks good to me, I'm marking the patch as RFC!\n>\n\nThank you very much\n\nRegards\n\nPavel",
"msg_date": "Mon, 15 Mar 2021 05:31:18 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 05:31:18AM +0100, Pavel Stehule wrote:\n> Thank you very much\n\nI am not much a fan of the implementation done here. Based on my\nunderstanding of the PL code, the namespace lookup is organized as a\nlist of items, with the namespace associated to the routine name being\nthe first one pushed to the list after plpgsql_ns_init() initializes\nthe top of it. Then, the proposed patch hijacks the namespace lookup\nlist by doing a replacement of the root label while parsing the\nroutine and then tweaks the lookup logic when a variable is qualified\n(aka name2 != NULL) to bump the namespace list one level higher so as\nit does not look after the namespace with the routine name but instead\nfetches the one defined by ROUTINE_NAME. That seems rather bug-prone\nto me, to say the least. \n\nNo changes to plpgsql_compile_inline()?\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"redundant option\"),\n+ errhint(\"The command \\\"routine_label\\\" can be used\nonly once in rutine.\"),\n+ plpgsql_scanner_errposition(location)));\ns/rutine/routine/\n\n+ /* Don't allow repeated redefination of routine label */\nredefination?\n\nI am wondering whether it would be better to allow multiple aliases\nthough, and if it would bring more readability to the routines written\nif these are treated equal to the top-most namespace which is the\nroutine name now, meaning that we would maintain not one, but N top\nnamespace labels that could be used as aliases of the root one.\n--\nMichael",
"msg_date": "Wed, 17 Mar 2021 12:52:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 17. 3. 2021 v 4:52 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Mon, Mar 15, 2021 at 05:31:18AM +0100, Pavel Stehule wrote:\n> > Thank you very much\n>\n> I am not much a fan of the implementation done here. Based on my\n> understanding of the PL code, the namespace lookup is organized as a\n> list of items, with the namespace associated to the routine name being\n> the first one pushed to the list after plpgsql_ns_init() initializes\n> the top of it. Then, the proposed patch hijacks the namespace lookup\n> list by doing a replacement of the root label while parsing the\n> routine and then tweaks the lookup logic when a variable is qualified\n> (aka name2 != NULL) to bump the namespace list one level higher so as\n> it does not look after the namespace with the routine name but instead\n> fetches the one defined by ROUTINE_NAME. That seems rather bug-prone\n> to me, to say the least.\n>\n\nI'll check what can be done. I am not sure if it is possible to push a new\nlabel to the same level as the top label.\n\n\n> No changes to plpgsql_compile_inline()?\n>\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"redundant option\"),\n> + errhint(\"The command \\\"routine_label\\\" can be used\n> only once in rutine.\"),\n> + plpgsql_scanner_errposition(location)));\n> s/rutine/routine/\n>\n> + /* Don't allow repeated redefination of routine label */\n> redefination?\n>\n\nI'll check it. But inline block has not top label\n\n\n> I am wondering whether it would be better to allow multiple aliases\n> though, and if it would bring more readability to the routines written\n> if these are treated equal to the top-most namespace which is the\n> routine name now, meaning that we would maintain not one, but N top\n> namespace labels that could be used as aliases of the root one.\n>\n\nI do not have a strong opinion, but I am inclined to disallow this. 
I am\nafraid so code can be less readable.\n\nThere is a precedent - SQL doesn't allow you to use table names as\nqualifiers when you have an alias.\n\nBut it is a very important question. The selected behavior strongly impacts\nan implementation.\n\nWhat is the more common opinion about it? 1. Should we allow the original\ntop label or not? 2. Should we allow to define more top labels?\n\nRegards\n\nPavel\n\n\n\n\n\n> --\n> Michael\n>",
"msg_date": "Wed, 17 Mar 2021 05:30:30 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 05:30:30AM +0100, Pavel Stehule wrote:\n> st 17. 3. 2021 v 4:52 odesílatel Michael Paquier <michael@paquier.xyz>\n> \n> > I am wondering whether it would be better to allow multiple aliases\n> > though, and if it would bring more readability to the routines written\n> > if these are treated equal to the top-most namespace which is the\n> > routine name now, meaning that we would maintain not one, but N top\n> > namespace labels that could be used as aliases of the root one.\n> >\n> \n> I do not have a strong opinion, but I am inclined to disallow this. I am\n> afraid so code can be less readable.\n> \n> There is a precedent - SQL doesn't allow you to use table names as\n> qualifiers when you have an alias.\n\n+1\n\n> \n> But it is a very important question. The selected behavior strongly impacts\n> an implementation.\n> \n> What is the more common opinion about it? 1. Should we allow the original\n> top label or not? 2. Should we allow to define more top labels?\n\nI also think that there should be a single usable top label, otherwise it will\nlead to confusing code and it can be a source of bug.\n\n\n",
"msg_date": "Wed, 17 Mar 2021 14:06:57 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 02:06:57PM +0800, Julien Rouhaud wrote:\n> I also think that there should be a single usable top label, otherwise it will\n> lead to confusing code and it can be a source of bug.\n\nOkay, that's fine by me. Could it be possible to come up with an\napproach that does not hijack the namespace list contruction and the\nlookup logic as much as it does now? I get it that the patch is done\nthis way because of the ordering of operations done for the initial ns\nlist creation and the grammar parsing that adds the routine label on\ntop of the root one, but I'd like to believe that there are more solid\nways to achieve that, like for example something that decouples the\nroot label and its alias, or something that associates an alias\ndirectly with its PLpgSQL_nsitem entry?\n--\nMichael",
"msg_date": "Wed, 17 Mar 2021 17:19:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 17. 3. 2021 v 9:20 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Wed, Mar 17, 2021 at 02:06:57PM +0800, Julien Rouhaud wrote:\n> > I also think that there should be a single usable top label, otherwise\n> it will\n> > lead to confusing code and it can be a source of bug.\n>\n> Okay, that's fine by me. Could it be possible to come up with an\n> approach that does not hijack the namespace list contruction and the\n> lookup logic as much as it does now? I get it that the patch is done\n> this way because of the ordering of operations done for the initial ns\n> list creation and the grammar parsing that adds the routine label on\n> top of the root one, but I'd like to believe that there are more solid\n> ways to achieve that, like for example something that decouples the\n> root label and its alias, or something that associates an alias\n> directly with its PLpgSQL_nsitem entry?\n>\n\nI am checking it now, and I don't see any easy solution. The namespace is a\none direction tree - only backward iteration from leafs to roots is\nsupported. At the moment, when I try to replace the name of root ns_item,\nthis root ns_item is referenced by the function's arguments (NS_TYPE_VAR\ntype). So anytime I need some helper ns_item node, that can be a\npersistent target for these nodes. In this case is a memory overhead of\njust one ns_item.\n\norig_label <- argument1 <- argument2\n\nThe patch has to save the original orig_label because it can be referenced\nfrom argument1 or by \"FOUND\" variable. New graphs looks like\n\nnew_label <- invalidated orig_label <- argument1 <- ...\n\nThis tree has a different direction than is usual, and then replacing the\nroot node is not simple.\n\nSecond solution (significantly more simple) is an additional pointer in\nns_item structure. In almost all cases this pointer will not be used.\nBecause ns_item uses a flexible array, then union cannot be used. 
I\nimplemented this in a patch marked as \"alias-implementation\".\n\nWhat do you think about it?\n\nPavel\n\n\n\n\n\n\n\n> --\n> Michael\n>",
"msg_date": "Wed, 17 Mar 2021 17:04:48 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "why are you using yet another special syntax for this ?\n\nwould it not be better to do something like this:\nCREATE FUNCTION a_reall_long_and_winding_function_name(i int, out o int)\nLANGUAGE plpgsql AS $plpgsql$\nDECLARE\n args function_name_alias\nBEGIN\n args.o = 2 * args.i\nEND;\n$plpgsql$;\n\nor at least do something based on block labels (see end of\nhttps://www.postgresql.org/docs/13/plpgsql-structure.html )\n\nCREATE FUNCTION somefunc() RETURNS integer AS $$\n<< outerblock >>\nDECLARE\n...\n\nmaybe extend this to\n\nCREATE FUNCTION somefunc() RETURNS integer AS $$\n<< functionlabel:args >>\n<< outerblock >>\nDECLARE\n...\n\nor replace 'functionlabel' with something less ugly :)\n\nmaybe <<< args >>>\n\nCheers\nHannu\n\n\n\nOn Wed, Mar 17, 2021 at 5:05 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> st 17. 3. 2021 v 9:20 odesílatel Michael Paquier <michael@paquier.xyz> napsal:\n>>\n>> On Wed, Mar 17, 2021 at 02:06:57PM +0800, Julien Rouhaud wrote:\n>> > I also think that there should be a single usable top label, otherwise it will\n>> > lead to confusing code and it can be a source of bug.\n>>\n>> Okay, that's fine by me. Could it be possible to come up with an\n>> approach that does not hijack the namespace list contruction and the\n>> lookup logic as much as it does now? I get it that the patch is done\n>> this way because of the ordering of operations done for the initial ns\n>> list creation and the grammar parsing that adds the routine label on\n>> top of the root one, but I'd like to believe that there are more solid\n>> ways to achieve that, like for example something that decouples the\n>> root label and its alias, or something that associates an alias\n>> directly with its PLpgSQL_nsitem entry?\n>\n>\n> I am checking it now, and I don't see any easy solution. The namespace is a one direction tree - only backward iteration from leafs to roots is supported. 
At the moment, when I try to replace the name of root ns_item, this root ns_item is referenced by the function's arguments (NS_TYPE_VAR type). So anytime I need some helper ns_item node, that can be a persistent target for these nodes. In this case is a memory overhead of just one ns_item.\n>\n> orig_label <- argument1 <- argument2\n>\n> The patch has to save the original orig_label because it can be referenced from argument1 or by \"FOUND\" variable. New graphs looks like\n>\n> new_label <- invalidated orig_label <- argument1 <- ...\n>\n> This tree has a different direction than is usual, and then replacing the root node is not simple.\n>\n> Second solution (significantly more simple) is an additional pointer in ns_item structure. In almost all cases this pointer will not be used. Because ns_item uses a flexible array, then union cannot be used. I implemented this in a patch marked as \"alias-implementation\".\n>\n> What do you think about it?\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>>\n>> --\n>> Michael\n\n\n",
"msg_date": "Wed, 17 Mar 2021 23:16:06 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 05:04:48PM +0100, Pavel Stehule wrote:\n> This tree has a different direction than is usual, and then replacing the\n> root node is not simple.\n\nYeah, it is not like we should redesign this whole part just for the\nfeature discussed here, and that may impact performance as the current\nlist handling is cheap now.\n\n> Second solution (significantly more simple) is an additional pointer in\n> ns_item structure. In almost all cases this pointer will not be used.\n> Because ns_item uses a flexible array, then union cannot be used. I\n> implemented this in a patch marked as \"alias-implementation\".\n> \n> What do you think about it?\n\nI am not sure that it is necessary nor good to replace entirely the\nroot label. So associating a new field directly into it sounds like a\npromising approach?\n--\nMichael",
"msg_date": "Thu, 18 Mar 2021 07:32:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 17. 3. 2021 v 23:32 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Wed, Mar 17, 2021 at 05:04:48PM +0100, Pavel Stehule wrote:\n> > This tree has a different direction than is usual, and then replacing the\n> > root node is not simple.\n>\n> Yeah, it is not like we should redesign this whole part just for the\n> feature discussed here, and that may impact performance as the current\n> list handling is cheap now.\n>\n> > Second solution (significantly more simple) is an additional pointer in\n> > ns_item structure. In almost all cases this pointer will not be used.\n> > Because ns_item uses a flexible array, then union cannot be used. I\n> > implemented this in a patch marked as \"alias-implementation\".\n> >\n> > What do you think about it?\n>\n> I am not sure that it is necessary nor good to replace entirely the\n> root label. So associating a new field directly into it sounds like a\n> promising approach?\n>\n\nI have not a problem with this.\n\nPavel\n\n--\n> Michael\n>\n\nst 17. 3. 2021 v 23:32 odesílatel Michael Paquier <michael@paquier.xyz> napsal:On Wed, Mar 17, 2021 at 05:04:48PM +0100, Pavel Stehule wrote:\n> This tree has a different direction than is usual, and then replacing the\n> root node is not simple.\n\nYeah, it is not like we should redesign this whole part just for the\nfeature discussed here, and that may impact performance as the current\nlist handling is cheap now.\n\n> Second solution (significantly more simple) is an additional pointer in\n> ns_item structure. In almost all cases this pointer will not be used.\n> Because ns_item uses a flexible array, then union cannot be used. I\n> implemented this in a patch marked as \"alias-implementation\".\n> \n> What do you think about it?\n\nI am not sure that it is necessary nor good to replace entirely the\nroot label. So associating a new field directly into it sounds like a\npromising approach?I have not a problem with this.Pavel\n--\nMichael",
"msg_date": "Thu, 18 Mar 2021 05:09:38 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 17. 3. 2021 v 23:16 odesílatel Hannu Krosing <hannuk@google.com> napsal:\n\n> why are you using yet another special syntax for this ?\n>\n> would it not be better to do something like this:\n> CREATE FUNCTION a_reall_long_and_winding_function_name(i int, out o int)\n> LANGUAGE plpgsql AS $plpgsql$\n> DECLARE\n> args function_name_alias\n> BEGIN\n> args.o = 2 * args.i\n> END;\n> $plpgsql$;\n>\n> or at least do something based on block labels (see end of\n> https://www.postgresql.org/docs/13/plpgsql-structure.html )\n>\n> CREATE FUNCTION somefunc() RETURNS integer AS $$\n> << outerblock >>\n> DECLARE\n> ...\n>\n> maybe extend this to\n>\n> CREATE FUNCTION somefunc() RETURNS integer AS $$\n> << functionlabel:args >>\n> << outerblock >>\n> DECLARE\n> ...\n>\n> or replace 'functionlabel' with something less ugly :)\n>\n> maybe <<< args >>>\n>\n\nThere are few main reasons:\n\na) compile options are available already - so I don't need invent new\nsyntax - #OPTION DUMP, #PRINT_STRICT ON, #VARIABLE_CONFLICT ERROR\n\nb) compile options are processed before any any other operations, so\nimplementation can be very simple\n\nc) implementation is very simple\n\nI don't like an idea - <<prefix:value>> because you mix label syntax with\nsome functionality, and <<x>> and <<<x>>> are visually similar\n\n\"#routine_label xxx\" is not maybe visually nice (like other compile options\nin plpgsql), but has good readability, simplicity, verbosity\n\nBecause we already have this functionality (compile options), and because\nthis functionality is working well (there are not any limits from this\nsyntax), I strongly prefer to use it. We should not invite new syntax when\nsome already available syntax can work well. 
We can talk about the keyword\nROUTINE_LABEL - maybe #OPTION ROUTINE_LABEL xxx can look better, but I\nthink so using the compile option is a good solution.\n\nRegards\n\nPavel\n\n\n\n> Cheers\n> Hannu\n>\n>\n>\n> On Wed, Mar 17, 2021 at 5:05 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > st 17. 3. 2021 v 9:20 odesílatel Michael Paquier <michael@paquier.xyz>\n> napsal:\n> >>\n> >> On Wed, Mar 17, 2021 at 02:06:57PM +0800, Julien Rouhaud wrote:\n> >> > I also think that there should be a single usable top label,\n> otherwise it will\n> >> > lead to confusing code and it can be a source of bug.\n> >>\n> >> Okay, that's fine by me. Could it be possible to come up with an\n> >> approach that does not hijack the namespace list contruction and the\n> >> lookup logic as much as it does now? I get it that the patch is done\n> >> this way because of the ordering of operations done for the initial ns\n> >> list creation and the grammar parsing that adds the routine label on\n> >> top of the root one, but I'd like to believe that there are more solid\n> >> ways to achieve that, like for example something that decouples the\n> >> root label and its alias, or something that associates an alias\n> >> directly with its PLpgSQL_nsitem entry?\n> >\n> >\n> > I am checking it now, and I don't see any easy solution. The namespace\n> is a one direction tree - only backward iteration from leafs to roots is\n> supported. At the moment, when I try to replace the name of root ns_item,\n> this root ns_item is referenced by the function's arguments (NS_TYPE_VAR\n> type). So anytime I need some helper ns_item node, that can be a\n> persistent target for these nodes. In this case is a memory overhead of\n> just one ns_item.\n> >\n> > orig_label <- argument1 <- argument2\n> >\n> > The patch has to save the original orig_label because it can be\n> referenced from argument1 or by \"FOUND\" variable. 
New graphs looks like\n> >\n> > new_label <- invalidated orig_label <- argument1 <- ...\n> >\n> > This tree has a different direction than is usual, and then replacing\n> the root node is not simple.\n> >\n> > Second solution (significantly more simple) is an additional pointer in\n> ns_item structure. In almost all cases this pointer will not be used.\n> Because ns_item uses a flexible array, then union cannot be used. I\n> implemented this in a patch marked as \"alias-implementation\".\n> >\n> > What do you think about it?\n> >\n> > Pavel\n> >\n> >\n> >\n> >\n> >\n> >\n> >>\n> >> --\n> >> Michael\n>",
"msg_date": "Thu, 18 Mar 2021 05:27:14 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 5:27 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n> There are few main reasons:\n>\n> a) compile options are available already - so I don't need invent new syntax - #OPTION DUMP, #PRINT_STRICT ON, #VARIABLE_CONFLICT ERROR\n\nAre these documented anywhere ?\n\nAt least a quick search for pl/pgsql + #OPTION DUMP, #PRINT_STRICT ON,\n#VARIABLE_CONFLICT ERROR returned nothing relevant and also searching\nin\n\nIf pl/pgsql actually supports any of these, these should get documented!\n\n\n\nFor me the most logical place for declaring a function name alias\nwould be to allow ALIAS FOR the function name in DECLARE section.\n\nplpgsql_test=# CREATE FUNCTION\na_function_with_an_inconveniently_long_name(i int , OUT o int)\nLANGUAGE plpgsql AS $plpgsql$\nDECLARE\n fnarg ALIAS FOR a_function_with_an_inconveniently_long_name;\nBEGIN\n fnarg.o = 2 * fnarg.i;\nEND;\n$plpgsql$;\n\nbut unfortunately this currently returns\n\nERROR: 42704: variable \"a_function_with_an_inconveniently_long_name\"\ndoes not exist\nLINE 4: fnarg ALIAS FOR a_function_with_an_inconveniently_long_na...\n ^\nLOCATION: plpgsql_yyparse, pl_gram.y:677\n\nVariation could be\n\nDECLARE\n fnarg ALIAS FOR FUNCTION a_function_with_an_inconveniently_long_name;\n\nso it is clear that we are aliasing current function name\n\nor even make the function name optional, as there can only be one, so\n\nDECLARE\n fnarg ALIAS FOR FUNCTION;\n\n\nCheers\nHannu\n\n\n",
"msg_date": "Thu, 18 Mar 2021 15:19:12 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "čt 18. 3. 2021 v 15:19 odesílatel Hannu Krosing <hannuk@google.com> napsal:\n\n> On Thu, Mar 18, 2021 at 5:27 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> > There are few main reasons:\n> >\n> > a) compile options are available already - so I don't need invent new\n> syntax - #OPTION DUMP, #PRINT_STRICT ON, #VARIABLE_CONFLICT ERROR\n>\n> Are these documented anywhere ?\n>\n> At least a quick search for pl/pgsql + #OPTION DUMP, #PRINT_STRICT ON,\n> #VARIABLE_CONFLICT ERROR returned nothing relevant and also searching\n> in\n>\n> If pl/pgsql actually supports any of these, these should get documented!\n>\n> it is documented, please look to following links (but #option dump is\nundocumented, others are documented)\n\nhttps://www.postgresql.org/docs/current/plpgsql-implementation.html#PLPGSQL-VAR-SUBST\nhttps://www.postgresql.org/docs/current/plpgsql-statements.html\n\nCREATE FUNCTION get_userid(username text) RETURNS int\nAS $$\n#print_strict_params on\nDECLARE\nuserid int;\nBEGIN\n SELECT users.userid INTO STRICT userid\n FROM users WHERE users.username = get_userid.username;\n RETURN userid;\nEND;\n$$ LANGUAGE plpgsql;\n\n\n\n>\n> For me the most logical place for declaring a function name alias\n> would be to allow ALIAS FOR the function name in DECLARE section.\n>\n> plpgsql_test=# CREATE FUNCTION\n> a_function_with_an_inconveniently_long_name(i int , OUT o int)\n> LANGUAGE plpgsql AS $plpgsql$\n> DECLARE\n> fnarg ALIAS FOR a_function_with_an_inconveniently_long_name;\n> BEGIN\n> fnarg.o = 2 * fnarg.i;\n> END;\n> $plpgsql$;\n>\n> but unfortunately this currently returns\n>\n> ERROR: 42704: variable \"a_function_with_an_inconveniently_long_name\"\n> does not exist\n> LINE 4: fnarg ALIAS FOR a_function_with_an_inconveniently_long_na...\n> ^\n> LOCATION: plpgsql_yyparse, pl_gram.y:677\n>\n> Variation could be\n>\n> DECLARE\n> fnarg ALIAS FOR FUNCTION a_function_with_an_inconveniently_long_name;\n>\n> so it is clear that we are 
aliasing current function name\n>\n\n> or even make the function name optional, as there can only be one, so\n>\n> DECLARE\n>    fnarg ALIAS FOR FUNCTION;\n>\n\nWhy you think so it is better than just\n\n#routine_label fnarg\n\n?\n\nMaybe the name of the option can be routine_alias? Is it better for you?\n\nMy objection to DECLARE is freedom of this clause. It can be used\neverywhere. the effect of DECLARE is block limited. So theoretically\nsomebody can write\n\nBEGIN\n  ..\n  DECLARE fnarg ALIAS FOR FUNCTION;\n  BEGIN\n    ..\n  END;\n\nEND;\n\nIt disallows simple implementation.\n\nRegards\n\nPavel\n\n\n\n\n\n\n>\n> Cheers\n> Hannu\n>",
"msg_date": "Thu, 18 Mar 2021 15:45:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "I went to archives and read the whole discussion for this from the beginning\n\nI did not really understand, why using the ALIAS FOR syntax is significantly\nharder to implement then #routine_label\n\nThe things you mentioned as currently using #OPTION seem to be in a different\ncategory from the aliases and block labels - they are more like pragmas - so to\nme it still feels inconsistent to use #routine_label for renaming the\nouter namespace.\n\n\nAlso checked code and indeed there is support for #OPTION DUMP and\n#VARIABLE_CONFLICT ERROR\nIs it documented and just hard to find ?\n\n\nOn Thu, Mar 18, 2021 at 3:19 PM Hannu Krosing <hannuk@google.com> wrote:\n>\n> On Thu, Mar 18, 2021 at 5:27 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >\n> >\n> > There are few main reasons:\n> >\n> > a) compile options are available already - so I don't need invent new syntax - #OPTION DUMP, #PRINT_STRICT ON, #VARIABLE_CONFLICT ERROR\n>\n> Are these documented anywhere ?\n>\n> At least a quick search for pl/pgsql + #OPTION DUMP, #PRINT_STRICT ON,\n> #VARIABLE_CONFLICT ERROR returned nothing relevant and also searching\n> in\n>\n> If pl/pgsql actually supports any of these, these should get documented!\n>\n>\n>\n> For me the most logical place for declaring a function name alias\n> would be to allow ALIAS FOR the function name in DECLARE section.\n>\n> plpgsql_test=# CREATE FUNCTION\n> a_function_with_an_inconveniently_long_name(i int , OUT o int)\n> LANGUAGE plpgsql AS $plpgsql$\n> DECLARE\n> fnarg ALIAS FOR a_function_with_an_inconveniently_long_name;\n> BEGIN\n> fnarg.o = 2 * fnarg.i;\n> END;\n> $plpgsql$;\n>\n> but unfortunately this currently returns\n>\n> ERROR: 42704: variable \"a_function_with_an_inconveniently_long_name\"\n> does not exist\n> LINE 4: fnarg ALIAS FOR a_function_with_an_inconveniently_long_na...\n> ^\n> LOCATION: plpgsql_yyparse, pl_gram.y:677\n>\n> Variation could be\n>\n> DECLARE\n> fnarg ALIAS FOR FUNCTION 
a_function_with_an_inconveniently_long_name;\n>\n> so it is clear that we are aliasing current function name\n>\n> or even make the function name optional, as there can only be one, so\n>\n> DECLARE\n> fnarg ALIAS FOR FUNCTION;\n>\n>\n> Cheers\n> Hannu\n\n\n",
"msg_date": "Thu, 18 Mar 2021 15:48:55 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 3:45 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> čt 18. 3. 2021 v 15:19 odesílatel Hannu Krosing <hannuk@google.com> napsal:\n...\n>> Variation could be\n>>\n>> DECLARE\n>> fnarg ALIAS FOR FUNCTION a_function_with_an_inconveniently_long_name;\n>>\n>> so it is clear that we are aliasing current function name\n>>\n>>\n>> or even make the function name optional, as there can only be one, so\n>>\n>> DECLARE\n>> fnarg ALIAS FOR FUNCTION;\n>\n>\n> Why you think so it is better than just\n>\n> #routine_label fnarg\n>\n> ?\n\nIt just adds already a *third* way to (re)name things, in addition to\n<< blocklabel >>\nand ALIAS FOR\n\nFor some reason it feels to me similar to having completely different\nsyntax rules\nfor function names, argument names and variable names instead of all having to\nfollow rules for \"identifiers\"\n\nFor cleanness/orthogonality I would also prefer the blocklables to be in DECLARE\nfor each block, but this train has already left :)\nThough we probably could add alternative syntax ALIAS FOR BLOCK ?\n\n> Maybe the name of the option can be routine_alias? Is it better for you?\n\nI dont think the name itself is bad, as it is supposed to be used for\nboth FUNCTION\nand PROCEDURE renaming\n\n> My objection to DECLARE is freedom of this clause. It can be used everywhere.\n> the effect of DECLARE is block limited. So theoretically somebody can write\n>\n> BEGIN\n> ..\n> DECLARE fnarg ALIAS FOR FUNCTION;\n> BEGIN\n> ..\n> END;\n>\n> END;\n>\n> It disallows simple implementation.\n\nWe could limit ALIAS FOR FUNCTION to outermost block only, and revisit\nthis later\nif there are loud complaints (which I don't think there will be :) )\n\n\n\nCheers\nHannu\n\n\n",
"msg_date": "Thu, 18 Mar 2021 15:59:33 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "čt 18. 3. 2021 v 15:49 odesílatel Hannu Krosing <hannuk@google.com> napsal:\n\n> I went to archives and read the whole discussion for this from the\n> beginning\n>\n> I did not really understand, why using the ALIAS FOR syntax is\n> significantly\n> harder to implement then #routine_label\n>\n\njust because it can be used in different places - for example in nested\nblocks. And then you need to create really new aliases for these object in\ncurrent namespace. And it is not easy because we cannot to iterate in\nns_items tree from root\n\n\n\n> The things you mentioned as currently using #OPTION seem to be in a\n> different\n> category from the aliases and block labels - they are more like pragmas -\n> so to\n> me it still feels inconsistent to use #routine_label for renaming the\n> outer namespace.\n>\n\nI think this feature allows for more concepts.\n\n\n\n>\n> Also checked code and indeed there is support for #OPTION DUMP and\n> #VARIABLE_CONFLICT ERROR\n> Is it documented and just hard to find ?\n>\n\nplease, see my previous mail. 
There was links\n\n>\n> On Thu, Mar 18, 2021 at 3:19 PM Hannu Krosing <hannuk@google.com> wrote:\n> >\n> > On Thu, Mar 18, 2021 at 5:27 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > >\n> > >\n> > > There are few main reasons:\n> > >\n> > > a) compile options are available already - so I don't need invent new\n> syntax - #OPTION DUMP, #PRINT_STRICT ON, #VARIABLE_CONFLICT ERROR\n> >\n> > Are these documented anywhere ?\n> >\n> > At least a quick search for pl/pgsql + #OPTION DUMP, #PRINT_STRICT ON,\n> > #VARIABLE_CONFLICT ERROR returned nothing relevant and also searching\n> > in\n> >\n> > If pl/pgsql actually supports any of these, these should get documented!\n> >\n> >\n> >\n> > For me the most logical place for declaring a function name alias\n> > would be to allow ALIAS FOR the function name in DECLARE section.\n> >\n> > plpgsql_test=# CREATE FUNCTION\n> > a_function_with_an_inconveniently_long_name(i int , OUT o int)\n> > LANGUAGE plpgsql AS $plpgsql$\n> > DECLARE\n> > fnarg ALIAS FOR a_function_with_an_inconveniently_long_name;\n> > BEGIN\n> > fnarg.o = 2 * fnarg.i;\n> > END;\n> > $plpgsql$;\n> >\n> > but unfortunately this currently returns\n> >\n> > ERROR: 42704: variable \"a_function_with_an_inconveniently_long_name\"\n> > does not exist\n> > LINE 4: fnarg ALIAS FOR a_function_with_an_inconveniently_long_na...\n> > ^\n> > LOCATION: plpgsql_yyparse, pl_gram.y:677\n> >\n> > Variation could be\n> >\n> > DECLARE\n> > fnarg ALIAS FOR FUNCTION a_function_with_an_inconveniently_long_name;\n> >\n> > so it is clear that we are aliasing current function name\n> >\n> > or even make the function name optional, as there can only be one, so\n> >\n> > DECLARE\n> > fnarg ALIAS FOR FUNCTION;\n> >\n> >\n> > Cheers\n> > Hannu\n>",
"msg_date": "Thu, 18 Mar 2021 16:00:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "čt 18. 3. 2021 v 15:59 odesílatel Hannu Krosing <hannuk@google.com> napsal:\n\n> On Thu, Mar 18, 2021 at 3:45 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > čt 18. 3. 2021 v 15:19 odesílatel Hannu Krosing <hannuk@google.com>\n> napsal:\n> ...\n> >> Variation could be\n> >>\n> >> DECLARE\n> >> fnarg ALIAS FOR FUNCTION a_function_with_an_inconveniently_long_name;\n> >>\n> >> so it is clear that we are aliasing current function name\n> >>\n> >>\n> >> or even make the function name optional, as there can only be one, so\n> >>\n> >> DECLARE\n> >> fnarg ALIAS FOR FUNCTION;\n> >\n> >\n> > Why you think so it is better than just\n> >\n> > #routine_label fnarg\n> >\n> > ?\n>\n> It just adds already a *third* way to (re)name things, in addition to\n> << blocklabel >>\n> and ALIAS FOR\n>\n> For some reason it feels to me similar to having completely different\n> syntax rules\n> for function names, argument names and variable names instead of all\n> having to\n> follow rules for \"identifiers\"\n>\n> For cleanness/orthogonality I would also prefer the blocklables to be in\n> DECLARE\n> for each block, but this train has already left :)\n> Though we probably could add alternative syntax ALIAS FOR BLOCK ?\n>\n\nwhy? Is it a joke?\n\nyou are defining a block label, and you want to in the same block redefine\nsome outer label? I don't think it is a good idea.\n\n\n\n> > Maybe the name of the option can be routine_alias? Is it better for you?\n>\n> I dont think the name itself is bad, as it is supposed to be used for\n> both FUNCTION\n> and PROCEDURE renaming\n>\n> > My objection to DECLARE is freedom of this clause. It can be used\n> everywhere.\n> > the effect of DECLARE is block limited. 
So theoretically somebody can\n> write\n> >\n> > BEGIN\n> > ..\n> > DECLARE fnarg ALIAS FOR FUNCTION;\n> > BEGIN\n> > ..\n> > END;\n> >\n> > END;\n> >\n> > It disallows simple implementation.\n>\n> We could limit ALIAS FOR FUNCTION to outermost block only, and revisit\n> this later\n> if there are loud complaints (which I don't think there will be :) )\n>\n\nInside the parser you don't know so you are in the outermost block only.\n\nI play with it and I see two objections against DECLARE syntax\n\n1. the scope is in block. So you cannot use aliases for default arguments\n(or maybe yes, but is it correct?)\n\nCREATE OR REPLACE FUNCTION fx(a int, b int)\nRETURNS ... AS $$\nDECLARE\n  x1 int := fx.a;\n  fnarg AS ALIAS FOR FUNCTION;\n  x2 int := fnarg.a; -- should be this alias active now?\nBEGIN\n\n2. \"ALIAS FOR FUNCTION\" is not well named. In some languages, and in ADA\nlanguage too. You can rename an external function just for current scope.\nYou can find some examples like\n\nDECLARE\n  s ALIAS FOR FUNCTION sin;\nBEGIN\n  s(1); -- returns result of function sin\n\nother example - recursive function\n\nCREATE OR REPLACE FUNCTION some_very_long()\nRETURNS int AS $$\nDECLARE\n  thisfx ALIAS FOR FUNCTION;\nBEGIN\n  var := thisfx(...);\n\nBut we don't support this feature. We are changing just a top scope's\nlabel. So syntax \"ALIAS FOR FUNCTION is not good. The user can have false\nhopes\n\n>\n>\n> Cheers\n> Hannu\n>",
"msg_date": "Thu, 18 Mar 2021 16:22:25 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 4:23 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> But we don't support this feature. We are changing just a top scope's label. So syntax \"ALIAS FOR FUNCTION is not good. The user can have false hopes\n\nIn this case it looks like it should go together with other labels and\nhave << label_here >> syntax ?\n\nAnd we are back to the question of where to put this top scope label :)\n\nMaybe we cloud still pull in the function arguments into the outermost\nblocks scope, and advise users to have an extra block if they want to\nhave same names in both ?\n\n\nCREATE OR REPLACE FUNCTION fx(a int, b int)\nRETURNS ... AS $$\n<< fnargs >>\nBEGIN\n << topblock >>\n DECLARE\n a int := fnargs.a * 2;\n b int := topblock.a + 2;\n BEGIN\n ....\n END;\nEND;\n$$;\n\nand we could make the empty outer block optional and treat two\nconsecutive labels as top scope label and outermost block label\n\nCREATE OR REPLACE FUNCTION fx(a int, b int)\nRETURNS ... AS $$\n<< fnargs >>\n<< topblock >>\n DECLARE\n a int := fnargs.a * 2;\n b int := topblock.a + 2;\n BEGIN\n ....\nEND;\n$$;\n\nBut I agree that this is also not ideal.\n\n\nAnd\n\nCREATE OR REPLACE FUNCTION fx(a int, b int)\nWITH (TOPSCOPE_LABEL fnargs)\nRETURNS ... AS $$\n<< topblock >>\n DECLARE\n a int := fnargs.a * 2;\n b int := topblock.a + 2;\n BEGIN\n ....\nEND;\n$$;\n\nIs even more inconsistent than #option syntax\n\n\nCheers\nHannu\n\nPS:\n\n>\n> For cleanness/orthogonality I would also prefer the blocklables to be in DECLARE\n> for each block, but this train has already left :)\n> Though we probably could add alternative syntax ALIAS FOR BLOCK ?\n\n>\n> why? Is it a joke?\n>\n> you are defining a block label, and you want to in the same block redefine some outer label? I don't think it is a good idea.\n\n\n",
"msg_date": "Fri, 19 Mar 2021 14:14:24 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "pá 19. 3. 2021 v 14:14 odesílatel Hannu Krosing <hannuk@google.com> napsal:\n\n> On Thu, Mar 18, 2021 at 4:23 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n> > But we don't support this feature. We are changing just a top scope's\n> label. So syntax \"ALIAS FOR FUNCTION is not good. The user can have false\n> hopes\n>\n> In this case it looks like it should go together with other labels and\n> have << label_here >> syntax ?\n>\n> And we are back to the question of where to put this top scope label :)\n>\n> Maybe we cloud still pull in the function arguments into the outermost\n> blocks scope, and advise users to have an extra block if they want to\n> have same names in both ?\n>\n>\n> CREATE OR REPLACE FUNCTION fx(a int, b int)\n> RETURNS ... AS $$\n> << fnargs >>\n> BEGIN\n> << topblock >>\n> DECLARE\n> a int := fnargs.a * 2;\n> b int := topblock.a + 2;\n> BEGIN\n> ....\n> END;\n> END;\n> $$;\n>\n\nIt looks pretty messy.\n\n\n> and we could make the empty outer block optional and treat two\n> consecutive labels as top scope label and outermost block label\n>\n> CREATE OR REPLACE FUNCTION fx(a int, b int)\n> RETURNS ... AS $$\n> << fnargs >>\n> << topblock >>\n> DECLARE\n> a int := fnargs.a * 2;\n> b int := topblock.a + 2;\n> BEGIN\n> ....\n> END;\n> $$;\n>\n> But I agree that this is also not ideal.\n>\n\nI agree with you :). The problem is in semantics - top outer namespace is\nexternal - is not visible, and so placing some label anywhere in code\nshould to looks scary\n\n\n>\n> And\n>\n> CREATE OR REPLACE FUNCTION fx(a int, b int)\n> WITH (TOPSCOPE_LABEL fnargs)\n> RETURNS ... AS $$\n> << topblock >>\n> DECLARE\n> a int := fnargs.a * 2;\n> b int := topblock.a + 2;\n> BEGIN\n> ....\n> END;\n> $$;\n>\n\n> Is even more inconsistent than #option syntax\n>\n\nyes. This syntax has sense. 
But this syntax should be implemented from zero\n(although it will be only a few lines, and then it is not a big issue).\n\nBigger issue is a risk of confusion with the \"WITH ()\" clause of CREATE\nTABLE, because syntax is exactly the same, but it holds a list of GUC\nsettings. And it does exactly what compile options does.\n\nCREATE TABLE foo(...) WITH (autovacuum off)\n\nPlease try to compare:\n\nCREATE OR REPLACE FUNCTION fx(a int, b int)\nWITH (TOPSCOPE_LABEL fnargs)\nRETURNS ... AS $$\n<< topblock >>\n  DECLARE\n    a int := fnargs.a * 2;\n    b int := topblock.a + 2;\n\nCREATE OR REPLACE FUNCTION fx(a int, b int)\nRETURNS ... AS $$\n#TOPSCOPE_LABEL fnargs\n<< topblock >>\n  DECLARE\n    a int := fnargs.a * 2;\n    b int := topblock.a + 2;\n\nI don't see too big a difference, and with the compile option, I don't need\nto introduce new possibly confusing syntax. If we want to implement your\nproposed syntax, then we should support all plpgsql compile options there\ntoo (for consistency). The syntax that you propose has logic. But I am\nafraid about possible confusion with the same clause with different\nsemantics of other DDL commands, and then I am not sure if cost is adequate\nto benefit.\n\nRegards\n\nPavel\n\n\n\n>\n> Cheers\n> Hannu\n>\n> PS:\n>\n> >\n> > For cleanness/orthogonality I would also prefer the blocklables to be in\n> DECLARE\n> > for each block, but this train has already left :)\n> > Though we probably could add alternative syntax ALIAS FOR BLOCK ?\n>\n> >\n> > why? Is it a joke?\n> >\n> > you are defining a block label, and you want to in the same block\n> redefine some outer label? I don't think it is a good idea.\n>",
"msg_date": "Sun, 21 Mar 2021 16:33:41 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 17. 3. 2021 v 23:32 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Wed, Mar 17, 2021 at 05:04:48PM +0100, Pavel Stehule wrote:\n> > This tree has a different direction than is usual, and then replacing the\n> > root node is not simple.\n>\n> Yeah, it is not like we should redesign this whole part just for the\n> feature discussed here, and that may impact performance as the current\n> list handling is cheap now.\n>\n> > Second solution (significantly more simple) is an additional pointer in\n> > ns_item structure. In almost all cases this pointer will not be used.\n> > Because ns_item uses a flexible array, then union cannot be used. I\n> > implemented this in a patch marked as \"alias-implementation\".\n> >\n> > What do you think about it?\n>\n> I am not sure that it is necessary nor good to replace entirely the\n> root label. So associating a new field directly into it sounds like a\n> promising approach?\n>\n\nHere is rebased patch\n\nI am continuing to talk with Hannu about syntax. I think so using the\ncompile option is a good direction, but maybe renaming from \"routine_label\"\nto \"topscope_label\" (proposed by Hannu) or maybe \"outerscope_label\" can be\na good idea.\n\nRegards\n\nPavel\n\n\n--\n> Michael\n>",
"msg_date": "Wed, 24 Mar 2021 08:09:13 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "> On 2021.03.24. 08:09 Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> [...]\n> [plpgsql-routine-label-20210324.patch]\n\n\nHi,\n\nIn that sgml\n\n\"the functions' arguments\"\n\nshould be\n\n\"the function's arguments\"\n\n\nErik\n\n\n",
"msg_date": "Wed, 24 Mar 2021 08:42:30 +0100 (CET)",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "st 24. 3. 2021 v 8:42 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> > On 2021.03.24. 08:09 Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > [...]\n> > [plpgsql-routine-label-20210324.patch]\n>\n>\n> Hi,\n>\n> In that sgml\n>\n> \"the functions' arguments\"\n>\n> should be\n>\n> \"the function's arguments\"\n>\n\nfixed,\n\nthank you for check\n\n\n\n>\n> Erik\n>",
"msg_date": "Wed, 24 Mar 2021 19:44:33 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "+1\n\nI really like this feature.\n\nIs there a way to avoid having to write the \"#ROUTINE_LABEL\" for every function defined?\n\nI'm thinking if a company writing a lot of plpgsql functions want to decide on a company wide hard-coded label to use for all their plpgsql functions. Could it be set globally and enforced for the entire code base somehow?\n\nIt would of course be a problem since other plpgsql extensions could be in conflict with such a label,\nso maybe we could allow creating a new language, \"plmycorp\", using the same handlers as plpgsql,\nwith the addition of a hard-coded #ROUTINE_LABEL as a part of the language definition,\nwhich would then be forbidden to override in function definitions.\n\n/Joel",
"msg_date": "Sun, 28 Mar 2021 13:48:35 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "ne 28. 3. 2021 v 13:49 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> +1\n>\n> I really like this feature.\n>\n> Is there a way to avoid having to write the \"#ROUTINE_LABEL\" for every\n> function defined?\n>\n> I'm thinking if a company writing a lot of plpgsql functions want to\n> decide on a company wide hard-coded label to use for all their plpgsql\n> functions. Could it be set globally and enforced for the entire code base\n> somehow?\n>\n> It would of course be a problem since other plpgsql extensions could be in\n> conflict with such a label,\n> so maybe we could allow creating a new language, \"plmycorp\", using the\n> same handlers as plpgsql,\n> with the addition of a hard-coded #ROUTINE_LABEL as a part of the language\n> definition,\n> which would then be forbidden to override in function definitions.\n>\n\nThere is no technical problem, but if I remember well, the committers don't\nlike configuration settings that globally modify the behaviour. There can\nbe a lot of issues related to recovery from backups or porting to others\nsystems. Personally I understand and I accept it. There should be some\npress for ensuring some level of compatibility. And can be really unfanny\nif you cannot read routine from backups because you have \"bad\" postgres\nconfiguration. It can work in your company, but this feature can be used by\nothers, and they can have problems.\n\nOn second hand - it is very easy to write a tool that checks wanted\n#ROUTINE_LABEL in every routine.\n\nRegards\n\nPavel\n\n\n\n> /Joel\n>",
"msg_date": "Sun, 28 Mar 2021 14:27:51 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sun, Mar 28, 2021, at 14:27, Pavel Stehule wrote:\n> ne 28. 3. 2021 v 13:49 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> __It would of course be a problem since other plpgsql extensions could be in conflict with such a label,\n>> so maybe we could allow creating a new language, \"plmycorp\", using the same handlers as plpgsql,\n>> with the addition of a hard-coded #ROUTINE_LABEL as a part of the language definition,\n>> which would then be forbidden to override in function definitions.\n> \n> There is no technical problem, but if I remember well, the committers don't like configuration settings that globally modify the behaviour. There can be a lot of issues related to recovery from backups or porting to others systems. Personally I understand and I accept it. There should be some press for ensuring some level of compatibility. And can be really unfanny if you cannot read routine from backups because you have \"bad\" postgres configuration. It can work in your company, but this feature can be used by others, and they can have problems. \n\nI also dislike configuration that modify behaviour, but I don't think that's what I proposed. Creating a new language would not be configuration, it would be part of the dumpable/restorable schema, just like the functions.\n\n/Joel",
"msg_date": "Sun, 28 Mar 2021 14:44:51 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "ne 28. 3. 2021 v 14:45 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Sun, Mar 28, 2021, at 14:27, Pavel Stehule wrote:\n>\n> ne 28. 3. 2021 v 13:49 odesílatel Joel Jacobson <joel@compiler.org>\n> napsal:\n>\n> It would of course be a problem since other plpgsql extensions could be in\n> conflict with such a label,\n> so maybe we could allow creating a new language, \"plmycorp\", using the\n> same handlers as plpgsql,\n> with the addition of a hard-coded #ROUTINE_LABEL as a part of the language\n> definition,\n> which would then be forbidden to override in function definitions.\n>\n>\n> There is no technical problem, but if I remember well, the committers\n> don't like configuration settings that globally modify the behaviour. There\n> can be a lot of issues related to recovery from backups or porting to\n> others systems. Personally I understand and I accept it. There should be\n> some press for ensuring some level of compatibility. And can be really\n> unfanny if you cannot read routine from backups because you have \"bad\"\n> postgres configuration. It can work in your company, but this feature can\n> be used by others, and they can have problems.\n>\n>\n> I also dislike configuration that modify behaviour, but I don't think\n> that's what I proposed. Creating a new language would not be configuration,\n> it would be part of the dumpable/restorable schema, just like the functions.\n>\n\nMaybe I don't understand your proposal well, I am sorry. Creating your own\nlanguage should not be hard, but in this case the plpgsql is not well\nextendable. The important structures are private. You can do it, but you\nshould do a full copy of plpgsql. I don't think so you can reuse handler's\nroutines - it is not prepared for it. Unfortunately, the handler expects\nonly function oid and arguments, and there is not a possibility how to pass\nany options (if I know).\n\nRegards\n\nPavel\n\n\n\n\n\n>\n> /Joel\n>",
"msg_date": "Sun, 28 Mar 2021 15:12:39 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sun, Mar 28, 2021, at 15:12, Pavel Stehule wrote:\n> Maybe I don't understand your proposal well, I am sorry. Creating your own language should not be hard, but in this case the plpgsql is not well extendable. The important structures are private. You can do it, but you should do a full copy of plpgsql. I don't think so you can reuse handler's routines - it is not prepared for it. Unfortunately, the handler expects only function oid and arguments, and there is not a possibility how to pass any options (if I know). \n\nSorry, let me clarify what I mean.\n\nI mean something along the lines of adding a new nullable column to \"pg_language\", maybe \"lanroutinelabel\"?\nAll other columns (lanispl, lanpltrusted, lanplcallfoid, laninline, lanvalidator) would reuse the same values as plpgsql.\n\n/Joel",
"msg_date": "Sun, 28 Mar 2021 15:52:35 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sun, Mar 28, 2021 at 03:52:35PM +0200, Joel Jacobson wrote:\n> On Sun, Mar 28, 2021, at 15:12, Pavel Stehule wrote:\n> > Maybe I don't understand your proposal well, I am sorry. Creating your own language should not be hard, but in this case the plpgsql is not well extendable. The important structures are private. You can do it, but you should do a full copy of plpgsql. I don't think so you can reuse handler's routines - it is not prepared for it. Unfortunately, the handler expects only function oid and arguments, and there is not a possibility how to pass any options (if I know). \n> \n> Sorry, let me clarify what I mean.\n> \n> I mean something along the lines of adding a new nullable column to \"pg_language\", maybe \"lanroutinelabel\"?\n> All other columns (lanispl, lanpltrusted, lanplcallfoid, laninline, lanvalidator) would reuse the same values as plpgsql.\n\nIt doesn't seem like a good way to handle some customization of existing\nlanguage. It's too specialized and soon people will ask for fields to change\nthe default behavior of many other things and a default \"routine label\" may not\nmake sense in all languages. If we were to do that we should probably add a\nnew function that would be called to setup all language specific stuff that\nusers may want to change.\n\n\n",
"msg_date": "Sun, 28 Mar 2021 22:04:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "ne 28. 3. 2021 v 16:04 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sun, Mar 28, 2021 at 03:52:35PM +0200, Joel Jacobson wrote:\n> > On Sun, Mar 28, 2021, at 15:12, Pavel Stehule wrote:\n> > > Maybe I don't understand your proposal well, I am sorry. Creating your\n> own language should not be hard, but in this case the plpgsql is not well\n> extendable. The important structures are private. You can do it, but you\n> should do a full copy of plpgsql. I don't think so you can reuse handler's\n> routines - it is not prepared for it. Unfortunately, the handler expects\n> only function oid and arguments, and there is not a possibility how to pass\n> any options (if I know).\n> >\n> > Sorry, let me clarify what I mean.\n> >\n> > I mean something along the lines of adding a new nullable column to\n> \"pg_language\", maybe \"lanroutinelabel\"?\n> > All other columns (lanispl, lanpltrusted, lanplcallfoid, laninline,\n> lanvalidator) would reuse the same values as plpgsql.\n>\n> It doesn't seem like a good way to handle some customization of existing\n> language. It's too specialized and soon people will ask for fields to\n> change\n> the default behavior of many other things and a default \"routine label\"\n> may not\n> make sense in all languages. If we were to do that we should probably add\n> a\n> new function that would be called to setup all language specific stuff that\n> users may want to change.\n>\n\nYes, this has a benefit just for plpgsql. I can imagine a little bit\ndifferent API of plpgsql that allows more direct calls than now. For example\n\nstatic PLpgSQL_function *\ndo_compile(FunctionCallInfo fcinfo,\n<--><--> HeapTuple procTup,\n<--><--> PLpgSQL_function *function,\n<--><--> PLpgSQL_func_hashkey *hashkey,\n<--><--> bool forValidator)\n{\n\nif the function do_compile will be public, and if there will be new\nargument - compile_options, then can easily force these options, and is\nrelatively easy to reuse plpgsql as base for own language.\n\nIt can be interesting for plpgsql_check too. But I am not sure if it is too\nstrong an argument for Tom :)\n\nRegards\n\nPavel",
"msg_date": "Sun, 28 Mar 2021 17:12:10 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sun, Mar 28, 2021 at 05:12:10PM +0200, Pavel Stehule wrote:\n> \n> Yes, this has a benefit just for plpgsql. I can imagine a little bit\n> different API of plpgsql that allows more direct calls than now. For example\n> \n> static PLpgSQL_function *\n> do_compile(FunctionCallInfo fcinfo,\n> <--><--> HeapTuple procTup,\n> <--><--> PLpgSQL_function *function,\n> <--><--> PLpgSQL_func_hashkey *hashkey,\n> <--><--> bool forValidator)\n> {\n> \n> if the function do_compile will be public, and if there will be new\n> argument - compile_options, then can easily force these options, and is\n> relatively easy to reuse plpgsql as base for own language.\n> \n> It can be interesting for plpgsql_check too. But I am not sure if it is too\n> strong an argument for Tom :)\n\nI think that it would be sensible to do something along those lines.\nEspecially if it allows better static analysis for tools like plpgsql_check.\nBut that's definitely out of this patch's scope, and not pg14 material.\n\n\n",
"msg_date": "Sun, 28 Mar 2021 23:58:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "ne 28. 3. 2021 v 17:58 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sun, Mar 28, 2021 at 05:12:10PM +0200, Pavel Stehule wrote:\n> >\n> > Yes, this has a benefit just for plpgsql. I can imagine a little bit\n> > different API of plpgsql that allows more direct calls than now. For\n> example\n> >\n> > static PLpgSQL_function *\n> > do_compile(FunctionCallInfo fcinfo,\n> > <--><--> HeapTuple procTup,\n> > <--><--> PLpgSQL_function *function,\n> > <--><--> PLpgSQL_func_hashkey *hashkey,\n> > <--><--> bool forValidator)\n> > {\n> >\n> > if the function do_compile will be public, and if there will be new\n> > argument - compile_options, then can easily force these options, and is\n> > relatively easy to reuse plpgsql as base for own language.\n> >\n> > It can be interesting for plpgsql_check too. But I am not sure if it is\n> too\n> > strong an argument for Tom :)\n>\n> I think that it would be sensible to do something along those lines.\n> Especially if it allows better static analysis for tools like\n> plpgsql_check.\n> But that's definitely out of this patch's scope, and not pg14 material.\n>\n\nyes, sure.\n\nPavel",
"msg_date": "Sun, 28 Mar 2021 18:00:55 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "Hi Pavel,\n\nOn Wed, Mar 24, 2021 at 08:09:13AM +0100, Pavel Stehule wrote:\n> \n> I am continuing to talk with Hannu about syntax. I think so using the\n> compile option is a good direction, but maybe renaming from \"routine_label\"\n> to \"topscope_label\" (proposed by Hannu) or maybe \"outerscope_label\" can be\n> a good idea.\n\nIt's not clear to me what is the status of this patch. You sent some updated\npatch since with some typo fixes but I don't see any conclusion about the\ndirection this patch should go and/or how to name the new option.\n\nShould the patch be reviewed or switched to Waiting on Author?\n\n\n",
"msg_date": "Thu, 6 Jan 2022 15:02:39 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "čt 6. 1. 2022 v 8:02 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi Pavel,\n>\n> On Wed, Mar 24, 2021 at 08:09:13AM +0100, Pavel Stehule wrote:\n> >\n> > I am continuing to talk with Hannu about syntax. I think so using the\n> > compile option is a good direction, but maybe renaming from\n> \"routine_label\"\n> > to \"topscope_label\" (proposed by Hannu) or maybe \"outerscope_label\" can\n> be\n> > a good idea.\n>\n> It's not clear to me what is the status of this patch. You sent some\n> updated\n> patch since with some typo fixes but I don't see any conclusion about the\n> direction this patch should go and/or how to name the new option.\n>\n> Should the patch be reviewed or switched to Waiting on Author?\n>\n\nI am not sure if there is enough agreement and if there is enough necessity\nfor this feature.\n\nIn this discussion there were more general questions about future\ndevelopment of plpgsql (about possible configurations and compiler\nparametrizations, and how to switch the configuration on global and on\nlocal levels). This is not fully solved yet. This is probably the reason\nwhy discussion about this specific feature (and very simple feature) has\nnot reached a clear end. Generally, this is the same problem in Postgres.\n\nPersonally, I don't want to write a complicated patch for this relatively\ntrivial (and not too important feature). Sure, then it cannot solve more\ngeneral questions. I think the plpgsql option can work well - maybe with\nthe keyword \"top_label\" or \"root_label\". Probably some enhancing of the\nDECLARE statement can work too (if it will be done well). But it requires\nmore work, because the DECLARE statement has block scope, and then the\nalias of the top label has to have block scope too. Theoretically we can\npropagate it to GUC too - same like plpgsql.variable_conflict. This simple\ndesign can work (my opinion), but I understand so it is not a general reply\nfor people who can have more control over the behavior of plpgsql. But I\nthink it is a different issue and should be solved separately (and this\npretty complex problem, because Postgres together with PLpgSQL mix\ndifferent access models - access rights, searching path, local scope, and\nwe miss possible parametrization on schema level what has most sense for\nPLpgSQL).\n\nRegards\n\nPavel",
"msg_date": "Thu, 6 Jan 2022 08:52:23 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 06, 2022 at 08:52:23AM +0100, Pavel Stehule wrote:\n> \n> I am not sure if there is enough agreement and if there is enough necessity\n> for this feature.\n> \n> In this discussion there were more general questions about future\n> development of plpgsql (about possible configurations and compiler\n> parametrizations, and how to switch the configuration on global and on\n> local levels). This is not fully solved yet. This is probably the reason\n> why discussion about this specific feature (and very simple feature) has\n> not reached a clear end. Generally, this is the same problem in Postgres.\n\nI agree, but on the other hand I don't see how defining a top level block\nalias identical for every single plpgsql function would really make sense.\nNot every function has a very long name and would benefit from it, and no one\ncan really assume that e.g. \"root\" or whatever configured name won't be used in\nany plpgsql function on that database or cluster. So while having some global\nconfiguration infrastructure can be useful I still don't think that it could be\nused for this purpose.\n\nAnyway, the only committer that showed some interest in the feature is Michael,\nand he seemed ok in principle with the \"alias-implementation\" approach.\nMichael, did you have a look at this version ([1]), or do you think it should\nsimply be rejected?\n\n[1]: https://www.postgresql.org/message-id/attachment/120789/plpgsql-routine-label-option-alias-implementation.patch\n\n\n",
"msg_date": "Thu, 6 Jan 2022 17:05:32 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 6, 2022, at 10:05, Julien Rouhaud wrote:\n> I agree, but on the other hand I don't see how defining a top level block\n> alias identical for every single plpgsql function would really make sense.\n> Not every function has a very long name and would benefit from it, and no one\n> can really assume that e.g. \"root\" or whatever configured name won't be used in\n> any plpgsql function on that database or cluster. So while having some global\n> configuration infrastructure can be useful I still don't think that it could be\n> used for this purpose.\n\nHow about using the existing reserved keyword \"in\" followed by \".\" (dot) and then the function parameter name?\n\nThis idea is based on the assumption \"in.\" would always be a syntax error everywhere in both SQL and PL/pgSQL,\nso if we would introduce such a syntax, no existing code could be affected, except currently invalid code.\n\nI wouldn't mind using \"in.\" to refer to IN/OUT/INOUT parameters and not only IN ones, it's a minor confusion that could be explained in the docs.\n\nAlso, \"out.\" wouldn't work, since \"out\" is not a reserved keyword.\n\n/Joel",
"msg_date": "Thu, 06 Jan 2022 14:28:10 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable\n references"
},
{
"msg_contents": "čt 6. 1. 2022 v 14:28 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 6, 2022, at 10:05, Julien Rouhaud wrote:\n> > I agree, but on the other hand I don't see how defining a top level block\n> > alias identical for every single plpgsql function would really make\n> sense.\n> > Not every function has a very long name and would benefit from it, and\n> no one\n> > can really assume that e.g. \"root\" or whatever configured name won't be\n> used in\n> > any plpgsql function on that database or cluster. So while having some\n> global\n> > configuration infrastructure can be useful I still don't think that it\n> could be\n> > used for this purpose.\n>\n> How about using the existing reserved keyword \"in\" followed by \".\" (dot)\n> and then the function parameter name?\n>\n> This idea is based on the assumption \"in.\" would always be a syntax error\n> everywhere in both SQL and PL/pgSQL,\n> so if we would introduce such a syntax, no existing code could be\n> affected, except currently invalid code.\n>\n> I wouldn't mind using \"in.\" to refer to IN/OUT/INOUT parameters and not\n> only IN ones, it's a minor confusion that could be explained in the docs.\n>\n\nYou are right, in.outvar looks messy. Moreover, maybe the SQL parser can\nhave a problem with it.\n\nRegards\n\nPavel\n\n\n\n\n>\n> Also, \"out.\" wouldn't work, since \"out\" is not a reserved keyword.\n>\n> /Joel\n>",
"msg_date": "Thu, 6 Jan 2022 15:05:12 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 6, 2022, at 15:05, Pavel Stehule wrote:\n>>čt 6. 1. 2022 v 14:28 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>>How about using the existing reserved keyword \"in\" followed by \".\" (dot) and then the function parameter name?\n>>\n>>This idea is based on the assumption \"in.\" would always be a syntax error everywhere in both SQL and PL/pgSQL,\n>>so if we would introduce such a syntax, no existing code could be affected, except currently invalid code.\n>>\n>>I wouldn't mind using \"in.\" to refer to IN/OUT/INOUT parameters and not only IN ones, it's a minor confusion that could be >>explained in the docs.\n>\n>You are right, in.outvar looks messy.\n\nI think you misunderstood what I meant, I suggested \"in.outvar\" would actually be OK.\n\n> Moreover, maybe the SQL parser can have a problem with it. \n\nHow could the SQL parser have a problem with it, if \"in\" is currently never followed by \".\" (dot)?\nNot an expert in the SQL parser, trying to understand why it would be a problem.\n\n/Joel",
"msg_date": "Thu, 06 Jan 2022 16:58:57 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable\n references"
},
{
"msg_contents": "čt 6. 1. 2022 v 16:59 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 6, 2022, at 15:05, Pavel Stehule wrote:\n> >>čt 6. 1. 2022 v 14:28 odesílatel Joel Jacobson <joel@compiler.org>\n> napsal:\n> >>How about using the existing reserved keyword \"in\" followed by \".\" (dot)\n> and then the function parameter name?\n> >>\n> >>This idea is based on the assumption \"in.\" would always be a syntax\n> error everywhere in both SQL and PL/pgSQL,\n> >>so if we would introduce such a syntax, no existing code could be\n> affected, except currently invalid code.\n> >>\n> >>I wouldn't mind using \"in.\" to refer to IN/OUT/INOUT parameters and not\n> only IN ones, it's a minor confusion that could be >>explained in the docs.\n> >\n> >You are right, in.outvar looks messy.\n>\n> I think you misunderstood what I meant, I suggested \"in.outvar\" would\n> actually be OK.\n>\n\nI understand well, and I don't think it's nice.\n\nAre there some similar features in other programming languages?\n\n\n> > Moreover, maybe the SQL parser can have a problem with it.\n>\n> How could the SQL parser have a problem with it, if \"in\" is currently\n> never followed by \".\" (dot)?\n> Not an expert in the SQL parser, trying to understand why it would be a\n> problem.\n>\n\nyou can check it. It is true, so IN is usually followed by \"(\", but until\ncheck I am not able to say if there will be an unwanted shift or collision\nor not.\n\nRegards\n\nPavel\n\n\n\n\n\n>\n> /Joel\n>",
"msg_date": "Thu, 6 Jan 2022 17:10:29 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 6, 2022, at 17:10, Pavel Stehule wrote:\n> I understand well, and I don't think it's nice. \n>\n> Are there some similar features in other programming languages? \n\nIt would be similar to \"this\" in Javascript/Java/C++,\nbut instead using \"in\" to access function parameters.\n\nCurrently, we need to prefix the parameter name if it's in conflict with a column name:\n\nCREATE FUNCTION very_long_function_name(id int, some_value text)\nRETURNS boolean\nLANGUAGE sql AS $$\nUPDATE some_table\nSET some_value = very_long_function_name.some_value\nWHERE id = very_long_function_name.id RETURNING TRUE\n$$;\n\nThis is cumbersome as function names can be long, and if changing the function name,\nyou would need to update all occurrences of the function name in the code.\n\nIf we could instead refer to the parameters by prefixing them with \"in.\", we could write:\n\nCREATE FUNCTION very_long_function_name(id int, some_value text)\nRETURNS boolean\nLANGUAGE sql AS $$\nUPDATE some_table\nSET some_value = in.some_value\nWHERE id = in.id RETURNING TRUE\n$$;\n\nI think this would be nice, even if it would only work for IN parameters,\nsince you seldom need to access OUT parameters in the problematic WHERE-clauses anyway.\nI mostly use OUT parameters when setting them on a separate row:\nsome_out_var := some_value;\n...or, when SELECTing INTO an OUT parameter, which wouldn't be a problem.\n\n> you can check it. It is true, so IN is usually followed by \"(\", but until check I am not able to say if there will be an unwanted\n> shift or collision or not.\n\nI checked gram.y, and IN_P is never followed by '.', but not sure if it could cause problems anyway, hope someone with parser knowledge can comment on this.\n\n/Joel",
"msg_date": "Thu, 06 Jan 2022 17:47:48 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable\n references"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> čt 6. 1. 2022 v 16:59 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> How could the SQL parser have a problem with it, if \"in\" is currently\n>> never followed by \".\" (dot)?\n\n> you can check it. It is true, so IN is usually followed by \"(\", but until\n> check I am not able to say if there will be an unwanted shift or collision\n> or not.\n\nEven if it works today, we could be letting ourselves in for future\ntrouble. The SQL standard is a moving target, and they could easily\nstandardize some future syntax involving IN that creates a problem here.\n\nI think we already have two perfectly satisfactory answers:\n* qualify parameters with the function name to disambiguate them;\n* use the ALIAS feature to create an internal, shorter name.\nI see no problem here that is worth locking ourselves into unnecessary\nsyntax assumptions to fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jan 2022 11:55:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 6, 2022, at 17:55, Tom Lane wrote:\n> Even if it works today, we could be letting ourselves in for future\n> trouble. The SQL standard is a moving target, and they could easily\n> standardize some future syntax involving IN that creates a problem here.\n\nPerhaps the \"in.\" notation could be standardized by the SQL standard,\nallowing vendors to use such notation without fear of future trouble?\n\n> I think we already have two perfectly satisfactory answers:\n> * qualify parameters with the function name to disambiguate them;\n> * use the ALIAS feature to create an internal, shorter name.\n\nI would say we have two OK workarounds, far from perfect:\n* Qualifying parameters is too verbose. Function names can be long.\n* Having to remap parameters using ALIAS is cumbersome.\n\nThis problem is one of my top annoyances with PL/pgSQL.\n\n/Joel",
"msg_date": "Thu, 06 Jan 2022 18:17:55 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable\n references"
},
{
"msg_contents": "čt 6. 1. 2022 v 17:48 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 6, 2022, at 17:10, Pavel Stehule wrote:\n> > I understand well, and I don't think it's nice.\n> >\n> > Are there some similar features in other programming languages?\n>\n> It would be similar to \"this\" in Javascript/Java/C++,\n> but instead using \"in\" to access function parameters.\n>\n> Currently, we need to prefix the parameter name if it's in conflict with a\n> column name:\n>\n> CREATE FUNCTION very_long_function_name(id int, some_value text)\n> RETURNS boolean\n> LANGUAGE sql AS $$\n> UPDATE some_table\n> SET some_value = very_long_function_name.some_value\n> WHERE id = very_long_function_name.id RETURNING TRUE\n> $$;\n>\n> This is cumbersome as function names can be long, and if changing the\n> function name,\n> you would need to update all occurrences of the function name in the code.\n>\n> If we could instead refer to the parameters by prefixing them with \"in.\",\n> we could write:\n>\n> CREATE FUNCTION very_long_function_name(id int, some_value text)\n> RETURNS boolean\n> LANGUAGE sql AS $$\n> UPDATE some_table\n> SET some_value = in.some_value\n> WHERE id = in.id RETURNING TRUE\n> $$;\n>\n> I think this would be nice, even if it would only work for IN parameters,\n> since you seldom need to access OUT parameters in the problematic\n> WHERE-clauses anyway.\n> I mostly use OUT parameters when setting them on a separate row:\n> some_out_var := some_value;\n> ...or, when SELECTin INTO an OUT parameter, which wouldn't be a problem.\n>\n\nThere is full agreement in a description of the issue. Just I don't like\nthe proposed solution. The word \"in '' is not too practical (where there\nare out, and inout) - and it goes against the philosophy of ADA, where all\nlabels are parametrized (there is not any buildin label). 
The ADA language\nhas two things that plpgsql has not (unfortunately): a) possibility to\nmodify (configure) compiler by PRAGMA directive, b) possibility to define\nPRAGMA on more levels - package, function, block. The possibility to define\na label dynamically is a better solution (not by some buildin keyword),\nbecause it allows some possibility for the end user to define what he\nprefers. For some cases \"in\" can be ok, but when you have only two out\nvariables, then \"in\" looks a little bit strange, and I prefer \"fx\", other\npeople can prefer \"a\" or \"args\".\n\nThere is no technical problem in the definition of alias of top namespace.\nThe problem is in syntax and in forcing this setting to some set of\nroutines. Theoretically we can have GUC plpgsql.rootns. I can set there\n\"fx\", and you can set there \"in\" and we both can be happy. But the real\nquestion is - how to force this setting to all functions. GUC can be\noverwritten in session, and although you can set this GUC in every\nfunction (by option or by assigned GUC), it is not nice, and somebody can\nforget about this setting. For me, there are more interesting (important)\nissues than the possibility to specify the root namespace that can be nice\nto control. I miss some configuration mechanism independent of GUC that is\nstatic (and that emulates static compile options), that can be assigned to\ndatabase (as synonym for project) or schema (as synonym for module or\npackage). With this mechanism this thread will be significantly shorter,\nand all discussion about plpgsql2 was not.\n\n\n>\n> > you can check it. It is true, so IN is usually followed by \"(\", but\n> until check I am not able to say if there will be an unwanted\n> > shift or collision or not.\n>\n> I checked gram.y, and IN_P is never followed by '.', but not sure if it\n> could cause problems anyway, hope someone with parser knowledge can comment\n> on this.\n>\n> /Joel\n>",
"msg_date": "Thu, 6 Jan 2022 19:03:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "čt 6. 1. 2022 v 18:18 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 6, 2022, at 17:55, Tom Lane wrote:\n> > Even if it works today, we could be letting ourselves in for future\n> > trouble. The SQL standard is a moving target, and they could easily\n> > standardize some future syntax involving IN that creates a problem here.\n>\n> Perhaps the \"in.\" notation could be standardized by the SQL standard,\n> allowing vendors to use such notation without fear of future trouble?\n>\n> > I think we already have two perfectly satisfactory answers:\n> > * qualify parameters with the function name to disambiguate them;\n> > * use the ALIAS feature to create an internal, shorter name.\n>\n> I would say we have two OK workarounds, far from perfect:\n> * Qualifying parameters is too verbose. Function names can be long.\n> * Having to remap parameters using ALIAS is cumbersome.\n>\n> This problem is one of my top annoyances with PL/pgSQL.\n>\n\nIt depends on the programming style. I am accepting so this can be a real\nissue, although I have had this necessity only a few times, and I have not\nreceived any feedback from customers about it.\n\nProbably there can be an agreement so somebody uses it more often and\nsomebody only a few times. If you need it every time, then you need some\nglobal solution. If you use it only a few times, then a local verbose\nsolution will be prefered, and a global solution can be the source of new\nissues.\n\nMaybe they can help if we accept that there are two different problems, and\nwe should design two different solutions.\n\n1. how to set globally plpgsql root namespace\n2. 
how to set locally plpgsql root namespace\n\n@1 can be solved by (very dirty) GUC plpggsql.root_ns_alias (this is just\nfirst shot, nothing more)\n@2 can be solved by #option root_ns_alias or by extending DECLARE by syntax\n\"fx ALIAS FOR LABEL functioname\n\nI accept that both issues are valid, and I don't think we can find a good\ndesign that solves both issues, because there are different objective\nexpectations.\n\nRegards\n\nPavel\n\n\n\n> /Joel\n>\n>\n>\n>",
"msg_date": "Thu, 6 Jan 2022 19:28:44 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": ">\n>\n>\n> I accept that both issues are valid, and I don't think we can find a good\n> design that solves both issues, because there are different objective\n> expectations.\n>\n\nI accept that both issues are valid, and I don't think we can find a\n**one** good design that solves both issues, because there are different\nobjective expectations.\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> /Joel\n>>\n>>\n>>\n>>",
"msg_date": "Thu, 6 Jan 2022 19:32:54 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 6, 2022, at 19:03, Pavel Stehule wrote:\n> The possibility to define a label dynamically is a better solution (not by some buildin keyword),\n> because it allows some possibility for the end user to define what he prefers.\n\nI'm trying to understand why you think a user-defined notation is desirable,\nwhy it wouldn't be better if the SQL standard would endorse a notation,\nso we could all write code in the same way, avoiding ugly GUCs or PRAGMAs altogether?\n\nIf \"in.\" would work, due to \"in\" being a reserved SQL keyword,\ndon't you think the benefits of a SQL standardized solution would outweigh our\npersonal preferences on what word each one of us prefer?\n\n/Joel",
"msg_date": "Thu, 06 Jan 2022 20:02:03 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable\n references"
},
{
"msg_contents": "čt 6. 1. 2022 v 20:03 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 6, 2022, at 19:03, Pavel Stehule wrote:\n> > The possibility to define a label dynamically is a better solution (not\n> by some buildin keyword),\n> > because it allows some possibility for the end user to define what he\n> prefers.\n>\n> I'm trying to understand why you think a user-defined notation is\n> desirable,\n> why it wouldn't be better if the SQL standard would endorse a notation,\n> so we could all write code in the same way, avoiding ugly GUCs or PRAGMAs\n> altogether?\n>\n\nBut there is nothing similar in standard. Standard doesn't specify any\ncolumn or table or label names in the custom area.\n\nThe ADA language, an PL/SQL origin, and the PL/pgSQL origin has not nothing\nsimilar too. Moreover it (ADA language) was designed as a safe, very\nverbose language without implicit conventions.\n\nI think we have different positions, because we see different usage, based\non, probably, a different programming style. I can understand the request\nfor special common notation for access for routine parameters. But the \"in\"\nkeyword for this case is not good, and I really think it is better to give\nsome freedom to the user to choose their own label, if we don't know the\nbest one.\n\n\n\n> If \"in.\" would work, due to \"in\" being a reserved SQL keyword,\n> don't you think the benefits of a SQL standardized solution would outweigh\n> our\n> personal preferences on what word each one of us prefer?\n>\n\nI know that \"in\" is a reserved word in SQL, but I have not any knowledge of\nit being used as alias in SQL functions or in SQL/PSM functions.\n\n\n\n> /Joel\n>",
"msg_date": "Thu, 6 Jan 2022 20:24:17 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 6, 2022, at 20:24, Pavel Stehule wrote:\n> But there is nothing similar in standard.\n> Standard doesn't specify any column or table or label names in the custom area. \n\nI think that's an unfair comparison.\nThis isn't at all the same thing as dictating column or table names.\nI merely suggest reusing an existing reserved keyword.\n\n>>čt 6. 1. 2022 v 20:03 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>>\n>>If \"in.\" would work, due to \"in\" being a reserved SQL keyword,\n>>don't you think the benefits of a SQL standardized solution would outweigh our\n>>personal preferences on what word each one of us prefer?\n>\n>I know that \"in\" is a reserved word in SQL, but I have not any knowledge of it being used as alias in SQL functions or in >SQL/PSM functions. \n\nAre you concerned \"in\" might already be used as an alias somehow?\n\nI did some testing:\n\n\"in\" can be used as a column alias:\n\n=> SELECT 123 AS in;\nin\n-----\n123\n(1 row)\n\nBut it cannot be used as a table alias, which is what matters:\n\n=> WITH in AS (SELECT 1) SELECT * FROM in;\nERROR: syntax error at or near \"in\"\n\n=> SELECT * FROM t in;\nERROR: syntax error at or near \"in\"\n\n=> SELECT * FROM t AS in;\nERROR: syntax error at or near \"in\"\n\nIt's also currently not possible to use it as a PL/pgSQL alias:\n\n=> CREATE FUNCTION f(id int)\nRETURNS void\nLANGUAGE plpgsql AS $$\nDECLARE\nin ALIAS FOR $1;\nBEGIN\nEND\n$$;\nERROR: syntax error at or near \"in\"\nLINE 5: in ALIAS FOR $1;\n\n/Joel",
"msg_date": "Thu, 06 Jan 2022 21:04:17 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable\n references"
},
{
"msg_contents": "čt 6. 1. 2022 v 21:04 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Jan 6, 2022, at 20:24, Pavel Stehule wrote:\n> > But there is nothing similar in standard.\n> > Standard doesn't specify any column or table or label names in the\n> custom area.\n>\n> I think that's an unfair comparison.\n> This isn't at all the same thing as dictating column or table names.\n> I merely suggest reusing an existing reserved keyword.\n>\n> >>čt 6. 1. 2022 v 20:03 odesílatel Joel Jacobson <joel@compiler.org>\n> napsal:\n> >>\n> >>If \"in.\" would work, due to \"in\" being a reserved SQL keyword,\n> >>don't you think the benefits of a SQL standardized solution would\n> outweigh our\n> >>personal preferences on what word each one of us prefer?\n> >\n> >I know that \"in\" is a reserved word in SQL, but I have not any knowledge\n> of it being used as alias in SQL functions or in >SQL/PSM functions.\n>\n> Are you concerned \"in\" might already be used as an alias somehow?\n>\n\nI did not fully correct sentence. I wanted to write \"alias of arguments of\nSQL functions or SQL/PSM functions\". I am sorry.\n\n\n> I did some testing:\n>\n> \"in\" can be used as a column alias:\n>\n> => SELECT 123 AS in;\n> in\n> -----\n> 123\n> (1 row)\n>\n> But it cannot be used as a table alias, which is what matters:\n>\n> => WITH in AS (SELECT 1) SELECT * FROM in;\n> ERROR: syntax error at or near \"in\"\n>\n> => SELECT * FROM t in;\n> ERROR: syntax error at or near \"in\"\n>\n> => SELECT * FROM t AS in;\n> ERROR: syntax error at or near \"in\"\n>\n> It's also currently not possible to use it as a PL/pgSQL alias:\n>\n> => CREATE FUNCTION f(id int)\n> RETURNS void\n> LANGUAGE plpgsql AS $$\n> DECLARE\n> in ALIAS FOR $1;\n> BEGIN\n> END\n> $$;\n> ERROR: syntax error at or near \"in\"\n> LINE 5: in ALIAS FOR $1;\n>\n>\nI didn't say, so \"IN\" cannot be used. Technically, maybe (I didn't write a\npatch). 
I say, semantically - how I understand the meaning of the word \"in\"\nis not good to use it for generic alias of function arguments (when we have\nout arguments too).\n\nPavel\n\n\n\n\n> /Joel\n>\n>",
"msg_date": "Thu, 6 Jan 2022 21:38:38 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Thu, Jan 6, 2022, at 21:38, Pavel Stehule wrote:\n> I say, semantically - how I understand the meaning of the word \"in\" is not good to use it for generic alias of function arguments (when we have out arguments too).\n\nTrying to imagine a situation where you would need a shorthand also for OUT parameters in real-life PL/pgSQL function.\nCan you give an example?\n\nI can only think of situations where it is needed for IN parameters.\n\n/Joel",
"msg_date": "Thu, 06 Jan 2022 22:12:44 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable\n references"
},
{
"msg_contents": "On Thu, Jan 06, 2022 at 05:05:32PM +0800, Julien Rouhaud wrote:\n>\n> Anyway, the only committer that showed some interest in the feature is Michael,\n> and he seemed ok in principle with the \"alias-implementation\" approach.\n> Michael, did you have a look at this version ([1]), or do you think it should\n> simply be rejected?\n\nAfter a couple of months I see that there is unfortunately no agreement on this\npatch, or even its need.\n\nI think we should close the patch as Rejected, and probably add an entry in the\n\"Feature we do not want\" wiki page.\n\nI will do that in a few days unless someone shows up with new arguments.\n\n\n",
"msg_date": "Sat, 5 Mar 2022 16:54:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sat, Mar 05, 2022 at 04:54:18PM +0800, Julien Rouhaud wrote:\n> On Thu, Jan 06, 2022 at 05:05:32PM +0800, Julien Rouhaud wrote:\n>> Anyway, the only committer that showed some interest in the feature is Michael,\n>> and he seemed ok in principle with the \"alias-implementation\" approach.\n>> Michael, did you have a look at this version ([1]), or do you think it should\n>> simply be rejected?\n> \n> After a couple of months I see that there is unfortunately no agreement on this\n> patch, or even its need.\n\nI got a short look at what was proposed in the patch a couple of\nmonths ago, and still found the implementation confusing with the way\naliases are handled, particularly when it came to several layers of\npl/pgsql. I am fine to let it go for now.\n--\nMichael",
"msg_date": "Sat, 5 Mar 2022 19:31:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Sat, Mar 05, 2022 at 07:31:53PM +0900, Michael Paquier wrote:\n> I got a short look at what was proposed in the patch a couple of\n> months ago, and still found the implementation confusing with the way\n> aliases are handled, particularly when it came to several layers of\n> pl/pgsql. I am fine to let it go for now.\n\nJust had an extra look at the patch, still same impression. So done\nthis way.\n--\nMichael",
"msg_date": "Mon, 7 Mar 2022 11:27:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 11:27:14AM +0900, Michael Paquier wrote:\n> On Sat, Mar 05, 2022 at 07:31:53PM +0900, Michael Paquier wrote:\n> > I got a short look at what was proposed in the patch a couple of\n> > months ago, and still found the implementation confusing with the way\n> > aliases are handled, particularly when it came to several layers of\n> > pl/pgsql. I am fine to let it go for now.\n>\n> Just had an extra look at the patch, still same impression. So done\n> this way.\n\nI was actually waiting a bit to make sure that Pavel could read the thread,\nsince it was the weekend and right now it's 3:30 AM in Czech Republic...\n\n\n",
"msg_date": "Mon, 7 Mar 2022 10:31:40 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 10:31:40AM +0800, Julien Rouhaud wrote:\n> I was actually waiting a bit to make sure that Pavel could read the thread,\n> since it was the weekend and right now it's 3:30 AM in Czech Republic...\n\nSorry about that. I have reset the state of the patch.\n--\nMichael",
"msg_date": "Mon, 7 Mar 2022 14:06:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "Hi\n\npo 7. 3. 2022 v 3:31 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Mon, Mar 07, 2022 at 11:27:14AM +0900, Michael Paquier wrote:\n> > On Sat, Mar 05, 2022 at 07:31:53PM +0900, Michael Paquier wrote:\n> > > I got a short look at what was proposed in the patch a couple of\n> > > months ago, and still found the implementation confusing with the way\n> > > aliases are handled, particularly when it came to several layers of\n> > > pl/pgsql. I am fine to let it go for now.\n> >\n> > Just had an extra look at the patch, still same impression. So done\n> > this way.\n>\n> I was actually waiting a bit to make sure that Pavel could read the\n> thread,\n> since it was the weekend and right now it's 3:30 AM in Czech Republic...\n>\n\nthis patch should be rejected. There is no consensus.\n\nRegards\n\nPavel\n\nHipo 7. 3. 2022 v 3:31 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:On Mon, Mar 07, 2022 at 11:27:14AM +0900, Michael Paquier wrote:\n> On Sat, Mar 05, 2022 at 07:31:53PM +0900, Michael Paquier wrote:\n> > I got a short look at what was proposed in the patch a couple of\n> > months ago, and still found the implementation confusing with the way\n> > aliases are handled, particularly when it came to several layers of\n> > pl/pgsql. I am fine to let it go for now.\n>\n> Just had an extra look at the patch, still same impression. So done\n> this way.\n\nI was actually waiting a bit to make sure that Pavel could read the thread,\nsince it was the weekend and right now it's 3:30 AM in Czech Republic...this patch should be rejected. There is no consensus.RegardsPavel",
"msg_date": "Mon, 7 Mar 2022 06:35:45 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 06:35:45AM +0100, Pavel Stehule wrote:\n> \n> this patch should be rejected. There is no consensus.\n\nThanks for the confirmation, I will take care of it!\n\n\n",
"msg_date": "Mon, 7 Mar 2022 13:48:55 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql feature request: shorthand for argument and local\n variable references"
}
] |
[
{
"msg_contents": "Hi,\n\n\nCurrently, EXPLAIN is the only way to know whether the plan is generic \nor custom according to the manual of PREPARE.\n\n https://www.postgresql.org/docs/devel/sql-prepare.html\n\nAfter commit d05b172, we can also use pg_prepared_statements view to \nexamine the plan types.\n\nHow about adding this explanation like the attached patch?\n\n\nRegards,\n\n--\nAtsushi Torikoshi",
"msg_date": "Wed, 18 Nov 2020 11:04:34 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[doc] adding way to examine the plan type of prepared statements"
},
{
"msg_contents": "On 2020-11-18 11:04, torikoshia wrote:\n> Hi,\n> \n> \n> Currently, EXPLAIN is the only way to know whether the plan is generic\n> or custom according to the manual of PREPARE.\n> \n> https://www.postgresql.org/docs/devel/sql-prepare.html\n> \n> After commit d05b172, we can also use pg_prepared_statements view to\n> examine the plan types.\n> \n> How about adding this explanation like the attached patch?\n\nSorry, but on second thought, since it seems better to add\nthe explanation to the current description of pg_prepared_statements,\nI modified the patch.\n\n\nRegards,\n\n--\nAtsushi Torikoshi",
"msg_date": "Thu, 19 Nov 2020 15:19:26 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [doc] adding way to examine the plan type of prepared statements"
},
{
"msg_contents": "Hi Torikoshi-san,\n\nOn Thu, Nov 19, 2020 at 3:19 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> On 2020-11-18 11:04, torikoshia wrote:\n> > Hi,\n> >\n> >\n> > Currently, EXPLAIN is the only way to know whether the plan is generic\n> > or custom according to the manual of PREPARE.\n> >\n> > https://www.postgresql.org/docs/devel/sql-prepare.html\n> >\n> > After commit d05b172, we can also use pg_prepared_statements view to\n> > examine the plan types.\n> >\n> > How about adding this explanation like the attached patch?\n>\n> Sorry, but on second thought, since it seems better to add\n> the explanation to the current description of pg_prepared_statements,\n> I modified the patch.\n>\n\nYou sent in your patch, v2-0001-add-way-to-examine-plan-type.patch to\npgsql-hackers on Nov 19, but you did not post it to the next\nCommitFest[1]. If this was intentional, then you need to take no\naction. However, if you want your patch to be reviewed as part of the\nupcoming CommitFest, then you need to add it yourself before\n2021-01-01 AoE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 19:47:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [doc] adding way to examine the plan type of prepared statements"
}
] |
[
{
"msg_contents": "Hi,\n\nAFAIU, when the planner statistics are updated, generic plans are \ninvalidated and PostgreSQL recreates. However, the manual doesn't seem \nto explain it explicitly.\n\n https://www.postgresql.org/docs/devel/sql-prepare.html\n\nI guess this case is included in 'whenever database objects used in the \nstatement have definitional (DDL) changes undergone', but I feel it's \nhard to infer.\n\nSince updates of the statistics can often happen, how about describing \nthis case explicitly like an attached patch?\n\n\nRegards,\n\n--\nAtsushi Torikoshi",
"msg_date": "Wed, 18 Nov 2020 11:04:53 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[doc] plan invalidation when statistics are update"
},
{
    "msg_contents": "\n\nOn 2020/11/18 11:04, torikoshia wrote:\n> Hi,\n> \n> AFAIU, when the planner statistics are updated, generic plans are invalidated and PostgreSQL recreates. However, the manual doesn't seem to explain it explicitly.\n> \n> https://www.postgresql.org/docs/devel/sql-prepare.html\n> \n> I guess this case is included in 'whenever database objects used in the statement have definitional (DDL) changes undergone', but I feel it's hard to infer.\n> \n> Since updates of the statistics can often happen, how about describing this case explicitly like an attached patch?\n\n+1 to add that note.\n\n- statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n+ statement. For example, when the planner statistics of the statement\n+ are updated, <productname>PostgreSQL</productname> re-analyzes and\n+ re-plans the statement.\n\nI don't think \"For example,\" is necessary.\n\n\"planner statistics of the statement\" sounds vague? Does the statement\nis re-analyzed and re-planned only when the planner statistics of database\nobjects used in the statement are updated? If yes, we should describe\nthat to make the note a bit more explicitly?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 18 Nov 2020 11:35:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [doc] plan invalidation when statistics are update"
},
{
"msg_contents": "On 2020-11-18 11:35, Fujii Masao wrote:\n\nThanks for your comment!\n\n> On 2020/11/18 11:04, torikoshia wrote:\n>> Hi,\n>> \n>> AFAIU, when the planner statistics are updated, generic plans are \n>> invalidated and PostgreSQL recreates. However, the manual doesn't seem \n>> to explain it explicitly.\n>> \n>> https://www.postgresql.org/docs/devel/sql-prepare.html\n>> \n>> I guess this case is included in 'whenever database objects used in \n>> the statement have definitional (DDL) changes undergone', but I feel \n>> it's hard to infer.\n>> \n>> Since updates of the statistics can often happen, how about describing \n>> this case explicitly like an attached patch?\n> \n> +1 to add that note.\n> \n> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/> \n> changes\n> + statement. For example, when the planner statistics of the \n> statement\n> + are updated, <productname>PostgreSQL</productname> re-analyzes and\n> + re-plans the statement.\n> \n> I don't think \"For example,\" is necessary.\n> \n> \"planner statistics of the statement\" sounds vague? Does the statement\n> is re-analyzed and re-planned only when the planner statistics of \n> database\n> objects used in the statement are updated? If yes, we should describe\n> that to make the note a bit more explicitly?\n\nYes. As far as I confirmed, updating statistics which are not used in\nprepared statements doesn't trigger re-analyze and re-plan.\n\nSince plan invalidations for DDL changes and statistcal changes are \ncaused\nby PlanCacheRelCallback(Oid 'relid'), only the prepared statements using\n'relid' relation seem invalidated.\n\nAttached updated patch.\n\n\nRegards,\n\n-\nAtsushi Torikoshi",
"msg_date": "Thu, 19 Nov 2020 14:33:54 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [doc] plan invalidation when statistics are update"
},
{
"msg_contents": "\n\nOn 2020/11/19 14:33, torikoshia wrote:\n> On 2020-11-18 11:35, Fujii Masao wrote:\n> \n> Thanks for your comment!\n> \n>> On 2020/11/18 11:04, torikoshia wrote:\n>>> Hi,\n>>>\n>>> AFAIU, when the planner statistics are updated, generic plans are invalidated and PostgreSQL recreates. However, the manual doesn't seem to explain it explicitly.\n>>>\n>>> https://www.postgresql.org/docs/devel/sql-prepare.html\n>>>\n>>> I guess this case is included in 'whenever database objects used in the statement have definitional (DDL) changes undergone', but I feel it's hard to infer.\n>>>\n>>> Since updates of the statistics can often happen, how about describing this case explicitly like an attached patch?\n>>\n>> +1 to add that note.\n>>\n>> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n>> + statement. For example, when the planner statistics of the statement\n>> + are updated, <productname>PostgreSQL</productname> re-analyzes and\n>> + re-plans the statement.\n>>\n>> I don't think \"For example,\" is necessary.\n>>\n>> \"planner statistics of the statement\" sounds vague? Does the statement\n>> is re-analyzed and re-planned only when the planner statistics of database\n>> objects used in the statement are updated? If yes, we should describe\n>> that to make the note a bit more explicitly?\n> \n> Yes. As far as I confirmed, updating statistics which are not used in\n> prepared statements doesn't trigger re-analyze and re-plan.\n> \n> Since plan invalidations for DDL changes and statistcal changes are caused\n> by PlanCacheRelCallback(Oid 'relid'), only the prepared statements using\n> 'relid' relation seem invalidated.> \n> Attached updated patch.\n\nThanks for confirming that and updating the patch!\nBarring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 24 Nov 2020 23:14:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [doc] plan invalidation when statistics are update"
},
{
    "msg_contents": "\n\nOn 2020/11/24 23:14, Fujii Masao wrote:\n> \n> \n> On 2020/11/19 14:33, torikoshia wrote:\n>> On 2020-11-18 11:35, Fujii Masao wrote:\n>>\n>> Thanks for your comment!\n>>\n>>> On 2020/11/18 11:04, torikoshia wrote:\n>>>> Hi,\n>>>>\n>>>> AFAIU, when the planner statistics are updated, generic plans are invalidated and PostgreSQL recreates. However, the manual doesn't seem to explain it explicitly.\n>>>>\n>>>> https://www.postgresql.org/docs/devel/sql-prepare.html\n>>>>\n>>>> I guess this case is included in 'whenever database objects used in the statement have definitional (DDL) changes undergone', but I feel it's hard to infer.\n>>>>\n>>>> Since updates of the statistics can often happen, how about describing this case explicitly like an attached patch?\n>>>\n>>> +1 to add that note.\n>>>\n>>> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n>>> + statement. For example, when the planner statistics of the statement\n>>> + are updated, <productname>PostgreSQL</productname> re-analyzes and\n>>> + re-plans the statement.\n>>>\n>>> I don't think \"For example,\" is necessary.\n>>>\n>>> \"planner statistics of the statement\" sounds vague? Does the statement\n>>> is re-analyzed and re-planned only when the planner statistics of database\n>>> objects used in the statement are updated? If yes, we should describe\n>>> that to make the note a bit more explicitly?\n>>\n>> Yes. As far as I confirmed, updating statistics which are not used in\n>> prepared statements doesn't trigger re-analyze and re-plan.\n>>\n>> Since plan invalidations for DDL changes and statistcal changes are caused\n>> by PlanCacheRelCallback(Oid 'relid'), only the prepared statements using\n>> 'relid' relation seem invalidated.> \n> Attached updated patch.\n> \n> Thanks for confirming that and updating the patch!\n\n force re-analysis and re-planning of the statement before using it\n whenever database objects used in the statement have undergone\n definitional (DDL) changes since the previous use of the prepared\n- statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n+ statement. Similarly, whenever the planner statistics of database\n+ objects used in the statement have updated, re-analysis and re-planning\n+ happen.\n\n\"been\" should be added between \"have\" and \"updated\" in the above \"objects\n used in the statement have updated\"?\n\nI'm inclined to add \"since the previous use of the prepared statement\" into\nalso the second description, to make it clear. But if we do that, it's better\nto merge the above two description into one, as follows?\n\n whenever database objects used in the statement have undergone\n- definitional (DDL) changes since the previous use of the prepared\n+ definitional (DDL) changes or the planner statistics of them have\n+ been updated since the previous use of the prepared\n statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 25 Nov 2020 14:13:17 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [doc] plan invalidation when statistics are update"
},
{
    "msg_contents": "On Wed, Nov 25, 2020 at 1:13 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/11/24 23:14, Fujii Masao wrote:\n> >\n> >\n> > On 2020/11/19 14:33, torikoshia wrote:\n> >> On 2020-11-18 11:35, Fujii Masao wrote:\n> >>\n> >> Thanks for your comment!\n> >>\n> >>> On 2020/11/18 11:04, torikoshia wrote:\n> >>>> Hi,\n> >>>>\n> >>>> AFAIU, when the planner statistics are updated, generic plans are\n> invalidated and PostgreSQL recreates. However, the manual doesn't seem to\n> explain it explicitly.\n> >>>>\n> >>>> https://www.postgresql.org/docs/devel/sql-prepare.html\n> >>>>\n> >>>> I guess this case is included in 'whenever database objects used in\n> the statement have definitional (DDL) changes undergone', but I feel it's\n> hard to infer.\n> >>>>\n> >>>> Since updates of the statistics can often happen, how about\n> describing this case explicitly like an attached patch?\n> >>>\n> >>> +1 to add that note.\n> >>>\n> >>> - statement. Also, if the value of <xref\n> linkend=\"guc-search-path\"/> changes\n> >>> + statement. For example, when the planner statistics of the\n> statement\n> >>> + are updated, <productname>PostgreSQL</productname> re-analyzes and\n> >>> + re-plans the statement.\n> >>>\n> >>> I don't think \"For example,\" is necessary.\n> >>>\n> >>> \"planner statistics of the statement\" sounds vague? Does the statement\n> >>> is re-analyzed and re-planned only when the planner statistics of\n> database\n> >>> objects used in the statement are updated? If yes, we should describe\n> >>> that to make the note a bit more explicitly?\n> >>\n> >> Yes. As far as I confirmed, updating statistics which are not used in\n> >> prepared statements doesn't trigger re-analyze and re-plan.\n> >>\n> >> Since plan invalidations for DDL changes and statistcal changes are\n> caused\n> >> by PlanCacheRelCallback(Oid 'relid'), only the prepared statements using\n> >> 'relid' relation seem invalidated.> Attached updated patch.\n> >\n> > Thanks for confirming that and updating the patch!\n>\n> force re-analysis and re-planning of the statement before using it\n> whenever database objects used in the statement have undergone\n> definitional (DDL) changes since the previous use of the prepared\n> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/>\n> changes\n> + statement. Similarly, whenever the planner statistics of database\n> + objects used in the statement have updated, re-analysis and re-planning\n> + happen.\n>\n> \"been\" should be added between \"have\" and \"updated\" in the above \"objects\n> used in the statement have updated\"?\n>\n> I'm inclined to add \"since the previous use of the prepared statement\" into\n> also the second description, to make it clear. But if we do that, it's\n> better\n> to merge the above two description into one, as follows?\n>\n> whenever database objects used in the statement have undergone\n> - definitional (DDL) changes since the previous use of the prepared\n> + definitional (DDL) changes or the planner statistics of them have\n> + been updated since the previous use of the prepared\n> statement. Also, if the value of <xref linkend=\"guc-search-path\"/>\n> changes\n>\n>\n> +1 for documenting this case since I just spent time reading code last\nweek for it. and\n+1 for the above sentence to describe this case.\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Nov 25, 2020 at 1:13 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\nOn 2020/11/24 23:14, Fujii Masao wrote:\n> \n> \n> On 2020/11/19 14:33, torikoshia wrote:\n>> On 2020-11-18 11:35, Fujii Masao wrote:\n>>\n>> Thanks for your comment!\n>>\n>>> On 2020/11/18 11:04, torikoshia wrote:\n>>>> Hi,\n>>>>\n>>>> AFAIU, when the planner statistics are updated, generic plans are invalidated and PostgreSQL recreates. However, the manual doesn't seem to explain it explicitly.\n>>>>\n>>>> https://www.postgresql.org/docs/devel/sql-prepare.html\n>>>>\n>>>> I guess this case is included in 'whenever database objects used in the statement have definitional (DDL) changes undergone', but I feel it's hard to infer.\n>>>>\n>>>> Since updates of the statistics can often happen, how about describing this case explicitly like an attached patch?\n>>>\n>>> +1 to add that note.\n>>>\n>>> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n>>> + statement. For example, when the planner statistics of the statement\n>>> + are updated, <productname>PostgreSQL</productname> re-analyzes and\n>>> + re-plans the statement.\n>>>\n>>> I don't think \"For example,\" is necessary.\n>>>\n>>> \"planner statistics of the statement\" sounds vague? Does the statement\n>>> is re-analyzed and re-planned only when the planner statistics of database\n>>> objects used in the statement are updated? If yes, we should describe\n>>> that to make the note a bit more explicitly?\n>>\n>> Yes. As far as I confirmed, updating statistics which are not used in\n>> prepared statements doesn't trigger re-analyze and re-plan.\n>>\n>> Since plan invalidations for DDL changes and statistcal changes are caused\n>> by PlanCacheRelCallback(Oid 'relid'), only the prepared statements using\n>> 'relid' relation seem invalidated.> Attached updated patch.\n> \n> Thanks for confirming that and updating the patch!\n\n force re-analysis and re-planning of the statement before using it\n whenever database objects used in the statement have undergone\n definitional (DDL) changes since the previous use of the prepared\n- statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n+ statement. Similarly, whenever the planner statistics of database\n+ objects used in the statement have updated, re-analysis and re-planning\n+ happen.\n\n\"been\" should be added between \"have\" and \"updated\" in the above \"objects\n used in the statement have updated\"?\n\nI'm inclined to add \"since the previous use of the prepared statement\" into\nalso the second description, to make it clear. But if we do that, it's better\nto merge the above two description into one, as follows?\n\n whenever database objects used in the statement have undergone\n- definitional (DDL) changes since the previous use of the prepared\n+ definitional (DDL) changes or the planner statistics of them have\n+ been updated since the previous use of the prepared\n statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n\n+1 for documenting this case since I just spent time reading code last week for it. and+1 for the above sentence to describe this case.-- Best RegardsAndy Fan",
"msg_date": "Wed, 25 Nov 2020 13:24:14 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [doc] plan invalidation when statistics are update"
},
{
    "msg_contents": "On 2020-11-25 14:13, Fujii Masao wrote:\n> On 2020/11/24 23:14, Fujii Masao wrote:\n>> \n>> \n>> On 2020/11/19 14:33, torikoshia wrote:\n>>> On 2020-11-18 11:35, Fujii Masao wrote:\n>>> \n>>> Thanks for your comment!\n>>> \n>>>> On 2020/11/18 11:04, torikoshia wrote:\n>>>>> Hi,\n>>>>> \n>>>>> AFAIU, when the planner statistics are updated, generic plans are \n>>>>> invalidated and PostgreSQL recreates. However, the manual doesn't \n>>>>> seem to explain it explicitly.\n>>>>> \n>>>>> https://www.postgresql.org/docs/devel/sql-prepare.html\n>>>>> \n>>>>> I guess this case is included in 'whenever database objects used in \n>>>>> the statement have definitional (DDL) changes undergone', but I \n>>>>> feel it's hard to infer.\n>>>>> \n>>>>> Since updates of the statistics can often happen, how about \n>>>>> describing this case explicitly like an attached patch?\n>>>> \n>>>> +1 to add that note.\n>>>> \n>>>> - statement. Also, if the value of <xref \n>>>> linkend=\"guc-search-path\"/> changes\n>>>> + statement. For example, when the planner statistics of the \n>>>> statement\n>>>> + are updated, <productname>PostgreSQL</productname> re-analyzes \n>>>> and\n>>>> + re-plans the statement.\n>>>> \n>>>> I don't think \"For example,\" is necessary.\n>>>> \n>>>> \"planner statistics of the statement\" sounds vague? Does the \n>>>> statement\n>>>> is re-analyzed and re-planned only when the planner statistics of \n>>>> database\n>>>> objects used in the statement are updated? If yes, we should \n>>>> describe\n>>>> that to make the note a bit more explicitly?\n>>> \n>>> Yes. As far as I confirmed, updating statistics which are not used in\n>>> prepared statements doesn't trigger re-analyze and re-plan.\n>>> \n>>> Since plan invalidations for DDL changes and statistcal changes are \n>>> caused\n>>> by PlanCacheRelCallback(Oid 'relid'), only the prepared statements \n>>> using\n>>> 'relid' relation seem invalidated.> Attached updated patch.\n>> \n>> Thanks for confirming that and updating the patch!\n> \n> force re-analysis and re-planning of the statement before using it\n> whenever database objects used in the statement have undergone\n> definitional (DDL) changes since the previous use of the prepared\n> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/> \n> changes\n> + statement. Similarly, whenever the planner statistics of database\n> + objects used in the statement have updated, re-analysis and \n> re-planning\n> + happen.\n> \n> \"been\" should be added between \"have\" and \"updated\" in the above \n> \"objects\n> used in the statement have updated\"?\n\nYou're right.\n\n> I'm inclined to add \"since the previous use of the prepared statement\" \n> into\n> also the second description, to make it clear. But if we do that, it's \n> better\n> to merge the above two description into one, as follows?\n> \n> whenever database objects used in the statement have undergone\n> - definitional (DDL) changes since the previous use of the prepared\n> + definitional (DDL) changes or the planner statistics of them have\n> + been updated since the previous use of the prepared\n> statement. Also, if the value of <xref linkend=\"guc-search-path\"/> \n> changes\n\nThanks, it seems better.\n\n\nRegards,\n\n\n",
"msg_date": "Thu, 26 Nov 2020 14:30:34 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [doc] plan invalidation when statistics are update"
},
{
    "msg_contents": "\n\nOn 2020/11/26 14:30, torikoshia wrote:\n> On 2020-11-25 14:13, Fujii Masao wrote:\n>> On 2020/11/24 23:14, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/11/19 14:33, torikoshia wrote:\n>>>> On 2020-11-18 11:35, Fujii Masao wrote:\n>>>>\n>>>> Thanks for your comment!\n>>>>\n>>>>> On 2020/11/18 11:04, torikoshia wrote:\n>>>>>> Hi,\n>>>>>>\n>>>>>> AFAIU, when the planner statistics are updated, generic plans are invalidated and PostgreSQL recreates. However, the manual doesn't seem to explain it explicitly.\n>>>>>>\n>>>>>> https://www.postgresql.org/docs/devel/sql-prepare.html\n>>>>>>\n>>>>>> I guess this case is included in 'whenever database objects used in the statement have definitional (DDL) changes undergone', but I feel it's hard to infer.\n>>>>>>\n>>>>>> Since updates of the statistics can often happen, how about describing this case explicitly like an attached patch?\n>>>>>\n>>>>> +1 to add that note.\n>>>>>\n>>>>> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n>>>>> + statement. For example, when the planner statistics of the statement\n>>>>> + are updated, <productname>PostgreSQL</productname> re-analyzes and\n>>>>> + re-plans the statement.\n>>>>>\n>>>>> I don't think \"For example,\" is necessary.\n>>>>>\n>>>>> \"planner statistics of the statement\" sounds vague? Does the statement\n>>>>> is re-analyzed and re-planned only when the planner statistics of database\n>>>>> objects used in the statement are updated? If yes, we should describe\n>>>>> that to make the note a bit more explicitly?\n>>>>\n>>>> Yes. As far as I confirmed, updating statistics which are not used in\n>>>> prepared statements doesn't trigger re-analyze and re-plan.\n>>>>\n>>>> Since plan invalidations for DDL changes and statistcal changes are caused\n>>>> by PlanCacheRelCallback(Oid 'relid'), only the prepared statements using\n>>>> 'relid' relation seem invalidated.> Attached updated patch.\n>>>\n>>> Thanks for confirming that and updating the patch!\n>>\n>> force re-analysis and re-planning of the statement before using it\n>> whenever database objects used in the statement have undergone\n>> definitional (DDL) changes since the previous use of the prepared\n>> - statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n>> + statement. Similarly, whenever the planner statistics of database\n>> + objects used in the statement have updated, re-analysis and re-planning\n>> + happen.\n>>\n>> \"been\" should be added between \"have\" and \"updated\" in the above \"objects\n>> used in the statement have updated\"?\n> \n> You're right.\n> \n>> I'm inclined to add \"since the previous use of the prepared statement\" into\n>> also the second description, to make it clear. But if we do that, it's better\n>> to merge the above two description into one, as follows?\n>>\n>> whenever database objects used in the statement have undergone\n>> - definitional (DDL) changes since the previous use of the prepared\n>> + definitional (DDL) changes or the planner statistics of them have\n>> + been updated since the previous use of the prepared\n>> statement. Also, if the value of <xref linkend=\"guc-search-path\"/> changes\n> \n> Thanks, it seems better.\n\nPushed. Thanks!\n\n\n> +1 for documenting this case since I just spent time reading code last week for it. and\n> +1 for the above sentence to describe this case.\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 26 Nov 2020 16:21:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [doc] plan invalidation when statistics are update"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nRecently, we encountered an issue where, if the database grants so many roles that `datacl` is toasted, vacuum freeze will fail with the error \"wrong tuple length\".\r\n\r\nTo reproduce the issue, please follow the steps below:\r\nCREATE DATABASE vacuum_freeze_test;\r\n\r\n-- create helper function\r\ncreate or replace function toast_pg_database_datacl() returns text as $body$\r\ndeclare\r\n mycounter int;\r\nbegin\r\n for mycounter in select i from generate_series(1, 2800) i loop\r\n execute 'create role aaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb' || mycounter;\r\n execute 'grant ALL on database vacuum_freeze_test to aaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb' || mycounter;\r\n end loop;\r\n return 'ok';\r\nend;\r\n$body$ language plpgsql volatile strict;\r\n\r\n-- create roles and grant on the database\r\nselect toast_pg_database_datacl();\r\n\r\n-- connect to the database\r\n\\c vacuum_freeze_test\r\n\r\n-- check the size of column datacl\r\nselect datname, pg_column_size(datacl) as datacl_size, age(datfrozenxid) from pg_database where datname='vacuum_freeze_test';\r\n\r\n-- execute vacuum freeze and it should raise \"wrong tuple length\"\r\nvacuum freeze;\r\nThe root cause is that vac_update_datfrozenxid fetches a copy of the pg_database tuple from the system cache.\r\nBut the system cache flattens any toast attributes, which causes the length check to fail in heap_inplace_update.\r\n\r\nA patch is attached, co-authored with Ashwin Agrawal; the solution is to fetch the pg_database tuple from disk instead of the system cache if needed.",
"msg_date": "Wed, 18 Nov 2020 06:32:51 +0000",
"msg_from": "Junfeng Yang <yjerome@vmware.com>",
"msg_from_op": true,
"msg_subject": "vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "Hi hackers,\r\n\r\nCan anyone help to verify this?",
"msg_date": "Tue, 24 Nov 2020 01:38:40 +0000",
"msg_from": "Junfeng Yang <yjerome@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 2:07 PM Junfeng Yang <yjerome@vmware.com> wrote:\n>\n> Hi hackers,\n>\n> Can anyone help to verify this?\n>\n\nI think one way to get feedback is to register this patch for the next\ncommit fest (https://commitfest.postgresql.org/31/)\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Nov 2020 19:30:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 06:32:51AM +0000, Junfeng Yang wrote:\n> A path is attached co auther by Ashwin Agrawal, the solution is to\n> fetch the pg_database tuple from disk instead of system cache if\n> needed.\n\nYeah, we had better fix and I guess backpatch something here. That's\nannoying.\n\n+DROP DATABASE IF EXISTS vacuum_freeze_test;\n+NOTICE: database \"vacuum_freeze_test\" does not exist, skipping\n+CREATE DATABASE vacuum_freeze_test;\nThis test is costly. Extracting that into a TAP test would be more\nadapted, still I am not really convinced that this is a case worth\nspending extra cycles on.\n\n+ tuple = systable_getnext(sscan);\n+ if (HeapTupleIsValid(tuple))\n+ tuple = heap_copytuple(tuple);\nThat's an unsafe pattern as the tuple scanned could finish by being\ninvalid.\n\nOne thing that strikes me as unwise is that we could run into similar\nproblems with vac_update_relstats() in the future, and there have been\nrecent talks about having more toast tables like for pg_class. That\nis not worth caring about on stable branches because it is not an\nactive problem there, but we could do something better on HEAD.\n\n+ /* Get the pg_database tuple to scribble on */\n+ cached_tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(MyDatabaseId));\n+ cached_dbform = (Form_pg_database) GETSTRUCT(cached_tuple);\nEr, shouldn't we *not* use the system cache at all here then? Or am I\nmissing something obvious? Please see the attached, that is more\nsimple.\n--\nMichael",
"msg_date": "Wed, 25 Nov 2020 16:12:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 04:12:15PM +0900, Michael Paquier wrote:\n> Yeah, we had better fix and I guess backpatch something here. That's\n> annoying.\n\nI have considered that over the weekend, and let's be conservative.\npg_database has gained a toast table since 12, and before that we\nwould just have an error \"row is too big\", but nobody has really\ncomplained about that either.\n\n> One thing that strikes me as unwise is that we could run into similar\n> problems with vac_update_relstats() in the future, and there have been\n> recent talks about having more toast tables like for pg_class. That\n> is not worth caring about on stable branches because it is not an\n> active problem there, but we could do something better on HEAD.\n\nFor now, I have added just a comment at the top of\nheap_inplace_update() to warn callers.\n\nJunfeng and Ashwin also mentioned to me off-list that their patch used\na second copy for performance reasons, but I don't see why this could\nbecome an issue as we update each pg_database row only once a job is\ndone. So I'd like to apply something like the attached on HEAD,\ncomments are welcome. \n--\nMichael",
"msg_date": "Mon, 30 Nov 2020 10:27:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 5:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> > One thing that strikes me as unwise is that we could run into similar\n> > problems with vac_update_relstats() in the future, and there have been\n> > recent talks about having more toast tables like for pg_class. That\n> > is not worth caring about on stable branches because it is not an\n> > active problem there, but we could do something better on HEAD.\n>\n> For now, I have added just a comment at the top of\n> heap_inplace_update() to warn callers.\n>\n\nI am thinking if there is some way to assert this aspect, but seems no way.\nSo, yes, having at least a comment is good for now.\n\nJunfeng and Ashwin also mentioned to me off-list that their patch used\n> a second copy for performance reasons, but I don't see why this could\n> become an issue as we update each pg_database row only once a job is\n> done. So I'd like to apply something like the attached on HEAD,\n> comments are welcome.\n\n\nYes the attached patch looks good to me for PostgreSQL. Thanks Michael.\n(In Greenplum, due to per table dispatch to segments, during database wide\nvacuum this function gets called per table instead of only at the end, hence\nwe were trying to be conservative.)",
"msg_date": "Mon, 30 Nov 2020 16:37:14 -0800",
"msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 04:37:14PM -0800, Ashwin Agrawal wrote:\n> I am thinking if there is some way to assert this aspect, but seems no way.\n> So, yes, having at least a comment is good for now.\n\nYeah, I have been thinking about this one without coming to a clear\nidea, but that would be nice.\n\n> Yes the attached patch looks good to me for PostgreSQL. Thanks Michael.\n\nOkay, committed to HEAD then.\n--\nMichael",
"msg_date": "Tue, 8 Dec 2020 12:32:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "Hi Michael,\n\nI got a customer case that matches this issue. The customer is on an\nold version of 12, but I see the patch only went into 14:\n\nOn Tue, 8 Dec 2020 at 00:32, Michael Paquier <michael@paquier.xyz> wrote:\n> > Yes the attached patch looks good to me for PostgreSQL. Thanks Michael.\n> Okay, committed to HEAD then.\n\ncommit 947789f1f5fb61daf663f26325cbe7cad8197d58\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: Tue Dec 8 12:13:19 2020 +0900\n\n Avoid using tuple from syscache for update of pg_database.datfrozenxid\n\nWould it be possible to backport the patch to 12 and 13?\n\n-- \nArthur Nascimento\nEDB\n\n\n",
"msg_date": "Thu, 19 Jan 2023 10:42:47 -0300",
"msg_from": "Arthur Nascimento <tureba@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 10:42:47AM -0300, Arthur Nascimento wrote:\n> Would it be possible to backport the patch to 12 and 13?\n\nThis was recently back-patched [0] [1] [2].\n\n[0] https://postgr.es/m/Y70XNVbUWQsR2Car@paquier.xyz\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c0ee694\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=72b6098\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 19 Jan 2023 09:10:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
},
{
"msg_contents": "On Thu, 19 Jan 2023 at 14:10, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> This was recently back-patched [0] [1] [2].\n\nOh, I see that now. Thanks! Sorry for the noise.\n\n--\nArthur Nascimento\nEDB\n\n\n",
"msg_date": "Thu, 19 Jan 2023 14:15:52 -0300",
"msg_from": "Arthur Nascimento <tureba@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vac_update_datfrozenxid will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute."
}
] |
[
{
"msg_contents": "Today, the docs on\nhttps://www.postgresql.org/docs/devel/ are rebuilt and deployed by the\nbuildfarm automatically every 4 hours.\n\nIf there are no changes at all made to the docs, they are *still*\nkicked out of all caches and the search indexes are rebuilt, because\nwe change the \"time of load\" at the top of the page.\n\nIt would be trivial to change this so that it only actually updates\npages if they have been changed.\n\nHowever, the result of that is that the timestamp on the docs pages\nwill then stay unchanged until there is an actual commit that has made\na docs change. (It would still reload the full set of docs at once,\njust skip them completely when there are no changes at all, so the\nvalue between different pages would remain unchanged)\n\nI think that's a good change, and I don't really see a usecase for\nhaving that date update every 4 hours \"just because\", but before\nmaking a change I wanted to throw it out here and see if someone else\nhas a usecase where the current behaviour would be better?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 18 Nov 2020 11:56:38 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Devel docs on website reloading"
},
{
"msg_contents": "On 18/11/2020 12:56, Magnus Hagander wrote:\n> Today, we build and update the docs on\n> https://www.postgresql.org/docs/devel/ are rebuilt and deployed by the\n> buildfarm automatically every 4 hours.\n> \n> If there are no changes at all made to the docs, they are *still*\n> kicked out of all caches and the search indexes are rebuilt, because\n> we change the \"time of load\" at the top of the page.\n> \n> It would be trivial to change this so that it only actually updates\n> pages if they have been changed.\n> \n> However, the result of that is that the timestamp on the docs pages\n> will then stay unchanged until there is an actual commit that has made\n> a docs change. (It would still reload the full set of docs at once,\n> just skip them completely when there are no changes at all, so the\n> value between different pages would remain unchanged)\n> \n> I think that's a good change, and I don't really see a usecase where\n> having that date update every 4 hours \"just because\", but before\n> making a change I wanted to throw it out here and see if someone else\n> has a usecase where the current behaviour would be better?\n\nSeems reasonable. While we're at it, would it be possible to print the \ncommit id next to the timestamp? Or instead of the timestamp.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 18 Nov 2020 13:09:13 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Devel docs on website reloading"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 12:09 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 18/11/2020 12:56, Magnus Hagander wrote:\n> > Today, we build and update the docs on\n> > https://www.postgresql.org/docs/devel/ are rebuilt and deployed by the\n> > buildfarm automatically every 4 hours.\n> >\n> > If there are no changes at all made to the docs, they are *still*\n> > kicked out of all caches and the search indexes are rebuilt, because\n> > we change the \"time of load\" at the top of the page.\n> >\n> > It would be trivial to change this so that it only actually updates\n> > pages if they have been changed.\n> >\n> > However, the result of that is that the timestamp on the docs pages\n> > will then stay unchanged until there is an actual commit that has made\n> > a docs change. (It would still reload the full set of docs at once,\n> > just skip them completely when there are no changes at all, so the\n> > value between different pages would remain unchanged)\n> >\n> > I think that's a good change, and I don't really see a usecase where\n> > having that date update every 4 hours \"just because\", but before\n> > making a change I wanted to throw it out here and see if someone else\n> > has a usecase where the current behaviour would be better?\n>\n> Seems reasonable. While we're at it, would it be possible to print the\n> commit id next to the timestamp? Or instead of the timestamp.\n\nThat is actually much harder, because the docs are loaded out of the\ndevelopment snapshot .tar.gz (in order to keep the loading process\nidentical to that we use for release docs), and that one doesn't have\nit...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 18 Nov 2020 12:22:57 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Devel docs on website reloading"
},
{
"msg_contents": "On 2020-Nov-18, Magnus Hagander wrote:\n\n> It would be trivial to change this so that it only actually updates\n> pages if they have been changed.\n\nI think this means we could also check much more frequently whether a\nrebuild is needed, right? We could do that every 30 mins or so, since\nmost of the time it would be a no-op.\n\n\n",
"msg_date": "Wed, 18 Nov 2020 09:31:22 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Devel docs on website reloading"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 1:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Nov-18, Magnus Hagander wrote:\n>\n> > It would be trivial to change this so that it only actually updates\n> > pages if they have been changed.\n>\n> I think this means we could also check much more frequently whether a\n> rebuild is needed, right? We could do that every 30 mins or so, since\n> most of the time it would be a no-op.\n\nNo that'd be unrelated. We don't have a dedicated buildfarm animal for\nit, we just piggyback on the existing run, which runs on any changes,\nnot just docs.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 18 Nov 2020 13:44:45 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Devel docs on website reloading"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 1:44 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Wed, Nov 18, 2020 at 1:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> > On 2020-Nov-18, Magnus Hagander wrote:\n> >\n> > > It would be trivial to change this so that it only actually updates\n> > > pages if they have been changed.\n> >\n> > I think this means we could also check much more frequently whether a\n> > rebuild is needed, right? We could do that every 30 mins or so, since\n> > most of the time it would be a no-op.\n>\n> No that'd be unrelated. We don't have a dedicated buildfarm animal for\n> it, we just piggyback on the existing run, which runs on any changes,\n> not just docs.\n>\n\nLess than 2 years later this is now actually done. That is, we are now\nloading the docs from a git checkout instead of a tarball, which means the\ndevel docs are now stamped with the git revision that they were built from\n(and they're only rebuilt when something changes the docs, but when they\ndo, it should normally take <2 minutes for the new docs to appear on the\nwebsite)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 21 Jun 2022 11:43:40 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Devel docs on website reloading"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 11:43:40AM +0200, Magnus Hagander wrote:\n> On Wed, Nov 18, 2020 at 1:44 PM Magnus Hagander <magnus@hagander.net> wrote:\n> No that'd be unrelated. We don't have a dedicated buildfarm animal for\n> it, we just piggyback on the existing run, which runs on any changes,\n> not just docs.\n> \n> Less than 2 years later this is now actually done. That is, we are now loading\n> the docs from a git checkout instead of a tarball, which means the devel docs\n> are now stamped with the git revision that they were built from (and they're\n> only rebuilt when something changes the docs, but when they do, it should\n> normally take <2 minutes for the new docs to appear on the website)\n\nThat is a big help for committers who want to email a URL of the new\ndoc commit output.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 22 Jun 2022 11:45:22 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Devel docs on website reloading"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 8:45 AM Bruce Momjian <bruce@momjian.us> wrote:\n> That is a big help for committers who want to email a URL of the new\n> doc commit output.\n\nYes, it's a nice \"quality of life\" improvement.\n\nThanks for finally getting this done!\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jun 2022 10:23:52 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Devel docs on website reloading"
}
] |
[
{
"msg_contents": "Hi, I wonder why there's no function to aggregate arrays by\nconcatenation out of the box?\n\nThere is a usual function `array_cat(anyarray, anyarray)`, but it\ndoesn't seamlessly work with grouping.\n\nWouldn't it be natural to have this:\n\nCREATE AGGREGATE array_cat (anyarray)\n(\n sfunc = array_cat,\n stype = anyarray,\n initcond = '{}'\n);\n\nThanks,\nVlad\n\n\n\n\n",
"msg_date": "Wed, 18 Nov 2020 22:43:17 +0300",
"msg_from": "Vlad Bokov <vlad@razum2um.me>",
"msg_from_op": true,
"msg_subject": "CREATE AGGREGATE array_cat"
},
{
"msg_contents": "On Wednesday, November 18, 2020, Vlad Bokov <vlad@razum2um.me> wrote:\n\n> Hi, I wonder why there's no function to aggregate arrays by\n> concatenation out of the box?\n>\n\nSee array_agg(...)\n\nDavid J.",
"msg_date": "Wed, 18 Nov 2020 15:19:16 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
},
{
"msg_contents": "On 11/18/20 11:19 PM, David G. Johnston wrote:\n> On Wednesday, November 18, 2020, Vlad Bokov <vlad@razum2um.me> wrote:\n> \n>> Hi, I wonder why there's no function to aggregate arrays by\n>> concatenation out of the box?\n>>\n> \n> See array_agg(...)\n\n\nWhy? That doesn't do what is wanted.\n\n\nvik=# select array_agg(a) from (values (array[1]), (array[2])) as v(a);\n array_agg\n-----------\n {{1},{2}}\n(1 row)\n\nvik=# CREATE AGGREGATE array_cat (anyarray)\nvik-# (\nvik(# sfunc = array_cat,\nvik(# stype = anyarray,\nvik(# initcond = '{}'\nvik(# );\nCREATE AGGREGATE\n\nvik=# select array_cat(a) from (values (array[1]), (array[2])) as v(a);\n array_cat\n-----------\n {1,2}\n(1 row)\n\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 19 Nov 2020 01:37:26 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 5:37 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 11/18/20 11:19 PM, David G. Johnston wrote:\n> > On Wednesday, November 18, 2020, Vlad Bokov <vlad@razum2um.me> wrote:\n> >\n> >> Hi, I wonder why there's no function to aggregate arrays by\n> >> concatenation out of the box?\n> >>\n> >\n> > See array_agg(...)\n>\n>\n> Why? That doesn't do what is wanted.\n>\n>\nSorry, I did not read closely enough.\n\nI doubt there is any substantial resistance to including such a function\nbut it would have to be written in C.\n\n\n> vik=# select array_agg(a) from (values (array[1]), (array[2])) as v(a);\n> array_agg\n> -----------\n> {{1},{2}}\n> (1 row)\n>\n\nAnd it's not too hard to work the system to get what you want even without\na custom aggregate.\n\nselect array_agg(b) from (values (array[1]), (array[2])) as v(a), unnest(a)\nas w(b);\n\nvik=# select array_cat(a) from (values (array[1]), (array[2])) as v(a);\n> array_cat\n> -----------\n> {1,2}\n> (1 row)\n>\n>\nDavid J.",
"msg_date": "Wed, 18 Nov 2020 17:46:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
},
{
"msg_contents": "On 11/18/20 19:46, David G. Johnston wrote:\n\n> I doubt there is any substantial resistance to including such a function\n> but it would have to be written in C.\n\nWould anything have to be written at all, save the CREATE AGGREGATE\nsuggested in the original message, using the existing array_cat as the\nstate transition function?\n\nI suppose one might add an optimization to the existing array_cat to\ndetect the aggregate case, and realize it could clobber its left argument.\n(But I'm not sure how much that would save, with arrays.)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 18 Nov 2020 19:54:52 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 5:54 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 11/18/20 19:46, David G. Johnston wrote:\n>\n> > I doubt there is any substantial resistance to including such a function\n> > but it would have to be written in C.\n>\n> Would anything have to be written at all, save the CREATE AGGREGATE\n> suggested in the original message, using the existing array_cat as the\n> state transition function?\n>\n> I suppose one might add an optimization to the existing array_cat to\n> detect the aggregate case, and realize it could clobber its left argument.\n> (But I'm not sure how much that would save, with arrays.)\n>\n>\nOutside my particular area of involvement really; it may be sufficient.\n\nDavid J.",
"msg_date": "Wed, 18 Nov 2020 17:57:29 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
},
{
"msg_contents": "On 11/19/20 1:54 AM, Chapman Flack wrote:\n> On 11/18/20 19:46, David G. Johnston wrote:\n> \n>> I doubt there is any substantial resistance to including such a function\n>> but it would have to be written in C.\n> \n> Would anything have to be written at all, save the CREATE AGGREGATE\n> suggested in the original message, using the existing array_cat as the\n> state transition function?\nNope. As my example showed.\n\nOne could imagine extending it with an inverse transition function for\nuse in windows (small w) but that's about it.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 19 Nov 2020 01:59:31 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 11/19/20 1:54 AM, Chapman Flack wrote:\n>> Would anything have to be written at all, save the CREATE AGGREGATE\n>> suggested in the original message, using the existing array_cat as the\n>> state transition function?\n\n> Nope. As my example showed.\n\nBut by the same token, anybody who wants that can trivially make it.\nI think if we're going to bother, we should strive for an implementation\nof efficiency comparable to array_agg, and that will take some bespoke\ncode.\n\nIt might also be worth looking at 9a00f03e4, which addressed the fact\nthat anyone who had made a custom aggregate depending on array_append\nwas going to be hurting performance-wise. The same would be true of\ncustom aggregates depending on array_cat, and maybe we should try\nto alleviate that even if we're providing a new built-in aggregate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Nov 2020 20:25:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-18 19:54:52 -0500, Chapman Flack wrote:\n> On 11/18/20 19:46, David G. Johnston wrote:\n> \n> > I doubt there is any substantial resistance to including such a function\n> > but it would have to be written in C.\n> \n> Would anything have to be written at all, save the CREATE AGGREGATE\n> suggested in the original message, using the existing array_cat as the\n> state transition function?\n\nUsing array_cat() as the transition function essentially is O(N^2). And\nI don't think there's a good way to solve that in array_cat() itself, at\nleast not compared to just using similar logic to array_agg.\n\n\n> I suppose one might add an optimization to the existing array_cat to\n> detect the aggregate case, and realize it could clobber its left argument.\n> (But I'm not sure how much that would save, with arrays.)\n\nI don't immediately see how clobbering the left arg would work\nreliably. That's easy enough for in-place modifications of types that\nhave a fixed width, but for an arbitrary width type that's much\nharder. You could probably hack something together by inquiring about\nthe actual memory allocation size in aset.c etc, but that's pretty ugly.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Nov 2020 17:52:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CREATE AGGREGATE array_cat"
}
] |
[
{
"msg_contents": "I saw in the docs for CREATE AGGREGATE, under argmode, \"Aggregate functions\ndo not support OUT arguments\".\n\nAnd because I tend to poke things I'm told not to poke, I thought \"but\nwait, the CREATE AGGREGATE doesn't have much to say about the return type\nanyway, that's just the FINALFUNC's return type.\" And I made an aggregate\nwith a finalfunc that has OUT parameters, and lo, it works, and a record\nis returned with the expected aggregate results.\n\nThe only flaw seems to be that the result can't be accessed by field,\nbecause its tupledesc isn't known. I /suspect/ (haven't chased it down\nthough) that the finalfunc produces a blessed tupledesc for its result,\nbut the identifying typmod is not propagated along to the aggregate result.\n\nDoes that sound close? Would it possibly be that simple to make such\naggregates usable? Or is there a larger can of worms I'm overlooking?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 18 Nov 2020 20:17:00 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "speaking of CREATE AGGREGATE ..."
}
] |
[
{
"msg_contents": "Hi hackers,\n\nBgwriter process will call `pgstat_send_bgwriter` to send statistics to stat collector, but stat collector is not started when standby is not in hot standby mode.\n\n```\n    if (pmState == PM_RUN || pmState == PM_HOT_STANDBY)\n        PgStatPID = pgstat_start();\n```\n\nThis would lead to the kernel Recv-Q buffer being filled up by checking `netstat -du`.\nI think we should disable sending statistics in `pgstat_send_bgwriter` when stat collector is not running. So here are three questions about how to fix it.\n\n  1.  Is there any way we could effectively check the status of stat collector in bgwriter process?\n  2.  Is there any way we check the bgwriter is running on a standby but not in hot standby mode?\n  3.  Is there any other process that will send statistics except the bgwriter in standby? Should we fix it one by one or add a check in `pgstat_send` directly?\n\nThanks,\nHubert Zhang",
"msg_date": "Thu, 19 Nov 2020 02:18:17 +0000",
"msg_from": "Hubert Zhang <zhubert@vmware.com>",
"msg_from_op": true,
"msg_subject": "Recv-Q buffer is filled up due to bgwriter continue sending\n statistics to un-launched stat collector"
}
] |
[
{
"msg_contents": "Hackers,\n\nOver in news [1] Josh Drake and Eric Ridge discovered the undocumented\nfeature \"IS [NOT] OF\"; introduced seemingly as an \"oh-by-the-way\" in 2002\nvia commit eb121ba2cfe [2].\n\nIs there any reason not to document this back to 9.5, probably with a note\nnearby to pg_typeof(any), which is a convoluted but documented way of\nmaking this kind of test?\n\nDavid J.\n\n[1] https://www.commandprompt.com/blog/is-of/\n[2]\nhttps://github.com/postgres/postgres/commit/eb121ba2cfe1dba9463301f612743df9b63e35ce",
"msg_date": "Wed, 18 Nov 2020 22:44:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Should we document IS [NOT] OF?"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Over in news [1] Josh Drake and Eric Ridge discovered the undocumented\n> feature \"IS [NOT] OF\"; introduced seemingly as an \"oh-by-the-way\" in 2002\n> via commit eb121ba2cfe [2].\n\n> Is there any reason not to document this back to 9.5,\n\nAs far as I can tell from reading the SQL spec, this has nothing much in\ncommon with the SQL feature of that name except for the syntax. The SQL\nfeature seems to be a *run time* test on subtype inclusion, not something\nthat can be answered in parse analysis. Even if I'm getting that wrong,\nit's clear that the spec intends IS OF to return true for subtype\nrelationships, not only exact type equality which is the only thing\ntransformAExprOf considers.\n\nSo my vote would be to rip it out, not document it. Somebody can try\nagain in future, perhaps. But if we document it we're just locking\nourselves into a SQL incompatibility.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 01:16:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "I wrote:\n> So my vote would be to rip it out, not document it. Somebody can try\n> again in future, perhaps. But if we document it we're just locking\n> ourselves into a SQL incompatibility.\n\nApparently, somebody already had that thought. See func.sgml\nlines 765-782, which were commented out by 8272fc3f7.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 01:23:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "On Wednesday, November 18, 2020, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > So my vote would be to rip it out, not document it.  Somebody can try\n> > again in future, perhaps.  But if we document it we're just locking\n> > ourselves into a SQL incompatibility.\n>\n> Apparently, somebody already had that thought.  See func.sgml\n> lines 765-782, which were commented out by 8272fc3f7.\n>\n>\nIs there a feature code?  I skimmed the standard and non-standard tables in\nour appendix and couldn’t find this in either.\n\nDavid J.",
"msg_date": "Wed, 18 Nov 2020 23:59:42 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Is there a feature code? I skimmed the standard and non-standard tables in\n> our appendix and couldn’t find this in either.\n\na19d9d3c4 seems to have thought it was S151.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 02:03:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "On 11/19/20 2:03 AM, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> Is there a feature code? I skimmed the standard and non-standard tables in\n>> our appendix and couldn’t find this in either.\n> \n> a19d9d3c4 seems to have thought it was S151.\n\nHere is a link to previous list discussions:\n\nhttps://www.postgresql.org/message-id/45DA44F3.3010401%40joeconway.com\n\nHTH,\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 19 Nov 2020 09:02:00 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Here is a link to previous list discussions:\n> https://www.postgresql.org/message-id/45DA44F3.3010401%40joeconway.com\n\nAh, thanks, I was intending to go do some searching for that.\n\nSo at this point this has been re-debated at least three times.\nI think it's time to put it out of its misery.  You can get\nequivalent behavior using something like\n    pg_typeof(foo) in ('int'::regtype, 'float8'::regtype)\nso we've got the \"another way to do it\" covered.  And I'm\nstill convinced that the existing implementation is not\nanywhere near per-spec.  The issue about NULL is somewhat\nminor perhaps, but the real problem is that I think the\nspec intends \"foo is of (sometype)\" to return true if foo\nis of any subtype of sometype.  We don't have subtypes in\nthe way SQL intends, but we do have inheritance children\nwhich are more or less the same idea.  The closest you can\nget right now is to do something like\n\n    select * from parent_table t where t.tableoid = 'child1'::regclass\n\nbut this will fail to match grandchildren of child1.\nI think a minimum expectation is that you could do\nsomething like \"where (t.*) is of (child1)\", but\nour existing code is nowhere near making that work.\n\nLet's just rip it out and be done.  If anyone is ever\nmotivated to make it work per spec, they can resurrect\nwhatever seems useful from the git history.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 11:06:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "On 11/19/20 11:06 AM, Tom Lane wrote:\n> Let's just rip it out and be done. If anyone is ever\n> motivated to make it work per spec, they can resurrect\n> whatever seems useful from the git history.\n\n+1\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 19 Nov 2020 11:15:33 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 11:15:33AM -0500, Joe Conway wrote:\n> On 11/19/20 11:06 AM, Tom Lane wrote:\n> > Let's just rip it out and be done. If anyone is ever\n> > motivated to make it work per spec, they can resurrect\n> > whatever seems useful from the git history.\n> \n> +1\n\n+1\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 19 Nov 2020 11:17:16 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, Nov 19, 2020 at 11:15:33AM -0500, Joe Conway wrote:\n>> On 11/19/20 11:06 AM, Tom Lane wrote:\n>>> Let's just rip it out and be done. If anyone is ever\n>>> motivated to make it work per spec, they can resurrect\n>>> whatever seems useful from the git history.\n\n>> +1\n\n> +1\n\nHere's a proposed patch for that. I was amused to discover that we have\na couple of regression test cases making use of IS OF. However, I think\nusing pg_typeof() is actually better for those tests anyway, since\nprinting the regtype result is clearer, and easier to debug if the test\never goes wrong.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 19 Nov 2020 12:08:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "On 11/19/20 12:08 PM, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n>> On Thu, Nov 19, 2020 at 11:15:33AM -0500, Joe Conway wrote:\n>>> On 11/19/20 11:06 AM, Tom Lane wrote:\n>>>> Let's just rip it out and be done. If anyone is ever\n>>>> motivated to make it work per spec, they can resurrect\n>>>> whatever seems useful from the git history.\n> \n>>> +1\n> \n>> +1\n> \n> Here's a proposed patch for that. I was amused to discover that we have\n> a couple of regression test cases making use of IS OF.\n\n\nI didn't check but those might be my fault ;-)\n\n\n> However, I think using pg_typeof() is actually better for those tests anyway, since\n> printing the regtype result is clearer, and easier to debug if the test\n> ever goes wrong.\n\nLooks good to me.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 19 Nov 2020 15:09:39 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 11/19/20 12:08 PM, Tom Lane wrote:\n>> Here's a proposed patch for that. I was amused to discover that we have\n>> a couple of regression test cases making use of IS OF.\n\n> I didn't check but those might be my fault ;-)\n\nI suspect at least one of them is mine ;-). But I didn't check.\n\n> Looks good to me.\n\nThanks for looking!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 15:14:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "After digging a bit more I noticed that we'd discussed removing\nIS OF in the 2007 thread, but forebore because there wasn't an easy\nreplacement. pg_typeof() was added a year later (b8fab2411), so we\ncould have done this at any point since then.\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 17:42:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "Howdy,\n\nWell I certainly wasn't trying to make work out of that blog but I am glad\nto see it was productive.\n\nJD\n\nOn Thu, Nov 19, 2020 at 2:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> After digging a bit more I noticed that we'd discussed removing\n> IS OF in the 2007 thread, but forebore because there wasn't an easy\n> replacement.  pg_typeof() was added a year later (b8fab2411), so we\n> could have done this at any point since then.\n>\n> Pushed.\n>\n>                         regards, tom lane\n>",
"msg_date": "Thu, 19 Nov 2020 16:49:52 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 6:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> After digging a bit more I noticed that we'd discussed removing\n> IS OF in the 2007 thread, but forebore because there wasn't an easy\n> replacement.  pg_typeof() was added a year later (b8fab2411), so we\n> could have done this at any point since then.\n>\n> Pushed.\n>\n\nDocumenting or improving IS OF was a TODO, so I've removed that entry.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 20 Nov 2020 11:56:15 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we document IS [NOT] OF?"
}
] |
[
{
"msg_contents": "Hi,\n\nIn the lwlock.c, InitializeLWLocks() calculate the LWLock offset by\nitself (c319991bcad),\nhowever, there are macros defined in lwlock.h, I think, we can use the\nmacros.\n\ndiff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c\nindex 2fa90cc095..108e652179 100644\n--- a/src/backend/storage/lmgr/lwlock.c\n+++ b/src/backend/storage/lmgr/lwlock.c\n@@ -525,18 +525,17 @@ InitializeLWLocks(void)\n         LWLockInitialize(&lock->lock, id);\n\n     /* Initialize buffer mapping LWLocks in main array */\n-    lock = MainLWLockArray + NUM_INDIVIDUAL_LWLOCKS;\n+    lock = MainLWLockArray + BUFFER_MAPPING_LWLOCK_OFFSET;\n     for (id = 0; id < NUM_BUFFER_PARTITIONS; id++, lock++)\n         LWLockInitialize(&lock->lock, LWTRANCHE_BUFFER_MAPPING);\n\n     /* Initialize lmgrs' LWLocks in main array */\n-    lock = MainLWLockArray + NUM_INDIVIDUAL_LWLOCKS + NUM_BUFFER_PARTITIONS;\n+    lock = MainLWLockArray + LOCK_MANAGER_LWLOCK_OFFSET;\n     for (id = 0; id < NUM_LOCK_PARTITIONS; id++, lock++)\n         LWLockInitialize(&lock->lock, LWTRANCHE_LOCK_MANAGER);\n\n     /* Initialize predicate lmgrs' LWLocks in main array */\n-    lock = MainLWLockArray + NUM_INDIVIDUAL_LWLOCKS +\n-        NUM_BUFFER_PARTITIONS + NUM_LOCK_PARTITIONS;\n+    lock = MainLWLockArray + PREDICATELOCK_MANAGER_LWLOCK_OFFSET;\n     for (id = 0; id < NUM_PREDICATELOCK_PARTITIONS; id++, lock++)\n         LWLockInitialize(&lock->lock, LWTRANCHE_PREDICATE_LOCK_MANAGER);\n\n\n--\nBest regards\nJapin Li",
"msg_date": "Thu, 19 Nov 2020 15:39:51 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Use macros for calculating LWLock offset"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 03:39:51PM +0800, japin wrote:\n> In the lwlock.c, InitializeLWLocks() calculate the LWLock offset by itself\n> (c319991bcad),\n> however, there are macros defined in lwlock.h, I think, we can use the\n> macros.\n\nI agree that this makes this code a bit cleaner, so let's use those\nmacros. Others may have some comments here, so let's wait a bit\nfirst.\n--\nMichael",
"msg_date": "Fri, 20 Nov 2020 15:25:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use macros for calculating LWLock offset"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 03:25:50PM +0900, Michael Paquier wrote:\n> I agree that this makes this code a bit cleaner, so let's use those\n> macros. Others may have some comments here, so let's wait a bit\n> first.\n\nGot this one committed as of d03d754.\n--\nMichael",
"msg_date": "Tue, 24 Nov 2020 12:51:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use macros for calculating LWLock offset"
},
{
"msg_contents": "Thanks!\n\nOn Nov 24, 2020, at 11:51 AM, Michael Paquier <michael@paquier.xyz<mailto:michael@paquier.xyz>> wrote:\n\nOn Fri, Nov 20, 2020 at 03:25:50PM +0900, Michael Paquier wrote:\nI agree that this makes this code a bit cleaner, so let's use those\nmacros.  Others may have some comments here, so let's wait a bit\nfirst.\n\nGot this one committed as of d03d754.\n--\nMichael\n\n--\nBest regards\nJapin Li",
"msg_date": "Tue, 24 Nov 2020 06:23:18 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use macros for calculating LWLock offset"
}
] |
[
{
"msg_contents": "Hi all\n\nHere's a patch I wrote a while ago to detect and report when a\nLWLockAcquire() results in a simple self-deadlock due to the caller already\nholding the LWLock.\n\nTo avoid affecting hot-path performance, it only fires the check on the\nfirst iteration through the retry loops in LWLockAcquire() and\nLWLockWaitForVar(), and just before we sleep, once the fast-path has been\nmissed.\n\nI wrote an earlier version of this when I was chasing down some hairy\nissues with background workers deadlocking on some exit paths because\nereport(ERROR) or elog(ERROR) calls fired when a LWLock was held would\ncause a before_shmem_exit or on_shmem_exit cleanup function to deadlock\nwhen it tried to acquire the same lock.\n\nBut it's an easy enough mistake to make and a seriously annoying one to\ntrack down, so I figured I'd post it for consideration. Maybe someone else\nwill get some use out of it even if nobody likes the idea of merging it.\n\nAs written the check runs only for --enable-cassert builds or when\nLOCK_DEBUG is defined.",
"msg_date": "Thu, 19 Nov 2020 18:31:36 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "This looks useful. LWLockCheckSelfDeadlock() could use LWLockHeldByMe\nvariant instead of copying that code with possibly a change in that\nfunction to return the required information.\n\nI am also seeing a pattern\nAssert(LWLockHeldByMe*())\nLWLockAcquire()\n\nat some places. Should we change LWLockAcquire to do\nAssert(LWLockHeldByMe()) always to detect such occurrences? Enhance\nthat pattern to print the information that your patch prints?\n\nIt looks weird that we can detect a self deadlock but not handle it.\nBut handling it requires remembering multiple LWLocks held and then\nrelease them those many times. That may not leave LWLocks LW anymore.\n\nOn Thu, Nov 19, 2020 at 4:02 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> Hi all\n>\n> Here's a patch I wrote a while ago to detect and report when a LWLockAcquire() results in a simple self-deadlock due to the caller already holding the LWLock.\n>\n> To avoid affecting hot-path performance, it only fires the check on the first iteration through the retry loops in LWLockAcquire() and LWLockWaitForVar(), and just before we sleep, once the fast-path has been missed.\n>\n> I wrote an earlier version of this when I was chasing down some hairy issues with background workers deadlocking on some exit paths because ereport(ERROR) or elog(ERROR) calls fired when a LWLock was held would cause a before_shmem_exit or on_shmem_exit cleanup function to deadlock when it tried to acquire the same lock.\n>\n> But it's an easy enough mistake to make and a seriously annoying one to track down, so I figured I'd post it for consideration. Maybe someone else will get some use out of it even if nobody likes the idea of merging it.\n>\n> As written the check runs only for --enable-cassert builds or when LOCK_DEBUG is defined.\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 24 Nov 2020 19:41:26 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 10:11 PM Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> wrote:\n\n> This looks useful. LWLockCheckSelfDeadlock() could use LWLockHeldByMe\n> variant instead of copying that code with possibly a change in that\n> function to return the required information.\n>\n\nYes, possibly so. I was reluctant to mess with the rest of the code too\nmuch.\n\n> I am also seeing a pattern\n> Assert(!LWLockHeldByMe());\n> LWLockAcquire()\n>\n> at some places. Should we change LWLockAcquire to do\n> Assert(!LWLockHeldByMe()) always to detect such occurrences?\n\n\nI'm inclined not to, at least not without benchmarking it, because that'd\ndo the check before we attempt the fast-path. cassert builds are still\nsupposed to perform decently and be suitable for day to day development and\nI'd rather not risk a slowdown.\n\nI'd prefer to make the lock self deadlock check run for production builds,\nnot just cassert builds. It'd print a normal LOG (with backtrace if\nsupported) then Assert(). So on an assert build we'd get a crash and core,\nand on a non-assert build we'd carry on and self-deadlock anyway.\n\nThat's probably the safest thing to do. We can't expect to safely ERROR out\nof the middle of an LWLockAcquire(), not without introducing a new and\nreally hard to test code path that'll also be surprising to callers. We\nprobably don't want to PANIC the whole server for non-assert builds since\nit might enter a panic-loop. So it's probably better to self-deadlock. We\ncould HINT that a -m immediate shutdown will be necessary to recover though.\n\nI don't think it makes sense to introduce much complexity for a feature\nthat's mainly there to help developers and to catch corner-case bugs.\n\n(FWIW a variant of this patch has been in 2ndQPostgres for some time. It\nhelped catch an extension bug just the other day.)",
"msg_date": "Wed, 25 Nov 2020 14:16:53 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 11:47 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>> I am also seeing a pattern\n>> Assert(!LWLockHeldByMe());\n>> LWLockAcquire()\n>>\n>> at some places. Should we change LWLockAcquire to do\n>> Assert(!LWLockHeldByMe()) always to detect such occurrences?\n>\n>\n> I'm inclined not to, at least not without benchmarking it, because that'd do the check before we attempt the fast-path. cassert builds are still supposed to perform decently and be suitable for day to day development and I'd rather not risk a slowdown.\n>\n> I'd prefer to make the lock self deadlock check run for production builds, not just cassert builds. It'd print a normal LOG (with backtrace if supported) then Assert(). So on an assert build we'd get a crash and core, and on a non-assert build we'd carry on and self-deadlock anyway.\n>\n> That's probably the safest thing to do. We can't expect to safely ERROR out of the middle of an LWLockAcquire(), not without introducing a new and really hard to test code path that'll also be surprising to callers. We probably don't want to PANIC the whole server for non-assert builds since it might enter a panic-loop. So it's probably better to self-deadlock. We could HINT that a -m immediate shutdown will be necessary to recover though.\n\nI agree that it will be helpful to print something in the logs\nindicating the reason for this hang in case the hang happens in a\nproduction build. In your patch you have used ereport(PANIC, ) which\nmay simply be replaced by an Assert() in an assert enabled build. We\nalready have Assert(!LWLockHeldByMe()) so that should be safe. It will\nbe good to have -m immediate hint in LOG message. But it might just be\nbetter to kill -9 that process to get rid of it. That will cause the\nserver to restart and not just shutdown.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 25 Nov 2020 18:53:47 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 9:23 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Wed, Nov 25, 2020 at 11:47 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> >> I am also seeing a pattern\n> >> Assert(!LWLockHeldByMe());\n> >> LWLockAcquire()\n> >>\n> >> at some places. Should we change LWLockAcquire to do\n> >> Assert(!LWLockHeldByMe()) always to detect such occurrences?\n> >\n> >\n> > I'm inclined not to, at least not without benchmarking it, because\n> that'd do the check before we attempt the fast-path. cassert builds are\n> still supposed to perform decently and be suitable for day to day\n> development and I'd rather not risk a slowdown.\n> >\n> > I'd prefer to make the lock self deadlock check run for production\n> builds, not just cassert builds. It'd print a normal LOG (with backtrace if\n> supported) then Assert(). So on an assert build we'd get a crash and core,\n> and on a non-assert build we'd carry on and self-deadlock anyway.\n> >\n> > That's probably the safest thing to do. We can't expect to safely ERROR\n> out of the middle of an LWLockAcquire(), not without introducing a new and\n> really hard to test code path that'll also be surprising to callers. We\n> probably don't want to PANIC the whole server for non-assert builds since\n> it might enter a panic-loop. So it's probably better to self-deadlock. We\n> could HINT that a -m immediate shutdown will be necessary to recover though.\n>\n> I agree that it will be helpful to print something in the logs\n> indicating the reason for this hang in case the hang happens in a\n> production build. In your patch you have used ereport(PANIC, ) which\n> may simply be replaced by an Assert() in an assert enabled build.\n\n\nI'd either select between PANIC (assert build) and LOG (non assert build),\nor always LOG then put an Assert() afterwards. It's strongly desirable to\nget the log output before any crash because it's a pain to get the LWLock\ninfo from a core.\n\n\n> We\n> already have Assert(!LWLockHeldByMe()) so that should be safe. It will\n> be good to have -m immediate hint in LOG message. But it might just be\n> better to kill -9 that process to get rid of it. That will cause the\n> server to restart and not just shutdown.\n>\n\nSure. Won't work on Windows though. And I am hesitant about telling users\nto \"kill -9\" anything.\n\npg_ctl -m immediate restart then.",
"msg_date": "Thu, 26 Nov 2020 10:19:00 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "Craig Ringer <craig.ringer@enterprisedb.com> writes:\n> On Wed, Nov 25, 2020 at 9:23 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n> wrote:\n>> I'd prefer to make the lock self deadlock check run for production\n>> builds, not just cassert builds.\n\nI'd like to register a strong objection to spending any cycles whatsoever\non this. If you're using LWLocks in a way that could deadlock, you're\ndoing it wrong. The entire point of that mechanism is to be Light Weight\nand not have all the overhead that the heavyweight lock mechanism has.\nBuilding in deadlock checks and such is exactly the wrong direction.\nIf you think you need that, you should be using a heavyweight lock.\n\nMaybe there's some case for a cassert-only check of this sort, but running\nit in production is right out.\n\nI'd also opine that checking for self-deadlock, but not any more general\ndeadlock, seems pretty pointless. Careless coding is far more likely\nto create opportunities for the latter. (Thus, I have little use for\nthe existing assertions of this sort, either.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Nov 2020 21:50:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "On 26/11/2020 04:50, Tom Lane wrote:\n> Craig Ringer <craig.ringer@enterprisedb.com> writes:\n>> On Wed, Nov 25, 2020 at 9:23 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n>> wrote:\n>>> I'd prefer to make the lock self deadlock check run for production\n>>> builds, not just cassert builds.\n> \n> I'd like to register a strong objection to spending any cycles whatsoever\n> on this. If you're using LWLocks in a way that could deadlock, you're\n> doing it wrong. The entire point of that mechanism is to be Light Weight\n> and not have all the overhead that the heavyweight lock mechanism has.\n> Building in deadlock checks and such is exactly the wrong direction.\n> If you think you need that, you should be using a heavyweight lock.\n> \n> Maybe there's some case for a cassert-only check of this sort, but running\n> it in production is right out.\n> \n> I'd also opine that checking for self-deadlock, but not any more general\n> deadlock, seems pretty pointless. Careless coding is far more likely\n> to create opportunities for the latter. (Thus, I have little use for\n> the existing assertions of this sort, either.)\n\nI've made the mistake of forgetting to release an LWLock many times, \nleading to self-deadlock. And I just reviewed a patch that did that this \nweek [1]. You usually find that mistake very quickly when you start \ntesting though, I don't think I've seen it happen in production.\n\nSo yeah, I agree it's not worth spending cycles on this. Maybe it would \nbe worth it if it's really simple to check, and you only do it after \nwaiting X seconds. (I haven't looked at this patch at all)\n\n[1] \nhttps://www.postgresql.org/message-id/8e1cda9b-1cb9-a634-d383-f757bf67b820@iki.fi\n\n- Heikki\n\n\n",
"msg_date": "Fri, 27 Nov 2020 20:22:41 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 10:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I've made the mistake of forgetting to release an LWLock many times,\n> leading to self-deadlock. And I just reviewed a patch that did that this\n> week [1]. You usually find that mistake very quickly when you start\n> testing though, I don't think I've seen it happen in production.\n\n+1. The fact that you don't get deadlock detection with LWLocks is a\ncornerstone of the whole design. This assumption is common to all\ndatabase systems (LWLocks are generally called latches in the database\nresearch community, but the concept is exactly the same).\n\n> So yeah, I agree it's not worth spending cycles on this. Maybe it would\n> be worth it if it's really simple to check, and you only do it after\n> waiting X seconds. (I haven't looked at this patch at all)\n\n-1 for that, unless it's only for debug builds. Even if it is\npractically free, it still means committing to the wrong priorities.\nPlus the performance validation would be very arduous as a practical\nmatter.\n\nWe've made LWLocks much more scalable in the last 10 years. Why\nshouldn't we expect to do the same again in the next 10 years? I\nwouldn't bet against it. I might even do the opposite (predict further\nimprovements to LWLocks).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 27 Nov 2020 11:08:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 11:08:49AM -0800, Peter Geoghegan wrote:\n> On Fri, Nov 27, 2020 at 10:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> I've made the mistake of forgetting to release an LWLock many times,\n>> leading to self-deadlock. And I just reviewed a patch that did that this\n>> week [1]. You usually find that mistake very quickly when you start\n>> testing though, I don't think I've seen it happen in production.\n> \n> +1. The fact that you don't get deadlock detection with LWLocks is a\n> cornerstone of the whole design. This assumption is common to all\n> database systems (LWLocks are generally called latches in the database\n> research community, but the concept is exactly the same).\n\n+1.\n\n>> So yeah, I agree it's not worth spending cycles on this. Maybe it would\n>> be worth it if it's really simple to check, and you only do it after\n>> waiting X seconds. (I haven't looked at this patch at all)\n>\n> -1 for that, unless it's only for debug builds. Even if it is\n> practically free, it still means committing to the wrong priorities.\n> Plus the performance validation would be very arduous as a practical\n> matter.\n\nLooking at the git history, we have a much larger history of bugs that\nrelate to race conditions when it comes to LWLocks. I am not really\nconvinced that it is worth spending time on this even for debug\nbuilds, FWIW.\n--\nMichael",
"msg_date": "Sun, 29 Nov 2020 15:16:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-27 20:22:41 +0200, Heikki Linnakangas wrote:\n> On 26/11/2020 04:50, Tom Lane wrote:\n> > Craig Ringer <craig.ringer@enterprisedb.com> writes:\n> > > On Wed, Nov 25, 2020 at 9:23 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n> > > wrote:\n> > > > I'd prefer to make the lock self deadlock check run for production\n> > > > builds, not just cassert builds.\n> > \n> > I'd like to register a strong objection to spending any cycles whatsoever\n> > on this. If you're using LWLocks in a way that could deadlock, you're\n> > doing it wrong. The entire point of that mechanism is to be Light Weight\n> > and not have all the overhead that the heavyweight lock mechanism has.\n> > Building in deadlock checks and such is exactly the wrong direction.\n> > If you think you need that, you should be using a heavyweight lock.\n> > \n> > Maybe there's some case for a cassert-only check of this sort, but running\n> > it in production is right out.\n> > \n> > I'd also opine that checking for self-deadlock, but not any more general\n> > deadlock, seems pretty pointless. Careless coding is far more likely\n> > to create opportunities for the latter. (Thus, I have little use for\n> > the existing assertions of this sort, either.)\n> \n> I've made the mistake of forgetting to release an LWLock many times, leading\n> to self-deadlock. And I just reviewed a patch that did that this week [1].\n> You usually find that mistake very quickly when you start testing though, I\n> don't think I've seen it happen in production.\n\nI think something roughly along Craig's original patch, basically adding\nassert checks against holding a lock already, makes sense. Compared to\nthe other costs of running an assert build this isn't a huge cost, and\nit's helpful.\n\nI entirely concur that doing this outside of assertion builds is a\nseriously bad idea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Dec 2020 18:11:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
},
{
"msg_contents": "On Wed, 30 Dec 2020 at 10:11, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-11-27 20:22:41 +0200, Heikki Linnakangas wrote:\n> > On 26/11/2020 04:50, Tom Lane wrote:\n> > > Craig Ringer <craig.ringer@enterprisedb.com> writes:\n> > > > On Wed, Nov 25, 2020 at 9:23 PM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com>\n> > > > wrote:\n> > > > > I'd prefer to make the lock self deadlock check run for production\n> > > > > builds, not just cassert builds.\n> > >\n> > > I'd like to register a strong objection to spending any cycles\n> whatsoever\n> > > on this. If you're using LWLocks in a way that could deadlock, you're\n> > > doing it wrong. The entire point of that mechanism is to be Light\n> Weight\n> > > and not have all the overhead that the heavyweight lock mechanism has.\n> > > Building in deadlock checks and such is exactly the wrong direction.\n> > > If you think you need that, you should be using a heavyweight lock.\n> > >\n> > > Maybe there's some case for a cassert-only check of this sort, but\n> running\n> > > it in production is right out.\n> > >\n> > > I'd also opine that checking for self-deadlock, but not any more\n> general\n> > > deadlock, seems pretty pointless. Careless coding is far more likely\n> > > to create opportunities for the latter. (Thus, I have little use for\n> > > the existing assertions of this sort, either.)\n> >\n> > I've made the mistake of forgetting to release an LWLock many times,\n> leading\n> > to self-deadlock. And I just reviewed a patch that did that this week\n> [1].\n> > You usually find that mistake very quickly when you start testing\n> though, I\n> > don't think I've seen it happen in production.\n>\n> I think something roughly along Craig's original patch, basically adding\n> assert checks against holding a lock already, makes sense. 
Compared to\n> the other costs of running an assert build this isn't a huge cost, and\n> it's helpful.\n>\n> I entirely concur that doing this outside of assertion builds is a\n> seriously bad idea.\n>\n\nYeah, given it only targets developer error that's sensible.",
"msg_date": "Tue, 5 Jan 2021 14:04:17 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] LWLock self-deadlock detection"
}
] |
[
{
"msg_contents": "We can reproduce this difference with the following steps.\n\ncreate table su (a int, b int);\ninsert into su values(1, 1);\n\n- session 1:\nbegin;\nupdate su set b = 2 where b = 1;\n\n- sess 2:\nselect * from su where a in (select a from su where b = 1) for update;\n\n- sess 1:\ncommit;\n\nThen session 2 can get the result.\n\nPostgreSQL:\n\n a | b\n---+---\n 1 | 2\n(1 row)\n\n\nOracle: It gets 0 rows.\n\nOracle's plan is pretty similar to Postgres.\n\nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------\nPlan hash value: 2828511618\n\n-----------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n-----------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 1 | 52 | 4 (0)| 00:00:01 |\n| 1 | FOR UPDATE | | | | | |\n| 2 | BUFFER SORT | | | | | |\n|* 3 | HASH JOIN SEMI | | 1 | 52 | 4 (0)| 00:00:01 |\n| 4 | TABLE ACCESS FULL| SU | 1 | 26 | 2 (0)| 00:00:01 |\n|* 5 | TABLE ACCESS FULL| SU | 1 | 26 | 2 (0)| 00:00:01 |\n\nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------\n-----------------------------------------------------------------------------\n\nAny thoughts on who is wrong?\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 19 Nov 2020 21:11:48 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> create table su (a int, b int);\n> insert into su values(1, 1);\n\n> - session 1:\n> begin;\n> update su set b = 2 where b = 1;\n\n> - sess 2:\n> select * from su where a in (select a from su where b = 1) for update;\n\nThis'd probably work the way you expect if there were \"for update\"\nin the sub-select as well. As is, the sub-select will happily\nreturn \"1\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 10:49:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 11:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > create table su (a int, b int);\n> > insert into su values(1, 1);\n>\n> > - session 1:\n> > begin;\n> > update su set b = 2 where b = 1;\n>\n> > - sess 2:\n> > select * from su where a in (select a from su where b = 1) for update;\n>\n> This'd probably work the way you expect if there were \"for update\"\n> in the sub-select as well. As is, the sub-select will happily\n> return \"1\".\n>\n> regards, tom lane\n>\n\nHi Tom:\n\nThank you for your attention. Your suggestion would fix the issue. However\nThe difference will cause some risks when users move their application from\nOracle\nto PostgreSQL. So I'd like to think which behavior is more reasonable.\n\nIn our current pg strategy, we would get\n\nselect * from su where a in (select a from su where b = 1) for update;\n\n a | b\n---+---\n 1 | 2\n(1 row)\n\nThe data is obtained from 4 steps:\n\n1. In the subquery, it gets su(a=1, b=1).\n2. in the outer query, it is blocked.\n3. session 1 is committed. sess 2 can continue.\n4. outer query gets su(a=1, b=2).\n\nBy comparing the result in step 1 & step 4 in the same query, it\nlooks like we are using 2 different snapshots for the same query.\nI think it is a bit strange.\n\nIf we think it is an issue, I think we can fix it with something like this:\n\n+/*\n+ * We should use the same RowMarkType for the RangeTblEntry\n+ * if the underline relation is the same. 
Doesn't handle SubQuery for now.\n+ */\n+static RowMarkType\n+select_rowmark_type_for_rangetable(RangeTblEntry *rte,\n+ List\n*rowmarks,\n+\n LockClauseStrength strength)\n+{\n+ ListCell *lc;\n+ foreach(lc, rowmarks)\n+ {\n+ PlanRowMark *rc = lfirst_node(PlanRowMark, lc);\n+ if (rc->relid == rte->relid)\n+ return rc->markType;\n+ }\n+ return select_rowmark_type(rte, strength);\n+}\n+\n /*\n * preprocess_rowmarks - set up PlanRowMarks if needed\n */\n@@ -2722,6 +2743,7 @@ preprocess_rowmarks(PlannerInfo *root)\n newrc->strength = rc->strength;\n newrc->waitPolicy = rc->waitPolicy;\n newrc->isParent = false;\n+ newrc->relid = rte->relid;\n\n prowmarks = lappend(prowmarks, newrc);\n }\n@@ -2742,18 +2764,18 @@ preprocess_rowmarks(PlannerInfo *root)\n newrc = makeNode(PlanRowMark);\n newrc->rti = newrc->prti = i;\n newrc->rowmarkId = ++(root->glob->lastRowMarkId);\n- newrc->markType = select_rowmark_type(rte, LCS_NONE);\n+ newrc->markType = select_rowmark_type_for_rangetable(rte,\nprowmarks, LCS_NONE);\n newrc->allMarkTypes = (1 << newrc->markType);\n newrc->strength = LCS_NONE;\n newrc->waitPolicy = LockWaitBlock; /* doesn't matter */\n newrc->isParent = false;\n-\n+ newrc->relid = rte->relid;\n prowmarks = lappend(prowmarks, newrc);\n }\n-\n root->rowMarks = prowmarks;\n }\n\n+\n /*\n * Select RowMarkType to use for a given table\n */\ndiff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h\nindex ac5685da64..926086a69a 100644\n--- a/src/include/nodes/plannodes.h\n+++ b/src/include/nodes/plannodes.h\n@@ -1103,6 +1103,7 @@ typedef struct PlanRowMark\n LockClauseStrength strength; /* LockingClause's strength, or\nLCS_NONE */\n LockWaitPolicy waitPolicy; /* NOWAIT and SKIP LOCKED options */\n bool isParent; /* true if this is a\n\"dummy\" parent entry */\n+ Oid relid; /* relation oid */\n } PlanRowMark;\n\nDo you think it is a bug and the above method is the right way to go?\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 20 Nov 2020 16:57:09 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "On 11/20/20 9:57 AM, Andy Fan wrote:\n> Thank you for your attention. Your suggestion would fix the issue. However\n> The difference will cause some risks when users move their application \n> from Oracle\n> to PostgreSQL. So I'd like to think which behavior is more reasonable.\n\nI think PostgreSQL's behavior is more reasonable since it only locks the \nrows it claims to lock and no extra rows. This makes the code easy to \nreason about. And PostgreSQL does not re-evaluate sub queries after \ngrabbing the lock which while it might be surprising to some people is \nalso a quite nice consistent behavior in practice as long as you are \naware of it.\n\nI do not see why these two scenarios should behave differently (which I \nthink they would with your proposed patch):\n\n== Scenario 1\n\ncreate table su (a int, b int);\ninsert into su values(1, 1);\n\n- session 1:\nbegin;\nupdate su set b = 2 where b = 1;\n\n- sess 2:\nselect * from su where a in (select a from su where b = 1) for update;\n\n- sess 1:\ncommit;\n\n== Scenario 2\n\ncreate table su (a int, b int);\ninsert into su values(1, 1);\n\ncreate table su2 (a int, b int);\ninsert into su2 values(1, 1);\n\n- session 1:\nbegin;\nupdate su set b = 2 where b = 1;\nupdate su2 set b = 2 where b = 1;\n\n- sess 2:\nselect * from su where a in (select a from su2 where b = 1) for update;\n\n- sess 1:\ncommit;\n\nAndreas\n\n\n\n",
"msg_date": "Fri, 20 Nov 2020 14:37:03 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "Hi Andreas:\n\nThanks for your input.\n\nOn Fri, Nov 20, 2020 at 9:37 PM Andreas Karlsson <andreas@proxel.se> wrote:\n\n> On 11/20/20 9:57 AM, Andy Fan wrote:\n> > Thank you for your attention. Your suggestion would fix the issue.\n> However\n> > The difference will cause some risks when users move their application\n> > from Oracle\n> > to PostgreSQL. So I'd like to think which behavior is more reasonable.\n>\n> I think PostgreSQL's behavior is more reasonable since it only locks the\n> rows it claims to lock and no extra rows. This makes the code easy to\n> reason about. And PostgreSQL does not re-evaluate sub queries after\n> grabbing the lock which while it might be surprising to some people is\n> also a quite nice consistent behavior in practice as long as you are\n> aware of it.\n>\n\nI admit my way is bad after reading your below question, but I\nwould not think *it might be surprising to some people* is a good signal\nfor a design. Would you think \"re-evaluate the quals\" after grabbing the\nlock should be a good idea? And do you know if any other database uses\nthe postgres's way or Oracle's way? I just heard Oracle might do the\nre-check just some minutes before reading your reply and I also found\nOracle doesn't lock the extra rows per my test.\n\n\n\n> I do not see why these two scenarios should behave differently (which I\n> think they would with your proposed patch):\n>\n>\nGood question! I think my approach doesn't make sense now!\n\n\n\n-- \nBest Regards\nAndy Fan\n\nHi Andreas:Thanks for your input. On Fri, Nov 20, 2020 at 9:37 PM Andreas Karlsson <andreas@proxel.se> wrote:On 11/20/20 9:57 AM, Andy Fan wrote:\n> Thank you for your attention. Your suggestion would fix the issue. However\n> The difference will cause some risks when users move their application \n> from Oracle\n> to PostgreSQL. 
So I'd like to think which behavior is more reasonable.\n\nI think PostgreSQL's behavior is more reasonable since it only locks the \nrows it claims to lock and no extra rows. This makes the code easy to \nreason about. And PostgreSQL does not re-evaluate sub queries after \ngrabbing the lock which while it might be surprising to some people is \nalso a quite nice consistent behavior in practice as long as you are \naware of it. I admit my way is bad after reading your below question, but Iwould not think *it might be surprising to some people* is a good signalfor a design. Would you think \"re-evaluate the quals\" after grabbing thelock should be a good idea? And do you know if any other database usesthe postgres's way or Oracle's way? I just heard Oracle might do the re-check just some minutes before reading your reply and I also foundOracle doesn't lock the extra rows per my test. \nI do not see why these two scenarios should behave differently (which I \nthink they would with your proposed patch):\n Good question! I think my approach doesn't make sense now! -- Best RegardsAndy Fan",
"msg_date": "Fri, 20 Nov 2020 22:25:23 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "On 11/20/20 3:25 PM, Andy Fan wrote:> On Fri, Nov 20, 2020 at 9:37 PM \nAndreas Karlsson <andreas@proxel.se\n> <mailto:andreas@proxel.se>> wrote:\n> \n> On 11/20/20 9:57 AM, Andy Fan wrote:\n> > Thank you for your attention. Your suggestion would fix the\n> issue. However\n> > The difference will cause some risks when users move their\n> application\n> > from Oracle\n> > to PostgreSQL. So I'd like to think which behavior is more\n> reasonable.\n> \n> I think PostgreSQL's behavior is more reasonable since it only locks\n> the\n> rows it claims to lock and no extra rows. This makes the code easy to\n> reason about. And PostgreSQL does not re-evaluate sub queries after\n> grabbing the lock which while it might be surprising to some people is\n> also a quite nice consistent behavior in practice as long as you are\n> aware of it.\n> \n> I admit my way is bad after reading your below question, but I\n> would not think *it might be surprising to some people* is a good signal\n> for a design. Would you think \"re-evaluate the quals\" after grabbing the\n> lock should be a good idea? And do you know if any other database uses\n> the postgres's way or Oracle's way? I just heard Oracle might do the\n> re-check just some minutes before reading your reply and I also found\n> Oracle doesn't lock the extra rows per my test.\n\nRe-evaluating the sub queries is probably a very bad idea in practice \nsince a sub query can have side effects, side effects which could really \nmess up some poor developer's database if they are unaware of it. The \ntradeoff PostgreSQL has made is not perfect but on top of my head I \ncannot think of anything less bad.\n\nI am sadly not familiar enough with Oracle or have access to any Oracle \nlicense so I cannot comment on how Oracle have implemented their behvior \nor what tradeoffs they have made.\n\nAndreas\n\n\n",
"msg_date": "Sat, 21 Nov 2020 00:03:32 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 3:04 PM Andreas Karlsson <andreas@proxel.se> wrote:\n> I am sadly not familiar enough with Oracle or have access to any Oracle\n> license so I cannot comment on how Oracle have implemented their behvior\n> or what tradeoffs they have made.\n\nI bet that Oracle does a statement-level rollback for READ COMMITTED\nmode's conflict handling. I'm not sure if this means that it locks\nmultiple rows or not. I think that it only uses one snapshot, which\nisn't quite what we do in the Postgres case. It's really complicated\nin both systems.\n\nAndy is right to say that it looks like Postgres is using 2 different\nsnapshots for the same query. That's *kind of* what happens here.\nTechnically the executor doesn't take a new snapshot, but it does the\nmoral equivalent. See the EvalPlanQual() section of the executor\nREADME.\n\nFWIW this area is something that isn't very well standardized, despite\nwhat you may hear. For example, InnoDBs REPEATABLE READ doesn't even\nuse the transaction snapshot for UPDATEs and DELETEs at all:\n\nhttps://dev.mysql.com/doc/refman/8.0/en/innodb-consistent-read.html\n\nWorst of all, you can update rows that were not visible to the\ntransaction snapshot, thus rendering them visible (see the \"Note\" box\nin the documentation for an example of this). InnoDB won't throw a\nserialization error at any isolation level. So it could be worse!\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 20 Nov 2020 17:04:20 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "Thank all of you for your great insight!\n\nOn Sat, Nov 21, 2020 at 9:04 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Fri, Nov 20, 2020 at 3:04 PM Andreas Karlsson <andreas@proxel.se>\n> wrote:\n> > I am sadly not familiar enough with Oracle or have access to any Oracle\n> > license so I cannot comment on how Oracle have implemented their behvior\n> > or what tradeoffs they have made.\n>\n> Actually, it is documented here... I bet that Oracle does a statement-level rollback for READ COMMITTED\n> mode's conflict handling.\n\n\nI'd agree with you about this point, this difference can cause more\ndifferent\nbehavior between Postgres & Oracle (not just select .. for update).\n\ncreate table dml(a int, b int);\ninsert into dml values(1, 1), (2,2);\n\n-- session 1:\nbegin;\ndelete from dml where a in (select min(a) from dml);\n\n--session 2:\ndelete from dml where a in (select min(a) from dml);\n\n-- session 1:\ncommit;\n\nIn Oracle: 1 row deleted in sess 2.\nIn PG: 0 rows are deleted.\n\n\n> I'm not sure if this means that it locks multiple rows or not.\n\n\nThis is something not really exists and you can ignore this part:)\n\nAbout the statement level rollback, Another difference is related.\n\ncreate table t (a int primary key, b int);\nbegin;\ninsert into t values(1,1);\ninsert into t values(1, 1);\ncommit;\n\nOracle : t has 1 row, PG: t has 0 row (since the whole transaction is\naborted).\n\nI don't mean we need to be the same as Oracle, but to support a\ncustomer who comes from Oracle, it would be good to know the\ndifference.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sat, 21 Nov 2020 16:58:47 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "so 21. 11. 2020 v 9:59 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\nnapsal:\n\n> Thank all of you for your great insight!\n>\n> On Sat, Nov 21, 2020 at 9:04 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n>> On Fri, Nov 20, 2020 at 3:04 PM Andreas Karlsson <andreas@proxel.se>\n>> wrote:\n>> > I am sadly not familiar enough with Oracle or have access to any Oracle\n>> > license so I cannot comment on how Oracle have implemented their behvior\n>> > or what tradeoffs they have made.\n>>\n>> I bet that Oracle does a statement-level rollback for READ COMMITTED\n>> mode's conflict handling.\n>\n>\n> I'd agree with you about this point, this difference can cause more\n> different\n> behavior between Postgres & Oracle (not just select .. for update).\n>\n> create table dml(a int, b int);\n> insert into dml values(1, 1), (2,2);\n>\n> -- session 1:\n> begin;\n> delete from dml where a in (select min(a) from dml);\n>\n> --session 2:\n> delete from dml where a in (select min(a) from dml);\n>\n> -- session 1:\n> commit;\n>\n> In Oracle: 1 row deleted in sess 2.\n> In PG: 0 rows are deleted.\n>\n>\n>> I'm not sure if this means that it locks multiple rows or not.\n>\n>\n> This is something not really exists and you can ignore this part:)\n>\n> About the statement level rollback, Another difference is related.\n>\n> create table t (a int primary key, b int);\n> begin;\n> insert into t values(1,1);\n> insert into t values(1, 1);\n> commit;\n>\n> Oracle : t has 1 row, PG: t has 0 row (since the whole transaction is\n> aborted).\n>\n> I don't mean we need to be the same as Oracle, but to support a\n> customer who comes from Oracle, it would be good to know the\n> difference.\n>\n\nyes, it would be nice to be better documented, somewhere - it should not be\npart of Postgres documentation. \nUnfortunately, people who know Postgres\nperfectly do not have the same knowledge about Oracle.\n\nSome differences are documented in Orafce documentation\nhttps://github.com/orafce/orafce/tree/master/doc\n\nbut I am afraid so there is nothing about the different behaviour of\nsnapshots.\n\nRegards\n\nPavel\n\n\n> --\n> Best Regards\n> Andy Fan\n>",
"msg_date": "Sat, 21 Nov 2020 16:27:09 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "On Sat, Nov 21, 2020 at 12:58 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I don't mean we need to be the same as Oracle, but to support a\n> customer who comes from Oracle, it would be good to know the\n> difference.\n\nActually, it is documented here:\nhttps://www.postgresql.org/docs/devel/transaction-iso.html\n\nThe description starts with: \"UPDATE, DELETE, SELECT FOR UPDATE, and\nSELECT FOR SHARE commands behave the same as SELECT in terms of\nsearching for target rows...\".\n\nI imagine that the number of application developers that are aware of\nthis specific aspect of transaction isolation in PostgreSQL (READ\nCOMMITTED conflict handling/EvalPlanQual()) is extremely small. In\npractice it doesn't come up that often. Though Postgres hackers tend\nto think about it a lot because it is hard to maintain.\n\nI'm not saying that that's good or bad. Just that that has been my experience.\n\nI am sure that some application developers really do understand the\nsingle most important thing about READ COMMITTED mode's behavior: each\nnew command gets its own MVCC snapshot. But I believe that Oracle is\nno different. So in practice application developers probably don't\nnotice any difference between READ COMMITTED mode in practically all\ncases. (Again, just my opinion.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 21 Nov 2020 13:55:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "On Sat, Nov 21, 2020 at 11:27 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> so 21. 11. 2020 v 9:59 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\n> napsal:\n>\n>> Thank all of you for your great insight!\n>>\n>> On Sat, Nov 21, 2020 at 9:04 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>>\n>>> On Fri, Nov 20, 2020 at 3:04 PM Andreas Karlsson <andreas@proxel.se>\n>>> wrote:\n>>> > I am sadly not familiar enough with Oracle or have access to any Oracle\n>>> > license so I cannot comment on how Oracle have implemented their\n>>> behvior\n>>> > or what tradeoffs they have made.\n>>>\n>>> I bet that Oracle does a statement-level rollback for READ COMMITTED\n>>> mode's conflict handling.\n>>\n>>\n>> I'd agree with you about this point, this difference can cause more\n>> different\n>> behavior between Postgres & Oracle (not just select .. for update).\n>>\n>> create table dml(a int, b int);\n>> insert into dml values(1, 1), (2,2);\n>>\n>> -- session 1:\n>> begin;\n>> delete from dml where a in (select min(a) from dml);\n>>\n>> --session 2:\n>> delete from dml where a in (select min(a) from dml);\n>>\n>> -- session 1:\n>> commit;\n>>\n>> In Oracle: 1 row deleted in sess 2.\n>> In PG: 0 rows are deleted.\n>>\n>>\n>>> I'm not sure if this means that it locks multiple rows or not.\n>>\n>>\n>> This is something not really exists and you can ignore this part:)\n>>\n>> About the statement level rollback, Another difference is related.\n>>\n>> create table t (a int primary key, b int);\n>> begin;\n>> insert into t values(1,1);\n>> insert into t values(1, 1);\n>> commit;\n>>\n>> Oracle : t has 1 row, PG: t has 0 row (since the whole transaction is\n>> aborted).\n>>\n>> I don't mean we need to be the same as Oracle, but to support a\n>> customer who comes from Oracle, it would be good to know the\n>> difference.\n>>\n>\n> yes, it would be nice to be better documented, somewhere - it should not\n> be part of Postgres documentation. \nUnfortunately, people who know Postgres\n> perfectly do not have the same knowledge about Oracle.\n>\n> Some differences are documented in Orafce documentation\n> https://github.com/orafce/orafce/tree/master/doc\n>\n>\norafce project is awesome!\n\n\n> but I am afraid so there is nothing about the different behaviour of\n> snapshots.\n>\n>\nhttps://github.com/orafce/orafce/pull/120 is opened for this.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 22 Nov 2020 10:29:21 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 5:56 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Sat, Nov 21, 2020 at 12:58 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > I don't mean we need to be the same as Oracle, but to support a\n> > customer who comes from Oracle, it would be good to know the\n> > difference.\n>\n> Actually, it is documented here:\n> https://www.postgresql.org/docs/devel/transaction-iso.html\n>\n> The description starts with: \"UPDATE, DELETE, SELECT FOR UPDATE, and\n> SELECT FOR SHARE commands behave the same as SELECT in terms of\n> searching for target rows...\".\n>\n> I imagine that the number of application developers that are aware of\n> this specific aspect of transaction isolation in PostgreSQL (READ\n> COMMITTED conflict handling/EvalPlanQual()) is extremely small. In\n> practice it doesn't come up that often.\n\n\nTotally agree with that.\n\n\n> Though Postgres hackers tend to think about it a lot because it is hard to\n> maintain.\n>\n\nHackers may care about this if they run into a real user case :)\n\n\n> I'm not saying that that's good or bad. Just that that has been my\n> experience.\n>\n> I am sure that some application developers really do understand the\n> single most important thing about READ COMMITTED mode's behavior: each\n> new command gets its own MVCC snapshot. But I believe that Oracle is\n> no different. So in practice application developers probably don't\n> notice any difference between READ COMMITTED mode in practically all\n> cases. \n(Again, just my opinion.)\n>\n> --\n> Peter Geoghegan\n>\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 22 Nov 2020 10:35:30 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Different results between PostgreSQL and Oracle for \"for update\"\n statement"
}
] |
[
{
"msg_contents": "Hi All,\n\nThere are currently 2 ways as per my knowledge to get the creation time\n(CREATE TABLE/FUNCTION/PROC/etc...) or modified time (ALTER ....) of an\nObject.\n\n1. Using track_commit_timestamp\n   - Can be a slight overhead as this is applicable globally.\n\n2. Using Event Triggers.\n   - Does every statement executed in the database go through an\nadditional check when event triggers are used ? Or is this handled smartly\nin the source ?\n\nAlso, is someone working on this logic in the code to make some metadata\navailable that exposes creation/modified time without the need of the above\n2 methods ?\n\nIn case I am missing any already existing logic (other than log files),\nplease let me know.\n-- \nRegards,\nAvinash Vallarapu",
"msg_date": "Thu, 19 Nov 2020 11:06:36 -0400",
"msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tracking Object creation or modified time !"
}
] |
[
{
"msg_contents": "\nI noticed somewhat to my surprise as I was prepping the tests for the\n\"match the whole DN\" patch that pg_ident.conf is parsed using the same\nroutines used for pg_hba.conf, and as a result the DN almost always\nneeds to be quoted, because they almost all contain a comma e.g.\n\"O=PGDG,OU=Testing\". Even if we didn't break on commas we would probably\nneed to quote most of them, because it's very common to include spaces\ne.g. \"O=Acme Corp,OU=Marketing\". Nevertheless it seems rather odd to\nbreak on commas, since nothing in the file is meant to be a list - this\nis a file with exactly three single-valued fields. Not sure if it's\nworth doing anything about this, or we should just live with it the way\nit is.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 19 Nov 2020 14:31:14 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "parsing pg_ident.conf"
},
{
"msg_contents": "Hello Andrew,\n\n> I noticed somewhat to my surprise as I was prepping the tests for the\n> \"match the whole DN\" patch that pg_ident.conf is parsed using the same\n> routines used for pg_hba.conf, and as a result the DN almost always\n> needs to be quoted, because they almost all contain a comma e.g.\n> \"O=PGDG,OU=Testing\". Even if we didn't break on commas we would probably\n> need to quote most of them, because it's very common to include spaces\n> e.g. \"O=Acme Corp,OU=Marketing\". Nevertheless it seems rather odd to\n> break on commas, since nothing in the file is meant to be a list - this\n> is a file with exactly three single-valued fields. Not sure if it's\n> worth doing anything about this, or we should just live with it the way\n> it is.\n\nMy 0.02 €:\n\nISTM that having to quote long strings which may contains space or other \nseparators is a good thing from a readability point of view, even if it \nwould be possible to parse it without the quotes.\n\nSo I'm in favor of keeping it that way.\n\nNote that since 8f8154a503, continuations are allowed on \"pg_hba.conf\" and \n\"pg_ident.conf\", and that you may continuate within a quoted string, which \nmay be of interest when considering LDAP links.\n\n-- \nFabien.",
"msg_date": "Fri, 20 Nov 2020 08:19:24 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: parsing pg_ident.conf"
},
{
"msg_contents": "\nOn 11/20/20 2:19 AM, Fabien COELHO wrote:\n>\n> Hello Andrew,\n>\n>> I noticed somewhat to my surprise as I was prepping the tests for the\n>> \"match the whole DN\" patch that pg_ident.conf is parsed using the same\n>> routines used for pg_hba.conf, and as a result the DN almost always\n>> needs to be quoted, because they almost all contain a comma e.g.\n>> \"O=PGDG,OU=Testing\". Even if we didn't break on commas we would probably\n>> need to quote most of them, because it's very common to include spaces\n>> e.g. \"O=Acme Corp,OU=Marketing\". Nevertheless it seems rather odd to\n>> break on commas, since nothing in the file is meant to be a list - this\n>> is a file with exactly three single-valued fields. Not sure if it's\n>> worth doing anything about this, or we should just live with it the way\n>> it is.\n>\n> My 0.02 €:\n>\n> ISTM that having to quote long strings which may contains space or\n> other separators is a good thing from a readability point of view,\n> even if it would be possible to parse it without the quotes.\n>\n> So I'm in favor of keeping it that way.\n>\n> Note that since 8f8154a503, continuations are allowed on \"pg_hba.conf\"\n> and \"pg_ident.conf\", and that you may continuate within a quoted\n> string, which may be of interest when considering LDAP links.\n\n\nMaybe we should add a comment at the top of the file about when quoting\nis needed.\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Fri, 20 Nov 2020 08:31:20 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: parsing pg_ident.conf"
},
{
"msg_contents": "\n>>> I noticed somewhat to my surprise as I was prepping the tests for the\n>>> \"match the whole DN\" patch that pg_ident.conf is parsed using the same\n>>> routines used for pg_hba.conf, and as a result the DN almost always\n>>> needs to be quoted, because they almost all contain a comma e.g.\n>>> \"O=PGDG,OU=Testing\". Even if we didn't break on commas we would probably\n>>> need to quote most of them, because it's very common to include spaces\n>\n> Maybe we should add a comment at the top of the file about when quoting\n> is needed.\n\nYes, that is a good place to point that out. Possibly it would also be \nworth to add something in 20.2, including an example?\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 20 Nov 2020 15:54:24 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: parsing pg_ident.conf"
}
] |
[
{
"msg_contents": "It would appear weak symbol linking is not handled properly without\n'isysroot' parameter passed to linker.\n\nFixes:\nUndefined symbols for architecture x86_64:\n \"___darwin_check_fd_set_overflow\", referenced from:\n _ClientAuthentication in auth.o\n _pgstat_init in pgstat.o\n _ServerLoop in postmaster.o\nld: symbol(s) not found for architecture x86_64\n\nSee:\nhttps://github.com/qt/qtwebengine-chromium/commit/d5c4b6230b7f915f6e044e230c0c575249938400\n---\n configure | 4 +++-\n configure.ac | 4 +++-\n src/template/darwin | 2 ++\n 3 files changed, 8 insertions(+), 2 deletions(-)\n\ndiff --git a/configure b/configure\nindex ace4ed5dec..92b089907e 100755\n--- a/configure\n+++ b/configure\n@@ -19041,7 +19041,6 @@ if test \"$with_readline\" = yes; then\n else\n link_test_func=exit\n fi\n-\n if test \"$PORTNAME\" = \"darwin\"; then\n { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether $CC supports -Wl,-dead_strip_dylibs\" >&5\n $as_echo_n \"checking whether $CC supports -Wl,-dead_strip_dylibs... 
\" >&6; }\n@@ -19194,6 +19193,9 @@ _ACEOF\n # we've finished all configure checks that depend on CPPFLAGS.\n if test x\"$PG_SYSROOT\" != x; then\n CPPFLAGS=`echo \"$CPPFLAGS\" | sed -e \"s| $PG_SYSROOT | \\\\\\$(PG_SYSROOT) |\"`\n+ if test \"$PORTNAME\" = \"darwin\"; then\n+ LDFLAGS=`echo \"$LDFLAGS\" | sed -e \"s| $PG_SYSROOT | \\\\\\$(PG_SYSROOT) |\"`\n+ fi\n fi\n \n \ndiff --git a/configure.ac b/configure.ac\nindex 5b91c83fd0..a82cbbc147 100644\n--- a/configure.ac\n+++ b/configure.ac\n@@ -2326,7 +2326,6 @@ if test \"$with_readline\" = yes; then\n else\n link_test_func=exit\n fi\n-\n if test \"$PORTNAME\" = \"darwin\"; then\n PGAC_PROG_CC_LDFLAGS_OPT([-Wl,-dead_strip_dylibs], $link_test_func)\n elif test \"$PORTNAME\" = \"openbsd\"; then\n@@ -2362,6 +2361,9 @@ AC_SUBST(PG_VERSION_NUM)\n # we've finished all configure checks that depend on CPPFLAGS.\n if test x\"$PG_SYSROOT\" != x; then\n CPPFLAGS=`echo \"$CPPFLAGS\" | sed -e \"s| $PG_SYSROOT | \\\\\\$(PG_SYSROOT) |\"`\n+ if test \"$PORTNAME\" = \"darwin\"; then\n+ LDFLAGS=`echo \"$LDFLAGS\" | sed -e \"s| $PG_SYSROOT | \\\\\\$(PG_SYSROOT) |\"`\n+ fi\n fi\n AC_SUBST(PG_SYSROOT)\n \ndiff --git a/src/template/darwin b/src/template/darwin\nindex f4d4e9d7cf..83d20a8cdd 100644\n--- a/src/template/darwin\n+++ b/src/template/darwin\n@@ -11,6 +11,8 @@ fi\n if test x\"$PG_SYSROOT\" != x\"\" ; then\n if test -d \"$PG_SYSROOT\" ; then\n CPPFLAGS=\"-isysroot $PG_SYSROOT $CPPFLAGS\"\n+ # Prevent undefined symbols errors for weak linked symbols.\n+ LDFLAGS=\"-isysroot $PG_SYSROOT $LDFLAGS\"\n else\n PG_SYSROOT=\"\"\n fi\n-- \n2.29.2\n\n\n\n",
"msg_date": "Thu, 19 Nov 2020 17:33:14 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> It would appear weak symbol linking is not handled properly without\n> 'isysroot' parameter passed to linker.\n\nNobody else has reported this problem, so maybe you should tell us\nhow to reproduce it before you suggest a build process change.\n\n(And yes, we have buildfarm members testing Xcode > 11.4 ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 19:40:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > It would appear weak symbol linking is not handled properly without\n> > 'isysroot' parameter passed to linker.\n>\n> Nobody else has reported this problem, so maybe you should tell us\n> how to reproduce it before you suggest a build process change.\nHmm, maybe it's a more recent issue then, I took the version number\nfrom the qt patch assuming it was the same issue, I hit it trying to build\nmaster on Xcode 12.2 Build version 12B45b on Catalina version 10.15.7.\n>\n> (And yes, we have buildfarm members testing Xcode > 11.4 ...)\n>\n> regards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 17:45:29 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "I think I figured it out, it only happens on systems where the Xcode version is\nnewer than the OS it seems.\n\nSee:\nhttps://forum.unity.com/threads/il2cpp-macstandalone-and-xcode-11-4.855187/\n\nSo looks like the failure happens due to the system sysroot or something like\nthat missing symbols from the Xcode sysroot, so I think this patch is\nstill correct\nin that case as it forces the use of the Xcode sysroot for linking which should\nbe what we want as we want it to match sysroot passed to the compiler.\n\nOn Thu, Nov 19, 2020 at 5:45 PM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n>\n> On Thu, Nov 19, 2020 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > James Hilliard <james.hilliard1@gmail.com> writes:\n> > > It would appear weak symbol linking is not handled properly without\n> > > 'isysroot' parameter passed to linker.\n> >\n> > Nobody else has reported this problem, so maybe you should tell us\n> > how to reproduce it before you suggest a build process change.\n> Hmm, maybe it's a more recent issue then, I took the version number\n> from the qt patch assuming it was the same issue, I hit it trying to build\n> master on Xcode 12.2 Build version 12B45b on Catalina version 10.15.7.\n> >\n> > (And yes, we have buildfarm members testing Xcode > 11.4 ...)\n> >\n> > regards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 18:02:37 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> Hmm, maybe it's a more recent issue then, I took the version number\n> from the qt patch assuming it was the same issue, I hit it trying to build\n> master on Xcode 12.2 Build version 12B45b on Catalina version 10.15.7.\n\nHm, maybe you're using some unusual configure options or weird build\nenvironment?\n\nThe cases we've got in the buildfarm are Xcode 12.0 on Catalina (10.15.7)\nand Xcode 12.2 on Big Sur (11.0.1 ... although that one is ARM not Intel).\nMaybe you're found some corner case in between those, but I guess it's\nmore likely due to a configuration choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 20:04:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > Hmm, maybe it's a more recent issue then, I took the version number\n> > from the qt patch assuming it was the same issue, I hit it trying to build\n> > master on Xcode 12.2 Build version 12B45b on Catalina version 10.15.7.\n>\n> Hm, maybe you're using some unusual configure options or weird build\n> environment?\n>\n> The cases we've got in the buildfarm are Xcode 12.0 on Catalina (10.15.7)\n> and Xcode 12.2 on Big Sur (11.0.1 ... although that one is ARM not Intel).\n> Maybe you're found some corner case in between those, but I guess it's\n> more likely due to a configuration choice.\nI guess to verify one could try compiling with Xcode 12.2 on catalina 10.15.7.\n\nThis looks to be the function using the missing symbols, called by\nFD_SET in postgres:\n\n__header_always_inline int\n__darwin_check_fd_set(int _a, const void *_b)\n{\n#ifdef __clang__\n#pragma clang diagnostic push\n#pragma clang diagnostic ignored \"-Wunguarded-availability-new\"\n#endif\n if ((uintptr_t)&__darwin_check_fd_set_overflow != (uintptr_t) 0) {\n#if defined(_DARWIN_UNLIMITED_SELECT) || defined(_DARWIN_C_SOURCE)\n return __darwin_check_fd_set_overflow(_a, _b, 1);\n#else\n return __darwin_check_fd_set_overflow(_a, _b, 0);\n#endif\n } else {\n return 1;\n }\n#ifdef __clang__\n#pragma clang diagnostic pop\n#endif\n}\n\nWhich is located here:\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.0.sdk/usr/include/sys/_types/_fd_def.h\n\nThe header file indicates the missing symbols are only available in 11.0 and up:\nint __darwin_check_fd_set_overflow(int, const void *, int)\n__API_AVAILABLE(macosx(11.0), ios(14.0), tvos(14.0), watchos(7.0));\n\nSo I think it's fairly clear what's happening now, the linker would normally use\nthe system libraries to determine that those missing symbols are\nweakly linked, but 
since\nthe base OS is too old that fails unless we tell the linker where to\nlook(in the SDK).\n>\n> regards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 18:55:18 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Thu, Nov 19, 2020 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The cases we've got in the buildfarm are Xcode 12.0 on Catalina (10.15.7)\n>> and Xcode 12.2 on Big Sur (11.0.1 ... although that one is ARM not Intel).\n>> Maybe you're found some corner case in between those, but I guess it's\n>> more likely due to a configuration choice.\n\n> I guess to verify one could try compiling with Xcode 12.2 on catalina 10.15.7.\n\nTo check this, I updated my laptop (still on Catalina 10.15.7) to latest\nXcode, 12.2 (12B45b), from 12.0 --- and behold, I could duplicate it:\n\nUndefined symbols for architecture x86_64:\n \"___darwin_check_fd_set_overflow\", referenced from:\n _ClientAuthentication in auth.o\n _pgstat_init in pgstat.o\n _ServerLoop in postmaster.o\nld: symbol(s) not found for architecture x86_64\n\nHowever ... it then occurred to me to blow away my ccache and accache,\nand after rebuilding from scratch, everything's fine. So, are you\nusing ccache?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 21:20:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 7:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Thu, Nov 19, 2020 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The cases we've got in the buildfarm are Xcode 12.0 on Catalina (10.15.7)\n> >> and Xcode 12.2 on Big Sur (11.0.1 ... although that one is ARM not Intel).\n> >> Maybe you're found some corner case in between those, but I guess it's\n> >> more likely due to a configuration choice.\n>\n> > I guess to verify one could try compiling with Xcode 12.2 on catalina 10.15.7.\n>\n> To check this, I updated my laptop (still on Catalina 10.15.7) to latest\n> Xcode, 12.2 (12B45b), from 12.0 --- and behold, I could duplicate it:\n>\n> Undefined symbols for architecture x86_64:\n> \"___darwin_check_fd_set_overflow\", referenced from:\n> _ClientAuthentication in auth.o\n> _pgstat_init in pgstat.o\n> _ServerLoop in postmaster.o\n> ld: symbol(s) not found for architecture x86_64\n>\n> However ... it then occurred to me to blow away my ccache and accache,\n> and after rebuilding from scratch, everything's fine. So, are you\n> using ccache?\nDon't think so, I first hit this issue on a clean clone of master and\nI don't think\nI even have ccache or accache(not familiar with this variant) installed at all.\nAre you sure something else didn't get changed when you cleared the caches?\nIs configure using the right SDK directory?\n>\n> regards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 19:30:45 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "I wrote:\n> However ... it then occurred to me to blow away my ccache and accache,\n> and after rebuilding from scratch, everything's fine. So, are you\n> using ccache?\n\nOh, scratch that, I fat-fingered the experiment somehow. The issue\nis still there. Still, I'm hesitant to apply the fix you suggest,\nbecause of the law of unintended consequences. In particular, I'm\nafraid that using -isysroot in the link step might result in executables\nthat are bound to that sysroot and will not work if it's not there\n--- thus causing a problem for packagers trying to distribute PG\nto non-developers.\n\nWe had a great deal of trouble in the initial experiments with\n-isysroot, cf\n\nhttps://www.postgresql.org/message-id/flat/20840.1537850987%40sss.pgh.pa.us\n\nso I'm not that eager to mess with it. I'm inclined to think that\nthis is more a case of \"Apple broke ABI and they ought to fix that\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 21:48:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 7:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > However ... it then occurred to me to blow away my ccache and accache,\n> > and after rebuilding from scratch, everything's fine. So, are you\n> > using ccache?\n>\n> Oh, scratch that, I fat-fingered the experiment somehow. The issue\n> is still there. Still, I'm hesitant to apply the fix you suggest,\n> because of the law of unintended consequences. In particular, I'm\n> afraid that using -isysroot in the link step might result in executables\n> that are bound to that sysroot and will not work if it's not there\n> --- thus causing a problem for packagers trying to distribute PG\n> to non-developers.\n\nSince apple doesn't actually put real dylib's in the SDK path I think\nwe're fine:\n\n$ find /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.0.sdk\n-name \"*.dylib\"\n$\n\nInstead they use .tbd or text-based stub files there which appear to provide\nenough info for the linker to determine that the undefined symbols are\nweak linked. And since the linker doesn't actually link against the\n.tbd file paths\nin the way it would a dylib I don't think the SDK path could get used\naccidentally.\n\n>\n> We had a great deal of trouble in the initial experiments with\n> -isysroot, cf\n>\n> https://www.postgresql.org/message-id/flat/20840.1537850987%40sss.pgh.pa.us\n>\n> so I'm not that eager to mess with it. I'm inclined to think that\n> this is more a case of \"Apple broke ABI and they ought to fix that\".\n\nThis seems to be a longstanding issue, I expect they don't care much about\nautotools compared with their normal Xcode project build system. So probably\nbetter to just give this a try and see how it goes as it's likely to\nkeep popping up\notherwise.\n\n>\n> regards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 20:22:27 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Thu, Nov 19, 2020 at 7:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oh, scratch that, I fat-fingered the experiment somehow. The issue\n>> is still there. Still, I'm hesitant to apply the fix you suggest,\n>> because of the law of unintended consequences. In particular, I'm\n>> afraid that using -isysroot in the link step might result in executables\n>> that are bound to that sysroot and will not work if it's not there\n>> --- thus causing a problem for packagers trying to distribute PG\n>> to non-developers.\n\n> Since apple doesn't actually put real dylib's in the SDK path I think\n> we're fine:\n> $ find /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.0.sdk\n> -name \"*.dylib\"\n\nTrue. Also, while actual documentation on -isysroot seems to be damn\nnear nonexistent, all the example usages I can find on the net appear\nto put it into both compile and link steps. So maybe we're just doing\nit wrong and happened to get away with that this long.\n\n> This seems to be a longstanding issue, I expect they don't care much about\n> autotools compared with their normal Xcode project build system. So probably\n> better to just give this a try and see how it goes as it's likely to\n> keep popping up otherwise.\n\nYeah, the one saving grace here is that, unlike the last time, we're not\nhard up against a release deadline. If we push this now we can hope to\nget a fair amount of testing from PG developers before we have to ship\nit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 23:02:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 9:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Thu, Nov 19, 2020 at 7:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Oh, scratch that, I fat-fingered the experiment somehow. The issue\n> >> is still there. Still, I'm hesitant to apply the fix you suggest,\n> >> because of the law of unintended consequences. In particular, I'm\n> >> afraid that using -isysroot in the link step might result in executables\n> >> that are bound to that sysroot and will not work if it's not there\n> >> --- thus causing a problem for packagers trying to distribute PG\n> >> to non-developers.\n>\n> > Since apple doesn't actually put real dylib's in the SDK path I think\n> > we're fine:\n> > $ find /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.0.sdk\n> > -name \"*.dylib\"\n>\n> True. Also, while actual documentation on -isysroot seems to be damn\n> near nonexistent, all the example usages I can find on the net appear\n> to put it into both compile and link steps. So maybe we're just doing\n> it wrong and happened to get away with that this long.\nYeah, pretty sure that's what happened from what I'm seeing.\n>\n> > This seems to be a longstanding issue, I expect they don't care much about\n> > autotools compared with their normal Xcode project build system. So probably\n> > better to just give this a try and see how it goes as it's likely to\n> > keep popping up otherwise.\n>\n> Yeah, the one saving grace here is that, unlike the last time, we're not\n> hard up against a release deadline. If we push this now we can hope to\n> get a fair amount of testing from PG developers before we have to ship\n> it.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Thu, 19 Nov 2020 21:13:00 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
},
{
"msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Thu, Nov 19, 2020 at 9:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> True. Also, while actual documentation on -isysroot seems to be damn\n>> near nonexistent, all the example usages I can find on the net appear\n>> to put it into both compile and link steps. So maybe we're just doing\n>> it wrong and happened to get away with that this long.\n\n> Yeah, pretty sure that's what happened from what I'm seeing.\n\nMy buildfarm critters seem to be OK with this, so pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Nov 2020 00:59:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Fix compilation on mac with Xcode >= 11.4."
}
] |
[
{
"msg_contents": "Hi,\n\nI'm not sure how to reach the cause and reproduce it.\nAny suggestion will be appreciated.\nPostgreSQL 11.7\npostgresql-42.2.12.jar\nWindows Server 2016\n\nThanks,\nKiyoshi\n\n2020-11-13 21:11:51.535 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1455)\n2020-11-13 21:11:51.536 JST [3652] LOG: could not fork autovacuum worker\nprocess: No error\n2020-11-13 21:11:52.548 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1455)\n2020-11-13 21:11:52.548 JST [3652] LOG: could not fork autovacuum worker\nprocess: No error\n2020-11-13 21:11:53.562 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1455)\n2020-11-13 21:11:53.562 JST [3652] LOG: could not fork autovacuum worker\nprocess: No error\n2020-11-13 21:11:54.578 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1455)\n2020-11-13 21:11:54.578 JST [3652] LOG: could not fork autovacuum worker\nprocess: No error\n2020-11-13 21:11:55.578 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1455)\n2020-11-13 21:11:55.578 JST [3652] LOG: could not fork autovacuum worker\nprocess: No error\n2020-11-13 21:11:55.957 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1455)\n2020-11-13 21:11:55.957 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:55.958 JST [3652] LOG: CreateProcess call failed: No\nerror (error code 1455)\n2020-11-13 21:11:55.958 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:55.967 JST [3652] LOG: CreateProcess call failed: 
No\nerror (error code 1455)\n2020-11-13 21:11:55.967 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:55.968 JST [3652] LOG: CreateProcess call failed: No\nerror (error code 1455)\n2020-11-13 21:11:55.968 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:55.974 JST [3652] LOG: CreateProcess call failed: No\nerror (error code 1455)\n2020-11-13 21:11:55.974 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:55.975 JST [3652] LOG: CreateProcess call failed: No\nerror (error code 1455)\n2020-11-13 21:11:55.975 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:55.979 JST [3652] LOG: CreateProcess call failed: No\nerror (error code 1455)\n2020-11-13 21:11:55.979 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:55.980 JST [3652] LOG: CreateProcess call failed: No\nerror (error code 1455)\n2020-11-13 21:11:55.980 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:56.594 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1455)\n2020-11-13 21:11:56.594 JST [3652] LOG: could not fork autovacuum worker\nprocess: No error\n2020-11-13 21:11:57.115 JST [3652] LOG: CreateProcess call failed: A\nblocking operation was interrupted by a call to WSACancelBlockingCall.\n\n(error code 1450)\n2020-11-13 21:11:57.115 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 21:11:57.116 JST [3652] LOG: could not map backend parameter\nmemory: error code 6\n2020-11-13 21:11:57.116 JST [3652] LOG: could not fork new process for\nconnection: No error\n2020-11-13 22:08:16.028 JST [17660] LOG: could not receive data from\nclient: An existing connection was forcibly closed by the remote host.\n\n\n2020-11-13 22:08:16.028 JST [12560] LOG: could not receive data from\nclient: 
An existing connection was forcibly closed by the remote host.",
"msg_date": "Fri, 20 Nov 2020 11:55:37 +0900",
"msg_from": "=?UTF-8?B?5rGf5bed5r2U?= <egawa@apteq.jp>",
"msg_from_op": true,
"msg_subject": "CreateProcess call failed: A blocking operation was interrupted by a\n call to WSACancelBlockingCall"
},
{
"msg_contents": "Here's a mailing list for Postgres development. Please post user questions to pgsql-general@lists.postgresql.org.\r\n\r\nWith that said, your system seems to be short of memory.\r\n\r\nSystem Error Codes (1300-1699) (WinError.h) - Win32 apps | Microsoft Docs\r\nhttps://docs.microsoft.com/en-us/windows/win32/debug/system-error-codes--1300-1699-\r\n\r\n\r\nERROR_NO_SYSTEM_RESOURCES\r\n1450 (0x5AA)\r\nInsufficient system resources exist to complete the requested service.\r\n\r\nERROR_COMMITMENT_LIMIT\r\n1455 (0x5AF)\r\nThe paging file is too small for this operation.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\nFrom: 江川潔 <egawa@apteq.jp>\r\nSent: Friday, November 20, 2020 11:56 AM\r\nTo: pgsql-hackers@lists.postgresql.org\r\nSubject: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall\r\n\r\nHi,\r\n\r\nI'm not sure how to reach the cause and reproduce it.\r\nAny suggestion will be appreciated.\r\nPostgreSQL 11.7\r\npostgresql-42.2.12.jar\r\nWindows Server 2016\r\n\r\nThanks,\r\nKiyoshi\r\n\r\n2020-11-13 21:11:51.535 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\r\n\r\n(error code 1455)\r\n2020-11-13 21:11:51.536 JST [3652] LOG: could not fork autovacuum worker process: No error\r\n2020-11-13 21:11:52.548 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\r\n\r\n(error code 1455)\r\n2020-11-13 21:11:52.548 JST [3652] LOG: could not fork autovacuum worker process: No error\r\n2020-11-13 21:11:53.562 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\r\n\r\n(error code 1455)\r\n2020-11-13 21:11:53.562 JST [3652] LOG: could not fork autovacuum worker process: No error\r\n2020-11-13 21:11:54.578 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to 
WSACancelBlockingCall.\r\n\r\n(error code 1455)\r\n2020-11-13 21:11:54.578 JST [3652] LOG: could not fork autovacuum worker process: No error\r\n2020-11-13 21:11:55.578 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\r\n\r\n(error code 1455)\r\n2020-11-13 21:11:55.578 JST [3652] LOG: could not fork autovacuum worker process: No error\r\n2020-11-13 21:11:55.957 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\r\n\r\n(error code 1455)\r\n2020-11-13 21:11:55.957 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:55.958 JST [3652] LOG: CreateProcess call failed: No error (error code 1455)\r\n2020-11-13 21:11:55.958 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:55.967 JST [3652] LOG: CreateProcess call failed: No error (error code 1455)\r\n2020-11-13 21:11:55.967 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:55.968 JST [3652] LOG: CreateProcess call failed: No error (error code 1455)\r\n2020-11-13 21:11:55.968 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:55.974 JST [3652] LOG: CreateProcess call failed: No error (error code 1455)\r\n2020-11-13 21:11:55.974 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:55.975 JST [3652] LOG: CreateProcess call failed: No error (error code 1455)\r\n2020-11-13 21:11:55.975 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:55.979 JST [3652] LOG: CreateProcess call failed: No error (error code 1455)\r\n2020-11-13 21:11:55.979 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:55.980 JST [3652] LOG: CreateProcess call failed: No error (error code 1455)\r\n2020-11-13 21:11:55.980 JST [3652] LOG: could not fork new process for 
connection: No error\r\n2020-11-13 21:11:56.594 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\r\n\r\n(error code 1455)\r\n2020-11-13 21:11:56.594 JST [3652] LOG: could not fork autovacuum worker process: No error\r\n2020-11-13 21:11:57.115 JST [3652] LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\r\n\r\n(error code 1450)\r\n2020-11-13 21:11:57.115 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 21:11:57.116 JST [3652] LOG: could not map backend parameter memory: error code 6\r\n2020-11-13 21:11:57.116 JST [3652] LOG: could not fork new process for connection: No error\r\n2020-11-13 22:08:16.028 JST [17660] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.\r\n\r\n\r\n2020-11-13 22:08:16.028 JST [12560] LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.",
"msg_date": "Fri, 20 Nov 2020 05:42:57 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: CreateProcess call failed: A blocking operation was interrupted\n by a call to WSACancelBlockingCall"
}
] |
[
{
"msg_contents": "Hi,\n\nThe pg_stat_progress_cluster view can report incorrect\nheap_blks_scanned values when synchronize_seqscans is enabled, because\nit allows the sequential heap scan to not start at block 0. This can\nresult in wraparounds in the heap_blks_scanned column when the table\nscan wraps around, and starting the next phase with heap_blks_scanned\n!= heap_blks_total. This issue was introduced with the\npg_stat_progress_cluster view.\n\nThe attached patch fixes the issue by accounting for a non-0\nheapScan->rs_startblock and calculating the correct number with a\nnon-0 heapScan->rs_startblock in mind.\n\nThe issue is reproducible starting from PG12 by stopping a\nCLUSTER-command while it is sequential-scanning the table (seqscan\nenforceable through enable_indexscan = off), and then re-starting the\nseqscanning CLUSTER command (without other load/seq-scans on the\ntable).\n\n\nAny thoughts?\n\nMatthias van de Meent",
"msg_date": "Fri, 20 Nov 2020 18:32:44 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "\n\nOn 2020/11/21 2:32, Matthias van de Meent wrote:\n> Hi,\n> \n> The pg_stat_progress_cluster view can report incorrect\n> heap_blks_scanned values when synchronize_seqscans is enabled, because\n> it allows the sequential heap scan to not start at block 0. This can\n> result in wraparounds in the heap_blks_scanned column when the table\n> scan wraps around, and starting the next phase with heap_blks_scanned\n> != heap_blks_total. This issue was introduced with the\n> pg_stat_progress_cluster view.\n\nGood catch! I agree that this is a bug.\n\n> \n> The attached patch fixes the issue by accounting for a non-0\n> heapScan->rs_startblock and calculating the correct number with a\n> non-0 heapScan->rs_startblock in mind.\n\nThanks for the patch! It basically looks good to me.\n\nIt's a bit waste of cycles to calculate and update the number of scanned\nblocks every cycles. So I'm inclined to change the code as follows.\nThought?\n\n+\tBlockNumber\tprev_cblock = InvalidBlockNumber;\n<snip>\n+\t\t\tif (prev_cblock != heapScan->rs_cblock)\n+\t\t\t{\n+\t\t\t\tpgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_BLKS_SCANNED,\n+\t\t\t\t\t\t\t\t\t\t\t (heapScan->rs_cblock +\n+\t\t\t\t\t\t\t\t\t\t\t heapScan->rs_nblocks -\n+\t\t\t\t\t\t\t\t\t\t\t heapScan->rs_startblock\n+\t\t\t\t\t\t\t\t\t\t\t\t ) % heapScan->rs_nblocks + 1);\n+\t\t\t\tprev_cblock = heapScan->rs_cblock;\n+\t\t\t}\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 24 Nov 2020 23:05:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "On Tue, 24 Nov 2020 at 15:05, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/11/21 2:32, Matthias van de Meent wrote:\n> > Hi,\n> >\n> > The pg_stat_progress_cluster view can report incorrect\n> > heap_blks_scanned values when synchronize_seqscans is enabled, because\n> > it allows the sequential heap scan to not start at block 0. This can\n> > result in wraparounds in the heap_blks_scanned column when the table\n> > scan wraps around, and starting the next phase with heap_blks_scanned\n> > != heap_blks_total. This issue was introduced with the\n> > pg_stat_progress_cluster view.\n>\n> Good catch! I agree that this is a bug.\n>\n> >\n> > The attached patch fixes the issue by accounting for a non-0\n> > heapScan->rs_startblock and calculating the correct number with a\n> > non-0 heapScan->rs_startblock in mind.\n>\n> Thanks for the patch! It basically looks good to me.\n\nThanks for the feedback!\n\n> It's a bit waste of cycles to calculate and update the number of scanned\n> blocks every cycles. So I'm inclined to change the code as follows.\n> Thought?\n>\n> + BlockNumber prev_cblock = InvalidBlockNumber;\n> <snip>\n> + if (prev_cblock != heapScan->rs_cblock)\n> + {\n> + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_BLKS_SCANNED,\n> + (heapScan->rs_cblock +\n> + heapScan->rs_nblocks -\n> + heapScan->rs_startblock\n> + ) % heapScan->rs_nblocks + 1);\n> + prev_cblock = heapScan->rs_cblock;\n> + }\n\nThat seems quite reasonable.\n\nI noticed that with my proposed patch it is still possible to go to\nthe next phase while heap_blks_scanned != heap_blks_total. This can\nhappen when the final heap pages contain only dead tuples, so no tuple\nis returned from the last heap page(s) of the scan. 
As the\nheapScan->rs_cblock is set to InvalidBlockNumber when the scan is\nfinished (see heapam.c#1060-1072), I think it would be correct to set\nheap_blks_scanned to heapScan->rs_nblocks at the end of the scan\ninstead.\n\nPlease find attached a patch applying the suggested changes.\n\nMatthias van de Meent",
"msg_date": "Tue, 24 Nov 2020 16:25:03 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "\n\nOn 2020/11/25 0:25, Matthias van de Meent wrote:\n> On Tue, 24 Nov 2020 at 15:05, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/11/21 2:32, Matthias van de Meent wrote:\n>>> Hi,\n>>>\n>>> The pg_stat_progress_cluster view can report incorrect\n>>> heap_blks_scanned values when synchronize_seqscans is enabled, because\n>>> it allows the sequential heap scan to not start at block 0. This can\n>>> result in wraparounds in the heap_blks_scanned column when the table\n>>> scan wraps around, and starting the next phase with heap_blks_scanned\n>>> != heap_blks_total. This issue was introduced with the\n>>> pg_stat_progress_cluster view.\n>>\n>> Good catch! I agree that this is a bug.\n>>\n>>>\n>>> The attached patch fixes the issue by accounting for a non-0\n>>> heapScan->rs_startblock and calculating the correct number with a\n>>> non-0 heapScan->rs_startblock in mind.\n>>\n>> Thanks for the patch! It basically looks good to me.\n> \n> Thanks for the feedback!\n> \n>> It's a bit waste of cycles to calculate and update the number of scanned\n>> blocks every cycles. So I'm inclined to change the code as follows.\n>> Thought?\n>>\n>> + BlockNumber prev_cblock = InvalidBlockNumber;\n>> <snip>\n>> + if (prev_cblock != heapScan->rs_cblock)\n>> + {\n>> + pgstat_progress_update_param(PROGRESS_CLUSTER_HEAP_BLKS_SCANNED,\n>> + (heapScan->rs_cblock +\n>> + heapScan->rs_nblocks -\n>> + heapScan->rs_startblock\n>> + ) % heapScan->rs_nblocks + 1);\n>> + prev_cblock = heapScan->rs_cblock;\n>> + }\n> \n> That seems quite reasonable.\n> \n> I noticed that with my proposed patch it is still possible to go to\n> the next phase while heap_blks_scanned != heap_blks_total. This can\n> happen when the final heap pages contain only dead tuples, so no tuple\n> is returned from the last heap page(s) of the scan. 
As the\n> heapScan->rs_cblock is set to InvalidBlockNumber when the scan is\n> finished (see heapam.c#1060-1072), I think it would be correct to set\n> heap_blks_scanned to heapScan->rs_nblocks at the end of the scan\n> instead.\n\nThanks for updating the patch!\n\nPlease let me check my understanding about this. I was thinking that even\nwhen the last page contains only dead tuples, table_scan_getnextslot()\nreturns the last page (because SnapshotAny is used) and heap_blks_scanned\nis incremented properly. And then, heapam_relation_copy_for_cluster()\nhandles all the dead tuples in that page. No?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 25 Nov 2020 18:35:41 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "On Wed, 25 Nov 2020 at 10:35, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/11/25 0:25, Matthias van de Meent wrote:\n> >\n> > I noticed that with my proposed patch it is still possible to go to\n> > the next phase while heap_blks_scanned != heap_blks_total. This can\n> > happen when the final heap pages contain only dead tuples, so no tuple\n> > is returned from the last heap page(s) of the scan. As the\n> > heapScan->rs_cblock is set to InvalidBlockNumber when the scan is\n> > finished (see heapam.c#1060-1072), I think it would be correct to set\n> > heap_blks_scanned to heapScan->rs_nblocks at the end of the scan\n> > instead.\n>\n> Thanks for updating the patch!\n>\n> Please let me check my understanding about this. I was thinking that even\n> when the last page contains only dead tuples, table_scan_getnextslot()\n> returns the last page (because SnapshotAny is used) and heap_blks_scanned\n> is incremented properly. And then, heapam_relation_copy_for_cluster()\n> handles all the dead tuples in that page. No?\n\nYes, my description was incorrect.\n\nThe potential for a discrepancy exists for seemingly empty pages. I\nthought that 'filled with only dead tuples' would be correct for\n'seemingly empty', but SnapshotAny indeed returns all tuples on a\npage. 
But pages can still be empty with SnapshotAny, through being\nemptied by vacuum, so the last pages scanned can still be empty pages.\nThen, there would be no tuple returned at the last pages of the scan,\nand subsequently the CLUSTER/VACUUM FULL would start the next phase\nwithout reporting on the last pages that were scanned and had no\ntuples in them (such that heap_blks_scanned != heap_blks_total).\n\nVacuum truncates empty blocks from the end of the relation, and this\nprevents empty blocks at the end of CLUSTER for the default case of\ntable scans starting at 0; but because the table scan might not start\nat block 0, we can have an empty page at the end of the table scan due\nto the last page of the scan not being the last page of the table.\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 25 Nov 2020 12:52:11 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "\n\nOn 2020/11/25 20:52, Matthias van de Meent wrote:\n> On Wed, 25 Nov 2020 at 10:35, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/11/25 0:25, Matthias van de Meent wrote:\n>>>\n>>> I noticed that with my proposed patch it is still possible to go to\n>>> the next phase while heap_blks_scanned != heap_blks_total. This can\n>>> happen when the final heap pages contain only dead tuples, so no tuple\n>>> is returned from the last heap page(s) of the scan. As the\n>>> heapScan->rs_cblock is set to InvalidBlockNumber when the scan is\n>>> finished (see heapam.c#1060-1072), I think it would be correct to set\n>>> heap_blks_scanned to heapScan->rs_nblocks at the end of the scan\n>>> instead.\n>>\n>> Thanks for updating the patch!\n>>\n>> Please let me check my understanding about this. I was thinking that even\n>> when the last page contains only dead tuples, table_scan_getnextslot()\n>> returns the last page (because SnapshotAny is used) and heap_blks_scanned\n>> is incremented properly. And then, heapam_relation_copy_for_cluster()\n>> handles all the dead tuples in that page. No?\n> \n> Yes, my description was incorrect.\n> \n> The potential for a discrepancy exists for seemingly empty pages. I\n> thought that 'filled with only dead tuples' would be correct for\n> 'seemingly empty', but SnapshotAny indeed returns all tuples on a\n> page. 
But pages can still be empty with SnapshotAny, through being\n> emptied by vacuum, so the last pages scanned can still be empty pages.\n> Then, there would be no tuple returned at the last pages of the scan,\n> and subsequently the CLUSTER/VACUUM FULL would start the next phase\n> without reporting on the last pages that were scanned and had no\n> tuples in them (such that heap_blks_scanned != heap_blks_total).\n> \n> Vacuum truncates empty blocks from the end of the relation, and this\n> prevents empty blocks at the end of CLUSTER for the default case of\n> table scans starting at 0; but because the table scan might not start\n> at block 0, we can have an empty page at the end of the table scan due\n> to the last page of the scan not being the last page of the table.\n\nThanks for explaining that! I understood that.\nThat situation can happen easily also by using VACUUM (TRUNCATE off) command.\n\n+ * A heap scan need not return tuples for the last page it has\n+ * scanned. To ensure that heap_blks_scanned is equivalent to\n+ * total_heap_blks after the table scan phase, this parameter\n+ * is manually updated to the correct value when the table scan\n+ * finishes.\n\nSo it's better to update this comment a bit? For example,\n\n If the scanned last pages are empty, it's possible to go to the next phase\n while heap_blks_scanned != heap_blks_total. To ensure that they are\n equivalet after the table scan phase, this parameter is manually updated\n to the correct value when the table scan finishes.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 26 Nov 2020 08:55:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "On Thu, 26 Nov 2020 at 00:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> + * A heap scan need not return tuples for the last page it has\n> + * scanned. To ensure that heap_blks_scanned is equivalent to\n> + * total_heap_blks after the table scan phase, this parameter\n> + * is manually updated to the correct value when the table scan\n> + * finishes.\n>\n> So it's better to update this comment a bit? For example,\n>\n> If the scanned last pages are empty, it's possible to go to the next phase\n> while heap_blks_scanned != heap_blks_total. To ensure that they are\n> equivalet after the table scan phase, this parameter is manually updated\n> to the correct value when the table scan finishes.\n>\n\nPFA a patch with updated message and comment. I've reworded the\nmessages to specifically mention empty pages and the need for setting\nheap_blks_scanned = total_heap_blks explicitly.\n\nFeel free to update the specifics, other than that I think this is a\ncomplete fix for the issues at hand.\n\nRegards,\n\nMatthias van de Meent",
"msg_date": "Thu, 26 Nov 2020 17:45:21 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "\n\nOn 2020/11/27 1:45, Matthias van de Meent wrote:\n> On Thu, 26 Nov 2020 at 00:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> + * A heap scan need not return tuples for the last page it has\n>> + * scanned. To ensure that heap_blks_scanned is equivalent to\n>> + * total_heap_blks after the table scan phase, this parameter\n>> + * is manually updated to the correct value when the table scan\n>> + * finishes.\n>>\n>> So it's better to update this comment a bit? For example,\n>>\n>> If the scanned last pages are empty, it's possible to go to the next phase\n>> while heap_blks_scanned != heap_blks_total. To ensure that they are\n>> equivalet after the table scan phase, this parameter is manually updated\n>> to the correct value when the table scan finishes.\n>>\n> \n> PFA a patch with updated message and comment. I've reworded the\n> messages to specifically mention empty pages and the need for setting\n> heap_blks_scanned = total_heap_blks explicitly.\n\nThanks for updating the patch! It looks good to me.\nBarring any objection, I will commit this patch (also back-patch to v12).\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 27 Nov 2020 01:51:20 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
},
{
"msg_contents": "\n\nOn 2020/11/27 1:51, Fujii Masao wrote:\n> \n> \n> On 2020/11/27 1:45, Matthias van de Meent wrote:\n>> On Thu, 26 Nov 2020 at 00:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>> + * A heap scan need not return tuples for the last page it has\n>>> + * scanned. To ensure that heap_blks_scanned is equivalent to\n>>> + * total_heap_blks after the table scan phase, this parameter\n>>> + * is manually updated to the correct value when the table scan\n>>> + * finishes.\n>>>\n>>> So it's better to update this comment a bit? For example,\n>>>\n>>> If the scanned last pages are empty, it's possible to go to the next phase\n>>> while heap_blks_scanned != heap_blks_total. To ensure that they are\n>>> equivalet after the table scan phase, this parameter is manually updated\n>>> to the correct value when the table scan finishes.\n>>>\n>>\n>> PFA a patch with updated message and comment. I've reworded the\n>> messages to specifically mention empty pages and the need for setting\n>> heap_blks_scanned = total_heap_blks explicitly.\n> \n> Thanks for updating the patch! It looks good to me.\n> Barring any objection, I will commit this patch (also back-patch to v12).\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 27 Nov 2020 20:21:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] CLUSTER blocks scanned progress reporting"
}
] |
[
{
"msg_contents": "Greetings,\n\nAttached is a patch to move from 'default role' terminology to\n'predefined role' in the documentation. In the code, I figured it made\nmore sense to avoid saying either one and instead opted for just\n'ROLE_NAME_OF_ROLE' as the #define, which we were very close to (one\nremoving the 'DEFAULT' prefix) but didn't actually entirely follow.\n\nThis would only apply to HEAD, and we'd use the doc redirect + doc alias\nfeatures of the website to prevent links to things like /current from\nbreaking and to provide a smooth path. In the back-branches there would\nbe a smaller update that just notes that in v14 'default roles' were\nrenamed to 'predefined roles'.\n\nCompiles and passes regressions, and I didn't find any other obvious\nreferences to 'default roles'. Where I did find such comments in the\ncode, I opted generally to just remove the 'default' qualification as it\nwasn't really needed.\n\nI had contemplated calling these 'capabilities' or similar but ended up\ndeciding not to- they really are roles, after all.\n\nThoughts?\n\nThanks,\n\nStephen",
"msg_date": "Fri, 20 Nov 2020 16:13:05 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Default role -> Predefined role"
},
{
"msg_contents": "> On 20 Nov 2020, at 22:13, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Attached is a patch to move from 'default role' terminology to\n> 'predefined role' in the documentation. In the code, I figured it made\n> more sense to avoid saying either one and instead opted for just\n> 'ROLE_NAME_OF_ROLE' as the #define, which we were very close to (one\n> removing the 'DEFAULT' prefix) but didn't actually entirely follow.\n\nPredefined is a much better terminology than default for these roles, makes the\nconcept a lot clearer. +1 on this patch.\n\n> I had contemplated calling these 'capabilities' or similar but ended up\n> deciding not to- they really are roles, after all.\n\nAgreed, roles is better here.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 6 Dec 2020 22:20:45 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Default role -> Predefined role"
},
{
"msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 20 Nov 2020, at 22:13, Stephen Frost <sfrost@snowman.net> wrote:\n> > Attached is a patch to move from 'default role' terminology to\n> > 'predefined role' in the documentation. In the code, I figured it made\n> > more sense to avoid saying either one and instead opted for just\n> > 'ROLE_NAME_OF_ROLE' as the #define, which we were very close to (one\n> > removing the 'DEFAULT' prefix) but didn't actually entirely follow.\n> \n> Predefined is a much better terminology than default for these roles, makes the\n> concept a lot clearer. +1 on this patch.\n\nAlright, now that 3b0c647bbfc52894d979976f1e6d60e40649bba7 has gone in,\nwe can come back to the topic of renaming default roles to predefined\nroles as of v14.\n\nAttached is a patch which does that and adds a section to the obsolete\nappendix which should result in the existing links to the default roles\npage continuing to work (but going to the page explaining that they've\nbeen renamed to predefined roles) once v14 is released with the update.\n\nUnless there's anything further on this, I'll plan to push in the next\nday or so.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 31 Mar 2021 17:15:27 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Default role -> Predefined role"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> Unless there's anything further on this, I'll plan to push in the next\n> day or so.\n\n... and done.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 1 Apr 2021 15:33:45 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Default role -> Predefined role"
}
] |
[
{
"msg_contents": "I figured out that my TLS version was too low in the libpq call and increased it to TLS v1.1\nShould I go to 1.2? I am wondering because I do not want to limit compatibility.\n\nOnce I got past that hurdle, I am getting the error \"ssl error: the certificate verify failed\"\nSince I built the certificates myself self-signed, I am assuming I did something that Postgres does not like.\nI should mention that I am using the Windows environment for testing (I will test Linux after Windows succeeds).\n\nI would like to have all my certificates and keys on the same machine (localhost for local connections and dcorbit for tcp/ip).\nI found a couple tutorials and tried them but it failed.\nI saw one document that said the common name should be the postgres user name and that it should also be the connecting machine name. Is that correct?\nIs there a document or tutorial that explains the correct steps?\nEqually important, is there a way to get more complete diagnostics when something goes wrong (like WHY did the certificate verify fail)?\n\n\n\n\n\n\n\n\n\nI figured out that my TLS version was too low in the libpq call and increased it to TLS v1.1\n\nShould I go to 1.2? I am wondering because I do not want to limit compatibility.\n\n\n\n\nOnce I got past that hurdle, I am getting the error \"ssl error: the certificate verify failed\"\n\nSince I built the certificates myself self-signed, I am assuming I did something that Postgres does not like.\n\nI should mention that I am using the Windows environment for testing (I will test Linux after Windows succeeds).\n\n\n\n\nI would like to have all my certificates and keys on the same machine (localhost for local connections and dcorbit for tcp/ip).\n\nI found a couple tutorials and tried them but it failed.\n\nI saw one document that said the common name should be the postgres user name and that it should also be the connecting machine name. 
Is that correct?\n\nIs there a document or tutorial that explains the correct steps?\n\nEqually important, is there a way to get more complete diagnostics when something goes wrong (like WHY did the certificate verify fail)?",
"msg_date": "Fri, 20 Nov 2020 21:54:59 +0000",
"msg_from": "\"Corbit, Dann\" <Dann.Corbit@softwareag.com>",
"msg_from_op": true,
"msg_subject": "Connection using ODBC and SSL"
},
{
"msg_contents": "\"Corbit, Dann\" <Dann.Corbit@softwareag.com> writes:\n> I figured out that my TLS version was too low in the libpq call and increased it to TLS v1.1\n> Should I go to 1.2? I am wondering because I do not want to limit compatibility.\n\nPG 13 and up consider that 1.2 is the *minimum* secure version.\nQuoting from the commit log:\n\n Change libpq's default ssl_min_protocol_version to TLSv1.2.\n \n When we initially created this parameter, in commit ff8ca5fad, we left\n the default as \"allow any protocol version\" on grounds of backwards\n compatibility. However, that's inconsistent with the backend's default\n since b1abfec82; protocol versions prior to 1.2 are not considered very\n secure; and OpenSSL has had TLSv1.2 support since 2012, so the number\n of PG servers that need a lesser minimum is probably quite small.\n \n On top of those things, it emerges that some popular distros (including\n Debian and RHEL) set MinProtocol=TLSv1.2 in openssl.cnf. Thus, far\n from having \"allow any protocol version\" behavior in practice, what\n we actually have as things stand is a platform-dependent lower limit.\n \n So, change our minds and set the min version to TLSv1.2. Anybody\n wanting to connect with a new libpq to a pre-2012 server can either\n set ssl_min_protocol_version=TLSv1 or accept the fallback to non-SSL.\n \n Back-patch to v13 where the aforementioned patches appeared.\n\n> Once I got past that hurdle, I am getting the error \"ssl error: the certificate verify failed\"\n> Since I built the certificates myself self-signed, I am assuming I did something that Postgres does not like.\n\nThe process in our docs worked for me last time I tried it:\n\nhttps://www.postgresql.org/docs/current/ssl-tcp.html#SSL-CERTIFICATE-CREATION\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Nov 2020 13:07:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Connection using ODBC and SSL"
},
{
"msg_contents": "\nOn 11/20/20 4:54 PM, Corbit, Dann wrote:\n>\n> I would like to have all my certificates and keys on the same machine\n> (localhost for local connections and dcorbit for tcp/ip).\n> I found a couple tutorials and tried them but it failed.\n> I saw one document that said the common name should be the postgres\n> user name and that it should also be the connecting machine name.� Is\n> that correct?\n> Is there a document or tutorial that explains the correct steps?\n\n\n\nI did a webinar about a year ago that went into some detail about what\nyou need in the CN, where the certificates go, etc.\n\n\nSee\n<https://resources.2ndquadrant.com/using-ssl-with-postgresql-and-pgbouncer>\n(Yes, this is a corporate webinar, sorry about that)\n\n\n\n\n> Equally important, is there a way to get more complete diagnostics\n> when something goes wrong (like WHY did the certificate verify fail)?\n>\n\nThe diagnostics in the Postgres log are usually fairly explanatory.\n\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Sat, 21 Nov 2020 14:14:22 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Connection using ODBC and SSL"
},
{
"msg_contents": "Thank you for the assistance.\n\n________________________________\nFrom: Andrew Dunstan <andrew@dunslane.net>\nSent: Saturday, November 21, 2020 11:14\nTo: Corbit, Dann <Dann.Corbit@softwareag.com>; PostgreSQL Developers <pgsql-hackers@lists.postgresql.org>\nCc: Luton, Bill <Bill.Luton@softwareag.com>; Fifer, Brian <Brian.Fifer@softwareag.com>; Lao, Alexander <Alexander.Lao@softwareag.com>\nSubject: Re: Connection using ODBC and SSL\n\n\nOn 11/20/20 4:54 PM, Corbit, Dann wrote:\n>\n> I would like to have all my certificates and keys on the same machine\n> (localhost for local connections and dcorbit for tcp/ip).\n> I found a couple tutorials and tried them but it failed.\n> I saw one document that said the common name should be the postgres\n> user name and that it should also be the connecting machine name. Is\n> that correct?\n> Is there a document or tutorial that explains the correct steps?\n\n\n\nI did a webinar about a year ago that went into some detail about what\nyou need in the CN, where the certificates go, etc.\n\n\nSee\n<https://resources.2ndquadrant.com/using-ssl-with-postgresql-and-pgbouncer>\n(Yes, this is a corporate webinar, sorry about that)\n\n\n\n\n> Equally important, is there a way to get more complete diagnostics\n> when something goes wrong (like WHY did the certificate verify fail)?\n>\n\nThe diagnostics in the Postgres log are usually fairly explanatory.\n\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n\n\n\nThank you for the assistance.\n\n\n\n\n\n\nFrom: Andrew Dunstan <andrew@dunslane.net>\nSent: Saturday, November 21, 2020 11:14\nTo: Corbit, Dann <Dann.Corbit@softwareag.com>; PostgreSQL Developers <pgsql-hackers@lists.postgresql.org>\nCc: Luton, Bill <Bill.Luton@softwareag.com>; Fifer, Brian <Brian.Fifer@softwareag.com>; Lao, Alexander <Alexander.Lao@softwareag.com>\nSubject: Re: Connection using ODBC and SSL\n \n\n\n\nOn 11/20/20 4:54 PM, Corbit, Dann wrote:\n>\n> I would like to have all my certificates and keys on the same 
machine\n> (localhost for local connections and dcorbit for tcp/ip).\n> I found a couple tutorials and tried them but it failed.\n> I saw one document that said the common name should be the postgres\n> user name and that it should also be the connecting machine name. Is\n> that correct?\n> Is there a document or tutorial that explains the correct steps?\n\n\n\nI did a webinar about a year ago that went into some detail about what\nyou need in the CN, where the certificates go, etc.\n\n\nSee\n<https://resources.2ndquadrant.com/using-ssl-with-postgresql-and-pgbouncer>\n(Yes, this is a corporate webinar, sorry about that)\n\n\n\n\n> Equally important, is there a way to get more complete diagnostics\n> when something goes wrong (like WHY did the certificate verify fail)?\n>\n\nThe diagnostics in the Postgres log are usually fairly explanatory.\n\n\n\ncheers\n\n\nandrew",
"msg_date": "Mon, 23 Nov 2020 17:49:01 +0000",
"msg_from": "\"Corbit, Dann\" <Dann.Corbit@softwareag.com>",
"msg_from_op": true,
"msg_subject": "Re: Connection using ODBC and SSL"
}
] |
[
{
"msg_contents": "While looking at another issue I noticed that create_gather_merge_plan\ncalls make_sort if the subplan isn't sufficiently sorted. In all of\nthe cases I've seen where a gather merge path (not plan) is created\nthe input path is expected to be properly sorted, so I was wondering\nif anyone happened to know what case is being handled by the make_sort\ncall. Removing it doesn't seem to break any tests.\n\nThanks,\nJames\n\n\n",
"msg_date": "Fri, 20 Nov 2020 17:24:31 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "On 11/20/20 11:24 PM, James Coleman wrote:\n> While looking at another issue I noticed that create_gather_merge_plan\n> calls make_sort if the subplan isn't sufficiently sorted. In all of\n> the cases I've seen where a gather merge path (not plan) is created\n> the input path is expected to be properly sorted, so I was wondering\n> if anyone happened to know what case is being handled by the make_sort\n> call. Removing it doesn't seem to break any tests.\n> \n\nYeah, I think you're right this is dead code, essentially. We're only\never calling create_gather_merge_path() with pathkeys matching the\nsubpath. And it's like that on REL_12_STABLE too, i.e. before the\nincremental sort was introduced.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 22 Nov 2020 22:22:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 11/20/20 11:24 PM, James Coleman wrote:\n>> While looking at another issue I noticed that create_gather_merge_plan\n>> calls make_sort if the subplan isn't sufficiently sorted. In all of\n>> the cases I've seen where a gather merge path (not plan) is created\n>> the input path is expected to be properly sorted, so I was wondering\n>> if anyone happened to know what case is being handled by the make_sort\n>> call. Removing it doesn't seem to break any tests.\n\n> Yeah, I think you're right this is dead code, essentially. We're only\n> ever calling create_gather_merge_path() with pathkeys matching the\n> subpath. And it's like that on REL_12_STABLE too, i.e. before the\n> incremental sort was introduced.\n\nIt's probably there by analogy to the other callers of\nprepare_sort_from_pathkeys, which all do at least a conditional\nmake_sort(). I'd be inclined to leave it there; it's cheap insurance\nagainst somebody weakening the existing behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Nov 2020 16:31:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "On 11/22/20 10:31 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 11/20/20 11:24 PM, James Coleman wrote:\n>>> While looking at another issue I noticed that create_gather_merge_plan\n>>> calls make_sort if the subplan isn't sufficiently sorted. In all of\n>>> the cases I've seen where a gather merge path (not plan) is created\n>>> the input path is expected to be properly sorted, so I was wondering\n>>> if anyone happened to know what case is being handled by the make_sort\n>>> call. Removing it doesn't seem to break any tests.\n> \n>> Yeah, I think you're right this is dead code, essentially. We're only\n>> ever calling create_gather_merge_path() with pathkeys matching the\n>> subpath. And it's like that on REL_12_STABLE too, i.e. before the\n>> incremental sort was introduced.\n> \n> It's probably there by analogy to the other callers of\n> prepare_sort_from_pathkeys, which all do at least a conditional\n> make_sort(). I'd be inclined to leave it there; it's cheap insurance\n> against somebody weakening the existing behavior.\n> \n\nBut how do we know it's safe to actually do the sort there, e.g. in\nlight of the volatility/parallel-safety issues discussed in other threads?\n\nI agree the check may be useful, but maybe we should just do elog(ERROR)\ninstead.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 22 Nov 2020 23:07:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 5:07 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 11/22/20 10:31 PM, Tom Lane wrote:\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> >> On 11/20/20 11:24 PM, James Coleman wrote:\n> >>> While looking at another issue I noticed that create_gather_merge_plan\n> >>> calls make_sort if the subplan isn't sufficiently sorted. In all of\n> >>> the cases I've seen where a gather merge path (not plan) is created\n> >>> the input path is expected to be properly sorted, so I was wondering\n> >>> if anyone happened to know what case is being handled by the make_sort\n> >>> call. Removing it doesn't seem to break any tests.\n> >\n> >> Yeah, I think you're right this is dead code, essentially. We're only\n> >> ever calling create_gather_merge_path() with pathkeys matching the\n> >> subpath. And it's like that on REL_12_STABLE too, i.e. before the\n> >> incremental sort was introduced.\n> >\n> > It's probably there by analogy to the other callers of\n> > prepare_sort_from_pathkeys, which all do at least a conditional\n> > make_sort(). I'd be inclined to leave it there; it's cheap insurance\n> > against somebody weakening the existing behavior.\n> >\n>\n> But how do we know it's safe to actually do the sort there, e.g. in\n> light of the volatility/parallel-safety issues discussed in other threads?\n>\n> I agree the check may be useful, but maybe we should just do elog(ERROR)\n> instead.\n\nThat was my thought. I'm not a big fan of maintaining a \"this might be\nuseful\" path particularly when there isn't any case in the code or\ntests at all that exercises it. In other words, not only is it not\nuseful [yet], but also we don't actually have any signal to know that\nit works (or keeps working) -- whether through tests or production\nuse.\n\nSo I'm +1 on turning it into an ERROR log instead, since it seems to\nme that encountering this case would almost certainly represent a bug\nin path generation.\n\nJames\n\n\n",
"msg_date": "Mon, 23 Nov 2020 08:19:36 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 8:19 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Sun, Nov 22, 2020 at 5:07 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 11/22/20 10:31 PM, Tom Lane wrote:\n> > > Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > >> On 11/20/20 11:24 PM, James Coleman wrote:\n> > >>> While looking at another issue I noticed that create_gather_merge_plan\n> > >>> calls make_sort if the subplan isn't sufficiently sorted. In all of\n> > >>> the cases I've seen where a gather merge path (not plan) is created\n> > >>> the input path is expected to be properly sorted, so I was wondering\n> > >>> if anyone happened to know what case is being handled by the make_sort\n> > >>> call. Removing it doesn't seem to break any tests.\n> > >\n> > >> Yeah, I think you're right this is dead code, essentially. We're only\n> > >> ever calling create_gather_merge_path() with pathkeys matching the\n> > >> subpath. And it's like that on REL_12_STABLE too, i.e. before the\n> > >> incremental sort was introduced.\n> > >\n> > > It's probably there by analogy to the other callers of\n> > > prepare_sort_from_pathkeys, which all do at least a conditional\n> > > make_sort(). I'd be inclined to leave it there; it's cheap insurance\n> > > against somebody weakening the existing behavior.\n> > >\n> >\n> > But how do we know it's safe to actually do the sort there, e.g. in\n> > light of the volatility/parallel-safety issues discussed in other threads?\n> >\n> > I agree the check may be useful, but maybe we should just do elog(ERROR)\n> > instead.\n>\n> That was my thought. I'm not a big fan of maintaining a \"this might be\n> useful\" path particularly when there isn't any case in the code or\n> tests at all that exercises it. In other words, not only is it not\n> useful [yet], but also we don't actually have any signal to know that\n> it works (or keeps working) -- whether through tests or production\n> use.\n>\n> So I'm +1 on turning it into an ERROR log instead, since it seems to\n> me that encountering this case would almost certainly represent a bug\n> in path generation.\n\nHere's a patch to do that.\n\nJames",
"msg_date": "Sun, 29 Nov 2020 09:44:33 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "\n\nOn 11/29/20 3:44 PM, James Coleman wrote:\n> On Mon, Nov 23, 2020 at 8:19 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> ..\n>>\n>> So I'm +1 on turning it into an ERROR log instead, since it seems to\n>> me that encountering this case would almost certainly represent a bug\n>> in path generation.\n> \n> Here's a patch to do that.\n>\n\nThanks. Isn't the comment incomplete? Should it mention just parallel\nsafety or also volatility?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Nov 2020 22:10:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 4:10 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 11/29/20 3:44 PM, James Coleman wrote:\n> > On Mon, Nov 23, 2020 at 8:19 AM James Coleman <jtc331@gmail.com> wrote:\n> >>\n> >> ..\n> >>\n> >> So I'm +1 on turning it into an ERROR log instead, since it seems to\n> >> me that encountering this case would almost certainly represent a bug\n> >> in path generation.\n> >\n> > Here's a patch to do that.\n> >\n>\n> Thanks. Isn't the comment incomplete? Should it mention just parallel\n> safety or also volatility?\n\nVolatility makes it parallel unsafe, and I'm not sure I want to get\ninto listing every possible issue here, but in the interest of making\nit more obviously representative of the kinds of issues we might\nencounter, I've tweaked it to mention volatility also, as well as\nrefer to the issues as \"examples\" of such concerns.\n\nJames",
"msg_date": "Mon, 30 Nov 2020 20:13:12 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
},
{
"msg_contents": "For the record, this got committed as 6bc27698324 in December, along \nwith the other incremental sort fixes and improvements. I forgot to mark \nit as committed in the app, so I've done that now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Jan 2021 02:43:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does create_gather_merge_plan need make_sort?"
}
] |
[
{
"msg_contents": "Over on the -bugs list we had a report [1] of a seg fault with\nincremental sort. The short of the investigation there was that a\nsubplan wasn't being serialized since it wasn't parallel safe, and\nthat attempting to initialize that subplan resulted in the seg fault.\nTom pushed a change to raise an ERROR when this occurs so that at\nleast we don't crash the server.\n\nThere was some small amount of discussion about this on a thread about\na somewhat similar issue [2] (where volatile expressions were the\nissue), but that thread resulted in a committed patch, and beyond that\nit seems that this issue deserves its own discussion.\n\nI've been digging further into the problem, and my first question was\n\"why doesn't this result in a similar error but with a full sort when\nincremental sort is disabled?\" After all, we have code to consider a\ngather merge + full sort on the cheapest partial path in\ngenerate_useful_gather_paths(), and the plan that I get on Luis's test\ncase with incremental sort disabled has a full sort above a gather,\nwhich presumably it'd be cheaper to push down into the parallel plan\n(using gather merge instead of gather).\n\nIt turns out that there's a separate bug here: I'm guessing that in\nthe original commit of generate_useful_gather_paths() we had a\ncopy/paste thinko in this code:\n\n/*\n * If the path has no ordering at all, then we can't use either\n * incremental sort or rely on implicit sorting with a gather\n * merge.\n */\nif (subpath->pathkeys == NIL)\n continue;\n\nThe comment claims that we should skip subpaths that aren't sorted at\nall since we can't possibly use a gather merge directly or with an\nincremental sort applied first. It's true that we can't do either of\nthose things unless the subpath is already at least partially sorted.\nBut subsequently we attempt to construct a gather merge path on top of\na full sort (for the cheapest path), and there's no reason to limit\nthat to partially sorted paths (indeed in the presence of incremental\nsort it seems unlikely that that would be the cheapest path).\n\nWe still don't want to build incremental sort paths if the subpath has\nno pathkeys, but that will imply presorted_keys = 0, which we already\ncheck.\n\nSo 0001 removes that branch entirely. And, as expected, we now get a\ngather merge + full sort and the resulting error about subplan not being\ninitialized.\n\nWith that oddity out of the way, I started looking at the actually\nreported problem, and nominally the issue is that we have a correlated\nsubquery (in the final target list and the sort clause), and\ncorrelated subqueries (not sure why exactly in this case; see [3])\naren't considered parallel-safe.\n\nAs to why that's happening:\n1. find_em_expr_usable_for_sorting_rel doesn't exclude that\nexpression, and arguably correctly so in the general case since one\nmight (though we don't now) use this same kind of logic for\nnon-parallel plans.\n2. generate_useful_pathkeys_for_relation is noting that it'd be useful\nto sort on that expression and sees it as safe in part due to (1).\n3. We create a sort node that includes that expression in its output.\nThat seems pretty much the same (in terms of safety given generalized\ninput paths/plans, not in terms of what Robert brought up in [4]) as\nwhat would happen in the prepare_sort_from_pathkeys call in\ncreate_gather_merge_plan.\n\nSo to fix this problem I've added the option to find_em_expr_for_rel\nto restrict to parallel-safe expressions in 0002.\n\nGiven point 3 above (and the discussion with Robert earlier) it\nseems to me that there's a bigger problem here that just hasn't\nhappened to have been exposed yet: in particular the\nprepare_sort_from_pathkeys call in create_gather_merge_plan seems\nsuspect to me. But it's also possible other usages of\nprepare_sort_from_pathkeys could be suspect given other seemingly\nunrelated changes to path generation, since nothing other than\nimplicit knowledge about the conventions here prevents us from\nintroducing paths generating sorts with expressions not in the target\nlist (in fact a part of the issue here IMO is that non-var expression\npathkeys are added to the target list after path generation and\nselection is completed).\n\nAlternatively we could \"fix\" this by being even more conservative in\nfind_em_expr_usable_for_sorting_rel and disregard any expressions not\ncurrently found in the target list. But that seems unsatisfactory to\nme since, given we know that there are cases where\nprepare_sort_from_pathkeys is explicitly intended to add pathkey\nexpressions to the target list, we'd be excluding possible useful\ncases (and indeed we already have, albeit contrived, test cases that\nfail if that change is made).\n\nThere exists a very similar path in the postgres fdw code (in fact the\nfunction name is the same: get_useful_pathkeys_for_relation) since\nfind_em_expr_for_rel is much simpler (it doesn't do many safety checks\nat all like we do in find_em_expr_usable_for_sorting_rel), but I think\nthat's safe (though I haven't thought about it a ton) because I\nbelieve those are sort keys we're going to send to the foreign server\nanyway, as opposed to sorting ourselves (but I haven't verified that\nthat's the case and that we never create sort nodes there).\n\nJames\n\n1: https://www.postgresql.org/message-id/622580997.37108180.1604080457319.JavaMail.zimbra%40siscobra.com.br\n2: https://www.postgresql.org/message-id/CAJGNTeNaxpXgBVcRhJX%2B2vSbq%2BF2kJqGBcvompmpvXb7pq%2BoFA%40mail.gmail.com\n3: https://www.postgresql.org/message-id/CAAaqYe_vihKjc%2B8LuQa49EHW4%2BKfefb3wHqPYFnCuUqozo%2BLFg%40mail.gmail.com\n4: https://www.postgresql.org/message-id/CA%2BTgmobX2079GNJWJVFjo5CmwTg%3DoQQOybFQL2CyZxMY59P6BA%40mail.gmail.com",
"msg_date": "Fri, 20 Nov 2020 20:55:39 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix generate_useful_gather_paths for parallel unsafe pathkeys"
},
{
"msg_contents": "On 11/21/20 2:55 AM, James Coleman wrote:\n> Over on the -bugs list we had a report [1] of a seg fault with\n> incremental sort. The short of the investigation there was that a\n> subplan wasn't being serialized since it wasn't parallel safe, and\n> that attempting to initialize that subplan resulted in the seg fault.\n> Tom pushed a change to raise an ERROR when this occurs so that at\n> least we don't crash the server.\n> \n> There was some small amount of discussion about this on a thread about\n> a somewhat similar issue [2] (where volatile expressions were the\n> issue), but that thread resulted in a committed patch, and beyond that\n> it seems that this issue deserves its own discussion.\n> \n> I've been digging further into the problem, and my first question was\n> \"why doesn't this result in a similar error but with a full sort when\n> incremental sort is disabled?\" After all, we have code to consider a\n> gather merge + full sort on the cheapest partial path in\n> generate_useful_gather_paths(), and the plan that I get on Luis's test\n> case with incremental sort disabled has a full sort above a gather,\n> which presumably it'd be cheaper to push down into the parallel plan\n> (using gather merge instead of gather).\n> \n> It turns out that there's a separate bug here: I'm guessing that in\n> the original commit of generate_useful_gather_paths() we had a\n> copy/paste thinko in this code:\n> \n> /*\n> * If the path has no ordering at all, then we can't use either\n> * incremental sort or rely on implicit sorting with a gather\n> * merge.\n> */\n> if (subpath->pathkeys == NIL)\n> continue;\n> \n> The comment claims that we should skip subpaths that aren't sorted at\n> all since we can't possibly use a gather merge directly or with an\n> incremental sort applied first. It's true that we can't do either of\n> those things unless the subpath is already at least partially sorted.\n> But subsequently we attempt to construct a gather merge path on top of\n> a full sort (for the cheapest path), and there's no reason to limit\n> that to partially sorted paths (indeed in the presence of incremental\n> sort it seems unlikely that that would be the cheapest path).\n> \n> We still don't want to build incremental sort paths if the subpath has\n> no pathkeys, but that will imply presorted_keys = 0, which we already\n> check.\n> \n> So 0001 removes that branch entirely. And, as expected, we now get a\n> gather merge + full sort and the resulting error about subplan not being\n> initialized.\n> \n\nYeah, that's clearly a thinko in generate_useful_gather_paths. Thanks\nfor spotting it! The 0001 patch seems fine to me.\n\n> With that oddity out of the way, I started looking at the actually\n> reported problem, and nominally the issue is that we have a correlated\n> subquery (in the final target list and the sort clause), and\n> correlated subqueries (not sure why exactly in this case; see [3])\n> aren't considered parallel-safe.\n> \n> As to why that's happening:\n> 1. find_em_expr_usable_for_sorting_rel doesn't exclude that\n> expression, and arguably correctly so in the general case since one\n> might (though we don't now) use this same kind of logic for\n> non-parallel plans.\n> 2. generate_useful_pathkeys_for_relation is noting that it'd be useful\n> to sort on that expression and sees it as safe in part due to (1).\n> 3. We create a sort node that includes that expression in its output.\n> That seems pretty much the same (in terms of safety given generalized\n> input paths/plans, not in terms of what Robert brought up in [4]) as\n> what would happen in the prepare_sort_from_pathkeys call in\n> create_gather_merge_plan.\n> \n> So to fix this problem I've added the option to find_em_expr_for_rel\n> to restrict to parallel-safe expressions in 0002.\n> \n\nOK, that seems like a valid fix. I wonder if we can have EC with members\nmixing parallel-safe and parallel-unsafe expressions? I guess no, but\nit's more a feeling than a clear reasoning and so I wonder what would\nhappen in such cases.\n\n> Given point 3 above (and the discussion with Robert earlier) it\n> seems to me that there's a bigger problem here that just hasn't\n> happened to have been exposed yet: in particular the\n> prepare_sort_from_pathkeys call in create_gather_merge_plan seems\n> suspect to me. But it's also possible other usages of\n> prepare_sort_from_pathkeys could be suspect given other seemingly\n> unrelated changes to path generation, since nothing other than\n> implicit knowledge about the conventions here prevents us from\n> introducing paths generating sorts with expressions not in the target\n> list (in fact a part of the issue here IMO is that non-var expression\n> pathkeys are added to the target list after path generation and\n> selection is completed).\n> \n> Alternatively we could \"fix\" this by being even more conservative in\n> find_em_expr_usable_for_sorting_rel and disregard any expressions not\n> currently found in the target list. But that seems unsatisfactory to\n> me since, given we know that there are cases where\n> prepare_sort_from_pathkeys is explicitly intended to add pathkey\n> expressions to the target list, we'd be excluding possible useful\n> cases (and indeed we already have, albeit contrived, test cases that\n> fail if that change is made).\n> \n> There exists a very similar path in the postgres fdw code (in fact the\n> function name is the same: get_useful_pathkeys_for_relation) since\n> find_em_expr_for_rel is much simpler (it doesn't do many safety checks\n> at all like we do in find_em_expr_usable_for_sorting_rel), but I think\n> that's safe (though I haven't thought about it a ton) because I\n> believe those are sort keys we're going to send to the foreign server\n> anyway, as opposed to sorting ourselves (but I haven't verified that\n> that's the case and that we never create sort nodes there).\n> \n\nYeah. I think the various FDW-related restrictions may easily make that\nsafe, for the reasons you already mentioned. But also because IIRC FDWs\nin general are not parallel-safe, so there's no risk of running foreign\nscans under Gather [Merge].\n\nThanks for the patches, I'll take a closer look next week. The 0002\npatch is clearly something we need to backpatch, not sure about 0001 as\nit essentially enables new behavior in some cases (Sort on unsorted\npaths below Gather Merge).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 22 Nov 2020 22:59:26 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix generate_useful_gather_paths for parallel unsafe pathkeys"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 4:59 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 11/21/20 2:55 AM, James Coleman wrote:\n> > Over on the -bugs list we had a report [1] of a seg fault with\n> > incremental sort. The short of the investigation there was that a\n> > subplan wasn't being serialized since it wasn't parallel safe, and\n> > that attempting to initialize that subplan resulted in the seg fault.\n> > Tom pushed a change to raise an ERROR when this occurs so that at\n> > least we don't crash the server.\n> >\n> > There was some small amount of discussion about this on a thread about\n> > a somewhat similar issue [2] (where volatile expressions were the\n> > issue), but that thread resulted in a committed patch, and beyond that\n> > it seems that this issue deserves its own discussion.\n> >\n> > I've been digging further into the problem, and my first question was\n> > \"why doesn't this result in a similar error but with a full sort when\n> > incremental sort is disabled?\" After all, we have code to consider a\n> > gather merge + full sort on the cheapest partial path in\n> > generate_useful_gather_paths(), and the plan that I get on Luis's test\n> > case with incremental sort disabled has a full sort above a gather,\n> > which presumably it'd be cheaper to push down into the parallel plan\n> > (using gather merge instead of gather).\n> >\n> > It turns out that there's a separate bug here: I'm guessing that in\n> > the original commit of generate_useful_gather_paths() we had a\n> > copy/paste thinko in this code:\n> >\n> > /*\n> > * If the path has no ordering at all, then we can't use either\n> > * incremental sort or rely on implicit sorting with a gather\n> > * merge.\n> > */\n> > if (subpath->pathkeys == NIL)\n> > continue;\n> >\n> > The comment claims that we should skip subpaths that aren't sorted at\n> > all since we can't possibly use a gather merge directly or with an\n> > incremental sort applied first. It's true that we can't do either of\n> > those things unless the subpath is already at least partially sorted.\n> > But subsequently we attempt to construct a gather merge path on top of\n> > a full sort (for the cheapest path), and there's no reason to limit\n> > that to partially sorted paths (indeed in the presence of incremental\n> > sort it seems unlikely that that would be the cheapest path).\n> >\n> > We still don't want to build incremental sort paths if the subpath has\n> > no pathkeys, but that will imply presorted_keys = 0, which we already\n> > check.\n> >\n> > So 0001 removes that branch entirely. And, as expected, we now get a\n> > gather merge + full sort and the resulting error about subplan not being\n> > initialized.\n> >\n>\n> Yeah, that's clearly a thinko in generate_useful_gather_paths. Thanks\n> for spotting it! The 0001 patch seems fine to me.\n>\n> > With that oddity out of the way, I started looking at the actually\n> > reported problem, and nominally the issue is that we have a correlated\n> > subquery (in the final target list and the sort clause), and\n> > correlated subqueries (not sure why exactly in this case; see [3])\n> > aren't considered parallel-safe.\n> >\n> > As to why that's happening:\n> > 1. find_em_expr_usable_for_sorting_rel doesn't exclude that\n> > expression, and arguably correctly so in the general case since one\n> > might (though we don't now) use this same kind of logic for\n> > non-parallel plans.\n> > 2. generate_useful_pathkeys_for_relation is noting that it'd be useful\n> > to sort on that expression and sees it as safe in part due to (1).\n> > 3. We create a sort node that includes that expression in its output.\n> > That seems pretty much the same (in terms of safety given generalized\n> > input paths/plans, not in terms of what Robert brought up in [4]) as\n> > what would happen in the prepare_sort_from_pathkeys call in\n> > create_gather_merge_plan.\n> >\n> > So to fix this problem I've added the option to find_em_expr_for_rel\n> > to restrict to parallel-safe expressions in 0002.\n> >\n>\n> OK, that seems like a valid fix. I wonder if we can have EC with members\n> mixing parallel-safe and parallel-unsafe expressions? I guess no, but\n> it's more a feeling than a clear reasoning and so I wonder what would\n> happen in such cases.\n\nAn immediately contrived case would be something like \"t.x =\nunsafe(t.y)\". As to whether that can actually result in us wanting to\nsort by the safe member before we can execute the unsafe member (e.g.,\nin a filter), I'm less sure, but it seems at least plausible to me (or\nI don't see anything inherently preventing it).\n\n> > Given point 3 above (and the discussion with Robert earlier) it\n> > seems to me that there's a bigger problem here that just hasn't\n> > happened to have been exposed yet: in particular the\n> > prepare_sort_from_pathkeys call in create_gather_merge_plan seems\n> > suspect to me. But it's also possible other usages of\n> > prepare_sort_from_pathkeys could be suspect given other seemingly\n> > unrelated changes to path generation, since nothing other than\n> > implicit knowledge about the conventions here prevents us from\n> > introducing paths generating sorts with expressions not in the target\n> > list (in fact a part of the issue here IMO is that non-var expression\n> > pathkeys are added to the target list after path generation and\n> > selection is completed).\n> >\n> > Alternatively we could \"fix\" this by being even more conservative in\n> > find_em_expr_usable_for_sorting_rel and disregard any expressions not\n> > currently found in the target list. But that seems unsatisfactory to\n> > me since, given we know that there are cases where\n> > prepare_sort_from_pathkeys is explicitly intended to add pathkey\n> > expressions to the target list, we'd be excluding possible useful\n> > cases (and indeed we already have, albeit contrived, test cases that\n> > fail if that change is made).\n> >\n> > There exists a very similar path in the postgres fdw code (in fact the\n> > function name is the same: get_useful_pathkeys_for_relation) since\n> > find_em_expr_for_rel is much simpler (it doesn't do many safety checks\n> > at all like we do in find_em_expr_usable_for_sorting_rel), but I think\n> > that's safe (though I haven't thought about it a ton) because I\n> > believe those are sort keys we're going to send to the foreign server\n> > anyway, as opposed to sorting ourselves (but I haven't verified that\n> > that's the case and that we never create sort nodes there).\n> >\n>\n> Yeah. I think the various FDW-related restrictions may easily make that\n> safe, for the reasons you already mentioned. But also because IIRC FDWs\n> in general are not parallel-safe, so there's no risk of running foreign\n> scans under Gather [Merge].\n\nThere are some comments (IIRC in the parallel docs) about FDWs being\nable to be marked as parallel-safe, but I'm not sure if that facility\nis in play. Either way, it's quite a different case.\n\n> Thanks for the patches, I'll take a closer look next week. The 0002\n> patch is clearly something we need to backpatch, not sure about 0001 as\n> it essentially enables new behavior in some cases (Sort on unsorted\n> paths below Gather Merge).\n\nThanks for taking a look.\n\nI had the same question re: backporting. On the one hand it will\nchange behavior (this is always a bit of a gray area since in one\nsense bugs change behavior), but IMO it's not a new feature, because\nthe code clearly intended to have that feature in the first place (it\ncreates gather merges on top of a full sort). So I'd lean towards\nconsidering it a bug fix, but I'm also not going to make that a hill\nto die on.\n\nOne semi-related question: do you think we should add a comment to\nprepare_sort_from_pathkeys explaining that modifying it may mean\nfind_em_expr_for_rel needs to be updated also?\n\nJames\n\n\n",
"msg_date": "Mon, 23 Nov 2020 08:35:54 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix generate_useful_gather_paths for parallel unsafe pathkeys"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 8:35 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Sun, Nov 22, 2020 at 4:59 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > ...\n> > Thanks for the patches, I'll take a closer look next week. The 0002\n> > patch is clearly something we need to backpatch, not sure about 0001 as\n> > it essentially enables new behavior in some cases (Sort on unsorted\n> > paths below Gather Merge).\n>\n> Thanks for taking a look.\n>\n> I had the same question re: backporting. On the one hand it will\n> change behavior (this is always a bit of a gray area since in one\n> sense bugs change behavior), but IMO it's not a new feature, because\n> the code clearly intended to have that feature in the first place (it\n> creates gather merges on top of a full sort). So I'd lean towards\n> considering it a bug fix, but I'm also not going to make that a hill\n> to die on.\n>\n> One semi-related question: do you think we should add a comment to\n> prepare_sort_from_pathkeys explaining that modifying it may mean\n> find_em_expr_for_rel needs to be updated also?\n\nHere's an updated patch series.\n\n0001 and 0002 as before, but with some minor cleanup.\n\n0003 disallows SRFs in these sort expressions (per discussion over at [1]).\n\n0004 removes the search through the target for matching volatile\nexpressions (per discussion at [2]).\n\n0005 adds the comment I mentioned above clarifying these two functions\nare linked.\n\nJames\n\n1: https://www.postgresql.org/message-id/295524.1606246314%40sss.pgh.pa.us\n2: https://www.postgresql.org/message-id/124400.1606159456%40sss.pgh.pa.us",
"msg_date": "Sun, 29 Nov 2020 09:26:12 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix generate_useful_gather_paths for parallel unsafe pathkeys"
},
{
"msg_contents": "On 11/29/20 3:26 PM, James Coleman wrote:\n> On Mon, Nov 23, 2020 at 8:35 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> On Sun, Nov 22, 2020 at 4:59 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>> ...\n>>> Thanks for the patches, I'll take a closer look next week. The 0002\n>>> patch is clearly something we need to backpatch, not sure about 0001 as\n>>> it essentially enables new behavior in some cases (Sort on unsorted\n>>> paths below Gather Merge).\n>>\n>> Thanks for taking a look.\n>>\n>> I had the same question re: backporting. On the one hand it will\n>> change behavior (this is always a bit of a gray area since in one\n>> sense bugs change behavior), but IMO it's not a new feature, because\n>> the code clearly intended to have that feature in the first place (it\n>> creates gather merges on top of a full sort). So I'd lean towards\n>> considering it a bug fix, but I'm also not going to make that a hill\n>> to die on.\n>>\n>> One semi-related question: do you think we should add a comment to\n>> prepare_sort_from_pathkeys explaining that modifying it may mean\n>> find_em_expr_for_rel needs to be updated also?\n> \n> Here's an updated patch series.\n> \n> 0001 and 0002 as before, but with some minor cleanup.\n> \n\n0001 - seems fine\n\n0002 - Should we rename the \"parallel\" parameter to something more\ndescriptive, like \"require_parallel_safe\"?\n\n> 0003 disallows SRFs in these sort expressions (per discussion over at [1]).\n> \n\nOK, but I'd move the define from tlist.c to tlist.h, not optimizer.h.\n\n> 0004 removes the search through the target for matching volatile\n> expressions (per discussion at [2]).\n> \n\nOK\n\n> 0005 adds the comment I mentioned above clarifying these two functions\n> are linked.\n> \n\nMakes sense. I wonder if we need to be more specific about how\nconsistent those two places need to be. Do they need to match 1:1, or\nhow do people in the future decide which restrictions need to be in both\nplaces?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Nov 2020 22:20:39 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix generate_useful_gather_paths for parallel unsafe pathkeys"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 4:20 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 11/29/20 3:26 PM, James Coleman wrote:\n> > On Mon, Nov 23, 2020 at 8:35 AM James Coleman <jtc331@gmail.com> wrote:\n> >>\n> >> On Sun, Nov 22, 2020 at 4:59 PM Tomas Vondra\n> >> <tomas.vondra@enterprisedb.com> wrote:\n> >>> ...\n> >>> Thanks for the patches, I'll take a closer look next week. The 0002\n> >>> patch is clearly something we need to backpatch, not sure about 0001 as\n> >>> it essentially enables new behavior in some cases (Sort on unsorted\n> >>> paths below Gather Merge).\n> >>\n> >> Thanks for taking a look.\n> >>\n> >> I had the same question re: backporting. On the one hand it will\n> >> change behavior (this is always a bit of a gray area since in one\n> >> sense bugs change behavior), but IMO it's not a new feature, because\n> >> the code clearly intended to have that feature in the first place (it\n> >> creates gather merges on top of a full sort). So I'd lean towards\n> >> considering it a bug fix, but I'm also not going to make that a hill\n> >> to die on.\n> >>\n> >> One semi-related question: do you think we should add a comment to\n> >> prepare_sort_from_pathkeys explaining that modifying it may mean\n> >> find_em_expr_for_rel needs to be updated also?\n> >\n> > Here's an updated patch series.\n> >\n> > 0001 and 0002 as before, but with some minor cleanup.\n> >\n>\n> 0001 - seems fine\n>\n> 0002 - Should we rename the \"parallel\" parameter to something more\n> descriptive, like \"require_parallel_safe\"?\n\nDone.\n\n> > 0003 disallows SRFs in these sort expressions (per discussion over at [1]).\n> >\n>\n> OK, but I'd move the define from tlist.c to tlist.h, not optimizer.h.\n\nIS_SRF_CALL doesn't really seem to be particularly specific\nconceptually to target lists (and of course now we're using it in\nequivclass.c), so I'd tried to put it somewhere more general. Is\nmoving it to tlist.h mostly to minimize changes? Or do you think it\nfits more naturally with the tlist code (I might be missing\nsomething)?\n\n> > 0004 removes the search through the target for matching volatile\n> > expressions (per discussion at [2]).\n> >\n>\n> OK\n>\n> > 0005 adds the comment I mentioned above clarifying these two functions\n> > are linked.\n> >\n>\n> Makes sense. I wonder if we need to be more specific about how\n> consistent those two places need to be. Do they need to match 1:1, or\n> how do people in the future decide which restrictions need to be in both\n> places?\n\nAt this point I'm not sure I'd have a good way to describe that: one\nis a clear superset of the other (and we already talk in the comments\nabout the other being a subset), but it's really going to be about\n\"what do we want to allow to be sorted proactively\"...hmm, that\nthought made me take a swing at it; let me know what you think.\n\nJames",
"msg_date": "Mon, 30 Nov 2020 20:59:17 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix generate_useful_gather_paths for parallel unsafe pathkeys"
},
{
"msg_contents": "Hi,\n\nI reviewed the patch series, tweaked a couple comments, added commit\nmessages etc. Barring objections, I'll push this in a couple days.\n\nOne thing that annoys me is that it breaks ABI because it adds two new\nparameters to find_em_expr_usable_for_sorting_rel, but I don't think we\ncan get around that. We could assume require_parallel_safe=true, but we\nstill need to pass the PlannerInfo pointer.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 16 Dec 2020 02:23:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix generate_useful_gather_paths for parallel unsafe pathkeys"
},
{
"msg_contents": "Hi,\n\nI've pushed (and backpatched) all the fixes posted to this thread. I \nbelieve that covers all the incremental sort fixes, so I've marked [1] \nas committed.\n\n[1] https://commitfest.postgresql.org/31/2754/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 22 Dec 2020 02:07:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix generate_useful_gather_paths for parallel unsafe pathkeys"
}
] |
[
{
"msg_contents": "If you have a sufficiently broken data page, pageinspect throws an\nerror when trying to examine the page:\n\nERROR: invalid memory alloc request size 18446744073709551451\n\nThis is pretty unhelpful; it would be better not to try to print the\ndata instead of dying. With that, at least you can know where the\nproblem is.\n\nThis was introduced in d6061f83a166 (2015). Proposed patch to fix it\n(by having the code print a null \"data\" instead of dying) is attached.\n\n-- \nÁlvaro Herrera",
"msg_date": "Sat, 21 Nov 2020 16:32:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "bug in pageinspect's \"tuple data\" feature"
},
{
"msg_contents": "On 21/11/2020 21:32, Alvaro Herrera wrote:\n> If you have a sufficiently broken data page, pageinspect throws an\n> error when trying to examine the page:\n> \n> ERROR: invalid memory alloc request size 18446744073709551451\n> \n> This is pretty unhelpful; it would be better not to try to print the\n> data instead of dying. With that, at least you can know where the\n> problem is.\n> \n> This was introduced in d6061f83a166 (2015). Proposed patch to fix it\n> (by having the code print a null \"data\" instead of dying) is attached.\n\nNull seems misleading. Maybe something like \"invalid\", or print a warning?\n\n- Heikki\n\n\n",
"msg_date": "Mon, 23 Nov 2020 09:11:26 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: bug in pageinspect's \"tuple data\" feature"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 09:11:26AM +0200, Heikki Linnakangas wrote:\n> On 21/11/2020 21:32, Alvaro Herrera wrote:\n>> This is pretty unhelpful; it would be better not to try to print the\n>> data instead of dying. With that, at least you can know where the\n>> problem is.\n>> \n>> This was introduced in d6061f83a166 (2015). Proposed patch to fix it\n>> (by having the code print a null \"data\" instead of dying) is attached.\n> \n> Null seems misleading. Maybe something like \"invalid\", or print a warning?\n\nHow did you get into this state to begin with? get_raw_page() uses\nReadBufferExtended() which gives some level of protection already, so\nshouldn't it be better to return an ERROR with ERRCODE_DATA_CORRUPTED\nand the block involved?\n--\nMichael",
"msg_date": "Tue, 24 Nov 2020 13:01:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bug in pageinspect's \"tuple data\" feature"
},
{
"msg_contents": "On 2020-Nov-24, Michael Paquier wrote:\n\n> On Mon, Nov 23, 2020 at 09:11:26AM +0200, Heikki Linnakangas wrote:\n> > On 21/11/2020 21:32, Alvaro Herrera wrote:\n> >> This is pretty unhelpful; it would be better not to try to print the\n> >> data instead of dying. With that, at least you can know where the\n> >> problem is.\n> >> \n> >> This was introduced in d6061f83a166 (2015). Proposed patch to fix it\n> >> (by having the code print a null \"data\" instead of dying) is attached.\n> > \n> > Null seems misleading. Maybe something like \"invalid\", or print a warning?\n\nGood idea, thanks.\n\n> How did you get into this state to begin with?\n\nThe data was corrupted for whatever reason. I don't care why or how, I\njust need to fix it. If the data isn't corrupted, then I don't use\npageinspect in the first place.\n\n> get_raw_page() uses ReadBufferExtended() which gives some level of\n> protection already, so shouldn't it be better to return an ERROR with\n> ERRCODE_DATA_CORRUPTED and the block involved?\n\nWhat would I gain from doing that? It's even more unhelpful, because it\nis intentional rather than accidental.\n\n\n",
"msg_date": "Tue, 24 Nov 2020 12:16:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bug in pageinspect's \"tuple data\" feature"
},
{
"msg_contents": "On Sat, Nov 21, 2020 at 2:32 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> If you have a sufficiently broken data page, pageinspect throws an\n> error when trying to examine the page:\n>\n> ERROR: invalid memory alloc request size 18446744073709551451\n>\n> This is pretty unhelpful; it would be better not to try to print the\n> data instead of dying. With that, at least you can know where the\n> problem is.\n>\n> This was introduced in d6061f83a166 (2015). Proposed patch to fix it\n> (by having the code print a null \"data\" instead of dying) is attached.\n\nSo I agree with the problem statement and I don't have any particular\nobjection to the patch as proposed, but I think it's just the tip of\nthe iceberg. These functions are tremendously careless about\nvalidating the input data and will crash if you breath on them too\nhard; we've just band-aided around that problem by making them\nsuper-user only (e.g. see 3e1338475ffc2eac25de60a9de9ce689b763aced).\nWhile that does address the security concern since superusers can do\ntons of bad things anyway, it's not much help if you want to actually\nbe able to run them on damaged pages without having the world end, and\nit's no help at all if you'd like to let them be run by a\nnon-superuser, since a constructed page can easily blow up the world.\nThe patch as proposed fixes one of many problems in this area and may\nwell be useful enough to commit without doing anything else, but I'd\nactually really like to see us do the same sort of hardening here that\nis present in the recently-committed amcheck-for-heap support, which\nis robust against a wide variety of things of this sort rather than\njust this one particularly. Again, this is not to say that you should\nbe on the hook for that; it's a general statement.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Nov 2020 15:44:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bug in pageinspect's \"tuple data\" feature"
}
] |
[
{
"msg_contents": "Hi,\n\nI would like to propose getting the callstack of the postgres process\nby connecting to the server. This helps us in diagnosing the problems\nfrom a customer environment in case of hung process or in case of long\nrunning process.\nThe idea here is to implement & expose pg_print_callstack function,\ninternally what this function does is, the connected backend will send\nSIGUSR1 signal by setting PMSIGNAL_BACKTRACE_EMIT to the postmaster\nprocess. Postmaster process will send a SIGUSR1 signal to the process\nby setting PROCSIG_BACKTRACE_PRINT if the process has access to\nProcSignal. As syslogger process & Stats process don't have access to\nProcSignal, multiplexing with SIGUSR1 is not possible for these\nprocesses, hence SIGUSR2 signal will be sent for these processes. Once\nthe process receives this signal it will log the backtrace of the\nprocess.\nAttached is a WIP patch for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 22 Nov 2020 08:36:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Printing backtrace of postgres processes"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> The idea here is to implement & expose pg_print_callstack function,\n> internally what this function does is, the connected backend will send\n> SIGUSR1 signal by setting PMSIGNAL_BACKTRACE_EMIT to the postmaster\n> process. Postmaster process will send a SIGUSR1 signal to the process\n> by setting PROCSIG_BACKTRACE_PRINT if the process has access to\n> ProcSignal. As syslogger process & Stats process don't have access to\n> ProcSignal, multiplexing with SIGUSR1 is not possible for these\n> processes, hence SIGUSR2 signal will be sent for these processes. Once\n> the process receives this signal it will log the backtrace of the\n> process.\n\nSurely this is *utterly* unsafe. You can't do that sort of stuff in\na signal handler.\n\nIt might be all right to set a flag that would cause the next\nCHECK_FOR_INTERRUPTS to print a backtrace, but I'm not sure\nhow useful that really is.\n\nThe proposed postmaster.c addition seems quite useless, as there\nis exactly one stack trace it could ever log.\n\nI would like to see some discussion of the security implications\nof such a feature, as well. (\"There aren't any\" is the wrong\nanswer.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Nov 2020 01:25:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 11:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > The idea here is to implement & expose pg_print_callstack function,\n> > internally what this function does is, the connected backend will send\n> > SIGUSR1 signal by setting PMSIGNAL_BACKTRACE_EMIT to the postmaster\n> > process. Postmaster process will send a SIGUSR1 signal to the process\n> > by setting PROCSIG_BACKTRACE_PRINT if the process has access to\n> > ProcSignal. As syslogger process & Stats process don't have access to\n> > ProcSignal, multiplexing with SIGUSR1 is not possible for these\n> > processes, hence SIGUSR2 signal will be sent for these processes. Once\n> > the process receives this signal it will log the backtrace of the\n> > process.\n>\n> Surely this is *utterly* unsafe. You can't do that sort of stuff in\n> a signal handler.\n>\n> It might be all right to set a flag that would cause the next\n> CHECK_FOR_INTERRUPTS to print a backtrace, but I'm not sure\n> how useful that really is.\n>\n> The proposed postmaster.c addition seems quite useless, as there\n> is exactly one stack trace it could ever log.\n>\n> I would like to see some discussion of the security implications\n> of such a feature, as well. (\"There aren't any\" is the wrong\n> answer.)\n\nHi Hackers,\n\nAny thoughts on the security implication for this feature.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Nov 2020 10:28:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "> Surely this is *utterly* unsafe. You can't do that sort of stuff in\n> a signal handler.\n\nNot safely, anyway. The signal handler could be called in the middle\nof a malloc(), a pfree(), or all sorts of other exciting\ncircumstances. It'd have to be extremely careful to use only local\nresources on the stack and I don't see how that's feasible here.\n\nIt'll work - most of the time. But it could explode messily and\nexcitingly just when you actually need it to work properly, which is\nrarely what you want from features clearly intended for production\ndebugging.\n\n(It'd be interesting to add some test infrastructure that helps us\nfire signal handlers at awkward times in a controlled manner, so we\ncould predictably test signal handler re-entrancy and concurrency\nbehaviour.)\n\n> It might be all right to set a flag that would cause the next\n> CHECK_FOR_INTERRUPTS to print a backtrace, but I'm not sure\n> how useful that really is.\n\nI find that when I most often want a backtrace of a running, live\nbackend, it's because the backend is doing something that isn't\npassing a CHECK_FOR_INTERRUPTS() so it's not responding to signals. So\nit wouldn't help if a backend is waiting on an LWLock, busy in a\nblocking call to some loaded library, a blocking syscall, etc. But\nthere are enough other times I want live backtraces, and I'm not the\nonly one whose needs matter.\n\nIt'd be kinda handy when collecting samples of some backend's activity\nwhen it's taking an excessively long time doing something\nindeterminate. I generally use a gdb script for that because\nunfortunately the Linux trace tool situation is so hopeless that I\ncan't rely on perf or systemtap being present, working, and\nunderstanding the same command line arguments across various distros\nand versions, so something built-in would be convenient. That\nconvenience would probably be counterbalanced by the hassle of\nextracting the results from the log files unless it's possible to use\na function in one backend to query the stack in another instead of\njust writing it to the log.\n\nSo a weak +1 from me, assuming printing stacks from\nCHECK_FOR_INTERRUPTS() to the log. We already have most of the\ninfrastructure for that so the change is trivial, and we already trust\nanyone who can read the log, so it seems like a pretty low-cost,\nlow-risk change.\n\nSomewhat more interested favour if the results can be obtained from a\nfunction or view from another backend, but then the security\nimplications get a bit more exciting if we let non-superusers do it.\n\nYou may recall that I wanted to do something similar a while ago in\norder to request MemoryContextStats() without needing to attach gdb\nand invoke a function manually using ptrace(). Also changes to support\nreading TCP socket state for a process. So I find this sort of thing\nuseful in general.\n\nIf we're querying one backend from another we could read its stack\nwith ptrace() and unwind it with libunwind within the requesting\nbackend, which would be a whole lot safer to execute and would work\nfine even when blocked in syscalls or synchronous library calls. See\nthe eu-stack command from elfutils. If the target backend had shared\nlibraries loaded that the requested backend didn't, libunwind could\nload the debuginfo for us if available. The downsides would be that\nmany system lockdowns disable ptrace() for non-root even for\nsame-user-owned processes, and that we'd need to depend on libunwind\n(or other platform equivalents) so it'd probably be a contrib. In\nwhich case you have to wonder if it's that much better than running\n`system(\"eu-stack $pid\")` in plperl or a trivial custom C extension\nfunction.\n\n> I would like to see some discussion of the security implications\n> of such a feature, as well. (\"There aren't any\" is the wrong\n> answer.)\n\nIf the stack only goes to the log, I actually don't think there are\nsignificant security implications beyond those we already have with\nour existing backtrace printing features. We already trust anyone who\ncan read the log almost completely, and we can already emit stacks to\nthe log. But I'd still want it to be gated superuser-only, or a role\nthat's GRANTable by superuser only by default, since it exposes\narbitrary internals of the server.\n\n\"same user id\" matching is not sufficient. A less-privileged session\nuser might be calling into SECURITY DEFINER code, code from a\nprivileged view, or sensitive C library code. Checking for the\neffective user is no better because the effective user might be less\nprivileged at the moment the bt is requested, but the state up-stack\nmight reveal sensitive information from a more privileged user.\n\n\n",
"msg_date": "Mon, 30 Nov 2020 13:35:46 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Craig Ringer <craig.ringer@enterprisedb.com> writes:\n>> I would like to see some discussion of the security implications\n>> of such a feature, as well. (\"There aren't any\" is the wrong\n>> answer.)\n\n> If the stack only goes to the log, I actually don't think there are\n> significant security implications beyond those we already have with\n> our existing backtrace printing features. We already trust anyone who\n> can read the log almost completely, and we can already emit stacks to\n> the log. But I'd still want it to be gated superuser-only, or a role\n> that's GRANTable by superuser only by default, since it exposes\n> arbitrary internals of the server.\n\nThe concerns that I had were that the patch as submitted provides a\nmechanism that causes ALL processes in the system to dump backtraces,\nnot a targeted request; and that it allows any user to issue such\nrequests at an unbounded rate. That seems like a really easy route\nto denial of service. There's also a question of whether you'd even\nget intelligible results from dozens of processes simultaneously\ndumping many-line messages to the same place. (This might work out\nall right if you're using our syslogger, but it probably would not\nwith any other logging technology.)\n\nI'd feel better about it if the mechanism had you specify exactly\none target process, and were restricted to a superuser requestor.\n\nI'm not excited about adding on frammishes like letting one process\nextract another's stack trace. I think that just adds more points\nof failure, which is a bad thing in a feature that you're only going\nto care about when things are a bit pear-shaped already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 18:04:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 07:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I'd feel better about it if the mechanism had you specify exactly\n> one target process, and were restricted to a superuser requestor.\n\nEr, rather. I actually assumed the former was the case already, not\nhaving looked closely yet.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 11:11:24 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-22 01:25:08 -0500, Tom Lane wrote:\n> Surely this is *utterly* unsafe. You can't do that sort of stuff in\n> a signal handler.\n\nThat's of course true for the current implementation - but I don't think\nit's a fundamental constraint. With a bit of care backtrace() and\nbacktrace_symbols() itself can be signal safe:\n\n> backtrace() and backtrace_symbols_fd() don't call malloc() explicitly, but they are part of libgcc, which gets loaded dynamically when first\n> used. Dynamic loading usually triggers a call to malloc(3). If you need certain calls to these two functions to not allocate memory (in signal\n> handlers, for example), you need to make sure libgcc is loaded beforehand.\n\nIt should be quite doable to emit such backtraces directly to stderr,\ninstead of using appendStringInfoString()/elog(). Or even use a static\nbuffer.\n\nIt does have quite some appeal to be able to debug production workloads\nwhere queries can't be cancelled etc. And knowing that backtraces\nreliably work in case of SIGQUIT etc is also nice...\n\n\n> I would like to see some discussion of the security implications\n> of such a feature, as well. (\"There aren't any\" is the wrong\n> answer.)\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Nov 2020 19:26:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-30 13:35:46 +0800, Craig Ringer wrote:\n> I find that when I most often want a backtrace of a running, live\n> backend, it's because the backend is doing something that isn't\n> passing a CHECK_FOR_INTERRUPTS() so it's not responding to signals. So\n> it wouldn't help if a backend is waiting on an LWLock, busy in a\n> blocking call to some loaded library, a blocking syscall, etc. But\n> there are enough other times I want live backtraces, and I'm not the\n> only one whose needs matter.\n\nRandom thought: Wonder if it could be worth adding a conditionally\ncompiled mode where we track what the longest time between two\nCHECK_FOR_INTERRUPTS() calls is (with some extra logic for client\nIO).\n\nObviously the regression tests don't tend to hit the worst cases of\nCFR() less code, but even if they did, we currently wouldn't know from\nrunning the regression tests.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Nov 2020 19:31:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 11:31, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-11-30 13:35:46 +0800, Craig Ringer wrote:\n> > I find that when I most often want a backtrace of a running, live\n> > backend, it's because the backend is doing something that isn't\n> > passing a CHECK_FOR_INTERRUPTS() so it's not responding to signals. So\n> > it wouldn't help if a backend is waiting on an LWLock, busy in a\n> > blocking call to some loaded library, a blocking syscall, etc. But\n> > there are enough other times I want live backtraces, and I'm not the\n> > only one whose needs matter.\n>\n> Random thought: Wonder if it could be worth adding a conditionally\n> compiled mode where we track what the longest time between two\n> CHECK_FOR_INTERRUPTS() calls is (with some extra logic for client\n> IO).\n>\n> Obviously the regression tests don't tend to hit the worst cases of\n> CFR() less code, but even if they did, we currently wouldn't know from\n> running the regression tests.\n\nWe can probably determine that just as well with a perf or systemtap\nrun on an --enable-dtrace build. Just tag CHECK_FOR_INTERRUPTS() with\na SDT marker then record the timings.\n\nIt might be convenient to have it built-in I guess, but if we tag the\nsite and do the timing/tracing externally we don't have to bother\nabout conditional compilation and special builds.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 11:33:15 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It should be quite doable to emit such backtraces directly to stderr,\n> instead of using appendStringInfoString()/elog().\n\nNo, please no.\n\n(1) On lots of logging setups (think syslog), anything that goes to\nstderr is just going to wind up in the bit bucket. I realize that\nwe have that issue already for memory context dumps on OOM errors,\nbut that doesn't make it a good thing.\n\n(2) You couldn't really write \"to stderr\", only to fileno(stderr),\ncreating issues about interleaving of the output with regular stderr\noutput. For instance it's quite likely that the backtrace would\nappear before stderr output that had actually been emitted earlier,\nwhich'd be tremendously confusing.\n\n(3) This isn't going to do anything good for my concerns about interleaved\noutput from different processes, either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 23:01:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 9:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > It should be quite doable to emit such backtraces directly to stderr,\n> > instead of using appendStringInfoString()/elog().\n>\n> No, please no.\n>\n> (1) On lots of logging setups (think syslog), anything that goes to\n> stderr is just going to wind up in the bit bucket. I realize that\n> we have that issue already for memory context dumps on OOM errors,\n> but that doesn't make it a good thing.\n>\n> (2) You couldn't really write \"to stderr\", only to fileno(stderr),\n> creating issues about interleaving of the output with regular stderr\n> output. For instance it's quite likely that the backtrace would\n> appear before stderr output that had actually been emitted earlier,\n> which'd be tremendously confusing.\n>\n> (3) This isn't going to do anything good for my concerns about interleaved\n> output from different processes, either.\n>\n\nI felt if we are not agreeing on logging on the stderr, even using\nstatic buffer we might not be able to log as\nsend_message_to_server_log calls appendStringInfo. I felt that doing\nit from CHECK_FOR_INTERRUPTS may be better.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Dec 2020 14:15:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 2:15 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Dec 1, 2020 at 9:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Andres Freund <andres@anarazel.de> writes:\n> > > It should be quite doable to emit such backtraces directly to stderr,\n> > > instead of using appendStringInfoString()/elog().\n> >\n> > No, please no.\n> >\n> > (1) On lots of logging setups (think syslog), anything that goes to\n> > stderr is just going to wind up in the bit bucket. I realize that\n> > we have that issue already for memory context dumps on OOM errors,\n> > but that doesn't make it a good thing.\n> >\n> > (2) You couldn't really write \"to stderr\", only to fileno(stderr),\n> > creating issues about interleaving of the output with regular stderr\n> > output. For instance it's quite likely that the backtrace would\n> > appear before stderr output that had actually been emitted earlier,\n> > which'd be tremendously confusing.\n> >\n> > (3) This isn't going to do anything good for my concerns about interleaved\n> > output from different processes, either.\n> >\n>\n> I felt if we are not agreeing on logging on the stderr, even using\n> static buffer we might not be able to log as\n> send_message_to_server_log calls appendStringInfo. I felt that doing\n> it from CHECK_FOR_INTERRUPTS may be better.\n>\n\nI have implemented printing of backtrace based on handling it in\nCHECK_FOR_INTERRUPTS. This patch also includes the change to allow\ngetting backtrace of any particular process based on the suggestions.\nAttached patch has the implementation for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 8 Dec 2020 15:08:11 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On 2020-12-08 10:38, vignesh C wrote:\n> I have implemented printing of backtrace based on handling it in\n> CHECK_FOR_INTERRUPTS. This patch also includes the change to allow\n> getting backtrace of any particular process based on the suggestions.\n> Attached patch has the implementation for the same.\n> Thoughts?\n\nAre we willing to use up a signal for this?\n\n\n",
"msg_date": "Fri, 15 Jan 2021 09:53:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On 2021-01-15 09:53:05 +0100, Peter Eisentraut wrote:\n> On 2020-12-08 10:38, vignesh C wrote:\n> > I have implemented printing of backtrace based on handling it in\n> > CHECK_FOR_INTERRUPTS. This patch also includes the change to allow\n> > getting backtrace of any particular process based on the suggestions.\n> > Attached patch has the implementation for the same.\n> > Thoughts?\n> \n> Are we willing to use up a signal for this?\n\nWhy is a full signal needed? Seems the procsignal infrastructure should\nsuffice?\n\n\n",
"msg_date": "Fri, 15 Jan 2021 12:10:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sat, Jan 16, 2021 at 1:40 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-01-15 09:53:05 +0100, Peter Eisentraut wrote:\n> > On 2020-12-08 10:38, vignesh C wrote:\n> > > I have implemented printing of backtrace based on handling it in\n> > > CHECK_FOR_INTERRUPTS. This patch also includes the change to allow\n> > > getting backtrace of any particular process based on the suggestions.\n> > > Attached patch has the implementation for the same.\n> > > Thoughts?\n> >\n> > Are we willing to use up a signal for this?\n>\n> Why is a full signal needed? Seems the procsignal infrastructure should\n> suffice?\n\nMost of the processes have access to ProcSignal, for these processes\nprinting of callstack signal was handled by using ProcSignal. Pgstat\nprocess & syslogger process do not have access to ProcSignal,\nmultiplexing with SIGUSR1 is not possible for these processes. So I\nhandled the printing of callstack for pgstat process & syslogger using\nthe SIGUSR2 signal.\nThis is because shared memory is detached before pgstat & syslogger\nprocess is started by using the below:\n/* Drop our connection to postmaster's shared memory, as well */\ndsm_detach_all();\nPGSharedMemoryDetach();\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 16 Jan 2021 23:04:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jan 16, 2021, at 09:34, vignesh C wrote:\n> On Sat, Jan 16, 2021 at 1:40 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2021-01-15 09:53:05 +0100, Peter Eisentraut wrote:\n> > > On 2020-12-08 10:38, vignesh C wrote:\n> > > > I have implemented printing of backtrace based on handling it in\n> > > > CHECK_FOR_INTERRUPTS. This patch also includes the change to allow\n> > > > getting backtrace of any particular process based on the suggestions.\n> > > > Attached patch has the implementation for the same.\n> > > > Thoughts?\n> > >\n> > > Are we willing to use up a signal for this?\n> >\n> > Why is a full signal needed? Seems the procsignal infrastructure should\n> > suffice?\n> \n> Most of the processes have access to ProcSignal, for these processes\n> printing of callstack signal was handled by using ProcSignal. Pgstat\n> process & syslogger process do not have access to ProcSignal,\n> multiplexing with SIGUSR1 is not possible for these processes. So I\n> handled the printing of callstack for pgstat process & syslogger using\n> the SIGUSR2 signal.\n> This is because shared memory is detached before pgstat & syslogger\n> process is started by using the below:\n> /* Drop our connection to postmaster's shared memory, as well */\n> dsm_detach_all();\n> PGSharedMemoryDetach();\n\nSure. But why is it important enough to support those that we are willing to dedicate a signal to the task? Their backtraces aren't often interesting, so I think we should just ignore them here.\n\nAndres\n\n\n",
"msg_date": "Sat, 16 Jan 2021 09:39:28 -0800",
"msg_from": "\"Andres Freund\" <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Sat, Jan 16, 2021 at 1:40 AM Andres Freund <andres@anarazel.de> wrote:\n>> Why is a full signal needed? Seems the procsignal infrastructure should\n>> suffice?\n\n> Most of the processes have access to ProcSignal, for these processes\n> printing of callstack signal was handled by using ProcSignal. Pgstat\n> process & syslogger process do not have access to ProcSignal,\n> multiplexing with SIGUSR1 is not possible for these processes. So I\n> handled the printing of callstack for pgstat process & syslogger using\n> the SIGUSR2 signal.\n\nI'd argue that backtraces for those processes aren't really essential,\nand indeed that trying to make the syslogger report its own backtrace\nis damn dangerous.\n\n(Personally, I think this whole patch fails the safety-vs-usefulness\ntradeoff, but I expect I'll get shouted down.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 16 Jan 2021 15:21:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sat, Jan 16, 2021 at 11:10 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On Sat, Jan 16, 2021, at 09:34, vignesh C wrote:\n> > On Sat, Jan 16, 2021 at 1:40 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2021-01-15 09:53:05 +0100, Peter Eisentraut wrote:\n> > > > On 2020-12-08 10:38, vignesh C wrote:\n> > > > > I have implemented printing of backtrace based on handling it in\n> > > > > CHECK_FOR_INTERRUPTS. This patch also includes the change to allow\n> > > > > getting backtrace of any particular process based on the suggestions.\n> > > > > Attached patch has the implementation for the same.\n> > > > > Thoughts?\n> > > >\n> > > > Are we willing to use up a signal for this?\n> > >\n> > > Why is a full signal needed? Seems the procsignal infrastructure should\n> > > suffice?\n> >\n> > Most of the processes have access to ProcSignal, for these processes\n> > printing of callstack signal was handled by using ProcSignal. Pgstat\n> > process & syslogger process do not have access to ProcSignal,\n> > multiplexing with SIGUSR1 is not possible for these processes. So I\n> > handled the printing of callstack for pgstat process & syslogger using\n> > the SIGUSR2 signal.\n> > This is because shared memory is detached before pgstat & syslogger\n> > process is started by using the below:\n> > /* Drop our connection to postmaster's shared memory, as well */\n> > dsm_detach_all();\n> > PGSharedMemoryDetach();\n>\n> Sure. But why is it important enough to support those that we are willing to dedicate a signal to the task? Their backtraces aren't often interesting, so I think we should just ignore them here.\n\nThanks for your comments Andres, I will ignore it for the processes\nwhich do not have access to ProcSignal. I will make the changes and\npost a patch for this soon.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 17 Jan 2021 22:26:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, 18 Jan 2021 at 00:56, vignesh C <vignesh21@gmail.com> wrote:\n\n>\n> Thanks for your comments Andres, I will ignore it for the processes\n> which do not have access to ProcSignal. I will make the changes and\n> post a patch for this soon.\n>\n\nI think that's sensible.\n\nI've had variable results with glibc's backtrace(), especially on older\nplatforms and/or with external debuginfo, but it's much better than\nnothing. It's often not feasible to get someone to install gdb and run\ncommands on their production systems - they can be isolated and firewalled\nor hobbled by painful change policies. Something basic built-in to\npostgres, even if basic, is likely to come in very handy.\n\nOn Mon, 18 Jan 2021 at 00:56, vignesh C <vignesh21@gmail.com> wrote:\nThanks for your comments Andres, I will ignore it for the processes\nwhich do not have access to ProcSignal. I will make the changes and\npost a patch for this soon.I think that's sensible.I've had variable results with glibc's backtrace(), especially on older platforms and/or with external debuginfo, but it's much better than nothing. It's often not feasible to get someone to install gdb and run commands on their production systems - they can be isolated and firewalled or hobbled by painful change policies. Something basic built-in to postgres, even if basic, is likely to come in very handy.",
"msg_date": "Mon, 18 Jan 2021 12:10:41 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sun, Jan 17, 2021 at 10:26 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, Jan 16, 2021 at 11:10 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On Sat, Jan 16, 2021, at 09:34, vignesh C wrote:\n> > > On Sat, Jan 16, 2021 at 1:40 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > On 2021-01-15 09:53:05 +0100, Peter Eisentraut wrote:\n> > > > > On 2020-12-08 10:38, vignesh C wrote:\n> > > > > > I have implemented printing of backtrace based on handling it in\n> > > > > > CHECK_FOR_INTERRUPTS. This patch also includes the change to allow\n> > > > > > getting backtrace of any particular process based on the suggestions.\n> > > > > > Attached patch has the implementation for the same.\n> > > > > > Thoughts?\n> > > > >\n> > > > > Are we willing to use up a signal for this?\n> > > >\n> > > > Why is a full signal needed? Seems the procsignal infrastructure should\n> > > > suffice?\n> > >\n> > > Most of the processes have access to ProcSignal, for these processes\n> > > printing of callstack signal was handled by using ProcSignal. Pgstat\n> > > process & syslogger process do not have access to ProcSignal,\n> > > multiplexing with SIGUSR1 is not possible for these processes. So I\n> > > handled the printing of callstack for pgstat process & syslogger using\n> > > the SIGUSR2 signal.\n> > > This is because shared memory is detached before pgstat & syslogger\n> > > process is started by using the below:\n> > > /* Drop our connection to postmaster's shared memory, as well */\n> > > dsm_detach_all();\n> > > PGSharedMemoryDetach();\n> >\n> > Sure. But why is it important enough to support those that we are willing to dedicate a signal to the task? Their backtraces aren't often interesting, so I think we should just ignore them here.\n>\n> Thanks for your comments Andres, I will ignore it for the processes\n> which do not have access to ProcSignal. 
I will make the changes and\n> post a patch for this soon.\n>\n\nThe attached patch has the fix for this.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Jan 2021 17:15:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sat, Jan 16, 2021 at 3:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'd argue that backtraces for those processes aren't really essential,\n> and indeed that trying to make the syslogger report its own backtrace\n> is damn dangerous.\n\nI agree. Ideally I'd like to be able to use the same mechanism\neverywhere and include those processes too, but surely regular\nbackends and parallel workers are going to be the things that come up\nmost often.\n\n> (Personally, I think this whole patch fails the safety-vs-usefulness\n> tradeoff, but I expect I'll get shouted down.)\n\nYou and I are frequently on opposite sides of these kinds of\nquestions, but I think this is a closer call than many cases. I'm\nconvinced that it's useful, but I'm not sure whether it's safe. On the\nusefulness side, backtraces are often the only way to troubleshoot\nproblems that occur on production systems. I wish we had better\nlogging and tracing tools instead of having to ask for this sort of\nthing, but we don't. EDB support today frequently asks customers to\nattach gdb and take a backtrace that way, and that has risks which\nthis implementation does not: for example, suppose you were unlucky\nenough to attach during a spinlock protected critical section, and\nsuppose you didn't continue the stopped process before the 60 second\ntimeout expired and some other process caused a PANIC. Even if this\nimplementation were to end up emitting a backtrace with a spinlock\nheld, it would remove the risk of leaving the process stopped while\nholding a critical lock, and would in that sense be safer. However, as\nsoon as you make something like this accessible via an SQL callable\nfunction, some people are going to start spamming it. And, as soon as\nthey do that, any risks inherent in the implementation are multiplied.\nIf it carries an 0.01% chance of crashing the system, we'll have\npeople taking production systems down with this all the time. 
At that\npoint I wouldn't want the feature, even if the gdb approach had the\nsame risk (which I don't think it does).\n\nWhat do you see as the main safety risks here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Jan 2021 12:30:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Jan 16, 2021 at 3:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (Personally, I think this whole patch fails the safety-vs-usefulness\n>> tradeoff, but I expect I'll get shouted down.)\n\n> What do you see as the main safety risks here?\n\nThe thing that is scaring me the most is the \"broadcast\" aspect.\nFor starters, I think that that is going to be useless in the\nfield because of the likelihood that different backends' stack\ntraces will get interleaved in whatever log file the traces are\ngoing to. Also, having hundreds of processes spitting dozens of\nlines to the same place at the same time is going to expose any\nlittle weaknesses in your logging arrangements, such as rsyslog's\ntendency to drop messages under load.\n\nI think it's got security hazards as well. If we restricted the\nfeature to cause a trace of only one process at a time, and required\nthat process to be logged in as the same database user as the\nrequestor (or at least someone with the privs of that user, such\nas a superuser), that'd be less of an issue.\n\nBeyond that, well, maybe it's all right. In theory anyplace that\nthere's a CHECK_FOR_INTERRUPTS should be okay to call elog from;\nbut it's completely untested whether we can do that and then\ncontinue, as opposed to aborting the transaction or whole session.\nI share your estimate that there'll be small-fraction-of-a-percent\nhazards that could still add up to dangerous instability if people\ngo wild with this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jan 2021 12:50:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Jan 19, 2021 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The thing that is scaring me the most is the \"broadcast\" aspect.\n> For starters, I think that that is going to be useless in the\n> field because of the likelihood that different backends' stack\n> traces will get interleaved in whatever log file the traces are\n> going to. Also, having hundreds of processes spitting dozens of\n> lines to the same place at the same time is going to expose any\n> little weaknesses in your logging arrangements, such as rsyslog's\n> tendency to drop messages under load.\n\n+1. I don't think broadcast is a good idea.\n\n> I think it's got security hazards as well. If we restricted the\n> feature to cause a trace of only one process at a time, and required\n> that process to be logged in as the same database user as the\n> requestor (or at least someone with the privs of that user, such\n> as a superuser), that'd be less of an issue.\n\nI am not sure I see a security hazard here. I think the checks for\nthis should have the same structure as pg_terminate_backend() and\npg_cancel_backend(); whatever is required there should be required\nhere, too, but not more, unless we have a real clear reason for such\nan inconsistency.\n\n> Beyond that, well, maybe it's all right. In theory anyplace that\n> there's a CHECK_FOR_INTERRUPTS should be okay to call elog from;\n> but it's completely untested whether we can do that and then\n> continue, as opposed to aborting the transaction or whole session.\n\nI guess that's a theoretical risk but it doesn't seem very likely.\nAnd, if we do have such a problem, I think that'd probably be a case\nof bad code that we would want to fix either way.\n\n> I share your estimate that there'll be small-fraction-of-a-percent\n> hazards that could still add up to dangerous instability if people\n> go wild with this.\n\nRight. 
I was more concerned about whether we could, for example, crash\nwhile inside the function that generates the backtrace, on some\nplatforms or in some scenarios. That would be super-sad. I assume that\nthe people who wrote the code tried to make sure that didn't happen\nbut I don't really know what's reasonable to expect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Jan 2021 15:58:36 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 19, 2021 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think it's got security hazards as well. If we restricted the\n>> feature to cause a trace of only one process at a time, and required\n>> that process to be logged in as the same database user as the\n>> requestor (or at least someone with the privs of that user, such\n>> as a superuser), that'd be less of an issue.\n\n> I am not sure I see a security hazard here. I think the checks for\n> this should have the same structure as pg_terminate_backend() and\n> pg_cancel_backend(); whatever is required there should be required\n> here, too, but not more, unless we have a real clear reason for such\n> an inconsistency.\n\nYeah, agreed. The \"broadcast\" option seems inconsistent with doing\nthings that way, but I don't have a problem with being able to send\na trace signal to the same processes you could terminate.\n\n>> I share your estimate that there'll be small-fraction-of-a-percent\n>> hazards that could still add up to dangerous instability if people\n>> go wild with this.\n\n> Right. I was more concerned about whether we could, for example, crash\n> while inside the function that generates the backtrace, on some\n> platforms or in some scenarios. That would be super-sad. I assume that\n> the people who wrote the code tried to make sure that didn't happen\n> but I don't really know what's reasonable to expect.\n\nRecursion is scary, but it should (I think) not be possible if this\nis driven off CHECK_FOR_INTERRUPTS. There will certainly be none\nof those in libbacktrace.\n\nOne point here is that it might be a good idea to suppress elog.c's\ncalls to functions in the error context stack. 
As we saw in another\nconnection recently, allowing that to happen makes for a *very*\nlarge increase in the footprint of code that you are expecting to\nwork at any random CHECK_FOR_INTERRUPTS call site.\n\nBTW, it also looks like the patch is doing nothing to prevent the\nbacktrace from being sent to the connected client. I'm not sure\nwhat I think about whether it'd be okay from a security standpoint\nto do that on the connection that requested the trace, but I sure\nas heck don't want it to happen on connections that didn't. Also,\nwhatever you think about security concerns, it seems like there'd be\npretty substantial reentrancy hazards if the interrupt occurs\nanywhere in code dealing with normal client communication.\n\nMaybe, given all of these things, we should forget using elog at\nall and just emit the trace with fprintf(stderr). That seems like\nit would decrease the odds of trouble by about an order of magnitude.\nIt would make it more painful to collect the trace in some logging\nsetups, of course.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jan 2021 16:22:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 2:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Jan 19, 2021 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think it's got security hazards as well. If we restricted the\n> >> feature to cause a trace of only one process at a time, and required\n> >> that process to be logged in as the same database user as the\n> >> requestor (or at least someone with the privs of that user, such\n> >> as a superuser), that'd be less of an issue.\n>\n> > I am not sure I see a security hazard here. I think the checks for\n> > this should have the same structure as pg_terminate_backend() and\n> > pg_cancel_backend(); whatever is required there should be required\n> > here, too, but not more, unless we have a real clear reason for such\n> > an inconsistency.\n>\n> Yeah, agreed. The \"broadcast\" option seems inconsistent with doing\n> things that way, but I don't have a problem with being able to send\n> a trace signal to the same processes you could terminate.\n>\n\nThe current patch supports both getting the trace for all the\nprocesses of that instance and getting the trace for a particular\nprocess, I'm understanding the concern here with broadcasting to all\nthe connected backends, I will remove the functionality to get trace\nfor all the processes. And I will also change the trace of a single\nprocess in such a way that the user can get the trace of only the\nprocesses which that user could terminate.\n\n> >> I share your estimate that there'll be small-fraction-of-a-percent\n> >> hazards that could still add up to dangerous instability if people\n> >> go wild with this.\n>\n> > Right. I was more concerned about whether we could, for example, crash\n> > while inside the function that generates the backtrace, on some\n> > platforms or in some scenarios. That would be super-sad. 
I assume that\n> > the people who wrote the code tried to make sure that didn't happen\n> > but I don't really know what's reasonable to expect.\n>\n> Recursion is scary, but it should (I think) not be possible if this\n> is driven off CHECK_FOR_INTERRUPTS. There will certainly be none\n> of those in libbacktrace.\n>\n> One point here is that it might be a good idea to suppress elog.c's\n> calls to functions in the error context stack. As we saw in another\n> connection recently, allowing that to happen makes for a *very*\n> large increase in the footprint of code that you are expecting to\n> work at any random CHECK_FOR_INTERRUPTS call site.\n>\n> BTW, it also looks like the patch is doing nothing to prevent the\n> backtrace from being sent to the connected client. I'm not sure\n> what I think about whether it'd be okay from a security standpoint\n> to do that on the connection that requested the trace, but I sure\n> as heck don't want it to happen on connections that didn't. Also,\n> whatever you think about security concerns, it seems like there'd be\n> pretty substantial reentrancy hazards if the interrupt occurs\n> anywhere in code dealing with normal client communication.\n>\n> Maybe, given all of these things, we should forget using elog at\n> all and just emit the trace with fprintf(stderr). That seems like\n> it would decrease the odds of trouble by about an order of magnitude.\n> It would make it more painful to collect the trace in some logging\n> setups, of course.\n\nI would prefer the trace to appear in the log file rather than on the\nconsole; as you rightly pointed out, it will be difficult to collect\nthe trace from fprintf(stderr). Let me know if I misunderstood. I think\nyou are concerned about the problem where elog logs the trace to the\nclient as well. Can we use the LOG_SERVER_ONLY log level, which would\nprevent it from being sent to the client? That way we can keep the elog\nimplementation itself.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Jan 2021 19:15:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, 20 Jan 2021 at 01:31, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Jan 16, 2021 at 3:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'd argue that backtraces for those processes aren't really essential,\n> > and indeed that trying to make the syslogger report its own backtrace\n> > is damn dangerous.\n>\n> I agree. Ideally I'd like to be able to use the same mechanism\n> everywhere and include those processes too, but surely regular\n> backends and parallel workers are going to be the things that come up\n> most often.\n>\n> > (Personally, I think this whole patch fails the safety-vs-usefulness\n> > tradeoff, but I expect I'll get shouted down.)\n>\n> You and I are frequently on opposite sides of these kinds of\n> questions, but I think this is a closer call than many cases. I'm\n> convinced that it's useful, but I'm not sure whether it's safe. On the\n> usefulness side, backtraces are often the only way to troubleshoot\n> problems that occur on production systems. I wish we had better\n> logging and tracing tools instead of having to ask for this sort of\n> thing, but we don't.\n\n\nAgreed.\n\nIn theory we should be able to do this sort of thing using external trace\nand diagnostic tools like perf, systemtap, etc. In practice, these tools\ntend to be quite version-sensitive and hard to get right without multiple\nrounds of back-and-forth to deal with specifics of the site's setup,\ninstalled debuginfo or lack thereof, specific tool versions, etc.\n\nIt's quite common to have to fall back on attaching gdb with a breakpoint\non a function in the export symbol table (so it works w/o debuginfo),\nsaving a core, and then analysing the core on a separate system on which\ndebuginfo is available for all the loaded modules. It's a major pain.\n\nThe ability to get a basic bt from within Pg is strongly desirable. IIRC\ngdb's basic unwinder works without external debuginfo, if not especially\nwell. 
libunwind produces much better results, but that didn't pass the\nextra-dependency bar when backtracing support was introduced to core\npostgres.\n\nOn a side note, to help get better diagnostics I've also been meaning to\nlook into building --enable-debug with -ggdb3 so we can embed macro info,\nand using dwz to deduplicate+compress the debuginfo so we can encourage\npeople to install it by default on production. I also want to start\nexporting pointers to all the important data symbols for diagnostic use,\neven if we do so in a separate ELF section just for debug use.",
"msg_date": "Thu, 21 Jan 2021 09:35:32 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, 20 Jan 2021 at 05:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Recursion is scary, but it should (I think) not be possible if this\n> is driven off CHECK_FOR_INTERRUPTS. There will certainly be none\n> of those in libbacktrace.\n>\n\nWe can also hold interrupts for the call, and it might be wise to do so.\n\nOne point here is that it might be a good idea to suppress elog.c's\n> calls to functions in the error context stack. As we saw in another\n> connection recently, allowing that to happen makes for a *very*\n> large increase in the footprint of code that you are expecting to\n> work at any random CHECK_FOR_INTERRUPTS call site.\n>\n\nI strongly agree. Treat it as errhidecontext().\n\nBTW, it also looks like the patch is doing nothing to prevent the\n> backtrace from being sent to the connected client. I'm not sure\n> what I think about whether it'd be okay from a security standpoint\n> to do that on the connection that requested the trace, but I sure\n> as heck don't want it to happen on connections that didn't.\n\n\nI don't see a good reason to send a bt to a client. Even though these\nbacktraces won't be analysing debuginfo and populating args, locals, etc,\nit should still just go to the server log.\n\n\n> Maybe, given all of these things, we should forget using elog at\n> all and just emit the trace with fprintf(stderr).\n\n\nThat causes quite a lot of pain with MemoryContextStats() already as it's\nfrequently difficult to actually locate the output given the variations\nthat exist in customer logging configurations. Sometimes stderr goes to a\nseparate file or to journald. It's also much harder to locate the desired\noutput since there's no log_line_prefix. 
I have a WIP patch floating around\nsomewhere that tries to teach MemoryContextStats to write to the ereport\nchannel when not called during an actual out-of-memory situation for that\nreason; an early version is somewhere in the archives.\n\nThis is one of those \"ok in development, painful in production\" situations.\n\nSo I'm not a big fan of pushing it to stderr, though I'd rather have that\nthan not have the ability at all.\n\nOn Wed, 20 Jan 2021 at 05:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nRecursion is scary, but it should (I think) not be possible if this\nis driven off CHECK_FOR_INTERRUPTS. There will certainly be none\nof those in libbacktrace.We can also hold interrupts for the call, and it might be wise to do so.\n\nOne point here is that it might be a good idea to suppress elog.c's\ncalls to functions in the error context stack. As we saw in another\nconnection recently, allowing that to happen makes for a *very*\nlarge increase in the footprint of code that you are expecting to\nwork at any random CHECK_FOR_INTERRUPTS call site.I strongly agree. Treat it as errhidecontext().\n\nBTW, it also looks like the patch is doing nothing to prevent the\nbacktrace from being sent to the connected client. I'm not sure\nwhat I think about whether it'd be okay from a security standpoint\nto do that on the connection that requested the trace, but I sure\nas heck don't want it to happen on connections that didn't.I don't see a good reason to send a bt to a client. Even though these backtraces won't be analysing debuginfo and populating args, locals, etc, it should still just go to the server log. \nMaybe, given all of these things, we should forget using elog at\nall and just emit the trace with fprintf(stderr).That causes quite a lot of pain with MemoryContextStats() already as it's frequently difficult to actually locate the output given the variations that exist in customer logging configurations. Sometimes stderr goes to a separate file or to journald. 
It's also much harder to locate the desired output since there's no log_line_prefix. I have a WIP patch floating around somewhere that tries to teach MemoryContextStats to write to the ereport channel when not called during an actual out-of-memory situation for that reason; an early version is somewhere in the archives.This is one of those \"ok in development, painful in production\" situations.So I'm not a big fan of pushing it to stderr, though I'd rather have that than not have the ability at all.",
"msg_date": "Thu, 21 Jan 2021 09:42:35 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Craig Ringer <craig.ringer@enterprisedb.com> writes:\n> On Wed, 20 Jan 2021 at 05:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, it also looks like the patch is doing nothing to prevent the\n>> backtrace from being sent to the connected client.\n\n> I don't see a good reason to send a bt to a client. Even though these\n> backtraces won't be analysing debuginfo and populating args, locals, etc,\n> it should still just go to the server log.\n\nYeah. That's easier than I was thinking, we just need to\ns/LOG/LOG_SERVER_ONLY/.\n\n>> Maybe, given all of these things, we should forget using elog at\n>> all and just emit the trace with fprintf(stderr).\n\n> That causes quite a lot of pain with MemoryContextStats() already\n\nTrue. Given the changes discussed in the last couple messages, I don't\nsee any really killer reasons why we can't ship the trace through elog.\nWe can always try that first, and back off to fprintf if we do find\nreasons why it's too unstable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jan 2021 20:56:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, 21 Jan 2021 at 09:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Craig Ringer <craig.ringer@enterprisedb.com> writes:\n> > On Wed, 20 Jan 2021 at 05:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> BTW, it also looks like the patch is doing nothing to prevent the\n> >> backtrace from being sent to the connected client.\n>\n> > I don't see a good reason to send a bt to a client. Even though these\n> > backtraces won't be analysing debuginfo and populating args, locals, etc,\n> > it should still just go to the server log.\n>\n> Yeah. That's easier than I was thinking, we just need to\n> s/LOG/LOG_SERVER_ONLY/.\n>\n> >> Maybe, given all of these things, we should forget using elog at\n> >> all and just emit the trace with fprintf(stderr).\n>\n> > That causes quite a lot of pain with MemoryContextStats() already\n>\n> True. Given the changes discussed in the last couple messages, I don't\n> see any really killer reasons why we can't ship the trace through elog.\n> We can always try that first, and back off to fprintf if we do find\n> reasons why it's too unstable.\n>\n\nYep, works for me.\n\nThanks for being open to considering this.\n\nI know lots of this stuff can seem like a pointless sidetrack because the\nutility of it is not obvious on dev systems or when you're doing your own\nhands-on expert support on systems you own and operate yourself. 
These\nsorts of things really only start to make sense when you're touching many\ndifferent postgres systems \"in the wild\" - such as commercial support\nservices, helping people on -general, -bugs or stackoverflow, etc.\n\nI really appreciate your help with it.",
"msg_date": "Thu, 21 Jan 2021 10:23:48 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 9:24 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> I know lots of this stuff can seem like a pointless sidetrack because the utility of it is not obvious on dev systems or when you're doing your own hands-on expert support on systems you own and operate yourself. These sorts of things really only start to make sense when you're touching many different postgres systems \"in the wild\" - such as commercial support services, helping people on -general, -bugs or stackoverflow, etc.\n>\n> I really appreciate your help with it.\n\nBig +1 for all that from me, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Jan 2021 12:05:24 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 7:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Craig Ringer <craig.ringer@enterprisedb.com> writes:\n> > On Wed, 20 Jan 2021 at 05:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> BTW, it also looks like the patch is doing nothing to prevent the\n> >> backtrace from being sent to the connected client.\n>\n> > I don't see a good reason to send a bt to a client. Even though these\n> > backtraces won't be analysing debuginfo and populating args, locals, etc,\n> > it should still just go to the server log.\n>\n> Yeah. That's easier than I was thinking, we just need to\n> s/LOG/LOG_SERVER_ONLY/.\n>\n> >> Maybe, given all of these things, we should forget using elog at\n> >> all and just emit the trace with fprintf(stderr).\n>\n> > That causes quite a lot of pain with MemoryContextStats() already\n>\n> True. Given the changes discussed in the last couple messages, I don't\n> see any really killer reasons why we can't ship the trace through elog.\n> We can always try that first, and back off to fprintf if we do find\n> reasons why it's too unstable.\n>\n\nThanks all of them for the suggestions. Attached v3 patch which has\nfixes based on the suggestions. It includes the following fixes: 1)\nRemoval of support to get callstack of all postgres process, user can\nget only one process callstack. 2) Update the documentation. 3) Added\nnecessary checks for pg_print_callstack similar to\npg_terminate_backend. 4) Changed LOG to LOG_SERVER_ONLY.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Wed, 27 Jan 2021 19:05:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn 2021-01-27 19:05:16 +0530, vignesh C wrote:\n\n> /*\n> + * LogBackTrace\n> + *\n> + * Get the backtrace and log the backtrace to log file.\n> + */\n> +void\n> +LogBackTrace(void)\n> +{\n> +\tint\t\t\tsave_errno = errno;\n> +\n> +\tvoid\t *buf[100];\n> +\tint\t\t\tnframes;\n> +\tchar\t **strfrms;\n> +\tStringInfoData errtrace;\n> +\n> +\t/* OK to process messages. Reset the flag saying there are more to do. */\n> +\tPrintBacktracePending = false;\n\nISTM that it'd be better to do this in the caller, allowing this\nfunction to be used outside of signal triggered backtraces.\n\n> \n> +extern void LogBackTrace(void); /* Called from EmitProcSignalPrintCallStack */\n\nI don't think this comment is correct anymore?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 27 Jan 2021 09:09:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 10:40 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-01-27 19:05:16 +0530, vignesh C wrote:\n>\n> > /*\n> > + * LogBackTrace\n> > + *\n> > + * Get the backtrace and log the backtrace to log file.\n> > + */\n> > +void\n> > +LogBackTrace(void)\n> > +{\n> > + int save_errno = errno;\n> > +\n> > + void *buf[100];\n> > + int nframes;\n> > + char **strfrms;\n> > + StringInfoData errtrace;\n> > +\n> > + /* OK to process messages. Reset the flag saying there are more to do. */\n> > + PrintBacktracePending = false;\n>\n> ISTM that it'd be better to do this in the caller, allowing this\n> function to be used outside of signal triggered backtraces.\n>\n> >\n> > +extern void LogBackTrace(void); /* Called from EmitProcSignalPrintCallStack */\n>\n> I don't think this comment is correct anymore?\n\nThanks for the comments, I have fixed and attached an updated patch\nwith the fixes for the same.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Thu, 28 Jan 2021 17:22:24 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 5:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the comments, I have fixed and attached an updated patch\n> with the fixes for the same.\n> Thoughts?\n\nThanks for the patch. Here are few comments:\n\n1) I think it's return SIGNAL_BACKEND_SUCCESS; instead of return 0; in\ncheck_valid_pid?\n\n2) How about following in pg_signal_backend for more readability\n+ if (ret != SIGNAL_BACKEND_SUCCESS)\n+ return ret;\ninstead of\n+ if (ret)\n+ return ret;\n\n3) How about validate_backend_pid or some better name instead of\ncheck_valid_pid?\n\n4) How about following\n+ errmsg(\"must be a superuser to print backtrace\nof backend process\")));\ninstead of\n+ errmsg(\"must be a superuser to print backtrace\nof superuser query process\")));\n\n5) How about following\n errmsg(\"must be a member of the role whose backed\nprocess's backtrace is being printed or member of\npg_signal_backend\")));\ninstead of\n+ errmsg(\"must be a member of the role whose\nbacktrace is being logged or member of pg_signal_backend\")));\n\n6) I'm not sure whether \"backtrace\" or \"call stack\" is a generic term\nfrom the user/developer perspective. In the patch, the function name\nand documentation says callstack(I think it is \"call stack\" actually),\nbut the error/warning messages says backtrace. IMHO, having\n\"backtrace\" everywhere in the patch, even the function name changed to\npg_print_backtrace, looks better and consistent. Thoughts?\n\n7) How about following in pg_print_callstack?\n{\n int bt_pid = PG_ARGISNULL(0) ? 
-1 : PG_GETARG_INT32(0);\n bool result = false;\n\n if (r == SIGNAL_BACKEND_SUCCESS)\n {\n if (EmitProcSignalPrintCallStack(bt_pid))\n result = true;\n else\n ereport(WARNING,\n (errmsg(\"failed to send signal to postmaster: %m\")));\n }\n\n PG_RETURN_BOOL(result);\n}\n\n8) How about following\n+ (errmsg(\"backtrace generation is not supported by\nthis PostgresSQL installation\")));\ninstead of\n+ (errmsg(\"backtrace generation is not supported by\nthis installation\")));\n\n9) Typo - it's \"example\" +2) Using \"addr2line -e postgres address\", For exmple:\n\n10) How about\n+ * Handle print backtrace signal\ninstead of\n+ * Handle receipt of an print backtrace.\n\n11) Isn't below in documentation specific to Linux platform. What\nhappens if GDB is not there on the platform?\n+<programlisting>\n+1) \"info line *address\" from gdb on postgres executable. For example:\n+gdb ./postgres\n+GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-115.el7\n\n12) +The callstack will be logged in the log file. What happens if the\nserver is started without a log file , ./pg_ctl -D data start? Where\nwill the backtrace go?\n\n13) Not sure, if it's an overkill, but how about pg_print_callstack\nreturning a warning/notice along with true, which just says, \"See\n<<<full log file name along with log directory>>>\". Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Jan 2021 19:40:36 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Thanks Bharath for your review comments. Please find my comments inline below.\n\nOn Thu, Jan 28, 2021 at 7:40 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jan 28, 2021 at 5:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, I have fixed and attached an updated patch\n> > with the fixes for the same.\n> > Thoughts?\n>\n> Thanks for the patch. Here are few comments:\n>\n> 1) I think it's return SIGNAL_BACKEND_SUCCESS; instead of return 0; in\n> check_valid_pid?\n>\n\nI did not want to use SIGNAL_BACKEND_SUCCESS as we have not yet\nsignalled the backend process at this time. I have added\nBACKEND_VALIDATION_SUCCESS macro and used it here for better\nreadability.\n\n> 2) How about following in pg_signal_backend for more readability\n> + if (ret != SIGNAL_BACKEND_SUCCESS)\n> + return ret;\n> instead of\n> + if (ret)\n> + return ret;\n>\n\nModified it to ret != BACKEND_VALIDATION_SUCCESS\n\n> 3) How about validate_backend_pid or some better name instead of\n> check_valid_pid?\n>\n\nModified it to validate_backend_pid\n\n> 4) How about following\n> + errmsg(\"must be a superuser to print backtrace\n> of backend process\")));\n> instead of\n> + errmsg(\"must be a superuser to print backtrace\n> of superuser query process\")));\n>\n\nHere the message should include superuser, we cannot remove it. Non\nsuper user can log non super user provided if user has permissions for\nit.\n\n> 5) How about following\n> errmsg(\"must be a member of the role whose backed\n> process's backtrace is being printed or member of\n> pg_signal_backend\")));\n> instead of\n> + errmsg(\"must be a member of the role whose\n> backtrace is being logged or member of pg_signal_backend\")));\n>\n\nModified it.\n\n> 6) I'm not sure whether \"backtrace\" or \"call stack\" is a generic term\n> from the user/developer perspective. 
In the patch, the function name\n> and documentation says callstack(I think it is \"call stack\" actually),\n> but the error/warning messages says backtrace. IMHO, having\n> \"backtrace\" everywhere in the patch, even the function name changed to\n> pg_print_backtrace, looks better and consistent. Thoughts?\n>\n\nModified it to pg_print_backtrace.\n\n> 7) How about following in pg_print_callstack?\n> {\n> int bt_pid = PG_ARGISNULL(0) ? -1 : PG_GETARG_INT32(0);\n> bool result = false;\n>\n> if (r == SIGNAL_BACKEND_SUCCESS)\n> {\n> if (EmitProcSignalPrintCallStack(bt_pid))\n> result = true;\n> else\n> ereport(WARNING,\n> (errmsg(\"failed to send signal to postmaster: %m\")));\n> }\n>\n> PG_RETURN_BOOL(result);\n> }\n>\n\nModified similarly with slight change.\n\n> 8) How about following\n> + (errmsg(\"backtrace generation is not supported by\n> this PostgresSQL installation\")));\n> instead of\n> + (errmsg(\"backtrace generation is not supported by\n> this installation\")));\n>\n\nI used the existing message to maintain consistency with\nset_backtrace. I feel we can keep it the same.\n\n> 9) Typo - it's \"example\" +2) Using \"addr2line -e postgres address\", For exmple:\n>\n\nModified it.\n\n> 10) How about\n> + * Handle print backtrace signal\n> instead of\n> + * Handle receipt of an print backtrace.\n>\n\nI used the existing message to maintain consistency similar to\nHandleProcSignalBarrierInterrupt. I feel we can keep it the same.\n\n> 11) Isn't below in documentation specific to Linux platform. What\n> happens if GDB is not there on the platform?\n> +<programlisting>\n> +1) \"info line *address\" from gdb on postgres executable. 
For example:\n> +gdb ./postgres\n> +GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-115.el7\n>\n\nI have made changes \"You can get the file name and line number by\nusing gdb/addr2line in linux platforms, as a prerequisite users must\nensure gdb/addr2line is already installed\".\n\nUser will get an error like this in windows:\nselect pg_print_backtrace(pg_backend_pid());\nWARNING: backtrace generation is not supported by this installation\n pg_print_callstack\n--------------------\n f\n(1 row)\n\nThe backtrace will not be logged in case of windows, it will throw a\nwarning \"backtrace generation is not supported by this installation\"\nThoughts?\n\n> 12) +The callstack will be logged in the log file. What happens if the\n> server is started without a log file , ./pg_ctl -D data start? Where\n> will the backtrace go?\n>\n\nUpdated to: The backtrace will be logged to the log file if logging is\nenabled, if logging is disabled backtrace will be logged to the\nconsole where the postmaster was started.\n\n> 13) Not sure, if it's an overkill, but how about pg_print_callstack\n> returning a warning/notice along with true, which just says, \"See\n> <<<full log file name along with log directory>>>\". Thoughts?\n\nAs you rightly pointed out it will be an overkill, I feel the existing\nis easily understandable.\n\nAttached v5 patch has the fixes for the same.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Fri, 29 Jan 2021 19:10:24 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Jan 29, 2021 at 7:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> > 4) How about following\n> > + errmsg(\"must be a superuser to print backtrace\n> > of backend process\")));\n> > instead of\n> > + errmsg(\"must be a superuser to print backtrace\n> > of superuser query process\")));\n> >\n>\n> Here the message should include superuser, we cannot remove it. Non\n> super user can log non super user provided if user has permissions for\n> it.\n>\n> > 5) How about following\n> > errmsg(\"must be a member of the role whose backed\n> > process's backtrace is being printed or member of\n> > pg_signal_backend\")));\n> > instead of\n> > + errmsg(\"must be a member of the role whose\n> > backtrace is being logged or member of pg_signal_backend\")));\n> >\n>\n> Modified it.\n\nMaybe I'm confused here to understand the difference between\nSIGNAL_BACKEND_NOSUPERUSER and SIGNAL_BACKEND_NOPERMISSION macros and\ncorresponding error messages. Some clarification/use case to know in\nwhich scenarios we hit those error messages might help me. Did we try\nto add test cases that show up these error messages for\npg_print_backtrace? If not, can we add?\n\n> Attached v5 patch has the fixes for the same.\n> Thoughts?\n\nThanks. Here are some comments on v5 patch:\n\n1) typo - it's \"process\" not \"porcess\" + a backend porcess. For example:\n\n2) select * from pg_print_backtrace(NULL);\npostgres=# select proname, proisstrict from pg_proc where proname =\n'pg_print_backtrace';\n proname | proisstrict\n--------------------+-------------\n pg_print_backtrace | t\n\n See the documentation:\n \"proisstrict bool\n\nFunction returns null if any call argument is null. In that case the\nfunction won't actually be called at all. Functions that are not\n“strict” must be prepared to handle null inputs.\"\nSo below PG_ARGISNUL check is not needed, you can remove that, because\npg_print_backtrace will not get called with null input.\nint bt_pid = PG_ARGISNULL(0) ? 
-1 : PG_GETARG_INT32(0);\n\n3) Can we just set results = true instead of PG_RETURN_BOOL(true); so\nthat it will be returned from PG_RETURN_BOOL(result); just for\nconsistency?\n if (!SendProcSignal(bt_pid, PROCSIG_BACKTRACE_PRINT,\nInvalidBackendId))\n PG_RETURN_BOOL(true);\n else\n ereport(WARNING,\n (errmsg(\"failed to send signal to postmaster: %m\")));\n }\n\n PG_RETURN_BOOL(result);\n\n4) Below is what happens if I request for a backtrace of the\npostmaster process? 1388210 is pid of postmaster.\npostgres=# select * from pg_print_backtrace(1388210);\nWARNING: PID 1388210 is not a PostgreSQL server process\n pg_print_backtrace\n--------------------\n f\n\nDoes it make sense to have a postmaster's backtrace? If yes, can we\nhave something like below in sigusr1_handler()?\n if (CheckPostmasterSignal(PMSIGNAL_BACKTRACE_EMIT))\n {\n LogBackTrace();\n }\n\n5) Can we have PROCSIG_PRINT_BACKTRACE instead of\nPROCSIG_BACKTRACE_PRINT, just for readability and consistency with\nfunction name?\n\n6) I think it's not the postmaster that prints backtrace of the\nrequested backend and we don't send SIGUSR1 with\nPMSIGNAL_BACKTRACE_EMIT to postmaster, I think it's the backends, upon\nreceiving SIGUSR1 with PMSIGNAL_BACKTRACE_EMIT signal, log their own\nbacktrace. Am I missing anything here? If I'm correct, we need to\nchange the below description and other places wherever we refer to\nthis description.\n\nThe idea here is to implement & expose pg_print_backtrace function, internally\nwhat this function does is, the connected backend will send SIGUSR1 signal by\nsetting PMSIGNAL_BACKTRACE_EMIT to the postmaster process. Once the process\nreceives this signal it will log the backtrace of the process.\n\n7) Can we get the parallel worker's backtrace? IIUC it's possible.\n\n8) What happens if a user runs pg_print_backtrace() on Windows or\nMacOS or some other platform? 
Do you want to say something about other\nplatforms where gdb/addr2line doesn't exist?\n+</programlisting>\n+ You can get the file name and line number by using gdb/addr2line in\n+ linux platforms, as a prerequisite users must ensure gdb/addr2line is\n+ already installed:\n+<programlisting>\n\n9) Can't we reuse set_backtrace with just adding a flag to\nset_backtrace(ErrorData *edata, int num_skip, bool\nis_print_backtrace_function), making it a non-static function and call\nset_backtrace(NULL, 0, true)?\n\nvoid\nset_backtrace(ErrorData *edata, int num_skip, bool is_print_backtrace_function)\n{\n StringInfoData errtrace;\n-------\n-------\n if (is_print_backtrace_function)\n elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n else\n edata->backtrace = errtrace.data;\n}\n\nI think it will be good if we do this, because we can avoid duplicate\ncode in set_backtrace and LogBackTrace.\n\n10) I think it's \"pg_signal_backend\" instead of \"pg_print_backtrace\"?\n+ backtrace is being printed or the calling role has been granted\n+ <literal>pg_print_backtrace</literal>, however only superusers can\n\n11) In the documentation added, isn't it good if we talk a bit about\nin which scenarios users can use this function? For instance,\nsomething like \"Use pg_print_backtrace to know exactly where it's\ncurrently waiting when a backend process hangs.\"?\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Feb 2021 06:14:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 6:14 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Jan 29, 2021 at 7:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > 4) How about following\n> > > + errmsg(\"must be a superuser to print backtrace\n> > > of backend process\")));\n> > > instead of\n> > > + errmsg(\"must be a superuser to print backtrace\n> > > of superuser query process\")));\n> > >\n> >\n> > Here the message should include superuser, we cannot remove it. Non\n> > super user can log non super user provided if user has permissions for\n> > it.\n> >\n> > > 5) How about following\n> > > errmsg(\"must be a member of the role whose backed\n> > > process's backtrace is being printed or member of\n> > > pg_signal_backend\")));\n> > > instead of\n> > > + errmsg(\"must be a member of the role whose\n> > > backtrace is being logged or member of pg_signal_backend\")));\n> > >\n> >\n> > Modified it.\n>\n> Maybe I'm confused here to understand the difference between\n> SIGNAL_BACKEND_NOSUPERUSER and SIGNAL_BACKEND_NOPERMISSION macros and\n> corresponding error messages. Some clarification/use case to know in\n> which scenarios we hit those error messages might help me. Did we try\n> to add test cases that show up these error messages for\n> pg_print_backtrace? If not, can we add?\n\nAre these superuser and permission checks enough from a security\nstandpoint that we don't expose some sensitive information to the\nuser? Although I'm not sure, say from the backtrace printed and\nattached to GDB, can users see the passwords or other sensitive\ninformation from the system that they aren't supposed to see?\n\nI'm sure this point would have been discussed upthread.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Feb 2021 11:04:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "At Fri, 29 Jan 2021 19:10:24 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> Attached v5 patch has the fixes for the same.\n\nPMSIGNAL_BACKTRACE_EMIT is not used anywhere?\n(the commit message seems stale.)\n\n+++ b/src/bin/pg_ctl/t/005_backtrace.pl\n\npg_ctl doesn't (or no longer?) seem relevant to this patch.\n\n\n+\teval {\n+\t\t$current_logfiles = slurp_file($node->data_dir . '/current_logfiles');\n+\t};\n+\tlast unless $@;\n\nIs there any reason not just to do \"while (! -e <filenmae)) { usleep(); }\" ?\n\n\n+logging_collector = on\n\nI don't see a reason for turning on logging collector here.\n\n\n+gdb ./postgres\n+GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-115.el7\n+Copyright (C) 2013 Free Software Foundation, Inc.\n+License GPLv3+: GNU GPL version 3 or later <literal><</literal>http://gnu.org/licenses/gpl.html<literal>></literal>\n+This is free software: you are free to change and redistribute it.\n+There is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\n+and \"show warranty\" for details.\n+This GDB was configured as \"x86_64-redhat-linux-gnu\".\n+For bug reporting instructions, please see:\n+<literal><</literal>http://www.gnu.org/software/gdb/bugs/<literal>></literal>...\n+Reading symbols from /home/postgresdba/inst/bin/postgres...done.\n\nAlmost all of the banner lines seem to be useless here.\n\n\n #define SIGNAL_BACKEND_SUCCESS 0\n+#define BACKEND_VALIDATION_SUCCESS 0\n #define SIGNAL_BACKEND_ERROR 1\n #define SIGNAL_BACKEND_NOPERMISSION 2\n #define SIGNAL_BACKEND_NOSUPERUSER 3\n\nEven though I can share the feeling that SIGNAL_BACKEND_SUCCESS is a\nbit odd to represent \"sending signal is allowed\", I feel that it's\nbetter using the existing symbol than using the new symbol.\n\n\n+validate_backend_pid(int pid)\n\nThe function needs a comment. The name is somewhat\nconfusing. 
check_privilege_to_send_signal()?\n\nMaybe the return value of the function should be changed to an enum,\nand its callers should use switch-case to handle the value.\n\n\n+\t\t\tif (!SendProcSignal(bt_pid, PROCSIG_BACKTRACE_PRINT, InvalidBackendId))\n+\t\t\t\tPG_RETURN_BOOL(true);\n+\t\t\telse\n+\t\t\t\tereport(WARNING,\n+\t\t\t\t\t\t(errmsg(\"failed to send signal to postmaster: %m\")));\n+\t\t}\n+\n+\t\tPG_RETURN_BOOL(result);\n\nThe variable \"result\" seems useless.\n\n\n+\telog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n+\n+\terrno = save_errno;\n+}\n\nYou need to release the resources held by the errtrace. And the\nerrtrace is a bit pointless. Why isn't it \"backtrace\"?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 01 Feb 2021 17:45:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Jan 29, 2021 at 7:10 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks Bharath for your review comments. Please find my comments inline below.\n>\n> On Thu, Jan 28, 2021 at 7:40 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Jan 28, 2021 at 5:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Thanks for the comments, I have fixed and attached an updated patch\n> > > with the fixes for the same.\n> > > Thoughts?\n> >\n> > Thanks for the patch. Here are few comments:\n> >\n> > 1) I think it's return SIGNAL_BACKEND_SUCCESS; instead of return 0; in\n> > check_valid_pid?\n> >\n>\n> I did not want to use SIGNAL_BACKEND_SUCCESS as we have not yet\n> signalled the backend process at this time. I have added\n> BACKEND_VALIDATION_SUCCESS macro and used it here for better\n> readability.\n>\n> > 2) How about following in pg_signal_backend for more readability\n> > + if (ret != SIGNAL_BACKEND_SUCCESS)\n> > + return ret;\n> > instead of\n> > + if (ret)\n> > + return ret;\n> >\n>\n> Modified it to ret != BACKEND_VALIDATION_SUCCESS\n>\n> > 3) How about validate_backend_pid or some better name instead of\n> > check_valid_pid?\n> >\n>\n> Modified it to validate_backend_pid\n>\n> > 4) How about following\n> > + errmsg(\"must be a superuser to print backtrace\n> > of backend process\")));\n> > instead of\n> > + errmsg(\"must be a superuser to print backtrace\n> > of superuser query process\")));\n> >\n>\n> Here the message should include superuser, we cannot remove it. 
Non\n> super user can log non super user provided if user has permissions for\n> it.\n>\n> > 5) How about following\n> > errmsg(\"must be a member of the role whose backed\n> > process's backtrace is being printed or member of\n> > pg_signal_backend\")));\n> > instead of\n> > + errmsg(\"must be a member of the role whose\n> > backtrace is being logged or member of pg_signal_backend\")));\n> >\n>\n> Modified it.\n>\n> > 6) I'm not sure whether \"backtrace\" or \"call stack\" is a generic term\n> > from the user/developer perspective. In the patch, the function name\n> > and documentation says callstack(I think it is \"call stack\" actually),\n> > but the error/warning messages says backtrace. IMHO, having\n> > \"backtrace\" everywhere in the patch, even the function name changed to\n> > pg_print_backtrace, looks better and consistent. Thoughts?\n> >\n>\n> Modified it to pg_print_backtrace.\n>\n> > 7) How about following in pg_print_callstack?\n> > {\n> > int bt_pid = PG_ARGISNULL(0) ? -1 : PG_GETARG_INT32(0);\n> > bool result = false;\n> >\n> > if (r == SIGNAL_BACKEND_SUCCESS)\n> > {\n> > if (EmitProcSignalPrintCallStack(bt_pid))\n> > result = true;\n> > else\n> > ereport(WARNING,\n> > (errmsg(\"failed to send signal to postmaster: %m\")));\n> > }\n> >\n> > PG_RETURN_BOOL(result);\n> > }\n> >\n>\n> Modified similarly with slight change.\n>\n> > 8) How about following\n> > + (errmsg(\"backtrace generation is not supported by\n> > this PostgresSQL installation\")));\n> > instead of\n> > + (errmsg(\"backtrace generation is not supported by\n> > this installation\")));\n> >\n>\n> I used the existing message to maintain consistency with\n> set_backtrace. 
I feel we can keep it the same.\n>\n> > 9) Typo - it's \"example\" +2) Using \"addr2line -e postgres address\", For exmple:\n> >\n>\n> Modified it.\n>\n> > 10) How about\n> > + * Handle print backtrace signal\n> > instead of\n> > + * Handle receipt of an print backtrace.\n> >\n>\n> I used the existing message to maintain consistency similar to\n> HandleProcSignalBarrierInterrupt. I feel we can keep it the same.\n>\n> > 11) Isn't below in documentation specific to Linux platform. What\n> > happens if GDB is not there on the platform?\n> > +<programlisting>\n> > +1) \"info line *address\" from gdb on postgres executable. For example:\n> > +gdb ./postgres\n> > +GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-115.el7\n> >\n>\n> I have made changes \"You can get the file name and line number by\n> using gdb/addr2line in linux platforms, as a prerequisite users must\n> ensure gdb/addr2line is already installed\".\n>\n> User will get an error like this in windows:\n> select pg_print_backtrace(pg_backend_pid());\n> WARNING: backtrace generation is not supported by this installation\n> pg_print_callstack\n> --------------------\n> f\n> (1 row)\n>\n> The backtrace will not be logged in case of windows, it will throw a\n> warning \"backtrace generation is not supported by this installation\"\n> Thoughts?\n>\n> > 12) +The callstack will be logged in the log file. What happens if the\n> > server is started without a log file , ./pg_ctl -D data start? Where\n> > will the backtrace go?\n> >\n>\n> Updated to: The backtrace will be logged to the log file if logging is\n> enabled, if logging is disabled backtrace will be logged to the\n> console where the postmaster was started.\n>\n> > 13) Not sure, if it's an overkill, but how about pg_print_callstack\n> > returning a warning/notice along with true, which just says, \"See\n> > <<<full log file name along with log directory>>>\". 
Thoughts?\n>\n> As you rightly pointed out it will be an overkill, I feel the existing\n> is easily understandable.\n>\n> Attached v5 patch has the fixes for the same.\n> Thoughts?\n\n1.\n+ elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n+\n+ errno = save_errno;\n\nCan you add some comments that why we have chosen LOG_SERVER_ONLY?\n\n2.\n+pg_print_backtrace(PG_FUNCTION_ARGS)\n+{\n+ int bt_pid = PG_ARGISNULL(0) ? -1 : PG_GETARG_INT32(0);\n+\n\nThe variable name bt_pid is a bit odd, can we just use pid?\n\n3.\n+Datum\n+pg_print_backtrace(PG_FUNCTION_ARGS)\n+{\n+ int bt_pid = PG_ARGISNULL(0) ? -1 : PG_GETARG_INT32(0);\n+\n+#ifdef HAVE_BACKTRACE_SYMBOLS\n+ {\n+ bool result = false;\n...\n+\n+ PG_RETURN_BOOL(result);\n+ }\n+#else\n+ {\n+ ereport(WARNING,\n+ (errmsg(\"backtrace generation is not supported by this installation\")));\n+ PG_RETURN_BOOL(false);\n+ }\n+#endif\n\nThe result is just initialized to false and it is never updated?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Feb 2021 15:02:56 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Thanks Bharath for your comments.\n\nOn Mon, Feb 1, 2021 at 6:14 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jan 29, 2021 at 7:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > 4) How about following\n> > > + errmsg(\"must be a superuser to print backtrace\n> > > of backend process\")));\n> > > instead of\n> > > + errmsg(\"must be a superuser to print backtrace\n> > > of superuser query process\")));\n> > >\n> >\n> > Here the message should include superuser, we cannot remove it. Non\n> > super user can log non super user provided if user has permissions for\n> > it.\n> >\n> > > 5) How about following\n> > > errmsg(\"must be a member of the role whose backed\n> > > process's backtrace is being printed or member of\n> > > pg_signal_backend\")));\n> > > instead of\n> > > + errmsg(\"must be a member of the role whose\n> > > backtrace is being logged or member of pg_signal_backend\")));\n> > >\n> >\n> > Modified it.\n>\n> Maybe I'm confused here to understand the difference between\n> SIGNAL_BACKEND_NOSUPERUSER and SIGNAL_BACKEND_NOPERMISSION macros and\n> corresponding error messages. Some clarification/use case to know in\n> which scenarios we hit those error messages might help me. Did we try\n> to add test cases that show up these error messages for\n> pg_print_backtrace? 
If not, can we add?\n\nI have tested this manually:\n\nI have tested it manually, Here is the test I did:\nCreate 2 users:\ncreate user test password 'test@123';\ncreate user test1 password 'test@123';\n\nTest1: Test print backtrace of a different user's session:\n./psql -d postgres -U test\npsql (14devel)\nType \"help\" for help.\npostgres=> select pg_backend_pid();\n pg_backend_pid\n----------------\n 30142\n(1 row)\n------------------------------------------\n./psql -d postgres -U test1\npsql (14devel)\nType \"help\" for help.\npostgres=> select pg_print_backtrace(30142);\nERROR: must be a member of the role whose backtrace is being logged\nor member of pg_signal_backend\n\nThe above will be successful after:\ngrant pg_signal_backend to test1;\n\nTest1: Test for non super user trying to print backtrace of a super\nuser's session:\n./psql -d postgres\npsql (14devel)\nType \"help\" for help.\npostgres=# select pg_backend_pid();\n pg_backend_pid\n----------------\n 30211\n(1 row)\n\n./psql -d postgres -U test1\npsql (14devel)\nType \"help\" for help.\npostgres=> select pg_print_backtrace(30211);\nERROR: must be a superuser to print backtrace of superuser process\nI have not added any tests for this as we required 2 active sessions\nand I did not see any existing framework for this.\nThis test should help in relating the behavior.\n\n> > Attached v5 patch has the fixes for the same.\n> > Thoughts?\n>\n> Thanks. Here are some comments on v5 patch:\n>\n> 1) typo - it's \"process\" not \"porcess\" + a backend porcess. For example:\n>\n\nModified.\n\n> 2) select * from pg_print_backtrace(NULL);\n> postgres=# select proname, proisstrict from pg_proc where proname =\n> 'pg_print_backtrace';\n> proname | proisstrict\n> --------------------+-------------\n> pg_print_backtrace | t\n>\n> See the documentation:\n> \"proisstrict bool\n>\n> Function returns null if any call argument is null. In that case the\n> function won't actually be called at all. 
Functions that are not\n> “strict” must be prepared to handle null inputs.\"\n> So below PG_ARGISNUL check is not needed, you can remove that, because\n> pg_print_backtrace will not get called with null input.\n> int bt_pid = PG_ARGISNULL(0) ? -1 : PG_GETARG_INT32(0);\n>\n\nModified.\n\n> 3) Can we just set results = true instead of PG_RETURN_BOOL(true); so\n> that it will be returned from PG_RETURN_BOOL(result); just for\n> consistency?\n> if (!SendProcSignal(bt_pid, PROCSIG_BACKTRACE_PRINT,\n> InvalidBackendId))\n> PG_RETURN_BOOL(true);\n> else\n> ereport(WARNING,\n> (errmsg(\"failed to send signal to postmaster: %m\")));\n> }\n>\n> PG_RETURN_BOOL(result);\n>\n\nI felt existing is better as it will reduce one instruction of setting\nfirst and then returning. There are only 2 places from where we\nreturn. I felt we could directly return true or false.\n\n> 4) Below is what happens if I request for a backtrace of the\n> postmaster process? 1388210 is pid of postmaster.\n> postgres=# select * from pg_print_backtrace(1388210);\n> WARNING: PID 1388210 is not a PostgreSQL server process\n> pg_print_backtrace\n> --------------------\n> f\n>\n> Does it make sense to have a postmaster's backtrace? If yes, can we\n> have something like below in sigusr1_handler()?\n> if (CheckPostmasterSignal(PMSIGNAL_BACKTRACE_EMIT))\n> {\n> LogBackTrace();\n> }\n>\n\nWe had a discussion about this in [1] earlier and felt including this\nis not very useful.\n\n> 5) Can we have PROCSIG_PRINT_BACKTRACE instead of\n> PROCSIG_BACKTRACE_PRINT, just for readability and consistency with\n> function name?\n>\n\nPROCSIG_PRINT_BACKTRACE is better, I have modified it.\n\n> 6) I think it's not the postmaster that prints backtrace of the\n> requested backend and we don't send SIGUSR1 with\n> PMSIGNAL_BACKTRACE_EMIT to postmaster, I think it's the backends, upon\n> receiving SIGUSR1 with PMSIGNAL_BACKTRACE_EMIT signal, log their own\n> backtrace. Am I missing anything here? 
If I'm correct, we need to\n> change the below description and other places wherever we refer to\n> this description.\n>\n> The idea here is to implement & expose pg_print_backtrace function, internally\n> what this function does is, the connected backend will send SIGUSR1 signal by\n> setting PMSIGNAL_BACKTRACE_EMIT to the postmaster process. Once the process\n> receives this signal it will log the backtrace of the process.\n>\n\nModified.\n\n> 7) Can we get the parallel worker's backtrace? IIUC it's possible.\n>\n\nYes we can get the parallel worker's backtrace. As this design is\ndriven based on CHECK_FOR_INTERRUPTS, it will be printed when\nCHECK_FOR_INTERRUPTS is called.\n\n> 8) What happens if a user runs pg_print_backtrace() on Windows or\n> MacOS or some other platform? Do you want to say something about other\n> platforms where gdb/addr2line doesn't exist?\n> +</programlisting>\n> + You can get the file name and line number by using gdb/addr2line in\n> + linux platforms, as a prerequisite users must ensure gdb/addr2line is\n> + already installed:\n> +<programlisting>\n>\n\nI have added the following:\nThis feature will be available, if the installer was generated on a\nplatform which had backtrace capturing capability. 
If not available it\nwill return false by throwing the following warning \"WARNING:\nbacktrace generation is not supported by this installation\"\n\n> 9) Can't we reuse set_backtrace with just adding a flag to\n> set_backtrace(ErrorData *edata, int num_skip, bool\n> is_print_backtrace_function), making it a non-static function and call\n> set_backtrace(NULL, 0, true)?\n>\n> void\n> set_backtrace(ErrorData *edata, int num_skip, bool is_print_backtrace_function)\n> {\n> StringInfoData errtrace;\n> -------\n> -------\n> if (is_print_backtrace_function)\n> elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n> else\n> edata->backtrace = errtrace.data;\n> }\n>\n> I think it will be good if we do this, because we can avoid duplicate\n> code in set_backtrace and LogBackTrace.\n>\n\nYes it is better, I have modified it to use the same function.\n\n> 10) I think it's \"pg_signal_backend\" instead of \"pg_print_backtrace\"?\n> + backtrace is being printed or the calling role has been granted\n> + <literal>pg_print_backtrace</literal>, however only superusers can\n>\n\nModified.\n\n> 11) In the documentation added, isn't it good if we talk a bit about\n> in which scenarios users can use this function? For instance,\n> something like \"Use pg_print_backtrace to know exactly where it's\n> currently waiting when a backend process hangs.\"?\n>\n\nModified to include more details.\n\n[1] - https://www.postgresql.org/message-id/1778088.1606026308%40sss.pgh.pa.us\nAttached v6 patch with the fixes.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Wed, 3 Feb 2021 12:11:15 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:04 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 6:14 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Fri, Jan 29, 2021 at 7:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > 4) How about following\n> > > > + errmsg(\"must be a superuser to print backtrace\n> > > > of backend process\")));\n> > > > instead of\n> > > > + errmsg(\"must be a superuser to print backtrace\n> > > > of superuser query process\")));\n> > > >\n> > >\n> > > Here the message should include superuser, we cannot remove it. Non\n> > > super user can log non super user provided if user has permissions for\n> > > it.\n> > >\n> > > > 5) How about following\n> > > > errmsg(\"must be a member of the role whose backed\n> > > > process's backtrace is being printed or member of\n> > > > pg_signal_backend\")));\n> > > > instead of\n> > > > + errmsg(\"must be a member of the role whose\n> > > > backtrace is being logged or member of pg_signal_backend\")));\n> > > >\n> > >\n> > > Modified it.\n> >\n> > Maybe I'm confused here to understand the difference between\n> > SIGNAL_BACKEND_NOSUPERUSER and SIGNAL_BACKEND_NOPERMISSION macros and\n> > corresponding error messages. Some clarification/use case to know in\n> > which scenarios we hit those error messages might help me. Did we try\n> > to add test cases that show up these error messages for\n> > pg_print_backtrace? If not, can we add?\n>\n> Are these superuser and permission checks enough from a security\n> standpoint that we don't expose some sensitive information to the\n> user? Although I'm not sure, say from the backtrace printed and\n> attached to GDB, can users see the passwords or other sensitive\n> information from the system that they aren't supposed to see?\n>\n> I'm sure this point would have been discussed upthread.\n\nThis will just print the backtrace of the current backend. 
Users\ncannot get password information from this. This backtrace will be sent\nfrom customer side to the customer support. Developers will use gdb to\ncheck the file and line number using the addresses. We are suggesting\nto use gdb to get the file and line number from the address. Users\ncan attach gdb to the process even now without this feature, I think\nthat behavior will be the same as is. That will not be impacted\nbecause of this feature. It was discussed to keep the checks similar\nto pg_terminate_backend as discussed in [1].\n[1] https://www.postgresql.org/message-id/CA%2BTgmoZ8XeQVCCqfq826kAr804a1ZnYy46FnQB9r2n_iOofh8Q%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 3 Feb 2021 12:33:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Mon, Feb 1, 2021 at 11:04 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Are these superuser and permission checks enough from a security\n>> standpoint that we don't expose some sensitive information to the\n>> user?\n\n> This will just print the backtrace of the current backend. Users\n> cannot get password information from this.\n\nReally?\n\nA backtrace normally exposes the text of the current query, for\ninstance, which could contain very sensitive data (passwords in ALTER\nUSER, customer credit card numbers in ordinary data, etc etc). We\ndon't allow the postmaster log to be seen by any but very privileged\nusers; it's not sane to think that this data is any less\nsecurity-critical than the postmaster log.\n\nThis point is entirely separate from the question of whether\ntriggering stack traces at inopportune moments could cause system\nmalfunctions, but that question is also not to be ignored.\n\nTBH, I'm leaning to the position that this should be superuser\nonly. I do NOT agree with the idea that ordinary users should\nbe able to trigger it, even against backends theoretically\nbelonging to their own userid. (Do I need to point out that\nsome levels of the call stack might be from security-definer\nfunctions with more privilege than the session's nominal user?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Feb 2021 02:30:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 1:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > On Mon, Feb 1, 2021 at 11:04 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> Are these superuser and permission checks enough from a security\n> >> standpoint that we don't expose some sensitive information to the\n> >> user?\n>\n> > This will just print the backtrace of the current backend. Users\n> > cannot get password information from this.\n>\n> Really?\n>\n> A backtrace normally exposes the text of the current query, for\n> instance, which could contain very sensitive data (passwords in ALTER\n> USER, customer credit card numbers in ordinary data, etc etc). We\n> don't allow the postmaster log to be seen by any but very privileged\n> users; it's not sane to think that this data is any less\n> security-critical than the postmaster log.\n>\n> This point is entirely separate from the question of whether\n> triggering stack traces at inopportune moments could cause system\n> malfunctions, but that question is also not to be ignored.\n>\n> TBH, I'm leaning to the position that this should be superuser\n> only. I do NOT agree with the idea that ordinary users should\n> be able to trigger it, even against backends theoretically\n> belonging to their own userid. 
(Do I need to point out that\n> some levels of the call stack might be from security-definer\n> functions with more privilege than the session's nominal user?)\n>\n\nI had seen that the log that will be logged will be something like:\n postgres: test postgres [local]\nidle(ProcessClientReadInterrupt+0x3a) [0x9500ec]\n postgres: test postgres [local] idle(secure_read+0x183) [0x787f43]\n postgres: test postgres [local] idle() [0x7919de]\n postgres: test postgres [local] idle(pq_getbyte+0x32) [0x791a8e]\n postgres: test postgres [local] idle() [0x94fc16]\n postgres: test postgres [local] idle() [0x950099]\n postgres: test postgres [local] idle(PostgresMain+0x6d5) [0x954bd5]\n postgres: test postgres [local] idle() [0x898a09]\n postgres: test postgres [local] idle() [0x89838f]\n postgres: test postgres [local] idle() [0x894953]\n postgres: test postgres [local] idle(PostmasterMain+0x116b) [0x89422a]\n postgres: test postgres [local] idle() [0x79725b]\n /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6e68d75555]\n postgres: test postgres [local] idle() [0x484249]\nI was not sure if we would be able to get any secure information from\nthis. I did not notice the function arguments being printed. I felt\nthe function name, offset and the return address will be logged. I\nmight be missing something here.\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 3 Feb 2021 13:49:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 1:49 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 1:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > vignesh C <vignesh21@gmail.com> writes:\n> > > On Mon, Feb 1, 2021 at 11:04 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >> Are these superuser and permission checks enough from a security\n> > >> standpoint that we don't expose some sensitive information to the\n> > >> user?\n> >\n> > > This will just print the backtrace of the current backend. Users\n> > > cannot get password information from this.\n> >\n> > Really?\n> >\n> > A backtrace normally exposes the text of the current query, for\n> > instance, which could contain very sensitive data (passwords in ALTER\n> > USER, customer credit card numbers in ordinary data, etc etc). We\n> > don't allow the postmaster log to be seen by any but very privileged\n> > users; it's not sane to think that this data is any less\n> > security-critical than the postmaster log.\n> >\n> > This point is entirely separate from the question of whether\n> > triggering stack traces at inopportune moments could cause system\n> > malfunctions, but that question is also not to be ignored.\n> >\n> > TBH, I'm leaning to the position that this should be superuser\n> > only. I do NOT agree with the idea that ordinary users should\n> > be able to trigger it, even against backends theoretically\n> > belonging to their own userid. 
(Do I need to point out that\n> > some levels of the call stack might be from security-definer\n> > functions with more privilege than the session's nominal user?)\n> >\n>\n> I had seen that the log that will be logged will be something like:\n> postgres: test postgres [local]\n> idle(ProcessClientReadInterrupt+0x3a) [0x9500ec]\n> postgres: test postgres [local] idle(secure_read+0x183) [0x787f43]\n> postgres: test postgres [local] idle() [0x7919de]\n> postgres: test postgres [local] idle(pq_getbyte+0x32) [0x791a8e]\n> postgres: test postgres [local] idle() [0x94fc16]\n> postgres: test postgres [local] idle() [0x950099]\n> postgres: test postgres [local] idle(PostgresMain+0x6d5) [0x954bd5]\n> postgres: test postgres [local] idle() [0x898a09]\n> postgres: test postgres [local] idle() [0x89838f]\n> postgres: test postgres [local] idle() [0x894953]\n> postgres: test postgres [local] idle(PostmasterMain+0x116b) [0x89422a]\n> postgres: test postgres [local] idle() [0x79725b]\n> /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6e68d75555]\n> postgres: test postgres [local] idle() [0x484249]\n> I was not sure if we would be able to get any secure information from\n> this. I did not notice the function arguments being printed. I felt\n> the function name, offset and the return address will be logged. I\n> might be missing something here.\n> Thoughts?\n\nFirst of all, we need to see if the output of pg_print_backtrace shows\nup function parameter addresses or only function start addresses along\nwith line and file information when attached to gdb. 
In either case,\nIMO, it will be easy for experienced hackers(I'm not one though) to\ncalculate and fetch the query string or other function parameters or\nthe variables inside the functions from the stack by looking at the\ncode (which is available openly, of course).\n\nSay, if a backend is in a long running scan or insert operation, then\npg_print_backtrace is issued from another session, the\nexec_simple_query function shows up query_string. Below is captured\nfrom attached gdb though, I'm not sure whether the logged backtrace\nwill have function address or the function parameters addresses, I\nthink we can check that by having a long running query which\nfrequently checks interrupts and issue pg_print_backtrace from another\nsession to that backend. Now, attach gdb to the backend in which the\nquery is running, then take bt, see if the logged backtrace and the\ngdb bt have the same or closer addresses.\n\n#13 0x00005644f4320729 in exec_simple_query (\n query_string=0x5644f6771bf0 \"select pg_backend_pid();\") at postgres.c:1240\n#14 0x00005644f4324ff4 in PostgresMain (argc=1, argv=0x7ffd819bd5e0,\n dbname=0x5644f679d2b8 \"postgres\", username=0x5644f679d298 \"bharath\")\n at postgres.c:4394\n#15 0x00005644f4256f9d in BackendRun (port=0x5644f67935c0) at postmaster.c:4484\n#16 0x00005644f4256856 in BackendStartup (port=0x5644f67935c0) at\npostmaster.c:4206\n#17 0x00005644f4252a11 in ServerLoop () at postmaster.c:1730\n#18 0x00005644f42521aa in PostmasterMain (argc=3, argv=0x5644f676b1f0)\n at postmaster.c:1402\n#19 0x00005644f4148789 in main (argc=3, argv=0x5644f676b1f0) at main.c:209\n\nAs suggested by Tom, I'm okay if this function is callable only by the\nsuperusers. In that case, the superusers can fetch the backtrace and\nsend it for further analysis in case of any hangs or issues.\n\nOthers may have better thoughts.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Feb 2021 15:24:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nI also think this feature would be useful when supporting\nenvironments that lack debugger or debug symbols.\nI think such environments are not rare.\n\n\n+ <xref linkend=\"runtime-config-logging\"/> for more information. \nThis\n+ will help in identifying where exactly the backend process is \ncurrently\n+ executing.\n\nWhen I read this, I expected a backtrace would be generated at\nthe moment when it receives the signal, but actually it just\nsets a flag that causes the next CHECK_FOR_INTERRUPTS to print\na backtrace.\n\nHow about explaining the timing of the backtrace generation?\n\n\n+ print backtrace of superuser backends. This feature is not \nsupported\n+ for postmaster, logging and statistics process.\n\nSince the current patch use BackendPidGetProc(), it does not\nsupport this feature not only postmaster, logging, and\nstatistics but also checkpointer, background writer, and\nwalwriter.\n\nAnd when I specify pid of these PostgreSQL processes, it\nsays \"PID xxxx is not a PostgreSQL server process\".\n\nI think it may confuse users, so it might be worth\nchanging messages for those PostgreSQL processes.\nAuxiliaryPidGetProc() may help to do it.\n\n\ndiff --git a/src/backend/postmaster/checkpointer.c \nb/src/backend/postmaster/checkpointer.c\nindex 54a818b..5fae328 100644\n--- a/src/backend/postmaster/checkpointer.c\n+++ b/src/backend/postmaster/checkpointer.c\n@@ -57,6 +57,7 @@\n #include \"storage/shmem.h\"\n #include \"storage/smgr.h\"\n #include \"storage/spin.h\"\n+#include \"tcop/tcopprot.h\"\n #include \"utils/guc.h\"\n #include \"utils/memutils.h\"\n #include \"utils/resowner.h\"\n@@ -547,6 +548,13 @@ HandleCheckpointerInterrupts(void)\n if (ProcSignalBarrierPending)\n ProcessProcSignalBarrier();\n\n+ /* Process printing backtrace */\n+ if (PrintBacktracePending)\n+ {\n+ PrintBacktracePending = false;\n+ set_backtrace(NULL, 0);\n+ }\n+\n\nAlthough it implements backtrace for checkpointer, when\nI specified pid of checkpointer 
it was rejected by\nBackendPidGetProc().\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 01 Mar 2021 14:13:16 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 10:43 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> Since the current patch use BackendPidGetProc(), it does not\n> support this feature not only postmaster, logging, and\n> statistics but also checkpointer, background writer, and\n> walwriter.\n>\n> And when I specify pid of these PostgreSQL processes, it\n> says \"PID xxxx is not a PostgreSQL server process\".\n>\n> I think it may confuse users, so it might be worth\n> changing messages for those PostgreSQL processes.\n> AuxiliaryPidGetProc() may help to do it.\n\nExactly this was the doubt I got when I initially reviewed this patch.\nAnd I felt it should be discussed in a separate thread, you may want\nto update your thoughts there [1].\n\n[1] - https://www.postgresql.org/message-id/CALj2ACW7Rr-R7mBcBQiXWPp%3DJV5chajjTdudLiF5YcpW-BmHhg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Mar 2021 18:25:13 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On 2021-03-04 21:55, Bharath Rupireddy wrote:\n> On Mon, Mar 1, 2021 at 10:43 AM torikoshia <torikoshia@oss.nttdata.com> \n> wrote:\n>> Since the current patch use BackendPidGetProc(), it does not\n>> support this feature not only postmaster, logging, and\n>> statistics but also checkpointer, background writer, and\n>> walwriter.\n>> \n>> And when I specify pid of these PostgreSQL processes, it\n>> says \"PID xxxx is not a PostgreSQL server process\".\n>> \n>> I think it may confuse users, so it might be worth\n>> changing messages for those PostgreSQL processes.\n>> AuxiliaryPidGetProc() may help to do it.\n> \n> Exactly this was the doubt I got when I initially reviewed this patch.\n> And I felt it should be discussed in a separate thread, you may want\n> to update your thoughts there [1].\n> \n> [1] -\n> https://www.postgresql.org/message-id/CALj2ACW7Rr-R7mBcBQiXWPp%3DJV5chajjTdudLiF5YcpW-BmHhg%40mail.gmail.com\n\nThanks!\nI'm going to join the discussion there.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 05 Mar 2021 18:47:52 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 3:24 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 1:49 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Feb 3, 2021 at 1:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > vignesh C <vignesh21@gmail.com> writes:\n> > > > On Mon, Feb 1, 2021 at 11:04 AM Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >> Are these superuser and permission checks enough from a security\n> > > >> standpoint that we don't expose some sensitive information to the\n> > > >> user?\n> > >\n> > > > This will just print the backtrace of the current backend. Users\n> > > > cannot get password information from this.\n> > >\n> > > Really?\n> > >\n> > > A backtrace normally exposes the text of the current query, for\n> > > instance, which could contain very sensitive data (passwords in ALTER\n> > > USER, customer credit card numbers in ordinary data, etc etc). We\n> > > don't allow the postmaster log to be seen by any but very privileged\n> > > users; it's not sane to think that this data is any less\n> > > security-critical than the postmaster log.\n> > >\n> > > This point is entirely separate from the question of whether\n> > > triggering stack traces at inopportune moments could cause system\n> > > malfunctions, but that question is also not to be ignored.\n> > >\n> > > TBH, I'm leaning to the position that this should be superuser\n> > > only. I do NOT agree with the idea that ordinary users should\n> > > be able to trigger it, even against backends theoretically\n> > > belonging to their own userid. 
(Do I need to point out that\n> > > some levels of the call stack might be from security-definer\n> > > functions with more privilege than the session's nominal user?)\n> > >\n> >\n> > I had seen that the log that will be logged will be something like:\n> > postgres: test postgres [local]\n> > idle(ProcessClientReadInterrupt+0x3a) [0x9500ec]\n> > postgres: test postgres [local] idle(secure_read+0x183) [0x787f43]\n> > postgres: test postgres [local] idle() [0x7919de]\n> > postgres: test postgres [local] idle(pq_getbyte+0x32) [0x791a8e]\n> > postgres: test postgres [local] idle() [0x94fc16]\n> > postgres: test postgres [local] idle() [0x950099]\n> > postgres: test postgres [local] idle(PostgresMain+0x6d5) [0x954bd5]\n> > postgres: test postgres [local] idle() [0x898a09]\n> > postgres: test postgres [local] idle() [0x89838f]\n> > postgres: test postgres [local] idle() [0x894953]\n> > postgres: test postgres [local] idle(PostmasterMain+0x116b) [0x89422a]\n> > postgres: test postgres [local] idle() [0x79725b]\n> > /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6e68d75555]\n> > postgres: test postgres [local] idle() [0x484249]\n> > I was not sure if we would be able to get any secure information from\n> > this. I did not notice the function arguments being printed. I felt\n> > the function name, offset and the return address will be logged. I\n> > might be missing something here.\n> > Thoughts?\n>\n> First of all, we need to see if the output of pg_print_backtrace shows\n> up function parameter addresses or only function start addresses along\n> with line and file information when attached to gdb. 
In either case,\n> IMO, it will be easy for experienced hackers(I'm not one though) to\n> calculate and fetch the query string or other function parameters or\n> the variables inside the functions from the stack by looking at the\n> code (which is available openly, of course).\n>\n> Say, if a backend is in a long running scan or insert operation, then\n> pg_print_backtrace is issued from another session, the\n> exec_simple_query function shows up query_string. Below is captured\n> from attached gdb though, I'm not sure whether the logged backtrace\n> will have function address or the function parameters addresses, I\n> think we can check that by having a long running query which\n> frequently checks interrupts and issue pg_print_backtrace from another\n> session to that backend. Now, attach gdb to the backend in which the\n> query is running, then take bt, see if the logged backtrace and the\n> gdb bt have the same or closer addresses.\n>\n> #13 0x00005644f4320729 in exec_simple_query (\n> query_string=0x5644f6771bf0 \"select pg_backend_pid();\") at postgres.c:1240\n> #14 0x00005644f4324ff4 in PostgresMain (argc=1, argv=0x7ffd819bd5e0,\n> dbname=0x5644f679d2b8 \"postgres\", username=0x5644f679d298 \"bharath\")\n> at postgres.c:4394\n> #15 0x00005644f4256f9d in BackendRun (port=0x5644f67935c0) at postmaster.c:4484\n> #16 0x00005644f4256856 in BackendStartup (port=0x5644f67935c0) at\n> postmaster.c:4206\n> #17 0x00005644f4252a11 in ServerLoop () at postmaster.c:1730\n> #18 0x00005644f42521aa in PostmasterMain (argc=3, argv=0x5644f676b1f0)\n> at postmaster.c:1402\n> #19 0x00005644f4148789 in main (argc=3, argv=0x5644f676b1f0) at main.c:209\n>\n> As suggested by Tom, I'm okay if this function is callable only by the\n> superusers. 
In that case, the superusers can fetch the backtrace and\n> send it for further analysis in case of any hangs or issues.\n>\n> Others may have better thoughts.\n\nI would like to clarify a bit to avoid confusion here: Currently when\nthere is a long running query or hang in the server, one of our\ncustomer support members will go for a screen share with the customer.\nIf gdb is not installed we tell the customer to install gdb. Then we\ntell the customer to attach the backend process and execute the\ncommand. We tell the customer to share this to the customer support\nteam and later the development team checks if this is an issue or a\nlong running query and provides a workaround or explains what needs to\nbe done next.\nThis feature reduces a lot of these processes. Whenever there is an\nissue and if the user/customer is not sure if it is a hang or long\nrunning query. User can execute pg_print_backtrace, after this is\nexecuted, the backtrace will be logged to the log file something like\nbelow:\n postgres: test postgres [local]\nidle(ProcessClientReadInterrupt+0x3a) [0x9500ec]\n postgres: test postgres [local] idle(secure_read+0x183) [0x787f43]\n postgres: test postgres [local] idle() [0x7919de]\n postgres: test postgres [local] idle(pq_getbyte+0x32) [0x791a8e]\n postgres: test postgres [local] idle() [0x94fc16]\n postgres: test postgres [local] idle() [0x950099]\n postgres: test postgres [local] idle(PostgresMain+0x6d5) [0x954bd5]\n postgres: test postgres [local] idle() [0x898a09]\n postgres: test postgres [local] idle() [0x89838f]\n postgres: test postgres [local] idle() [0x894953]\n postgres: test postgres [local] idle(PostmasterMain+0x116b) [0x89422a]\n postgres: test postgres [local] idle() [0x79725b]\n /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6e68d75555]\n postgres: test postgres [local] idle() [0x484249]\n\nThe above log contents (not the complete log file) will be shared by\nthe customer/user along with the query/configuration/statistics, etc\nto 
the support team. I have mentioned a few steps in the documentation on\nhow to get the file/line from the backtrace logged using\ngdb/addr2line. Here this will be done in the developer environment,\nnot in the actual customer environment which has the sensitive data.\nWe don't attach to the running process in gdb. Developers will use the\nsame binary that was released to the customer (not in the customer\nenvironment) to get the file/line number. Users will be able to\nsimulate a backtrace which includes file and line number. I felt users\ncannot get the sensitive information from here. This information will\nhelp the developer to analyze and suggest what is the next action that\ncustomers need to take.\nI felt that there was a slight misunderstanding in where gdb is\nexecuted: it is not in the customer environment but in the developer\nenvironment. From the customer environment we will only get the stack\ntrace logged as mentioned above.\nI have changed it so that this feature is supported only for superusers.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Wed, 5 May 2021 21:25:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
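The frame lines quoted in the message above ("function+offset [address]") are the style produced by glibc's backtrace facilities; the addresses are later resolved offline with addr2line/gdb against the same binary. A minimal sketch of producing such lines (illustrative code assuming glibc's `<execinfo.h>`, not PostgreSQL's actual implementation — the function names here are ours):

```c
#include <assert.h>
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

/* Capture up to max_depth return addresses of the current call stack;
 * returns the number of frames actually captured. */
int capture_frames(void **frames, int max_depth)
{
    return backtrace(frames, max_depth);
}

/* Render the frames as "binary(symbol+offset) [address]" strings, the
 * same general shape as the server-log lines quoted above. */
void print_frames(void)
{
    void *frames[32];
    int depth = capture_frames(frames, 32);
    char **symbols = backtrace_symbols(frames, depth);

    if (symbols == NULL)
        return;
    for (int i = 0; i < depth; i++)
        printf("%s\n", symbols[i]);
    free(symbols);  /* backtrace_symbols() allocates one block */
}
```

With only the raw `[address]` values, a developer can run addr2line on the identical release binary, as the message describes, so no sensitive process memory needs to leave the customer environment.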
{
"msg_contents": "Here's a cleaned-up copy of the doc text.\n\nSend a request to the backend with the specified process ID to log its backtrace.\nThe backtrace will be logged at message level <literal>LOG</literal>.\nIt will appear in the server log based on the log configuration set\n(See <xref linkend=\"runtime-config-logging\"/> for more information),\nbut will not be sent to the client regardless of\n<xref linkend=\"guc-client-min-messages\"/>.\nA backtrace will identify where exactly the backend process is currently\nexecuting. This may be useful to developers to diagnose stuck\nprocesses and other problems. This feature is\nnot supported for the postmaster, logger, or statistics collector process. This\nfeature will be available if PostgreSQL was built\nwith the ability to capture backtraces. If not available, the function will\nreturn false and show a WARNING.\nOnly superusers can request backends to log their backtrace.\n\n> - * this and related functions are not inlined.\n> + * this and related functions are not inlined. If edata pointer is valid\n> + * backtrace information will set in edata.\n\nwill *be* set\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 5 May 2021 21:13:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, May 6, 2021 at 7:43 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Here's a cleaned-up copy of the doc text.\n>\n> Send a request to the backend with the specified process ID to log its backtrace.\n> The backtrace will be logged at message level <literal>LOG</literal>.\n> It will appear in the server log based on the log configuration set\n> (See <xref linkend=\"runtime-config-logging\"/> for more information),\n> but will not be sent to the client regardless of\n> <xref linkend=\"guc-client-min-messages\"/>.\n> A backtrace will identify where exactly the backend process is currently\n> executing. This may be useful to developers to diagnose stuck\n> processes and other problems. This feature is\n> not supported for the postmaster, logger, or statistics collector process. This\n> feature will be available if PostgreSQL was built\n> with the ability to capture backtracee. If not available, the function will\n> return false and show a WARNING.\n> Only superusers can request backends to log their backtrace.\n\nThanks for rephrasing, I have modified to include checkpointer,\nwalwriter and background writer process also.\n\n> > - * this and related functions are not inlined.\n> > + * this and related functions are not inlined. If edata pointer is valid\n> > + * backtrace information will set in edata.\n>\n> will *be* set\n\nModified.\n\nThanks for the comments, Attached v8 patch has the fixes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 6 May 2021 20:23:42 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> A backtrace normally exposes the text of the current query, for\n> instance, which could contain very sensitive data (passwords in ALTER\n> USER, customer credit card numbers in ordinary data, etc etc). We\n> don't allow the postmaster log to be seen by any but very privileged\n> users; it's not sane to think that this data is any less\n> security-critical than the postmaster log.\n\nI agree. Vignesh points out downthread that it's just printing out\nmemory addresses and not necessarily anything too directly usable, but\nthe memory addresses themselves are potentially security-sensitive. If\nthat were not a thing, there would not be such a thing as ASLR.\n\n> This point is entirely separate from the question of whether\n> triggering stack traces at inopportune moments could cause system\n> malfunctions, but that question is also not to be ignored.\n\nThat worries me too, although I have a hard time saying exactly why.\nIf we call an OS-provided function called backtrace() and it does\nsomething other than generate a backtrace - e.g. makes the process seg\nfault, or mucks about with the values of global variables - isn't that\njust a bug in the OS? Do we have particular reasons to believe that\nsuch bugs are common? My own skepticism here is mostly based on how\ninconsistent debuggers are about being able to tell you anything\nuseful, which makes me think that in a binary compiled with any\noptimization, the ability of backtrace() to do something consistently\nuseful is also questionable. But that's a separate question from\nwhether it's likely to cause any active harm.\n\n> TBH, I'm leaning to the position that this should be superuser\n> only. I do NOT agree with the idea that ordinary users should\n> be able to trigger it, even against backends theoretically\n> belonging to their own userid. 
(Do I need to point out that\n> some levels of the call stack might be from security-definer\n> functions with more privilege than the session's nominal user?)\n\nI agree that ordinary users shouldn't be able to trigger it, but I\nthink it should be restricted to some predefined role, new or\nexisting, rather than to superuser. I see no reason not to let\nindividual users decide what risks they want to take.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 May 2021 14:38:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 3, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> TBH, I'm leaning to the position that this should be superuser\n>> only.\n\n> I agree that ordinary users shouldn't be able to trigger it, but I\n> think it should be restricted to some predefined role, new or\n> existing, rather than to superuser. I see no reason not to let\n> individual users decide what risks they want to take.\n\nIf we think it's worth having a predefined role for, OK. However,\nI don't like the future I see us heading towards where there are\nhundreds of random predefined roles. Is there an existing role\nthat it'd be reasonable to attach this ability to?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 May 2021 14:56:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-06 14:38:51 -0400, Robert Haas wrote:\n> On Wed, Feb 3, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This point is entirely separate from the question of whether\n> > triggering stack traces at inopportune moments could cause system\n> > malfunctions, but that question is also not to be ignored.\n> \n> That worries me too, although I have a hard time saying exactly why.\n> If we call an OS-provided function called backtrace() and it does\n> something other than generate a backtrace - e.g. makes the process seg\n> fault, or mucks about with the values of global variables - isn't that\n> just a bug in the OS? Do we have particular reasons to believe that\n> such bugs are common? My own skepticism here is mostly based on how\n> inconsistent debuggers are about being able to tell you anything\n> useful, which makes me think that in a binary compiled with any\n> optimization, the ability of backtrace() to do something consistently\n> useful is also questionable. But that's a separate question from\n> whether it's likely to cause any active harm.\n\nI think that ship kind of has sailed with\n\ncommit 71a8a4f6e36547bb060dbcc961ea9b57420f7190\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: 2019-11-08 15:44:20 -0300\n\n Add backtrace support for error reporting\n\nwe allow generating backtraces in all kind of places, including\ne.g. some inside critical sections via backtrace_functions. I don't\nthink also doing so during interrupt processing is a meaningful increase\nin exposed surface area?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 May 2021 12:14:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-06 14:56:09 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Feb 3, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> TBH, I'm leaning to the position that this should be superuser\n> >> only.\n> \n> > I agree that ordinary users shouldn't be able to trigger it, but I\n> > think it should be restricted to some predefined role, new or\n> > existing, rather than to superuser. I see no reason not to let\n> > individual users decide what risks they want to take.\n> \n> If we think it's worth having a predefined role for, OK. However,\n> I don't like the future I see us heading towards where there are\n> hundreds of random predefined roles. Is there an existing role\n> that it'd be reasonable to attach this ability to?\n\nIt does seem like it'd be good to group it in with something\nelse. There's nothing fitting 100% though.\n\npostgres[1475723][1]=# SELECT rolname FROM pg_roles WHERE rolname LIKE 'pg_%' ORDER BY rolname;\n┌───────────────────────────┐\n│ rolname │\n├───────────────────────────┤\n│ pg_database_owner │\n│ pg_execute_server_program │\n│ pg_monitor │\n│ pg_read_all_data │\n│ pg_read_all_settings │\n│ pg_read_all_stats │\n│ pg_read_server_files │\n│ pg_signal_backend │\n│ pg_stat_scan_tables │\n│ pg_write_all_data │\n│ pg_write_server_files │\n└───────────────────────────┘\n(11 rows)\n\nWe could fit it into pg_monitor, but it's probably a bit more impactful\nthan most things in there? But I'd go for it anyway, I think.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 May 2021 12:19:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-06 14:38:51 -0400, Robert Haas wrote:\n>> On Wed, Feb 3, 2021 at 2:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> This point is entirely separate from the question of whether\n>>> triggering stack traces at inopportune moments could cause system\n>>> malfunctions, but that question is also not to be ignored.\n\n> I think that ship kind of has sailed with\n>\n> commit 71a8a4f6e36547bb060dbcc961ea9b57420f7190\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: 2019-11-08 15:44:20 -0300\n> Add backtrace support for error reporting\n\nThe fact that we have a scarily large surface area for that to\ncause problems is not a great argument for making the surface\narea even larger. Also, I don't think v13 has been out long\nenough for us to have full confidence that the backtrace behavior\ndoesn't cause any problems already.\n\n> we allow generating backtraces in all kind of places, including\n> e.g. some inside critical sections via backtrace_functions.\n\nIf there's an elog call inside a critical section, that seems\nlike a problem already. Are you sure that there are any such?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 May 2021 15:22:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-06 14:56:09 -0400, Tom Lane wrote:\n>> If we think it's worth having a predefined role for, OK. However,\n>> I don't like the future I see us heading towards where there are\n>> hundreds of random predefined roles. Is there an existing role\n>> that it'd be reasonable to attach this ability to?\n\n> It does seem like it'd be good to group it in with something\n> else. There's nothing fitting 100% though.\n\nI'd probably vote for pg_read_all_data, considering that much of\nthe concern about this has to do with the possibility of exposure\nof sensitive data. I'm not quite sure what the security expectations\nare for pg_monitor.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 May 2021 15:31:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-06 15:31:02 -0400, Tom Lane wrote:\n> I'd probably vote for pg_read_all_data, considering that much of\n> the concern about this has to do with the possibility of exposure\n> of sensitive data. I'm not quite sure what the security expectations\n> are for pg_monitor.\n\nI was wondering the same, but looking at the docs of pg_read_all_data\nthat doesn't seem like a great fit:\n\n <row>\n <entry>pg_read_all_data</entry>\n <entry>Read all data (tables, views, sequences), as if having SELECT\n rights on those objects, and USAGE rights on all schemas, even without\n having it explicitly. This role does not have the role attribute\n <literal>BYPASSRLS</literal> set. If RLS is being used, an administrator\n may wish to set <literal>BYPASSRLS</literal> on roles which this role is\n GRANTed to.</entry>\n </row>\n\nIt's mostly useful to be able to run pg_dump without superuser\npermissions.\n\nContrast that to pg_monitor:\n\n <row>\n <entry>pg_monitor</entry>\n <entry>Read/execute various monitoring views and functions.\n This role is a member of <literal>pg_read_all_settings</literal>,\n <literal>pg_read_all_stats</literal> and\n <literal>pg_stat_scan_tables</literal>.</entry>\n </row>\n...\n <para>\n The <literal>pg_monitor</literal>, <literal>pg_read_all_settings</literal>,\n <literal>pg_read_all_stats</literal> and <literal>pg_stat_scan_tables</literal>\n roles are intended to allow administrators to easily configure a role for the\n purpose of monitoring the database server. They grant a set of common privileges\n allowing the role to read various useful configuration settings, statistics and\n other system information normally restricted to superusers.\n </para>\n\n\"normally restricted to superusers\" seems to fit pretty well?\n\nI think if we follow your argument, pg_read_server_files would be a bit\nbetter fit than pg_read_all_data? 
But it seems \"too powerful\" to me.\n <row>\n <entry>pg_read_server_files</entry>\n <entry>Allow reading files from any location the database can access on the server with COPY and\n other file-access functions.</entry>\n </row>\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 May 2021 12:49:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-06 15:22:02 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > we allow generating backtraces in all kind of places, including\n> > e.g. some inside critical sections via backtrace_functions.\n> \n> If there's an elog call inside a critical section, that seems\n> like a problem already. Are you sure that there are any such?\n\nThere's several, yes. In xlog.c there's quite a few that are gated by\nwal_debug being enabled. But also a few without that,\ne.g. XLogFileInit() logging\n\telog(DEBUG1, \"creating and filling new WAL file\");\nand XLogFileInit() can be called within a critical section.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 May 2021 12:55:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, May 6, 2021 at 3:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-05-06 14:56:09 -0400, Tom Lane wrote:\n> >> If we think it's worth having a predefined role for, OK. However,\n> >> I don't like the future I see us heading towards where there are\n> >> hundreds of random predefined roles. Is there an existing role\n> >> that it'd be reasonable to attach this ability to?\n>\n> > It does seem like it'd be good to group it in with something\n> > else. There's nothing fitting 100% though.\n>\n> I'd probably vote for pg_read_all_data, considering that much of\n> the concern about this has to do with the possibility of exposure\n> of sensitive data. I'm not quite sure what the security expectations\n> are for pg_monitor.\n\nMaybe we should have a role that is specifically for server debugging\ntype things. This kind of overlaps with Mark Dilger's proposal to try\nto allow SET for security-sensitive GUCs to be delegated via\npredefined roles. The exact way to divide that up is open to question,\nbut it wouldn't seem crazy to me if the same role controlled the\nability to do this plus the ability to set the GUCs\nbacktrace_functions, debug_invalidate_system_caches_always,\nwal_consistency_checking, and maybe a few other things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 16:57:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, May 12, 2021 at 2:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 3:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2021-05-06 14:56:09 -0400, Tom Lane wrote:\n> > >> If we think it's worth having a predefined role for, OK. However,\n> > >> I don't like the future I see us heading towards where there are\n> > >> hundreds of random predefined roles. Is there an existing role\n> > >> that it'd be reasonable to attach this ability to?\n> >\n> > > It does seem like it'd be good to group it in with something\n> > > else. There's nothing fitting 100% though.\n> >\n> > I'd probably vote for pg_read_all_data, considering that much of\n> > the concern about this has to do with the possibility of exposure\n> > of sensitive data. I'm not quite sure what the security expectations\n> > are for pg_monitor.\n>\n> Maybe we should have a role that is specifically for server debugging\n> type things. This kind of overlaps with Mark Dilger's proposal to try\n> to allow SET for security-sensitive GUCs to be delegated via\n> predefined roles. The exact way to divide that up is open to question,\n> but it wouldn't seem crazy to me if the same role controlled the\n> ability to do this plus the ability to set the GUCs\n> backtrace_functions, debug_invalidate_system_caches_always,\n> wal_consistency_checking, and maybe a few other things.\n\n+1 for the idea of having a new role for this. Currently I have\nimplemented this feature to be supported only for the superuser. If we\nare ok with having a new role to handle debugging features, I will\nmake a 002 patch to handle this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 13 Jul 2021 21:03:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 9:03 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 2:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > Maybe we should have a role that is specifically for server debugging\n> > type things. This kind of overlaps with Mark Dilger's proposal to try\n> > to allow SET for security-sensitive GUCs to be delegated via\n> > predefined roles. The exact way to divide that up is open to question,\n> > but it wouldn't seem crazy to me if the same role controlled the\n> > ability to do this plus the ability to set the GUCs\n> > backtrace_functions, debug_invalidate_system_caches_always,\n> > wal_consistency_checking, and maybe a few other things.\n>\n> +1 for the idea of having a new role for this. Currently I have\n> implemented this feature to be supported only for the superuser. If we\n> are ok with having a new role to handle debugging features, I will\n> make a 002 patch to handle this.\n\nI see that there are a good number of user functions that are\naccessible only by superuser (I searched for \"if (!superuser())\" in\nthe code base). I agree with the intention to not overload the\nsuperuser anymore and have a few other special roles to delegate the\nexisting superuser-only functions to them. In that case, are we going\nto revisit and reassign all the existing superuser-only functions?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 21 Jul 2021 15:52:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 3:52 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Jul 13, 2021 at 9:03 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, May 12, 2021 at 2:27 AM Robert Haas <robertmhaas@gmail.com>\nwrote:\n> > >\n> > > Maybe we should have a role that is specifically for server debugging\n> > > type things. This kind of overlaps with Mark Dilger's proposal to try\n> > > to allow SET for security-sensitive GUCs to be delegated via\n> > > predefined roles. The exact way to divide that up is open to question,\n> > > but it wouldn't seem crazy to me if the same role controlled the\n> > > ability to do this plus the ability to set the GUCs\n> > > backtrace_functions, debug_invalidate_system_caches_always,\n> > > wal_consistency_checking, and maybe a few other things.\n> >\n> > +1 for the idea of having a new role for this. Currently I have\n> > implemented this feature to be supported only for the superuser. If we\n> > are ok with having a new role to handle debugging features, I will\n> > make a 002 patch to handle this.\n>\n> I see that there are a good number of user functions that are\n> accessible only by superuser (I searched for \"if (!superuser())\" in\n> the code base). I agree with the intention to not overload the\n> superuser anymore and have a few other special roles to delegate the\n> existing superuser-only functions to them. In that case, are we going\n> to revisit and reassign all the existing superuser-only functions?\n\nAs Robert pointed out, this idea is based on Mark Dilger's proposal. Mark\nDilger is already handling many of them at [1]. 
I'm proposing this patch\nonly for server debugging functionalities based on Robert's suggestions at\n[2].\n[1] - https://commitfest.postgresql.org/33/3223/\n[2] -\nhttps://www.postgresql.org/message-id/CA%2BTgmoZz%3DK1bQRp0Ug%3D6uMGFWg-6kaxdHe6VSWaxq0U-YkppYQ%40mail.gmail.com\n\nRegards,\nVignesh\n\nOn Wed, Jul 21, 2021 at 3:52 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:>> On Tue, Jul 13, 2021 at 9:03 PM vignesh C <vignesh21@gmail.com> wrote:> >> > On Wed, May 12, 2021 at 2:27 AM Robert Haas <robertmhaas@gmail.com> wrote:> > >> > > Maybe we should have a role that is specifically for server debugging> > > type things. This kind of overlaps with Mark Dilger's proposal to try> > > to allow SET for security-sensitive GUCs to be delegated via> > > predefined roles. The exact way to divide that up is open to question,> > > but it wouldn't seem crazy to me if the same role controlled the> > > ability to do this plus the ability to set the GUCs> > > backtrace_functions, debug_invalidate_system_caches_always,> > > wal_consistency_checking, and maybe a few other things.> >> > +1 for the idea of having a new role for this. Currently I have> > implemented this feature to be supported only for the superuser. If we> > are ok with having a new role to handle debugging features, I will> > make a 002 patch to handle this.>> I see that there are a good number of user functions that are> accessible only by superuser (I searched for \"if (!superuser())\" in> the code base). I agree with the intention to not overload the> superuser anymore and have a few other special roles to delegate the> existing superuser-only functions to them. In that case, are we going> to revisit and reassign all the existing superuser-only functions?As Robert pointed out, this idea is based on Mark Dilger's proposal. Mark Dilger is already handling many of them at [1]. 
I'm proposing this patch only for server debugging functionalities based on Robert's suggestions at [2].[1] - https://commitfest.postgresql.org/33/3223/[2] - https://www.postgresql.org/message-id/CA%2BTgmoZz%3DK1bQRp0Ug%3D6uMGFWg-6kaxdHe6VSWaxq0U-YkppYQ%40mail.gmail.comRegards,Vignesh",
"msg_date": "Wed, 21 Jul 2021 22:56:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 10:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jul 21, 2021 at 3:52 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Jul 13, 2021 at 9:03 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Wed, May 12, 2021 at 2:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >\n> > > > Maybe we should have a role that is specifically for server debugging\n> > > > type things. This kind of overlaps with Mark Dilger's proposal to try\n> > > > to allow SET for security-sensitive GUCs to be delegated via\n> > > > predefined roles. The exact way to divide that up is open to question,\n> > > > but it wouldn't seem crazy to me if the same role controlled the\n> > > > ability to do this plus the ability to set the GUCs\n> > > > backtrace_functions, debug_invalidate_system_caches_always,\n> > > > wal_consistency_checking, and maybe a few other things.\n> > >\n> > > +1 for the idea of having a new role for this. Currently I have\n> > > implemented this feature to be supported only for the superuser. If we\n> > > are ok with having a new role to handle debugging features, I will\n> > > make a 002 patch to handle this.\n> >\n> > I see that there are a good number of user functions that are\n> > accessible only by superuser (I searched for \"if (!superuser())\" in\n> > the code base). I agree with the intention to not overload the\n> > superuser anymore and have a few other special roles to delegate the\n> > existing superuser-only functions to them. In that case, are we going\n> > to revisit and reassign all the existing superuser-only functions?\n>\n> As Robert pointed out, this idea is based on Mark Dilger's proposal. Mark Dilger is already handling many of them at [1]. 
I'm proposing this patch only for server debugging functionalities based on Robert's suggestions at [2].\n> [1] - https://commitfest.postgresql.org/33/3223/\n> [2] - https://www.postgresql.org/message-id/CA%2BTgmoZz%3DK1bQRp0Ug%3D6uMGFWg-6kaxdHe6VSWaxq0U-YkppYQ%40mail.gmail.com\n\nThe previous patch was failing because of the recent test changes made\nby commit 201a76183e2 which unified new and get_new_node, attached\npatch has the changes to handle the changes accordingly.\n\nRegards,\nVignesh",
"msg_date": "Thu, 26 Aug 2021 20:26:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "> On 26 Aug 2021, at 16:56, vignesh C <vignesh21@gmail.com> wrote:\n\n> The previous patch was failing because of the recent test changes made\n> by commit 201a76183e2 which unified new and get_new_node, attached\n> patch has the changes to handle the changes accordingly.\n\nThis patch now fails because of the test changes made by commit b3b4d8e68a,\nplease submit a rebase.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 4 Nov 2021 11:36:45 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Nov 4, 2021 at 4:06 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 26 Aug 2021, at 16:56, vignesh C <vignesh21@gmail.com> wrote:\n>\n> > The previous patch was failing because of the recent test changes made\n> > by commit 201a76183e2 which unified new and get_new_node, attached\n> > patch has the changes to handle the changes accordingly.\n>\n> This patch now fails because of the test changes made by commit b3b4d8e68a,\n> please submit a rebase.\n\nThanks for reporting this, the attached v9 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Tue, 9 Nov 2021 16:45:36 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 4:45 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for reporting this, the attached v9 patch has the changes for the same.\n\nThanks for the v9 patch. I have some comments:\n\n1) I think we are moving away from if (!superuser()) checks, see the\ncommit [1]. The goal is to let the GRANT-REVOKE system deal with who\nis supposed to run these system functions. Since pg_print_backtrace\nalso writes the info to server logs, the same approach seems\napplicable here.\n\n2) I think we need to have LOG_SERVER_ONLY instead of LOG to avoid\nthe backtrace being sent to the connected client. This will be good from\na security perspective as well since we don't send the backtrace over the\nwire to the client.\n+ PrintBacktracePending = false;\n+ ereport(LOG,\n+ (errmsg(\"logging backtrace of PID %d\", MyProcPid)));\n\nfor pg_log_backend_memory_contexts:\n+ /*\n+ * Use LOG_SERVER_ONLY to prevent the memory contexts\nfrom being sent\n+ * to the connected client.\n+ *\n+ * We don't buffer the information about all memory\ncontexts in a\n+ * backend into StringInfo and log it as one message.\nOtherwise which\n+ * may require the buffer to be enlarged very much and\nlead to OOM\n+ * error since there can be a large number of memory\ncontexts in a\n+ * backend. Instead, we log one message per memory context.\n+ */\n+ ereport(LOG_SERVER_ONLY,\n\n3) I think we need to extend this function to the auxiliary processes\ntoo, because users might be interested to see what these processes are\ndoing and where they are currently stuck via their backtraces, see the\nproposal for pg_log_backend_memory_contexts at [2]. 
I think you need\nto add below code in couple of other places such as\nHandleCheckpointerInterrupts, HandleMainLoopInterrupts,\nHandlePgArchInterrupts, HandleStartupProcInterrupts,\nHandleWalWriterInterrupts.\n\n+ /* Process printing backtrace */\n+ if (PrintBacktracePending)\n+ ProcessPrintBacktraceInterrupt();\n\n[1] commit f0b051e322d530a340e62f2ae16d99acdbcb3d05\nAuthor: Jeff Davis <jdavis@postgresql.org>\nDate: Tue Oct 26 13:13:52 2021 -0700\n\n Allow GRANT on pg_log_backend_memory_contexts().\n\n Remove superuser check, allowing any user granted permissions on\n pg_log_backend_memory_contexts() to log the memory contexts of any\n backend.\n\n Note that this could allow a privileged non-superuser to log the\n memory contexts of a superuser backend, but as discussed, that does\n not seem to be a problem.\n\n Reviewed-by: Nathan Bossart, Bharath Rupireddy, Michael Paquier,\nKyotaro Horiguchi, Andres Freund\n Discussion:\nhttps://postgr.es/m/e5cf6684d17c8d1ef4904ae248605ccd6da03e72.camel@j-davis.com\n\n[2] https://www.postgresql.org/message-id/CALj2ACU1nBzpacOK2q%3Da65S_4%2BOaz_rLTsU1Ri0gf7YUmnmhfQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 10 Nov 2021 12:17:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Nov 10, 2021 at 12:17 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 4:45 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for reporting this, the attached v9 patch has the changes for the same.\n>\n> Thanks for the v9 patch. I have some comments:\n>\n> 1) I think we are moving away from if (!superuser()) checks, see the\n> commit [1]. The goal is to let the GRANT-REVOKE system deal with who\n> is supposed to run these system functions. Since pg_print_backtrace\n> also writes the info to server logs,\n\nModified\n\n> 2) I think we need to have LOG_SERVER_ONLY instead of LOG to avoid\n> bakctrace being sent to the connected client. This will be good from\n> security perspective as well since we don't send backtrace over the\n> wire to the client.\n> + PrintBacktracePending = false;\n> + ereport(LOG,\n> + (errmsg(\"logging backtrace of PID %d\", MyProcPid)));\n>\n> for pg_log_backend_memory_contexts:\n> + /*\n> + * Use LOG_SERVER_ONLY to prevent the memory contexts\n> from being sent\n> + * to the connected client.\n> + *\n> + * We don't buffer the information about all memory\n> contexts in a\n> + * backend into StringInfo and log it as one message.\n> Otherwise which\n> + * may require the buffer to be enlarged very much and\n> lead to OOM\n> + * error since there can be a large number of memory\n> contexts in a\n> + * backend. Instead, we log one message per memory context.\n> + */\n> + ereport(LOG_SERVER_ONLY,\n\nModified\n\n> 3) I think we need to extend this function to the auxiliary processes\n> too, because users might be interested to see what these processes are\n> doing and where they are currently stuck via their backtraces, see the\n> proposal for pg_log_backend_memory_contexts at [2]. I think you need\n> to add below code in couple of other places such as\n> HandleCheckpointerInterrupts, HandleMainLoopInterrupts,\n> HandlePgArchInterrupts, HandleStartupProcInterrupts,\n> HandleWalWriterInterrupts.\n>\n> + /* Process printing backtrace */\n> + if (PrintBacktracePending)\n> + ProcessPrintBacktraceInterrupt();\n\nCreated 0002 patch to handle this.\n\nThanks for the comments, the attached v10 patch has the fixes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 11 Nov 2021 12:13:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Nov 11, 2021 at 12:14 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the comments, the attached v10 patch has the fixes for the same.\n\nThanks for the patches. Here are some comments:\n\n1) In the docs, let's have the similar description of\npg_log_backend_memory_contexts for pg_print_backtrace, just for the\ncontinuity in the users readability.\n\n2) I don't know how the <screen> part looks like in the Server\nSignaling Functions table. I think here you can just say, it will emit\na warning and return false if not supported by the installation. And\nyou can give the <screen> part after the description where you are\nshowing a sample backtrace.\n\n+ capture backtrace. If not available, the function will return false\n+ and a warning is issued, for example:\n+<screen>\n+WARNING: backtrace generation is not supported by this installation\n+ pg_print_backtrace\n+--------------------\n+ f\n+</screen>\n+ </para></entry>\n+ </row>\n\n3) Replace '!' with '.'.\n+ * Note: this is called within a signal handler! All we can do is set\n\n4) It is not only the next CFI but also the process specific interrupt\nhandlers (in your 0002 patch) right?\n+ * a flag that will cause the next CHECK_FOR_INTERRUPTS to invoke\n\n5) I think you need to update CATALOG_VERSION_NO, mostly the committer\nwill take care of it but just in case.\n\n6) Be consistent with casing \"Verify\" and \"Might\"\n+# Verify that log output gets to the file\n+# might need to retry if logging collector process is slow...\n\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 12 Nov 2021 17:15:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Nov 12, 2021 at 5:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 11, 2021 at 12:14 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, the attached v10 patch has the fixes for the same.\n>\n> Thanks for the patches. Here are some comments:\n>\n> 1) In the docs, let's have the similar description of\n> pg_log_backend_memory_contexts for pg_print_backtrace, just for the\n> continuity in the users readability.\n>\n> 2) I don't know how the <screen> part looks like in the Server\n> Signaling Functions table. I think here you can just say, it will emit\n> a warning and return false if not supported by the installation. And\n> you can give the <screen> part after the description where you are\n> showing a sample backtrace.\n>\n> + capture backtrace. If not available, the function will return false\n> + and a warning is issued, for example:\n> +<screen>\n> +WARNING: backtrace generation is not supported by this installation\n> + pg_print_backtrace\n> +--------------------\n> + f\n> +</screen>\n> + </para></entry>\n> + </row>\n>\n> 3) Replace '!' with '.'.\n> + * Note: this is called within a signal handler! All we can do is set\n>\n> 4) It is not only the next CFI but also the process specific interrupt\n> handlers (in your 0002 patch) right?\n> + * a flag that will cause the next CHECK_FOR_INTERRUPTS to invoke\n>\n> 5) I think you need to update CATALOG_VERSION_NO, mostly the committer\n> will take care of it but just in case.\n>\n> 6) Be consistent with casing \"Verify\" and \"Might\"\n> +# Verify that log output gets to the file\n> +# might need to retry if logging collector process is slow...\n\nFew more comments:\n\n7) Do we need TAP tests for this function? I think it is sufficient to\ntest the function in misc_functions.sql, please remove\n002_print_backtrace_validation.pl. Note that we don't do similar TAP\ntesting for pg_log_backend_memory_contexts as well.\n\n8) I hope you have manually tested the pg_print_backtrace for the\nstartup process and other auxiliary processes.\n\n9) I think we can have a similar description (to the patch [1]). with\nlinks to each process definition in the glossary so that it will be\neasier for the users to follow the links and know what each process\nis. Especially, I didn't like the 0002 mentioned about the\nBackendPidGetProc or AuxiliaryPidGetProc as these are internal to the\nserver and the users don't care about them.\n\n- * end on its own first and its backtrace are not logged is not a problem.\n+ * BackendPidGetProc or AuxiliaryPidGetProc returns NULL if the pid isn't\n+ * valid; but by the time we reach kill(), a process for which we get a\n+ * valid proc here might have terminated on its own. There's no way to\n+ * acquire a lock on an arbitrary process to prevent that. But since this\n+ * mechanism is usually used to debug a backend running and consuming lots\n+ * of CPU cycles, that it might end on its own first and its backtrace are\n+ * not logged is not a problem.\n */\n\nHere's what I have written in the other patch [1], if okay please use this:\n\n+ Requests to log memory contexts of the backend or the\n+ <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or\n+ the <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\nprocess</glossterm>\n+ with the specified process ID. All of the\n+ <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\nprocesses</glossterm>\n+ are supported except the <glossterm\nlinkend=\"glossary-logger\">logger</glossterm>\n+ and the <glossterm\nlinkend=\"glossary-stats-collector\">statistics collector</glossterm>\n+ as they are not connected to shared memory the function can\nnot make requests.\n+ These memory contexts will be logged at\n<literal>LOG</literal> message level.\n+ They will appear in the server log based on the log configuration set\n (See <xref linkend=\"runtime-config-logging\"/> for more information),\n\nI have no further comments on v10.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACU1nBzpacOK2q%3Da65S_4%2BOaz_rLTsU1Ri0gf7YUmnmhfQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 12 Nov 2021 18:11:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Nov 12, 2021 at 5:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 11, 2021 at 12:14 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, the attached v10 patch has the fixes for the same.\n>\n> Thanks for the patches. Here are some comments:\n>\n> 1) In the docs, let's have the similar description of\n> pg_log_backend_memory_contexts for pg_print_backtrace, just for the\n> continuity in the users readability.\n\nI have kept some contents of the description similar. There is some\nadditional information to explain more about the functionality. I felt\nthat will help the user to understand more about the feature.\n\n> 2) I don't know how the <screen> part looks like in the Server\n> Signaling Functions table. I think here you can just say, it will emit\n> a warning and return false if not supported by the installation. And\n> you can give the <screen> part after the description where you are\n> showing a sample backtrace.\n>\n> + capture backtrace. If not available, the function will return false\n> + and a warning is issued, for example:\n> +<screen>\n> +WARNING: backtrace generation is not supported by this installation\n> + pg_print_backtrace\n> +--------------------\n> + f\n> +</screen>\n> + </para></entry>\n> + </row>\n\nModified\n\n> 3) Replace '!' with '.'.\n> + * Note: this is called within a signal handler! All we can do is set\n\nI have changed it similar to HandleLogMemoryContextInterrupt\n\n> 4) It is not only the next CFI but also the process specific interrupt\n> handlers (in your 0002 patch) right?\n> + * a flag that will cause the next CHECK_FOR_INTERRUPTS to invoke\n\nModified\n\n> 5) I think you need to update CATALOG_VERSION_NO, mostly the committer\n> will take care of it but just in case.\n\nModified\n\n> 6) Be consistent with casing \"Verify\" and \"Might\"\n> +# Verify that log output gets to the file\n> +# might need to retry if logging collector process is slow...\n\nModified\n\nThe attached v11 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 14 Nov 2021 20:46:00 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Nov 12, 2021 at 6:11 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Nov 12, 2021 at 5:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Nov 11, 2021 at 12:14 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Thanks for the comments, the attached v10 patch has the fixes for the same.\n> >\n> > Thanks for the patches. Here are some comments:\n> >\n> > 1) In the docs, let's have the similar description of\n> > pg_log_backend_memory_contexts for pg_print_backtrace, just for the\n> > continuity in the users readability.\n> >\n> > 2) I don't know how the <screen> part looks like in the Server\n> > Signaling Functions table. I think here you can just say, it will emit\n> > a warning and return false if not supported by the installation. And\n> > you can give the <screen> part after the description where you are\n> > showing a sample backtrace.\n> >\n> > + capture backtrace. If not available, the function will return false\n> > + and a warning is issued, for example:\n> > +<screen>\n> > +WARNING: backtrace generation is not supported by this installation\n> > + pg_print_backtrace\n> > +--------------------\n> > + f\n> > +</screen>\n> > + </para></entry>\n> > + </row>\n> >\n> > 3) Replace '!' with '.'.\n> > + * Note: this is called within a signal handler! All we can do is set\n> >\n> > 4) It is not only the next CFI but also the process specific interrupt\n> > handlers (in your 0002 patch) right?\n> > + * a flag that will cause the next CHECK_FOR_INTERRUPTS to invoke\n> >\n> > 5) I think you need to update CATALOG_VERSION_NO, mostly the committer\n> > will take care of it but just in case.\n> >\n> > 6) Be consistent with casing \"Verify\" and \"Might\"\n> > +# Verify that log output gets to the file\n> > +# might need to retry if logging collector process is slow...\n>\n> Few more comments:\n>\n> 7) Do we need TAP tests for this function? I think it is sufficient to\n> test the function in misc_functions.sql, please remove\n> 002_print_backtrace_validation.pl. Note that we don't do similar TAP\n> testing for pg_log_backend_memory_contexts as well.\n\nI felt let's keep this test case, all the other tests just check if it\nreturns true or false, it does not checks for the contents in the\nlogfile. This is the only test which checks the logfile.\n\n> 8) I hope you have manually tested the pg_print_backtrace for the\n> startup process and other auxiliary processes.\n\nYes, I have checked them manually.\n\n> 9) I think we can have a similar description (to the patch [1]). with\n> links to each process definition in the glossary so that it will be\n> easier for the users to follow the links and know what each process\n> is. Especially, I didn't like the 0002 mentioned about the\n> BackendPidGetProc or AuxiliaryPidGetProc as these are internal to the\n> server and the users don't care about them.\n>\n> - * end on its own first and its backtrace are not logged is not a problem.\n> + * BackendPidGetProc or AuxiliaryPidGetProc returns NULL if the pid isn't\n> + * valid; but by the time we reach kill(), a process for which we get a\n> + * valid proc here might have terminated on its own. There's no way to\n> + * acquire a lock on an arbitrary process to prevent that. But since this\n> + * mechanism is usually used to debug a backend running and consuming lots\n> + * of CPU cycles, that it might end on its own first and its backtrace are\n> + * not logged is not a problem.\n> */\n>\n> Here's what I have written in the other patch [1], if okay please use this:\n>\n> + Requests to log memory contexts of the backend or the\n> + <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or\n> + the <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\n> process</glossterm>\n> + with the specified process ID. All of the\n> + <glossterm linkend=\"glossary-auxiliary-proc\">auxiliary\n> processes</glossterm>\n> + are supported except the <glossterm\n> linkend=\"glossary-logger\">logger</glossterm>\n> + and the <glossterm\n> linkend=\"glossary-stats-collector\">statistics collector</glossterm>\n> + as they are not connected to shared memory the function can\n> not make requests.\n> + These memory contexts will be logged at\n> <literal>LOG</literal> message level.\n> + They will appear in the server log based on the log configuration set\n> (See <xref linkend=\"runtime-config-logging\"/> for more information),\n\nI had mentioned BackendPidGetProc or AuxiliaryPidGetProc as comments\nin the function, but I have not changed that. I have changed the\ndocumentation similar to your patch.\n\nThanks for the comments, v11 patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm3BYGOG3-PQvYbWkB%3DG3h1KYJ8CO8UYbzfECH4DYGMGqA%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Nov 2021 20:48:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sun, Nov 14, 2021 at 8:49 PM vignesh C <vignesh21@gmail.com> wrote:\n> > 7) Do we need TAP tests for this function? I think it is sufficient to\n> > test the function in misc_functions.sql, please remove\n> > 002_print_backtrace_validation.pl. Note that we don't do similar TAP\n> > testing for pg_log_backend_memory_contexts as well.\n>\n> I felt let's keep this test case, all the other tests just check if it\n> returns true or false, it does not checks for the contents in the\n> logfile. This is the only test which checks the logfile.\n\n I still don't agree to have test cases within a new file\n002_print_backtrace_validation.pl. I feel this test case doesn't add\nvalue because the code coverage is done by .sql test cases and .pl\njust ensures the backtrace appears in the server logs. I don't think\nwe ever need a new file for this purpose. If this is the case, then\nthere are other functions like pg_log_backend_memory_contexts or\npg_log_query_plan (in progress thread) might add the same test files\nfor the same reasons which make the TAP tests i.e. \"make check-world\"\nto take longer times. Moreover, pg_log_backend_memory_contexts has\nbeen committed without having a TAP test case.\n\nI think we can remove it.\n\nFew more comments on v11:\n1) I think we can improve here by adding a link to \"backend\" as well,\nI will modify it in the other thread.\n+ Requests to log the backtrace of the backend or the\n+ <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or\nSomething like:\n+ Requests to log the backtrace of the <glossterm\nlinkend=\"glossary-backend\">backend</glossterm> or the\n+ <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or\n\n2) I think \"which is enough because the target process for logging of\nbacktrace is a backend\" isn't valid anymore with 0002, right? Please\nremove it.\n+ * to call this function if we see PrintBacktracePending set. It is called from\n+ * CHECK_FOR_INTERRUPTS() or from process specific interrupt handlers, which is\n+ * enough because the target process for logging of backtrace is a backend.\n\n> Thanks for the comments, v11 patch attached at [1] has the changes for the same.\n\nThe v11 patches mostly look good to me except the above comments.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 15 Nov 2021 07:37:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 7:37 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, Nov 14, 2021 at 8:49 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > 7) Do we need TAP tests for this function? I think it is sufficient to\n> > > test the function in misc_functions.sql, please remove\n> > > 002_print_backtrace_validation.pl. Note that we don't do similar TAP\n> > > testing for pg_log_backend_memory_contexts as well.\n> >\n> > I felt let's keep this test case, all the other tests just check if it\n> > returns true or false, it does not checks for the contents in the\n> > logfile. This is the only test which checks the logfile.\n>\n> I still don't agree to have test cases within a new file\n> 002_print_backtrace_validation.pl. I feel this test case doesn't add\n> value because the code coverage is done by .sql test cases and .pl\n> just ensures the backtrace appears in the server logs. I don't think\n> we ever need a new file for this purpose. If this is the case, then\n> there are other functions like pg_log_backend_memory_contexts or\n> pg_log_query_plan (in progress thread) might add the same test files\n> for the same reasons which make the TAP tests i.e. \"make check-world\"\n> to take longer times. Moreover, pg_log_backend_memory_contexts has\n> been committed without having a TAP test case.\n>\n> I think we can remove it.\n\nRemoved\n\n> Few more comments on v11:\n> 1) I think we can improve here by adding a link to \"backend\" as well,\n> I will modify it in the other thread.\n> + Requests to log the backtrace of the backend or the\n> + <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or\n> Something like:\n> + Requests to log the backtrace of the <glossterm\n> linkend=\"glossary-backend\">backend</glossterm> or the\n> + <glossterm linkend=\"glossary-wal-sender\">WAL sender</glossterm> or\n\nModified\n\n> 2) I think \"which is enough because the target process for logging of\n> backtrace is a backend\" isn't valid anymore with 0002, righit? Please\n> remove it.\n> + * to call this function if we see PrintBacktracePending set. It is called from\n> + * CHECK_FOR_INTERRUPTS() or from process specific interrupt handlers, which is\n> + * enough because the target process for logging of backtrace is a backend.\n>\n> > Thanks for the comments, v11 patch attached at [1] has the changes for the same.\n\nModified\n\nThanks for the comments, the attached v12 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 15 Nov 2021 10:34:00 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 10:34 AM vignesh C <vignesh21@gmail.com> wrote:\n> > 2) I think \"which is enough because the target process for logging of\n> > backtrace is a backend\" isn't valid anymore with 0002, righit? Please\n> > remove it.\n> > + * to call this function if we see PrintBacktracePending set. It is called from\n> > + * CHECK_FOR_INTERRUPTS() or from process specific interrupt handlers, which is\n> > + * enough because the target process for logging of backtrace is a backend.\n> >\n> > > Thanks for the comments, v11 patch attached at [1] has the changes for the same.\n>\n> Modified\n\nI don't see the above change in v12. Am I missing something? I still\nsee the comment \"It is called from CHECK_FOR_INTERRUPTS(), which is\nenough because the target process for logging of backtrace is a\nbackend.\".\n\n+ * ProcessPrintBacktraceInterrupt - Perform logging of backtrace of this\n+ * backend process.\n+ *\n+ * Any backend that participates in ProcSignal signaling must arrange\n+ * to call this function if we see PrintBacktracePending set.\n+ * It is called from CHECK_FOR_INTERRUPTS(), which is enough because\n+ * the target process for logging of backtrace is a backend.\n+ */\n+void\n+ProcessPrintBacktraceInterrupt(void)\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 15 Nov 2021 11:00:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 10:34 AM vignesh C <vignesh21@gmail.com> wrote:\n\n>\n> Thanks for the comments, the attached v12 patch has the changes for the same.\n\nI have reviewed this patch and have some comments on v12-0001,\n\n1.\n+ This feature is not supported for the postmaster, logger, checkpointer,\n+ walwriter, background writer or statistics collector process. This\n\n\nComment says it is not supported for postmaster, logger, checkpointer\netc, but I just tried and it is working for checkpointer and walwriter\nprocesses, can you explain in comments why do we not want to support\nfor these processes? or the comment is old and now we are supporting\nfor some of these processes.\n\n\n2.\npostgres[64154]=# select pg_print_backtrace(64136);\nWARNING: 01000: PID 64136 is not a PostgreSQL server process\nLOCATION: pg_print_backtrace, signalfuncs.c:335\n pg_print_backtrace\n--------------------\n f\n\n\nFor postmaster I am getting this WARNING \"PID 64136 is not a\nPostgreSQL server process\", even if we don't want to support this\nprocess I don't think this message is good.\n\n\n\n3.\nI was looking into the patch, I tried to print the backtrace using GDB\nas well as using this function, I have compiled in debug mode, I can\nsee the backtrace printed\nby GDB is more detailed than printed by this API, I understand we can\nfind out the function name from address, but can't we print the\ndetailed call stack with all function names at least when debug\nsymbols are available?\n\nUsing GDB\n\n#0 0x00007fa26c527e93 in __epoll_wait_nocancel () from\n#1 0x0000000000947a61 in WaitEventSetWaitBlock (set=0x2\n#2 0x00000000009478f9 in WaitEventSetWait (set=0x2f0111\n#3 0x00000000007a6cef in secure_read (port=0x2f26800, p\n#4 0x00000000007b0bd6 in pq_recvbuf () at pqcomm.c:957\n#5 0x00000000007b0c86 in pq_getbyte () at pqcomm.c:1000\n#6 0x0000000000978c13 in SocketBackend (inBuf=0x7ffea99\n#7 0x0000000000978e37 in ReadCommand (inBuf=0x7ffea9937\n#8 0x000000000097dca5 in PostgresMain (dbname=0x2f2ef40\n....\n\nUsing pg_print_backtrace\n postgres: dilipkumar postgres [local]\nSELECT(set_backtrace+0x38) [0xb118ff]\n postgres: dilipkumar postgres [local]\nSELECT(ProcessPrintBacktraceInterrupt+0x5b) [0x94fe42]\n postgres: dilipkumar postgres [local]\nSELECT(ProcessInterrupts+0x7d9) [0x97cb2a]\n postgres: dilipkumar postgres [local] SELECT() [0x78143c]\n postgres: dilipkumar postgres [local] SELECT() [0x736731]\n postgres: dilipkumar postgres [local] SELECT() [0x738f5f]\n postgres: dilipkumar postgres [local]\nSELECT(standard_ExecutorRun+0x1d6) [0x736d94]\n postgres: dilipkumar postgres [local] SELECT(ExecutorRun+0x55)\n[0x736bbc]\n postgres: dilipkumar postgres [local] SELECT() [0x97ff0c]\n postgres: dilipkumar postgres [local] SELECT(PortalRun+0x268) [0x97fbbf]\n postgres: dilipkumar postgres [local] SELECT() [0x9798dc]\n postgres: dilipkumar postgres [local]\nSELECT(PostgresMain+0x6ca) [0x97dda7]\n\n4.\n+WARNING: backtrace generation is not supported by this installation\n+ pg_print_backtrace\n+--------------------\n+ f\n\nShould we give some hints on how to enable that?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Nov 2021 11:37:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 10:34 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> >\n> > Thanks for the comments, the attached v12 patch has the changes for the same.\n>\n> I have reviewed this patch and have some comments on v12-0001,\n>\n> 1.\n> + This feature is not supported for the postmaster, logger, checkpointer,\n> + walwriter, background writer or statistics collector process. This\n>\n>\n> Comment says it is not supported for postmaster, logger, checkpointer\n> etc, but I just tried and it is working for checkpointer and walwriter\n> processes, can you explain in comments why do we not want to support\n> for these processes? or the comment is old and now we are supporting\n> for some of these processes.\n\nPlease see the v12-0002 which will have the description modified.\n\n> 2.\n> postgres[64154]=# select pg_print_backtrace(64136);\n> WARNING: 01000: PID 64136 is not a PostgreSQL server process\n> LOCATION: pg_print_backtrace, signalfuncs.c:335\n> pg_print_backtrace\n> --------------------\n> f\n>\n>\n> For postmaster I am getting this WARNING \"PID 64136 is not a\n> PostgreSQL server process\", even if we don't want to support this\n> process I don't think this message is good.\n\nThis is a generic message that is coming from pg_signal_backend, not\nrelated to Vignesh's patch. I agree with you that emitting a \"not\npostgres server process\" for the postmaster process which is the main\n\"postgres process\" doesn't sound sensible. Please see there's already\na thread [1] and see the v1 patch [2] for changing this message.\nPlease let me know if you want me to revive that stalled thread?\n\n[1] https://www.postgresql.org/message-id/CALj2ACW7Rr-R7mBcBQiXWPp%3DJV5chajjTdudLiF5YcpW-BmHhg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CALj2ACUGxedgYk-5nO8D2EJV2YHXnoycp_oqYAxDXTODhWkmkg%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 15 Nov 2021 11:53:44 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 10:34 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> >\n> > Thanks for the comments, the attached v12 patch has the changes for the same.\n>\n> I have reviewed this patch and have some comments on v12-0001,\n>\n> 1.\n> + This feature is not supported for the postmaster, logger, checkpointer,\n> + walwriter, background writer or statistics collector process. This\n>\n>\n> Comment says it is not supported for postmaster, logger, checkpointer\n> etc, but I just tried and it is working for checkpointer and walwriter\n> processes, can you explain in comments why do we not want to support\n> for these processes? or the comment is old and now we are supporting\n> for some of these processes.\n\nPlease see the v12-0002 which will have the description modified.\n\n> 2.\n> postgres[64154]=# select pg_print_backtrace(64136);\n> WARNING: 01000: PID 64136 is not a PostgreSQL server process\n> LOCATION: pg_print_backtrace, signalfuncs.c:335\n> pg_print_backtrace\n> --------------------\n> f\n>\n>\n> For postmaster I am getting this WARNING \"PID 64136 is not a\n> PostgreSQL server process\", even if we don't want to support this\n> process I don't think this message is good.\n\nThis is a generic message that is coming from pg_signal_backend, not\nrelated to Vignesh's patch. I agree with you that emitting a \"not\npostgres server process\" for the postmaster process which is the main\n\"postgres process\" doesn't sound sensible. Please see there's already\na thread [1] and see the v1 patch [2] for changing this message.\nPlease let me know if you want me to revive that stalled thread?\n\n[1] https://www.postgresql.org/message-id/CALj2ACW7Rr-R7mBcBQiXWPp%3DJV5chajjTdudLiF5YcpW-BmHhg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CALj2ACUGxedgYk-5nO8D2EJV2YHXnoycp_oqYAxDXTODhWkmkg%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 15 Nov 2021 12:08:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 11:00 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 10:34 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > 2) I think \"which is enough because the target process for logging of\n> > > backtrace is a backend\" isn't valid anymore with 0002, righit? Please\n> > > remove it.\n> > > + * to call this function if we see PrintBacktracePending set. It is called from\n> > > + * CHECK_FOR_INTERRUPTS() or from process specific interrupt handlers, which is\n> > > + * enough because the target process for logging of backtrace is a backend.\n> > >\n> > > > Thanks for the comments, v11 patch attached at [1] has the changes for the same.\n> >\n> > Modified\n>\n> I don't see the above change in v12. Am I missing something? I still\n> see the comment \"It is called from CHECK_FOR_INTERRUPTS(), which is\n> enough because the target process for logging of backtrace is a\n> backend.\".\n\nThis change is present in the 0002\n(v12-0002-pg_print_backtrace-support-for-printing-backtrac.patch)\npatch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 15 Nov 2021 12:13:12 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 12:13 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 11:00 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 10:34 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > 2) I think \"which is enough because the target process for logging of\n> > > > backtrace is a backend\" isn't valid anymore with 0002, righit? Please\n> > > > remove it.\n> > > > + * to call this function if we see PrintBacktracePending set. It is called from\n> > > > + * CHECK_FOR_INTERRUPTS() or from process specific interrupt handlers, which is\n> > > > + * enough because the target process for logging of backtrace is a backend.\n> > > >\n> > > > > Thanks for the comments, v11 patch attached at [1] has the changes for the same.\n> > >\n> > > Modified\n> >\n> > I don't see the above change in v12. Am I missing something? I still\n> > see the comment \"It is called from CHECK_FOR_INTERRUPTS(), which is\n> > enough because the target process for logging of backtrace is a\n> > backend.\".\n>\n> This change is present in the 0002\n> (v12-0002-pg_print_backtrace-support-for-printing-backtrac.patch)\n> patch.\n\nThanks. Yes, it was removed in 0002. I missed it.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 15 Nov 2021 12:16:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 12:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > 2.\n> > > postgres[64154]=# select pg_print_backtrace(64136);\n> > > WARNING: 01000: PID 64136 is not a PostgreSQL server process\n> > > LOCATION: pg_print_backtrace, signalfuncs.c:335\n> > > pg_print_backtrace\n> > > --------------------\n> > > f\n> > >\n> > >\n> > > For postmaster I am getting this WARNING \"PID 64136 is not a\n> > > PostgreSQL server process\", even if we don't want to support this\n> > > process I don't think this message is good.\n> >\n> > This is a generic message that is coming from pg_signal_backend, not\n> > related to Vignesh's patch. I agree with you that emitting a \"not\n> > postgres server process\" for the postmaster process which is the main\n> > \"postgres process\" doesn't sound sensible. Please see there's already\n> > a thread [1] and see the v1 patch [2] for changing this message.\n> > Please let me know if you want me to revive that stalled thread?\n>\n> >[1] https://www.postgresql.org/message-id/CALj2ACW7Rr-R7mBcBQiXWPp%3DJV5chajjTdudLiF5YcpW-BmHhg%40mail.gmail.com\n> >[2] https://www.postgresql.org/message-id/CALj2ACUGxedgYk-5nO8D2EJV2YHXnoycp_oqYAxDXTODhWkmkg%40mail.gmail.com\n>\n> Hmm, yeah I think I like the idea posted in [1], however, I could not\n> open the link [2]\n\nThanks, I posted an updated v3 patch at [1]. Please review it there.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWS2bJRW-bSvcoL4FvS%3DkbQ8SSWXi%3D9RFUt7uqZvTQWWw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 15 Nov 2021 12:58:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 10:34 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> >\n> > Thanks for the comments, the attached v12 patch has the changes for the same.\n>\n> I have reviewed this patch and have some comments on v12-0001,\n>\n> 1.\n> + This feature is not supported for the postmaster, logger, checkpointer,\n> + walwriter, background writer or statistics collector process. This\n>\n>\n> Comment says it is not supported for postmaster, logger, checkpointer\n> etc, but I just tried and it is working for checkpointer and walwriter\n> processes, can you explain in comments why do we not want to support\n> for these processes? or the comment is old and now we are supporting\n> for some of these processes.\n>\n>\n> 2.\n> postgres[64154]=# select pg_print_backtrace(64136);\n> WARNING: 01000: PID 64136 is not a PostgreSQL server process\n> LOCATION: pg_print_backtrace, signalfuncs.c:335\n> pg_print_backtrace\n> --------------------\n> f\n>\n>\n> For postmaster I am getting this WARNING \"PID 64136 is not a\n> PostgreSQL server process\", even if we don't want to support this\n> process I don't think this message is good.\n>\n>\n>\n> 3.\n> I was looking into the patch, I tried to print the backtrace using GDB\n> as well as using this function, I have complied in debug mode, I can\n> see the backtrace printed\n> by GDB is more detailed than printed by this API, I understand we can\n> find out the function name from address, but can't we print the\n> detailed call stack with all function names at least when debug\n> symbols are available?\n>\n> Using GDB\n>\n> #0 0x00007fa26c527e93 in __epoll_wait_nocancel () from\n> #1 0x0000000000947a61 in WaitEventSetWaitBlock (set=0x2\n> #2 0x00000000009478f9 in WaitEventSetWait (set=0x2f0111\n> #3 0x00000000007a6cef in secure_read (port=0x2f26800, p\n> #4 0x00000000007b0bd6 in pq_recvbuf () at pqcomm.c:957\n> #5 0x00000000007b0c86 
in pq_getbyte () at pqcomm.c:1000\n> #6 0x0000000000978c13 in SocketBackend (inBuf=0x7ffea99\n> #7 0x0000000000978e37 in ReadCommand (inBuf=0x7ffea9937\n> #8 0x000000000097dca5 in PostgresMain (dbname=0x2f2ef40\n> ....\n>\n> Using pg_print_backtrace\n> postgres: dilipkumar postgres [local]\n> SELECT(set_backtrace+0x38) [0xb118ff]\n> postgres: dilipkumar postgres [local]\n> SELECT(ProcessPrintBacktraceInterrupt+0x5b) [0x94fe42]\n> postgres: dilipkumar postgres [local]\n> SELECT(ProcessInterrupts+0x7d9) [0x97cb2a]\n> postgres: dilipkumar postgres [local] SELECT() [0x78143c]\n> postgres: dilipkumar postgres [local] SELECT() [0x736731]\n> postgres: dilipkumar postgres [local] SELECT() [0x738f5f]\n> postgres: dilipkumar postgres [local]\n> SELECT(standard_ExecutorRun+0x1d6) [0x736d94]\n> postgres: dilipkumar postgres [local] SELECT(ExecutorRun+0x55)\n> [0x736bbc]\n> postgres: dilipkumar postgres [local] SELECT() [0x97ff0c]\n> postgres: dilipkumar postgres [local] SELECT(PortalRun+0x268) [0x97fbbf]\n> postgres: dilipkumar postgres [local] SELECT() [0x9798dc]\n> postgres: dilipkumar postgres [local]\n> SELECT(PostgresMain+0x6ca) [0x97dda7]\n\nI did not find any API's with such an implementation. We have used\nbacktrace and backtrace_symbols to print the address. Same thing is\nused to add error backtrace and print backtrace for assert failure,\nsee errbacktrace and ExceptionalCondition. I felt this is ok.\n\n> 4.\n> +WARNING: backtrace generation is not supported by this installation\n> + pg_print_backtrace\n> +--------------------\n> + f\n>\n> Should we give some hints on how to enable that?\n\nModified to include a hint message.\nThe Attached v13 patch has the fix for it.\n\nRegards,\nVignesh",
"msg_date": "Mon, 15 Nov 2021 21:12:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 09:12:49PM +0530, vignesh C wrote:\n> The idea here is to implement & expose pg_print_backtrace function, internally\n\nThis patch is closely related to this one\nhttps://commitfest.postgresql.org/35/3142/\n| Logging plan of the currently running query\n\nI suggest to review that patch and make sure there's nothing you could borrow.\n\nMy only comment for now is that maybe the function name should be\npg_log_backtrace() rather than pg_print_backtrace(), since it doesn't actually\n\"print\" the backtrace, but rather request the other backend to log its\nbacktrace.\n\nDid you see that the windows build failed ?\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.153557\n\nI think you'll need to create an \"alternate\" output like\nsrc/test/regress/expected/misc_functions_1.out\n\nIt's possible that's best done by creating a totally new .sql and .out files\nadded to src/test/regress/parallel_schedule - because otherwise, you'd have to\nduplicate the existing 300 lines of misc_fuctions.out.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 15 Nov 2021 13:42:51 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 1:12 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 09:12:49PM +0530, vignesh C wrote:\n> > The idea here is to implement & expose pg_print_backtrace function, internally\n>\n> This patch is closely related to this one\n> https://commitfest.postgresql.org/35/3142/\n> | Logging plan of the currently running query\n>\n> I suggest to review that patch and make sure there's nothing you could borrow.\n>\n> My only comment for now is that maybe the function name should be\n> pg_log_backtrace() rather than pg_print_backtrace(), since it doesn't actually\n> \"print\" the backtrace, but rather request the other backend to log its\n> backtrace.\n\n+1 for pg_log_backtrace().\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 16 Nov 2021 08:04:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 1:12 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 09:12:49PM +0530, vignesh C wrote:\n> > The idea here is to implement & expose pg_print_backtrace function, internally\n>\n> This patch is closely related to this one\n> https://commitfest.postgresql.org/35/3142/\n> | Logging plan of the currently running query\n>\n> I suggest to review that patch and make sure there's nothing you could borrow.\n\nI had a look and felt the attached patch is in similar lines\n\n> My only comment for now is that maybe the function name should be\n> pg_log_backtrace() rather than pg_print_backtrace(), since it doesn't actually\n> \"print\" the backtrace, but rather request the other backend to log its\n> backtrace.\n\nModified\n\n> Did you see that the windows build failed ?\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.153557\n> I think you'll need to create an \"alternate\" output like\n> src/test/regress/expected/misc_functions_1.out\n>\n> It's possible that's best done by creating a totally new .sql and .out files\n> added to src/test/regress/parallel_schedule - because otherwise, you'd have to\n> duplicate the existing 300 lines of misc_fuctions.out.\n\nModified\n\nAttached v14 patch has the fixes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 17 Nov 2021 20:12:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 08:12:44PM +0530, vignesh C wrote:\n> Attached v14 patch has the fixes for the same.\n\nThanks for updating the patch.\n\nI cleaned up the docs and comments. I think this could be nearly \"Ready\".\n\nIf you like the changes in my \"fixup\" patch (0002 and 0004), you should be able\nto apply my 0002 on top of your 0001. I'm sure it'll cause some conflicts with\nyour 2nd patch, though...\n\nThis doesn't bump the catversion, since that would cause the patch to fail in\ncfbot every time another commit updates catversion.\n\nYour 0001 patch allows printing backtraces of autovacuum, but the doc says it's\nonly for \"backends\". Should the change to autovacuum.c be moved to the 2nd\npatch ? Or, why is autovacuum so special that it should be handled in the\nfirst patch ?\n\n-- \nJustin",
"msg_date": "Thu, 18 Nov 2021 10:22:44 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Nov 18, 2021 at 9:52 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 08:12:44PM +0530, vignesh C wrote:\n> > Attached v14 patch has the fixes for the same.\n>\n> Thanks for updating the patch.\n>\n> I cleaned up the docs and comments. I think this could be nearly \"Ready\".\n>\n> If you like the changes in my \"fixup\" patch (0002 and 0004), you should be able\n> to apply my 0002 on top of your 0001. I'm sure it'll cause some conflicts with\n> your 2nd patch, though...\n\nI have slightly modified and taken the changes. I have not taken a few\nof the changes to keep it similar to pg_log_backend_memory_contexts.\n\n> This doesn't bump the catversion, since that would cause the patch to fail in\n> cfbot every time another commit updates catversion.\n\nI have removed it, since it there in commit message it is ok.\n\n> Your 0001 patch allows printing backtraces of autovacuum, but the doc says it's\n> only for \"backends\". Should the change to autovacuum.c be moved to the 2nd\n> patch ? Or, why is autovacuum so special that it should be handled in the\n> first patch ?\n\nI had separated the patches so that it is easier for review, I have\nmerged the changes as few rounds of review is done for the patch. Now\nsince the patch is merged, this change is handled.\nThe Attached v15 patch has the fixes for the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 19 Nov 2021 16:07:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:\n> The Attached v15 patch has the fixes for the same.\n\nThanks. The v15 patch LGTM and the cf bot is happy hence marking it as RfC.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 20 Nov 2021 11:50:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sat, Nov 20, 2021 at 11:50 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Nov 19, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:\n> > The Attached v15 patch has the fixes for the same.\n>\n> Thanks. The v15 patch LGTM and the cf bot is happy hence marking it as RfC.\n\nThe patch was not applying because of the recent commit [1]. I took\nthis opportunity and tried a bunch of things without modifying the\ncore logic of the pg_log_backtrace feature that Vignesh has worked on.\n\nI did following - moved the duplicate code to a new function\nCheckPostgresProcessId which can be used by pg_log_backtrace,\npg_log_memory_contexts, pg_signal_backend and pg_log_query_plan ([2]),\nmodified the code comments, docs and tests to be more in sync with the\ncommit [1], moved two of ProcessLogBacktraceInterrupt calls (archiver\nand wal writer) to their respective interrupt handlers. Here's the v16\nversion that I've come up with.\n\n[1]\ncommit 790fbda902093c71ae47bff1414799cd716abb80\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: Tue Jan 11 23:19:59 2022 +0900\n\n Enhance pg_log_backend_memory_contexts() for auxiliary processes.\n\n[2] https://www.postgresql.org/message-id/flat/20220114063846.7jygvulyu6g23kdv%40jrouhaud\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Fri, 14 Jan 2022 16:18:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On 2022-01-14 19:48, Bharath Rupireddy wrote:\n> On Sat, Nov 20, 2021 at 11:50 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> \n>> On Fri, Nov 19, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:\n>> > The Attached v15 patch has the fixes for the same.\n>> \n>> Thanks. The v15 patch LGTM and the cf bot is happy hence marking it as \n>> RfC.\n> \n> The patch was not applying because of the recent commit [1]. I took\n> this opportunity and tried a bunch of things without modifying the\n> core logic of the pg_log_backtrace feature that Vignesh has worked on.\n> \n> I did following - moved the duplicate code to a new function\n> CheckPostgresProcessId which can be used by pg_log_backtrace,\n> pg_log_memory_contexts, pg_signal_backend and pg_log_query_plan ([2]),\n\nThanks for refactoring!\nI'm going to use it for pg_log_query_plan after this patch is merged.\n\n> modified the code comments, docs and tests to be more in sync with the\n> commit [1], moved two of ProcessLogBacktraceInterrupt calls (archiver\n> and wal writer) to their respective interrupt handlers. Here's the v16\n> version that I've come up with.\n\nI have some minor comments.\n\n> +</screen>\n> + You can get the file name and line number from the logged details \n> by using\n> + gdb/addr2line in linux platforms (users must ensure gdb/addr2line \n> is\n> + already installed).\n> +<programlisting>\n> +1) \"info line *address\" from gdb on postgres executable. 
For example:\n> +gdb ./postgres\n> +(gdb) info line *0x71c25d\n> +Line 378 of \"execMain.c\" starts at address 0x71c25d \n> <literal><</literal>standard_ExecutorRun+470<literal>></literal> \n> and ends at 0x71c263 \n> <literal><</literal>standard_ExecutorRun+476<literal>></literal>.\n> +OR\n> +2) Using \"addr2line -e postgres address\", For example:\n> +addr2line -e ./postgres 0x71c25d\n> +/home/postgresdba/src/backend/executor/execMain.c:378\n> +</programlisting>\n> + </para>\n> +\n\nIsn't it better to remove line 1) and 2) from <programlisting>?\nI just glanced at the existing sgml, but <programlisting> seems to \ncontain only codes.\n\n\n> + * CheckPostgresProcessId -- check if the process with given pid is a \n> backend\n> + * or an auxiliary process.\n> + *\n> +\n> + */\n\nIsn't the 4th line needless?\n\nBTW, when I saw the name of this function, I thought it just checks if \nthe specified pid is PostgreSQL process or not.\nSince it returns the pointer to the PGPROC or BackendId of the PID, it \nmight be kind to write comments about it.\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 24 Jan 2022 16:35:22 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "\n\nOn 2022/01/24 16:35, torikoshia wrote:\n> On 2022-01-14 19:48, Bharath Rupireddy wrote:\n>> On Sat, Nov 20, 2021 at 11:50 AM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>>\n>>> On Fri, Nov 19, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:\n>>> > The Attached v15 patch has the fixes for the same.\n>>>\n>>> Thanks. The v15 patch LGTM and the cf bot is happy hence marking it as RfC.\n>>\n>> The patch was not applying because of the recent commit [1]. I took\n>> this opportunity and tried a bunch of things without modifying the\n>> core logic of the pg_log_backtrace feature that Vignesh has worked on.\n\nI have one question about this feature. When the PID of auxiliary process like archiver is specified, probably the function always reports the same result, doesn't it? Because, for example, the archiver always logs its backtrace in HandlePgArchInterrupts().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 25 Jan 2022 01:36:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 10:06 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2022/01/24 16:35, torikoshia wrote:\n> > On 2022-01-14 19:48, Bharath Rupireddy wrote:\n> >> On Sat, Nov 20, 2021 at 11:50 AM Bharath Rupireddy\n> >> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>>\n> >>> On Fri, Nov 19, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:\n> >>> > The Attached v15 patch has the fixes for the same.\n> >>>\n> >>> Thanks. The v15 patch LGTM and the cf bot is happy hence marking it as RfC.\n> >>\n> >> The patch was not applying because of the recent commit [1]. I took\n> >> this opportunity and tried a bunch of things without modifying the\n> >> core logic of the pg_log_backtrace feature that Vignesh has worked on.\n>\n> I have one question about this feature. When the PID of auxiliary process like archiver is specified, probably the function always reports the same result, doesn't it? Because, for example, the archiver always logs its backtrace in HandlePgArchInterrupts().\n\nIt can be either from HandlePgArchInterrupts in pgarch_MainLoop or\nfrom HandlePgArchInterrupts in pgarch_ArchiverCopyLoop, since the\narchiver functionality is mainly in these functions.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 25 Jan 2022 11:19:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 1:05 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> On 2022-01-14 19:48, Bharath Rupireddy wrote:\n> > On Sat, Nov 20, 2021 at 11:50 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Fri, Nov 19, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> > The Attached v15 patch has the fixes for the same.\n> >>\n> >> Thanks. The v15 patch LGTM and the cf bot is happy hence marking it as\n> >> RfC.\n> >\n> > The patch was not applying because of the recent commit [1]. I took\n> > this opportunity and tried a bunch of things without modifying the\n> > core logic of the pg_log_backtrace feature that Vignesh has worked on.\n> >\n> > I did following - moved the duplicate code to a new function\n> > CheckPostgresProcessId which can be used by pg_log_backtrace,\n> > pg_log_memory_contexts, pg_signal_backend and pg_log_query_plan ([2]),\n>\n> Thanks for refactoring!\n> I'm going to use it for pg_log_query_plan after this patch is merged.\n>\n> > modified the code comments, docs and tests to be more in sync with the\n> > commit [1], moved two of ProcessLogBacktraceInterrupt calls (archiver\n> > and wal writer) to their respective interrupt handlers. Here's the v16\n> > version that I've come up with.\n>\n> I have some minor comments.\n>\n> > +</screen>\n> > + You can get the file name and line number from the logged details\n> > by using\n> > + gdb/addr2line in linux platforms (users must ensure gdb/addr2line\n> > is\n> > + already installed).\n> > +<programlisting>\n> > +1) \"info line *address\" from gdb on postgres executable. 
For example:\n> > +gdb ./postgres\n> > +(gdb) info line *0x71c25d\n> > +Line 378 of \"execMain.c\" starts at address 0x71c25d\n> > <literal><</literal>standard_ExecutorRun+470<literal>></literal>\n> > and ends at 0x71c263\n> > <literal><</literal>standard_ExecutorRun+476<literal>></literal>.\n> > +OR\n> > +2) Using \"addr2line -e postgres address\", For example:\n> > +addr2line -e ./postgres 0x71c25d\n> > +/home/postgresdba/src/backend/executor/execMain.c:378\n> > +</programlisting>\n> > + </para>\n> > +\n>\n> Isn't it better to remove line 1) and 2) from <programlisting>?\n> I just glanced at the existing sgml, but <programlisting> seems to\n> contain only codes.\n\nModified\n\n> > + * CheckPostgresProcessId -- check if the process with given pid is a\n> > backend\n> > + * or an auxiliary process.\n> > + *\n> > +\n> > + */\n>\n> Isn't the 4th line needless?\n\nModified\n\n> BTW, when I saw the name of this function, I thought it just checks if\n> the specified pid is PostgreSQL process or not.\n> Since it returns the pointer to the PGPROC or BackendId of the PID, it\n> might be kind to write comments about it.\n\nModified\n\nThanks for the comments, attached v17 patch has the fix for the same.\n\nRegards,\nVignesh",
"msg_date": "Tue, 25 Jan 2022 12:00:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the comments, attached v17 patch has the fix for the same.\n\nHave a minor comment on v17:\n\ncan we modify the elog(LOG, to new style ereport(LOG, ?\n+ elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n\n/*----------\n * New-style error reporting API: to be used in this way:\n * ereport(ERROR,\n * errcode(ERRCODE_UNDEFINED_CURSOR),\n * errmsg(\"portal \\\"%s\\\" not found\", stmt->portalname),\n * ... other errxxx() fields as needed ...);\n *\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 26 Jan 2022 11:06:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 11:07 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, attached v17 patch has the fix for the same.\n>\n> Have a minor comment on v17:\n>\n> can we modify the elog(LOG, to new style ereport(LOG, ?\n> + elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n>\n> /*----------\n> * New-style error reporting API: to be used in this way:\n> * ereport(ERROR,\n> * errcode(ERRCODE_UNDEFINED_CURSOR),\n> * errmsg(\"portal \\\"%s\\\" not found\", stmt->portalname),\n> * ... other errxxx() fields as needed ...);\n> *\n\nAttached v18 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 27 Jan 2022 10:44:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Jan 27, 2022 at 10:45 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 11:07 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Jan 25, 2022 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Thanks for the comments, attached v17 patch has the fix for the same.\n> >\n> > Have a minor comment on v17:\n> >\n> > can we modify the elog(LOG, to new style ereport(LOG, ?\n> > + elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n> >\n> > /*----------\n> > * New-style error reporting API: to be used in this way:\n> > * ereport(ERROR,\n> > * errcode(ERRCODE_UNDEFINED_CURSOR),\n> > * errmsg(\"portal \\\"%s\\\" not found\", stmt->portalname),\n> > * ... other errxxx() fields as needed ...);\n> > *\n>\n> Attached v18 patch has the changes for the same.\n\nThanks. The v18 patch LGTM. I'm not sure if the CF bot failure is\nrelated to the patch - https://cirrus-ci.com/task/5633364051886080\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 28 Jan 2022 13:54:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Jan 28, 2022 at 1:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jan 27, 2022 at 10:45 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Jan 26, 2022 at 11:07 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 25, 2022 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > Thanks for the comments, attached v17 patch has the fix for the same.\n> > >\n> > > Have a minor comment on v17:\n> > >\n> > > can we modify the elog(LOG, to new style ereport(LOG, ?\n> > > + elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n> > >\n> > > /*----------\n> > > * New-style error reporting API: to be used in this way:\n> > > * ereport(ERROR,\n> > > * errcode(ERRCODE_UNDEFINED_CURSOR),\n> > > * errmsg(\"portal \\\"%s\\\" not found\", stmt->portalname),\n> > > * ... other errxxx() fields as needed ...);\n> > > *\n> >\n> > Attached v18 patch has the changes for the same.\n>\n> Thanks. The v18 patch LGTM. I'm not sure if the CF bot failure is\n> related to the patch - https://cirrus-ci.com/task/5633364051886080\n\nI felt this test failure is not related to this change. Let's wait and\nsee the results of the next run. Also I noticed that this test seems\nto have failed many times in the buildfarm too recently:\nhttps://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=60&branch=HEAD&member=&stage=recoveryCheck&filter=Submit\n\nRegards,\nVignesh",
"msg_date": "Sat, 29 Jan 2022 08:06:31 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sat, Jan 29, 2022 at 8:06 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Jan 28, 2022 at 1:54 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Jan 27, 2022 at 10:45 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 26, 2022 at 11:07 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jan 25, 2022 at 12:00 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > Thanks for the comments, attached v17 patch has the fix for the same.\n> > > >\n> > > > Have a minor comment on v17:\n> > > >\n> > > > can we modify the elog(LOG, to new style ereport(LOG, ?\n> > > > + elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n> > > >\n> > > > /*----------\n> > > > * New-style error reporting API: to be used in this way:\n> > > > * ereport(ERROR,\n> > > > * errcode(ERRCODE_UNDEFINED_CURSOR),\n> > > > * errmsg(\"portal \\\"%s\\\" not found\", stmt->portalname),\n> > > > * ... other errxxx() fields as needed ...);\n> > > > *\n> > >\n> > > Attached v18 patch has the changes for the same.\n> >\n> > Thanks. The v18 patch LGTM. I'm not sure if the CF bot failure is\n> > related to the patch - https://cirrus-ci.com/task/5633364051886080\n>\n> I felt this test failure is not related to this change. Let's wait and\n> see the results of the next run. Also I noticed that this test seems\n> to have failed many times in the buildfarm too recently:\n> https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=60&branch=HEAD&member=&stage=recoveryCheck&filter=Submit\n\nCFBot has passed in the last run.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 31 Jan 2022 09:34:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "rebased to appease cfbot.\n\n+ couple of little fixes as 0002.\n\n-- \nJustin",
"msg_date": "Wed, 9 Mar 2022 09:56:23 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Mar 9, 2022 at 9:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> rebased to appease cfbot.\n>\n> + couple of little fixes as 0002.\n\nThanks for rebasing and fixing a few issues. I have taken all your\nchanges except for mcxtfuncs changes as those changes were not done as\npart of this patch. Attached v19 patch which has the changes for the\nsame.\n\nRegards,\nVignesh",
"msg_date": "Sat, 12 Mar 2022 12:59:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Sadly the cfbot is showing a patch conflict again. It's just a trivial\nconflict in the regression test schedule so I'm not going to update\nthe status but it would be good to rebase it so we continue to get\ncfbot testing.\n\n\n",
"msg_date": "Wed, 30 Mar 2022 11:53:52 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 11:53:52AM -0400, Greg Stark wrote:\n> Sadly the cfbot is showing a patch conflict again. It's just a trivial\n> conflict in the regression test schedule so I'm not going to update\n> the status but it would be good to rebase it so we continue to get\n> cfbot testing.\n\nDone. No changes.",
"msg_date": "Wed, 30 Mar 2022 11:03:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 12:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Mar 30, 2022 at 11:53:52AM -0400, Greg Stark wrote:\n> > Sadly the cfbot is showing a patch conflict again. It's just a trivial\n> > conflict in the regression test schedule so I'm not going to update\n> > the status but it would be good to rebase it so we continue to get\n> > cfbot testing.\n>\n> Done. No changes.\n\n+ if (chk_auxiliary_proc)\n+ ereport(WARNING,\n+ errmsg(\"PID %d is not a PostgreSQL server process\", pid));\n+ else\n+ ereport(WARNING,\n+ errmsg(\"PID %d is not a PostgreSQL backend process\", pid));\n\nThis doesn't look right to me. I don't think that PostgreSQL server\nprocesses are one kind of thing and PostgreSQL backend processes are\nanother kind of thing. I think they're two terms that are pretty\nnearly interchangeable, or maybe at best you want to argue that\nbackend processes are some subset of server processes. I don't see\nthis sort of thing adding any clarity.\n\n-static void\n+void\n set_backtrace(ErrorData *edata, int num_skip)\n {\n StringInfoData errtrace;\n@@ -978,7 +978,18 @@ set_backtrace(ErrorData *edata, int num_skip)\n \"backtrace generation is not supported by this installation\");\n #endif\n\n- edata->backtrace = errtrace.data;\n+ if (edata)\n+ edata->backtrace = errtrace.data;\n+ else\n+ {\n+ /*\n+ * LOG_SERVER_ONLY is used to make sure the backtrace is never\n+ * sent to client. We want to avoid messing up the other client\n+ * session.\n+ */\n+ elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n+ pfree(errtrace.data);\n+ }\n }\n\nThis looks like a grotty hack.\n\n- PGPROC *proc = BackendPidGetProc(pid);\n-\n- /*\n- * BackendPidGetProc returns NULL if the pid isn't valid; but by the time\n- * we reach kill(), a process for which we get a valid proc here might\n- * have terminated on its own. There's no way to acquire a lock on an\n- * arbitrary process to prevent that. But since so far all the callers of\n- * this mechanism involve some request for ending the process anyway, that\n- * it might end on its own first is not a problem.\n- *\n- * Note that proc will also be NULL if the pid refers to an auxiliary\n- * process or the postmaster (neither of which can be signaled via\n- * pg_signal_backend()).\n- */\n- if (proc == NULL)\n- {\n- /*\n- * This is just a warning so a loop-through-resultset will not abort\n- * if one backend terminated on its own during the run.\n- */\n- ereport(WARNING,\n- (errmsg(\"PID %d is not a PostgreSQL backend process\", pid)));\n+ PGPROC *proc;\n\n+ /* Users can only signal valid backend or an auxiliary process. */\n+ proc = CheckPostgresProcessId(pid, false, NULL);\n+ if (!proc)\n return SIGNAL_BACKEND_ERROR;\n- }\n\nIncidentally changing the behavior of pg_signal_backend() doesn't seem\nlike a great idea. We can do that as a separate commit, after\nconsidering whether documentation changes are needed. But it's not\nsomething that should get folded into a commit on another topic.\n\n+ /*\n+ * BackendPidGetProc() and AuxiliaryPidGetProc() return NULL if the pid\n+ * isn't valid; but by the time we reach kill(), a process for which we\n+ * get a valid proc here might have terminated on its own. There's no way\n+ * to acquire a lock on an arbitrary process to prevent that. But since\n+ * this mechanism is usually used to debug a backend or an auxiliary\n+ * process running and consuming lots of memory, that it might end on its\n+ * own first without logging the requested info is not a problem.\n+ */\n\nThis comment made a lot more sense where it used to be than it does\nwhere it is now. I think more work is required here than just cutting\nand pasting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Apr 2022 11:48:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 9:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 12:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Wed, Mar 30, 2022 at 11:53:52AM -0400, Greg Stark wrote:\n> > > Sadly the cfbot is showing a patch conflict again. It's just a trivial\n> > > conflict in the regression test schedule so I'm not going to update\n> > > the status but it would be good to rebase it so we continue to get\n> > > cfbot testing.\n> >\n> > Done. No changes.\n>\n> + if (chk_auxiliary_proc)\n> + ereport(WARNING,\n> + errmsg(\"PID %d is not a PostgreSQL server process\", pid));\n> + else\n> + ereport(WARNING,\n> + errmsg(\"PID %d is not a PostgreSQL backend process\", pid));\n>\n> This doesn't look right to me. I don't think that PostgreSQL server\n> processes are one kind of thing and PostgreSQL backend processes are\n> another kind of thing. I think they're two terms that are pretty\n> nearly interchangeable, or maybe at best you want to argue that\n> backend processes are some subset of server processes. I don't see\n> this sort of thing adding any clarity.\n\nI have changed it to \"PID %d is not a PostgreSQL server process\" which\nis similar to the existing warning message in\npg_log_backend_memory_contexts.\n\n> -static void\n> +void\n> set_backtrace(ErrorData *edata, int num_skip)\n> {\n> StringInfoData errtrace;\n> @@ -978,7 +978,18 @@ set_backtrace(ErrorData *edata, int num_skip)\n> \"backtrace generation is not supported by this installation\");\n> #endif\n>\n> - edata->backtrace = errtrace.data;\n> + if (edata)\n> + edata->backtrace = errtrace.data;\n> + else\n> + {\n> + /*\n> + * LOG_SERVER_ONLY is used to make sure the backtrace is never\n> + * sent to client. We want to avoid messing up the other client\n> + * session.\n> + */\n> + elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n> + pfree(errtrace.data);\n> + }\n> }\n>\n> This looks like a grotty hack.\n\nI have changed it so that the backtrace is set and returned to the\ncaller. The caller will take care of logging or setting it to the\nerror data context. Thoughts?\n\n> - PGPROC *proc = BackendPidGetProc(pid);\n> -\n> - /*\n> - * BackendPidGetProc returns NULL if the pid isn't valid; but by the time\n> - * we reach kill(), a process for which we get a valid proc here might\n> - * have terminated on its own. There's no way to acquire a lock on an\n> - * arbitrary process to prevent that. But since so far all the callers of\n> - * this mechanism involve some request for ending the process anyway, that\n> - * it might end on its own first is not a problem.\n> - *\n> - * Note that proc will also be NULL if the pid refers to an auxiliary\n> - * process or the postmaster (neither of which can be signaled via\n> - * pg_signal_backend()).\n> - */\n> - if (proc == NULL)\n> - {\n> - /*\n> - * This is just a warning so a loop-through-resultset will not abort\n> - * if one backend terminated on its own during the run.\n> - */\n> - ereport(WARNING,\n> - (errmsg(\"PID %d is not a PostgreSQL backend process\", pid)));\n> + PGPROC *proc;\n>\n> + /* Users can only signal valid backend or an auxiliary process. */\n> + proc = CheckPostgresProcessId(pid, false, NULL);\n> + if (!proc)\n> return SIGNAL_BACKEND_ERROR;\n> - }\n>\n> Incidentally changing the behavior of pg_signal_backend() doesn't seem\n> like a great idea. We can do that as a separate commit, after\n> considering whether documentation changes are needed. But it's not\n> something that should get folded into a commit on another topic.\n\nAgreed. I have kept the logic of pg_signal_backend as it is.\npg_log_backtrace and pg_log_backend_memory_contexts use the same\nfunctionality to check and send signal. I have added a new function\n\"CheckProcSendDebugSignal\" which has the common code implementation\nand will be called by both pg_log_backtrace and\npg_log_backend_memory_contexts.\n\n> + /*\n> + * BackendPidGetProc() and AuxiliaryPidGetProc() return NULL if the pid\n> + * isn't valid; but by the time we reach kill(), a process for which we\n> + * get a valid proc here might have terminated on its own. There's no way\n> + * to acquire a lock on an arbitrary process to prevent that. But since\n> + * this mechanism is usually used to debug a backend or an auxiliary\n> + * process running and consuming lots of memory, that it might end on its\n> + * own first without logging the requested info is not a problem.\n> + */\n>\n> This comment made a lot more sense where it used to be than it does\n> where it is now. I think more work is required here than just cutting\n> and pasting.\n\nThis function was called from pg_signal_backend earlier, this logic is\nnow moved to CheckProcSendDebugSignal which will be called only by\npg_log_backtrace and pg_log_backend_memory_contexts. This looks\nappropriate in CheckProcSendDebugSignal with slight change to specify\nit can consume lots of memory or a long running process.\nThoughts?\n\nAttached v20 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 6 Apr 2022 12:29:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 12:29 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Apr 5, 2022 at 9:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Mar 30, 2022 at 12:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Wed, Mar 30, 2022 at 11:53:52AM -0400, Greg Stark wrote:\n> > > > Sadly the cfbot is showing a patch conflict again. It's just a trivial\n> > > > conflict in the regression test schedule so I'm not going to update\n> > > > the status but it would be good to rebase it so we continue to get\n> > > > cfbot testing.\n> > >\n> > > Done. No changes.\n> >\n> > + if (chk_auxiliary_proc)\n> > + ereport(WARNING,\n> > + errmsg(\"PID %d is not a PostgreSQL server process\", pid));\n> > + else\n> > + ereport(WARNING,\n> > + errmsg(\"PID %d is not a PostgreSQL backend process\", pid));\n> >\n> > This doesn't look right to me. I don't think that PostgreSQL server\n> > processes are one kind of thing and PostgreSQL backend processes are\n> > another kind of thing. I think they're two terms that are pretty\n> > nearly interchangeable, or maybe at best you want to argue that\n> > backend processes are some subset of server processes. I don't see\n> > this sort of thing adding any clarity.\n>\n> I have changed it to \"PID %d is not a PostgreSQL server process\" which\n> is similar to the existing warning message in\n> pg_log_backend_memory_contexts.\n>\n> > -static void\n> > +void\n> > set_backtrace(ErrorData *edata, int num_skip)\n> > {\n> > StringInfoData errtrace;\n> > @@ -978,7 +978,18 @@ set_backtrace(ErrorData *edata, int num_skip)\n> > \"backtrace generation is not supported by this installation\");\n> > #endif\n> >\n> > - edata->backtrace = errtrace.data;\n> > + if (edata)\n> > + edata->backtrace = errtrace.data;\n> > + else\n> > + {\n> > + /*\n> > + * LOG_SERVER_ONLY is used to make sure the backtrace is never\n> > + * sent to client. We want to avoid messing up the other client\n> > + * session.\n> > + */\n> > + elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace.data);\n> > + pfree(errtrace.data);\n> > + }\n> > }\n> >\n> > This looks like a grotty hack.\n>\n> I have changed it so that the backtrace is set and returned to the\n> caller. The caller will take care of logging or setting it to the\n> error data context. Thoughts?\n>\n> > - PGPROC *proc = BackendPidGetProc(pid);\n> > -\n> > - /*\n> > - * BackendPidGetProc returns NULL if the pid isn't valid; but by the time\n> > - * we reach kill(), a process for which we get a valid proc here might\n> > - * have terminated on its own. There's no way to acquire a lock on an\n> > - * arbitrary process to prevent that. But since so far all the callers of\n> > - * this mechanism involve some request for ending the process anyway, that\n> > - * it might end on its own first is not a problem.\n> > - *\n> > - * Note that proc will also be NULL if the pid refers to an auxiliary\n> > - * process or the postmaster (neither of which can be signaled via\n> > - * pg_signal_backend()).\n> > - */\n> > - if (proc == NULL)\n> > - {\n> > - /*\n> > - * This is just a warning so a loop-through-resultset will not abort\n> > - * if one backend terminated on its own during the run.\n> > - */\n> > - ereport(WARNING,\n> > - (errmsg(\"PID %d is not a PostgreSQL backend process\", pid)));\n> > + PGPROC *proc;\n> >\n> > + /* Users can only signal valid backend or an auxiliary process. */\n> > + proc = CheckPostgresProcessId(pid, false, NULL);\n> > + if (!proc)\n> > return SIGNAL_BACKEND_ERROR;\n> > - }\n> >\n> > Incidentally changing the behavior of pg_signal_backend() doesn't seem\n> > like a great idea. We can do that as a separate commit, after\n> > considering whether documentation changes are needed. But it's not\n> > something that should get folded into a commit on another topic.\n>\n> Agreed. I have kept the logic of pg_signal_backend as it is.\n> pg_log_backtrace and pg_log_backend_memory_contexts use the same\n> functionality to check and send signal. I have added a new function\n> \"CheckProcSendDebugSignal\" which has the common code implementation\n> and will be called by both pg_log_backtrace and\n> pg_log_backend_memory_contexts.\n>\n> > + /*\n> > + * BackendPidGetProc() and AuxiliaryPidGetProc() return NULL if the pid\n> > + * isn't valid; but by the time we reach kill(), a process for which we\n> > + * get a valid proc here might have terminated on its own. There's no way\n> > + * to acquire a lock on an arbitrary process to prevent that. But since\n> > + * this mechanism is usually used to debug a backend or an auxiliary\n> > + * process running and consuming lots of memory, that it might end on its\n> > + * own first without logging the requested info is not a problem.\n> > + */\n> >\n> > This comment made a lot more sense where it used to be than it does\n> > where it is now. I think more work is required here than just cutting\n> > and pasting.\n>\n> This function was called from pg_signal_backend earlier, this logic is\n> now moved to CheckProcSendDebugSignal which will be called only by\n> pg_log_backtrace and pg_log_backend_memory_contexts. This looks\n> appropriate in CheckProcSendDebugSignal with slight change to specify\n> it can consume lots of memory or a long running process.\n> Thoughts?\n>\n> Attached v20 patch has the changes for the same.\n\nThe patch was not applying on top of HEAD. Attached patch is a rebased\nversion on top of HEAD.\n\nRegards,\nVignesh",
"msg_date": "Thu, 14 Apr 2022 10:33:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "At Thu, 14 Apr 2022 10:33:50 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> On Wed, Apr 6, 2022 at 12:29 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Apr 5, 2022 at 9:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > This looks like a grotty hack.\n> >\n> > I have changed it so that the backtrace is set and returned to the\n> > caller. The caller will take care of logging or setting it to the\n> > error data context. Thoughts?\n\nThe point here I think is that the StringInfo is useless here in the\nfirst place. In addition to that, it is outside the use of StringInfo\nhammering in a string pointer into it.\n\nBy the way, as Andres suggested very early in this thread, backtrace()\ncan be called from signal handlers with some preparation. So we can\nrun backtrace() in HandleLogBacktraceInterrupt() and backtrace_symbols\nin ProcessLogBacktraceInterrupt(). This makes this a bit complex, but\nthe outcome should be more informative.\n\n\n> > > Incidentally changing the behavior of pg_signal_backend() doesn't seem\n> > > like a great idea. We can do that as a separate commit, after\n> > > considering whether documentation changes are needed. But it's not\n> > > something that should get folded into a commit on another topic.\n> >\n> > Agreed. I have kept the logic of pg_signal_backend as it is.\n> > pg_log_backtrace and pg_log_backend_memory_contexts use the same\n> > functionality to check and send signal. I have added a new function\n> > \"CheckProcSendDebugSignal\" which has the common code implementation\n> > and will be called by both pg_log_backtrace and\n> > pg_log_backend_memory_contexts.\n\nThe name looks too specific for what it actually does, and\nrather doesn't manifest what it does.\nSendProcSignalBackendOrAuxproc() might be more descriptive.\n\nOr we can provide BackendOrAuxiliaryPidGetProc(int pid, BackendId &backendid).\n\n> > > + /*\n> > > + * BackendPidGetProc() and AuxiliaryPidGetProc() return NULL if the pid\n> > > + * isn't valid; but by the time we reach kill(), a process for which we\n> > > + * get a valid proc here might have terminated on its own. There's no way\n> > > + * to acquire a lock on an arbitrary process to prevent that. But since\n> > > + * this mechanism is usually used to debug a backend or an auxiliary\n> > > + * process running and consuming lots of memory, that it might end on its\n> > > + * own first without logging the requested info is not a problem.\n> > > + */\n> > >\n> > > This comment made a lot more sense where it used to be than it does\n> > > where it is now. I think more work is required here than just cutting\n> > > and pasting.\n> >\n> > This function was called from pg_signal_backend earlier, this logic is\n> > now moved to CheckProcSendDebugSignal which will be called only by\n> > pg_log_backtrace and pg_log_backend_memory_contexts. This looks\n> > appropriate in CheckProcSendDebugSignal with slight change to specify\n> > it can consume lots of memory or a long running process.\n> > Thoughts?\n\nFor example, do you see the following part correct as well for\npg_log_backtrace()?\n\n> + * this mechanism is usually used to debug a backend or an auxiliary\n> + * process running and consuming lots of memory, that it might end on its\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Apr 2022 15:19:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 11:49 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 14 Apr 2022 10:33:50 +0530, vignesh C <vignesh21@gmail.com> wrote in\n> > On Wed, Apr 6, 2022 at 12:29 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 5, 2022 at 9:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > This looks like a grotty hack.\n> > >\n> > > I have changed it so that the backtrace is set and returned to the\n> > > caller. The caller will take care of logging or setting it to the\n> > > error data context. Thoughts?\n>\n> The point here I think is that the StringInfo is useless here in the\n> first place. In addition to that, it is outside the use of StringInfo\n> hammering in a string pointer into it.\n\nModified\n\n> By the way, as Andres suggested very early in this thread, backtrace()\n> can be called from signal handlers with some preparation. So we can\n> run backtrace() in HandleLogBacktraceInterrupt() and backtrace_symbols\n> in ProcessLogBacktraceInterrupt(). This makes this a bit complex, but\n> the outcome should be more informative.\n\nUsing that approach we could only send the trace to stderr; when we\nare trying to log it to the logs it might result in deadlock, as the\nsignal handler can be called while malloc's lock is held. We dropped that\napproach to avoid the issues pointed out in [1].\n\n> > > > Incidentally changing the behavior of pg_signal_backend() doesn't seem\n> > > > like a great idea. We can do that as a separate commit, after\n> > > > considering whether documentation changes are needed. But it's not\n> > > > something that should get folded into a commit on another topic.\n> > >\n> > > Agreed. I have kept the logic of pg_signal_backend as it is.\n> > > pg_log_backtrace and pg_log_backend_memory_contexts use the same\n> > > functionality to check and send signal. I have added a new function\n> > > \"CheckProcSendDebugSignal\" which has the common code implementation\n> > > and will be called by both pg_log_backtrace and\n> > > pg_log_backend_memory_contexts.\n>\n> The name looks too specific for what it actually does, and\n> rather doesn't manifest what it does.\n> SendProcSignalBackendOrAuxproc() might be more descriptive.\n>\n> Or we can provide BackendOrAuxiliaryPidGetProc(int pid, BackendId &backendid).\n\nModified it to SendProcSignalBackendOrAuxproc\n\n> > > > + /*\n> > > > + * BackendPidGetProc() and AuxiliaryPidGetProc() return NULL if the pid\n> > > > + * isn't valid; but by the time we reach kill(), a process for which we\n> > > > + * get a valid proc here might have terminated on its own. There's no way\n> > > > + * to acquire a lock on an arbitrary process to prevent that. But since\n> > > > + * this mechanism is usually used to debug a backend or an auxiliary\n> > > > + * process running and consuming lots of memory, that it might end on its\n> > > > + * own first without logging the requested info is not a problem.\n> > > > + */\n> > > >\n> > > > This comment made a lot more sense where it used to be than it does\n> > > > where it is now. I think more work is required here than just cutting\n> > > > and pasting.\n> > >\n> > > This function was called from pg_signal_backend earlier, this logic is\n> > > now moved to CheckProcSendDebugSignal which will be called only by\n> > > pg_log_backtrace and pg_log_backend_memory_contexts. This looks\n> > > appropriate in CheckProcSendDebugSignal with slight change to specify\n> > > it can consume lots of memory or a long running process.\n> > > Thoughts?\n>\n> For example, do you see the following part correct as well for\n> pg_log_backtrace()?\n>\n> > + * this mechanism is usually used to debug a backend or an auxiliary\n> > + * process running and consuming lots of memory, that it might end on its\n\nI felt it was ok since I mentioned it as \"consuming lots of memory or\na long running process\", let me know if you want to change it to\nsomething else, I will change it.\n\n[1] - https://www.postgresql.org/message-id/1361750.1606795285%40sss.pgh.pa.us\n\nThe attached v21 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 18 Apr 2022 09:09:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 9:10 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> The attached v21 patch has the changes for the same.\n\nThanks for the patch. Here are some comments:\n\n1. I think there's a fundamental problem with this patch, that is, it\nprints the backtrace when the interrupt is processed but not when\ninterrupt is received. This way, the backends/aux processes will\nalways have backtraces in their main loops or some processing loops\nwherever CHECK_FOR_INTERRUPTS() is called [1]. What we need is\nsomething more useful. What I think is the better way is to capture\nthe backtrace, call set_backtrace(), when the interrupt arrives and\nthen if we don't want to print it in the interrupt handler, perhaps\nsave it and print it in the next CFI() cycle. See some realistic and\nuseful backtrace [2] when I emitted the backtrace in the interrupt\nhandler.\n\n2.\n+ specified process ID. This function can send the request to\n+ backends and auxiliary processes except the logger and statistics\n+ collector. The backtraces will be logged at <literal>LOG</literal>\nThere's no statistics collector process any more. Please remove it.\nAlso remove 'the' before 'logger' to be in sync with\npg_log_backend_memory_contexts docs.\n\n3.\n+ configuration set (See <xref linkend=\"runtime-config-logging\"/> for\nLowercase 'see' to be in sync with pg_log_backend_memory_contexts docs.\n\n4.\n+ You can obtain the file name and line number from the logged\ndetails by using\nHow about 'One can obtain'?\n\n5.\n+postgres=# select pg_log_backtrace(pg_backend_pid());\n+For example:\n+<screen>\n+2021-01-27 11:33:50.247 IST [111735] LOG: current backtrace:\nA more realistic example would be better here, say walwriter or\ncheckpointer or some other process backtrace will be good instead of a\nbackend logging its pg_log_backtrace()'s call stack?\n\n6.\nDon't we need the backtrace of walreceiver? I think it'll be good to\nhave in ProcessWalRcvInterrupts().\n\n7.\n+ errmsg_internal(\"logging backtrace of PID %d\", MyProcPid));\n+ elog(LOG_SERVER_ONLY, \"current backtrace:%s\", errtrace);\nCan we get rid of \"current backtrace:%s\" and have something similar to\nProcessLogMemoryContextInterrupt() like below?\n\nerrmsg(\"logging backtrace of PID %d\", MyProcPid)));\nerrmsg(\"%s\", errtrace);\n\n[1]\n2022-11-10 09:55:44.691 UTC [1346443] LOG: logging backtrace of PID 1346443\n2022-11-10 09:55:44.691 UTC [1346443] LOG: current backtrace:\n postgres: checkpointer (set_backtrace+0x46) [0x5640df9849c6]\n postgres: checkpointer (ProcessLogBacktraceInterrupt+0x16)\n[0x5640df7fd326]\n postgres: checkpointer (CheckpointerMain+0x1a3) [0x5640df77f553]\n postgres: checkpointer (AuxiliaryProcessMain+0xc9) [0x5640df77d5e9]\n postgres: checkpointer (+0x436a9a) [0x5640df783a9a]\n postgres: checkpointer (PostmasterMain+0xd57) [0x5640df7877e7]\n postgres: checkpointer (main+0x20f) [0x5640df4a6f1f]\n /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f4b9fe9dd90]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f4b9fe9de40]\n postgres: checkpointer (_start+0x25) [0x5640df4a7265]\n2022-11-10 09:56:05.032 UTC [1346448] LOG: logging backtrace of PID 1346448\n2022-11-10 09:56:05.032 UTC [1346448] LOG: current backtrace:\n postgres: logical replication launcher (set_backtrace+0x46)\n[0x5640df9849c6]\n postgres: logical replication launcher\n(ProcessLogBacktraceInterrupt+0x16) [0x5640df7fd326]\n postgres: logical replication launcher\n(ApplyLauncherMain+0x11b) [0x5640df7a253b]\n postgres: logical replication launcher\n(StartBackgroundWorker+0x220) [0x5640df77e270]\n postgres: logical replication launcher (+0x4369f4) [0x5640df7839f4]\n postgres: logical replication launcher (+0x43771d) [0x5640df78471d]\n /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f4b9feb6520]\n /lib/x86_64-linux-gnu/libc.so.6(__select+0xbd) [0x7f4b9ff8f74d]\n postgres: logical replication launcher (+0x43894d) [0x5640df78594d]\n postgres: logical replication launcher (PostmasterMain+0xcb5)\n[0x5640df787745]\n postgres: logical replication launcher (main+0x20f) [0x5640df4a6f1f]\n /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f4b9fe9dd90]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f4b9fe9de40]\n postgres: logical replication launcher (_start+0x25) [0x5640df4a7265]\n\n[2]\n2022-11-10 10:25:20.021 UTC [1351953] LOG: current backtrace:\n postgres: ubuntu postgres [local] INSERT(set_backtrace+0x46)\n[0x55c60ae069b6]\n postgres: ubuntu postgres [local]\nINSERT(procsignal_sigusr1_handler+0x1d8) [0x55c60ac7f4f8]\n /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f75a395c520]\n /lib/x86_64-linux-gnu/libc.so.6(+0x91104) [0x7f75a39ab104]\n /lib/x86_64-linux-gnu/libc.so.6(+0x9ccf8) [0x7f75a39b6cf8]\n postgres: ubuntu postgres [local] INSERT(PGSemaphoreLock+0x22)\n[0x55c60abfb1f2]\n postgres: ubuntu postgres [local] INSERT(LWLockAcquire+0x174)\n[0x55c60ac8f384]\n postgres: ubuntu postgres [local] INSERT(+0x1fb778) [0x55c60a9ca778]\n postgres: ubuntu postgres [local] INSERT(+0x1fb8d3) [0x55c60a9ca8d3]\n postgres: ubuntu postgres [local]\nINSERT(XLogInsertRecord+0x4e4) [0x55c60a9cb0c4]\n postgres: ubuntu postgres [local] INSERT(XLogInsert+0xcf)\n[0x55c60a9d167f]\n postgres: ubuntu postgres [local] INSERT(heap_insert+0x2ca)\n[0x55c60a97168a]\n postgres: ubuntu postgres [local] INSERT(+0x1ab14a) [0x55c60a97a14a]\n postgres: ubuntu postgres [local] INSERT(+0x33dcab) [0x55c60ab0ccab]\n postgres: ubuntu postgres [local] INSERT(+0x33f10d) [0x55c60ab0e10d]\n postgres: ubuntu postgres [local]\nINSERT(standard_ExecutorRun+0x102) [0x55c60aadfdb2]\n postgres: ubuntu postgres [local] INSERT(+0x4d48d4) [0x55c60aca38d4]\n postgres: ubuntu postgres [local] INSERT(PortalRun+0x27d)\n[0x55c60aca47ed]\n postgres: ubuntu postgres [local] INSERT(+0x4d180f) [0x55c60aca080f]\n postgres: ubuntu postgres [local] INSERT(PostgresMain+0x1eb4)\n[0x55c60aca2b94]\n postgres: ubuntu postgres [local] INSERT(+0x4396f8) [0x55c60ac086f8]\n postgres: ubuntu postgres [local] INSERT(PostmasterMain+0xcb5)\n[0x55c60ac09745]\n postgres: ubuntu postgres [local] INSERT(main+0x20f) [0x55c60a928f1f]\n /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f75a3943d90]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f75a3943e40]\n postgres: ubuntu postgres [local] INSERT(_start+0x25) [0x55c60a929265]\n2022-11-10 10:25:20.021 UTC [1351953] STATEMENT: insert into foo\nselect * from generate_series(1, 1000000000);\n2022-11-10 10:25:20.521 UTC [1351953] LOG: logging backtrace of PID 1351953\n2022-11-10 10:25:20.521 UTC [1351953] LOG: current backtrace:\n postgres: ubuntu postgres [local] INSERT(set_backtrace+0x46)\n[0x55c60ae069b6]\n postgres: ubuntu postgres [local]\nINSERT(procsignal_sigusr1_handler+0x1d8) [0x55c60ac7f4f8]\n /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f75a395c520]\n postgres: ubuntu postgres [local]\nINSERT(XLogInsertRecord+0x976) [0x55c60a9cb556]\n postgres: ubuntu postgres [local] INSERT(XLogInsert+0xcf)\n[0x55c60a9d167f]\n postgres: ubuntu postgres [local] INSERT(heap_insert+0x2ca)\n[0x55c60a97168a]\n postgres: ubuntu postgres [local] INSERT(+0x1ab14a) [0x55c60a97a14a]\n postgres: ubuntu postgres [local] INSERT(+0x33dcab) [0x55c60ab0ccab]\n postgres: ubuntu postgres [local] INSERT(+0x33f10d) [0x55c60ab0e10d]\n postgres: ubuntu postgres [local]\nINSERT(standard_ExecutorRun+0x102) [0x55c60aadfdb2]\n postgres: ubuntu postgres [local] INSERT(+0x4d48d4) [0x55c60aca38d4]\n postgres: ubuntu postgres [local] INSERT(PortalRun+0x27d)\n[0x55c60aca47ed]\n postgres: ubuntu postgres [local] INSERT(+0x4d180f) [0x55c60aca080f]\n postgres: ubuntu postgres [local] INSERT(PostgresMain+0x1eb4)\n[0x55c60aca2b94]\n postgres: ubuntu postgres [local] INSERT(+0x4396f8) [0x55c60ac086f8]\n postgres: ubuntu postgres [local] INSERT(PostmasterMain+0xcb5)\n[0x55c60ac09745]\n postgres: ubuntu postgres [local] INSERT(main+0x20f) [0x55c60a928f1f]\n /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f75a3943d90]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f75a3943e40]\n postgres: ubuntu postgres [local] INSERT(_start+0x25) [0x55c60a929265]\n2022-11-10 10:25:20.521 UTC [1351953] STATEMENT: insert into foo\nselect * from generate_series(1, 1000000000);\n2022-11-10 10:25:21.050 UTC [1351953] LOG: logging backtrace of PID 1351953\n2022-11-10 10:25:21.050 UTC [1351953] LOG: current backtrace:\n postgres: ubuntu postgres [local] INSERT(set_backtrace+0x46)\n[0x55c60ae069b6]\n postgres: ubuntu postgres [local]\nINSERT(procsignal_sigusr1_handler+0x1d8) [0x55c60ac7f4f8]\n /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f75a395c520]\n /lib/x86_64-linux-gnu/libc.so.6(fdatasync+0x17) [0x7f75a3a35af7]\n postgres: ubuntu postgres [local] INSERT(+0x1fb384) [0x55c60a9ca384]\n postgres: ubuntu postgres [local] INSERT(+0x1fb7ef) [0x55c60a9ca7ef]\n postgres: ubuntu postgres [local] INSERT(+0x1fb8d3) [0x55c60a9ca8d3]\n postgres: ubuntu postgres [local]\nINSERT(XLogInsertRecord+0x4e4) [0x55c60a9cb0c4]\n postgres: ubuntu postgres [local] INSERT(XLogInsert+0xcf)\n[0x55c60a9d167f]\n postgres: ubuntu postgres [local] INSERT(heap_insert+0x2ca)\n[0x55c60a97168a]\n postgres: ubuntu postgres [local] INSERT(+0x1ab14a) [0x55c60a97a14a]\n postgres: ubuntu postgres [local] INSERT(+0x33dcab) [0x55c60ab0ccab]\n postgres: ubuntu postgres [local] INSERT(+0x33f10d) [0x55c60ab0e10d]\n postgres: ubuntu postgres [local]\nINSERT(standard_ExecutorRun+0x102) [0x55c60aadfdb2]\n postgres: ubuntu postgres [local] INSERT(+0x4d48d4) [0x55c60aca38d4]\n postgres: ubuntu postgres [local] INSERT(PortalRun+0x27d)\n[0x55c60aca47ed]\n postgres: ubuntu postgres [local] INSERT(+0x4d180f) [0x55c60aca080f]\n postgres: ubuntu postgres [local] INSERT(PostgresMain+0x1eb4)\n[0x55c60aca2b94]\n postgres: ubuntu postgres [local] INSERT(+0x4396f8) [0x55c60ac086f8]\n postgres: ubuntu postgres [local] INSERT(PostmasterMain+0xcb5)\n[0x55c60ac09745]\n postgres: ubuntu postgres [local] INSERT(main+0x20f) [0x55c60a928f1f]\n /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f75a3943d90]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f75a3943e40]\n postgres: ubuntu postgres [local] INSERT(_start+0x25) [0x55c60a929265]\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 10 Nov 2022 15:56:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "At Thu, 10 Nov 2022 15:56:35 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Apr 18, 2022 at 9:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > The attached v21 patch has the changes for the same.\n> \n> Thanks for the patch. Here are some comments:\n> \n> 1. I think there's a fundamental problem with this patch, that is, it\n> prints the backtrace when the interrupt is processed but not when\n> interrupt is received. This way, the backends/aux processes will\n\nYeah, but the obstacle was backtrace(3) itself. Andres pointed [1]\nthat that may be doable with some care (and I agree to the opinion).\nAFAIS no discussions followed and things have been to the current\nshape since then.\n\n\n[1] https://www.postgresql.org/message-id/20201201032649.aekv5b5dicvmovf4%40alap3.anarazel.de\n| > Surely this is *utterly* unsafe. You can't do that sort of stuff in\n| > a signal handler.\n| \n| That's of course true for the current implementation - but I don't think\n| it's a fundamental constraint. With a bit of care backtrace() and\n| backtrace_symbols() itself can be signal safe:\n\nman 3 backtrace\n> * backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n> itly, but they are part of libgcc, which gets loaded dynamically\n> when first used. Dynamic loading usually triggers a call to mal‐\n> loc(3). If you need certain calls to these two functions to not\n> allocate memory (in signal handlers, for example), you need to make\n> sure libgcc is loaded beforehand.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Nov 2022 11:29:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 7:59 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 10 Nov 2022 15:56:35 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Mon, Apr 18, 2022 at 9:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > The attached v21 patch has the changes for the same.\n> >\n> > Thanks for the patch. Here are some comments:\n> >\n> > 1. I think there's a fundamental problem with this patch, that is, it\n> > prints the backtrace when the interrupt is processed but not when\n> > interrupt is received. This way, the backends/aux processes will\n>\n> Yeah, but the obstacle was backtrace(3) itself. Andres pointed [1]\n> that that may be doable with some care (and I agree to the opinion).\n> AFAIS no discussions followed and things have been to the current\n> shape since then.\n>\n>\n> [1] https://www.postgresql.org/message-id/20201201032649.aekv5b5dicvmovf4%40alap3.anarazel.de\n> | > Surely this is *utterly* unsafe. You can't do that sort of stuff in\n> | > a signal handler.\n> |\n> | That's of course true for the current implementation - but I don't think\n> | it's a fundamental constraint. With a bit of care backtrace() and\n> | backtrace_symbols() itself can be signal safe:\n>\n> man 3 backtrace\n> > * backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n> > itly, but they are part of libgcc, which gets loaded dynamically\n> > when first used. Dynamic loading usually triggers a call to mal‐\n> > loc(3). If you need certain calls to these two functions to not\n> > allocate memory (in signal handlers, for example), you need to make\n> > sure libgcc is loaded beforehand.\n\nI missed that part. Thanks for pointing it out. 
The\nbacktrace_symbols() seems to be returning a malloc'ed array [1],\nmeaning it can't be used in signal handlers (if used, it can cause\ndeadlocks as per [2]) and existing set_backtrace() is using it.\nTherefore, we need to either change set_backtrace() to use\nbacktrace_symbols_fd() instead of backtrace_symbols() or introduce\nanother function for the purpose of this feature. If we do that, then\nwe can think of preloading of libgcc which makes backtrace(),\nbacktrace_symbols_fd() safe to use in signal handlers.\n\nLooks like we're not loading libgcc explicitly now into any of\npostgres processes, please correct me if I'm wrong here. If we're not\nloading it right now, is it acceptable to load libgcc into every\npostgres process for the sake of this feature?\n\n[1] https://linux.die.net/man/3/backtrace_symbols\n[2] https://stackoverflow.com/questions/40049751/malloc-inside-linux-signal-handler-cause-deadlock\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 11 Nov 2022 18:04:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 6:04 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Nov 11, 2022 at 7:59 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 10 Nov 2022 15:56:35 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > On Mon, Apr 18, 2022 at 9:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > The attached v21 patch has the changes for the same.\n> > >\n> > > Thanks for the patch. Here are some comments:\n> > >\n> > > 1. I think there's a fundamental problem with this patch, that is, it\n> > > prints the backtrace when the interrupt is processed but not when\n> > > interrupt is received. This way, the backends/aux processes will\n> >\n> > Yeah, but the obstacle was backtrace(3) itself. Andres pointed [1]\n> > that that may be doable with some care (and I agree to the opinion).\n> > AFAIS no discussions followed and things have been to the current\n> > shape since then.\n> >\n> >\n> > [1] https://www.postgresql.org/message-id/20201201032649.aekv5b5dicvmovf4%40alap3.anarazel.de\n> > | > Surely this is *utterly* unsafe. You can't do that sort of stuff in\n> > | > a signal handler.\n> > |\n> > | That's of course true for the current implementation - but I don't think\n> > | it's a fundamental constraint. With a bit of care backtrace() and\n> > | backtrace_symbols() itself can be signal safe:\n> >\n> > man 3 backtrace\n> > > * backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n> > > itly, but they are part of libgcc, which gets loaded dynamically\n> > > when first used. Dynamic loading usually triggers a call to mal‐\n> > > loc(3). If you need certain calls to these two functions to not\n> > > allocate memory (in signal handlers, for example), you need to make\n> > > sure libgcc is loaded beforehand.\n>\n> I missed that part. Thanks for pointing it out. 
The\n> backtrace_symbols() seems to be returning a malloc'ed array [1],\n> meaning it can't be used in signal handlers (if used, it can cause\n> deadlocks as per [2]) and existing set_backtrace() is using it.\n> Therefore, we need to either change set_backtrace() to use\n> backtrace_symbols_fd() instead of backtrace_symbols() or introduce\n> another function for the purpose of this feature. If we do that, then\n> we can think of preloading of libgcc which makes backtrace(),\n> backtrace_symbols_fd() safe to use in signal handlers.\n>\n> Looks like we're not loading libgcc explicitly now into any of\n> postgres processes, please correct me if I'm wrong here. If we're not\n> loading it right now, is it acceptable to load libgcc into every\n> postgres process for the sake of this feature?\n>\n> [1] https://linux.die.net/man/3/backtrace_symbols\n> [2] https://stackoverflow.com/questions/40049751/malloc-inside-linux-signal-handler-cause-deadlock\n\nI took a stab at it after discussing with Vignesh off-list. Here's\nwhat I've done:\n\n1. Make backtrace functions signal safe i.e. ensure libgcc is loaded,\njust in case if it's not, by calling backtrace() function during the\nearly life of a process after SIGUSR1 signal handler and other signal\nhandlers are established.\n2. Emit backtrace to stderr within the signal handler itself to keep\nthings simple so that we don't need to allocate dynamic memory inside\nthe signal handler.\n3. Split the patch into 3 for ease of review, 0001 does preparatory\nstuff, 0002 adds the function, 0003 adds the docs and tests.\n\nI'm attaching the v22 patch set for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 30 Nov 2022 11:43:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 11:43 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I'm attaching the v22 patch set for further review.\n\nNeeded a rebase due to 216a784829c2c5f03ab0c43e009126cbb819e9b2.\nAttaching v23 patch set for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 11 Jan 2023 20:14:13 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "> On 11 Jan 2023, at 15:44, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Wed, Nov 30, 2022 at 11:43 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> \n>> I'm attaching the v22 patch set for further review.\n> \n> Needed a rebase due to 216a784829c2c5f03ab0c43e009126cbb819e9b2.\n> Attaching v23 patch set for further review.\n\nThis thread has stalled for well over 6 months with the patch going from CF to\nCF. From skimming the thread it seems that a lot of the details have been\nironed out with most (all?) objections addressed. Is any committer interested\nin picking this up? If not we should probably mark it returned with feedback.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 20 Jul 2023 16:52:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 8:22 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 11 Jan 2023, at 15:44, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Nov 30, 2022 at 11:43 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> I'm attaching the v22 patch set for further review.\n> >\n> > Needed a rebase due to 216a784829c2c5f03ab0c43e009126cbb819e9b2.\n> > Attaching v23 patch set for further review.\n>\n> This thread has stalled for well over 6 months with the patch going from CF to\n> CF. From skimming the thread it seems that a lot of the details have been\n> ironed out with most (all?) objections addressed. Is any committer interested\n> in picking this up? If not we should probably mark it returned with feedback.\n\nRebase needed due to function oid clash. Picked the new OID with the\nhelp of src/include/catalog/unused_oids. PSA v24 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 5 Nov 2023 01:49:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nI'm not sure if this actually still needs review, but it's marked as such in the CF app, so I'm reviewing it in the hopes of moving it along.\r\n\r\nThe feature works as documented. The docs say \"This function is supported only if PostgreSQL was built with the ability to capture backtraces, otherwise it will emit a warning.\" I'm not sure what building with the ability to capture backtraces is, but it worked with no special config on my machine. I don't have much C experience, so I don't know if this is something that should have more context in a README somewhere, or if it's likely someone who's interested in this will already know what to do. The code looks fine to me.\r\n\r\nI tried running make installcheck-world, but it failed on 17 tests. However, master also fails here on 17 tests. A normal make check-world passes on both branches. I assume I'm doing something wrong and would appreciate any pointers [0].\r\n\r\nBased on my review and Daniel's comment above, I'm marking this as Ready for Committer.\r\n\r\nThanks,\r\nMaciek\r\n\r\n[0] My regression.diffs has errors like\r\n\r\n```\r\ndiff -U3 /home/maciek/code/aux/postgres/src/test/regress/expected/copyselect.out /home/maciek/code/aux/postgres/src/test/regress/results/copyselect.out\r\n--- /home/maciek/code/aux/postgres/src/test/regress/expected/copyselect.out 2023-01-02 12:21:10.792646101 -0800\r\n+++ /home/maciek/code/aux/postgres/src/test/regress/results/copyselect.out 2024-01-14 15:04:07.513887866 -0800\r\n@@ -131,11 +131,6 @@\r\n 2\r\n ?column? \r\n ----------\r\n- 3\r\n-(1 row)\r\n-\r\n- ?column? 
\r\n-----------\r\n 4\r\n (1 row)\r\n```\r\n\r\nand\r\n\r\n```\r\ndiff -U3 /home/maciek/code/aux/postgres/src/test/regress/expected/create_table.out /home/maciek/code/aux/postgres/src/test/regress/results/create_table.out\r\n--- /home/maciek/code/aux/postgres/src/test/regress/expected/create_table.out 2023-10-02 22:14:02.583377845 -0700\r\n+++ /home/maciek/code/aux/postgres/src/test/regress/results/create_table.out 2024-01-14 15:04:09.037890710 -0800\r\n@@ -854,8 +854,6 @@\r\n b | integer | | not null | 1 | plain | | \r\n Partition of: parted FOR VALUES IN ('b')\r\n Partition constraint: ((a IS NOT NULL) AND (a = 'b'::text))\r\n-Not-null constraints:\r\n- \"part_b_b_not_null\" NOT NULL \"b\" (local, inherited)\r\n \r\n -- Both partition bound and partition key in describe output\r\n \\d+ part_c\r\n``` \r\n\r\nI'm on Ubuntu 22.04 with Postgres 11, 12, 13, and 16 installed from PGDG.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Sun, 14 Jan 2024 23:20:48 +0000",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sun, 5 Nov 2023 at 01:49, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jul 20, 2023 at 8:22 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 11 Jan 2023, at 15:44, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 30, 2022 at 11:43 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >>\n> > >> I'm attaching the v22 patch set for further review.\n> > >\n> > > Needed a rebase due to 216a784829c2c5f03ab0c43e009126cbb819e9b2.\n> > > Attaching v23 patch set for further review.\n> >\n> > This thread has stalled for well over 6 months with the patch going from CF to\n> > CF. From skimming the thread it seems that a lot of the details have been\n> > ironed out with most (all?) objections addressed. Is any committer interested\n> > in picking this up? If not we should probably mark it returned with feedback.\n>\n> Rebase needed due to function oid clash. Picked the new OID with the\n> help of src/include/catalog/unused_oids. PSA v24 patch set.\n\nRebase needed due to changes in parallel_schedule. PSA v25 patch set.\n\nRegards,\nVignesh",
"msg_date": "Fri, 26 Jan 2024 07:56:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On 2022-Jan-27, vignesh C wrote:\n\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index 0ee6974f1c..855ccc8902 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n\n> + You can get the file name and line number from the logged details by using\n> + gdb/addr2line in linux platforms (users must ensure gdb/addr2line is\n> + already installed).\n\nThis doesn't read great. I mean, what is gdb/addr2line? I think you\nmean either GDB or addr2line; this could be clearer.\n\n> diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\n> index 7402696986..522a525741 100644\n> --- a/src/backend/utils/error/elog.c\n> +++ b/src/backend/utils/error/elog.c\n> @@ -944,9 +943,10 @@ errbacktrace(void)\n> * Compute backtrace data and add it to the supplied ErrorData. num_skip\n> * specifies how many inner frames to skip. Use this to avoid showing the\n> * internal backtrace support functions in the backtrace. This requires that\n> - * this and related functions are not inlined.\n> + * this and related functions are not inlined. If the edata pointer is valid,\n> + * backtrace information will be set in edata.\n> */\n> -static void\n> +void\n> set_backtrace(ErrorData *edata, int num_skip)\n> {\n> \tStringInfoData errtrace;\n\nThis seems like a terrible API choice, and the comment change is no\ngood. I suggest you need to create a function that deals only with a\nStringInfo, maybe\n append_backtrace(StringInfo buf, int num_skip)\nwhich is used by set_backtrace to print the backtrace in\nedata->backtrace, and a new function log_backtrace() that does the\nereport(LOG_SERVER_ONLY) thing.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Before you were born your parents weren't as boring as they are now. They\ngot that way paying your bills, cleaning up your room and listening to you\ntell them how idealistic you are.\" -- Charles J. Sykes' advice to teenagers\n\n\n",
"msg_date": "Fri, 26 Jan 2024 11:41:04 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 4:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n\nThanks for reviewing.\n\n> > + You can get the file name and line number from the logged details by using\n> > + gdb/addr2line in linux platforms (users must ensure gdb/addr2line is\n> > + already installed).\n>\n> This doesn't read great. I mean, what is gdb/addr2line? I think you\n> mean either GDB or addr2line; this could be clearer.\n\nWrapped them in <productname> tag and reworded the comment a bit.\n\n> > * internal backtrace support functions in the backtrace. This requires that\n> > - * this and related functions are not inlined.\n> > + * this and related functions are not inlined. If the edata pointer is valid,\n> > + * backtrace information will be set in edata.\n> > */\n> > -static void\n> > +void\n> > set_backtrace(ErrorData *edata, int num_skip)\n> > {\n> > StringInfoData errtrace;\n>\n> This seems like a terrible API choice, and the comment change is no\n> good. I suggest you need to create a function that deals only with a\n> StringInfo, maybe\n> append_backtrace(StringInfo buf, int num_skip)\n> which is used by set_backtrace to print the backtrace in\n> edata->backtrace, and a new function log_backtrace() that does the\n> ereport(LOG_SERVER_ONLY) thing.\n\nYou probably were looking at v21, the above change isn't there in\nversions after that. Can you please review the latest version v26\nattached here?\n\nWe might want this patch extended to the newly added walsummarizer\nprocess which I'll do so in the next version.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 26 Jan 2024 23:58:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 11:58:00PM +0530, Bharath Rupireddy wrote:\n> You probably were looking at v21, the above change isn't there in\n> versions after that. Can you please review the latest version v26\n> attached here?\n> \n> We might want this patch extended to the newly added walsummarizer\n> process which I'll do so in the next version.\n\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n+bool\n+SendProcSignalBackendOrAuxproc(int pid, ProcSignalReason reason)\n+{\n\nLooking at 0001. This API is a thin wrapper of SendProcSignal(), that\njust checks that we have actually a process before using it.\nShouldn't it be in procsignal.c.\n\nNow looking at 0002, this new routine is used in one place. Seeing\nthat we have something similar in pgstatfuncs.c, wouldn't it be\nbetter, instead of englobing SendProcSignal(), to have one routine\nthat's able to return a PID for a PostgreSQL process?\n\nAll the backtrace-related handling is stored in procsignal.c, could it\nbe cleaner to move the internals into a separate, new file, like\nprocbacktrace.c or something similar?\n\nLoadBacktraceFunctions() is one more thing we need to set up in all\nauxiliary processes. That's a bit sad to require that in all these\nplaces, and we may forget to add it. Could we put more efforts in\ncentralizing these initializations? The auxiliary processes are one\nportion of the problem, and getting stack traces for backend processes\nwould be useful on its own. Another suggestion that I'd propose to\nsimplify the patch would be to focus solely on backends for now, and\ndo something for auxiliary process later on. 
If you do that, the\nstrange layer with BackendOrAuxproc() is not a problem anymore, as it\nwould be left out for now.\n\n+<screen>\n+logging current backtrace of process with PID 3499242:\n+postgres: ubuntu postgres [local] INSERT(+0x61a355)[0x5559b94de355]\n+postgres: ubuntu postgres [local] INSERT(procsignal_sigusr1_handler+0x9e)[0x5559b94de4ef]\n\nThis is IMO too much details for the documentation, and could be\nconfusing depending on how the code evolves in the future. I'd\nsuggest to keep it minimal, cutting that to a few lines. I don't see\na need to mention ubuntu, either.\n--\nMichael",
"msg_date": "Tue, 6 Feb 2024 19:50:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Tue, Feb 6, 2024 at 4:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> --- a/src/backend/storage/ipc/procarray.c\n> +++ b/src/backend/storage/ipc/procarray.c\n> +bool\n> +SendProcSignalBackendOrAuxproc(int pid, ProcSignalReason reason)\n> +{\n>\n> Looking at 0001. This API is a thin wrapper of SendProcSignal(), that\n> just checks that we have actually a process before using it.\n> Shouldn't it be in procsignal.c.\n>\n> Now looking at 0002, this new routine is used in one place. Seeing\n> that we have something similar in pgstatfuncs.c, wouldn't it be\n> better, instead of englobing SendProcSignal(), to have one routine\n> that's able to return a PID for a PostgreSQL process?\n\nI liked the idea of going ahead with logging backtraces for only\nbackends for now, so a separate wrapper like this isn't needed.\n\n> All the backtrace-related handling is stored in procsignal.c, could it\n> be cleaner to move the internals into a separate, new file, like\n> procbacktrace.c or something similar?\n\n+1. Moved all the code to a new file.\n\n> LoadBacktraceFunctions() is one more thing we need to set up in all\n> auxiliary processes. That's a bit sad to require that in all these\n> places, and we may forget to add it. Could we put more efforts in\n> centralizing these initializations?\n\nIf we were to do it for only backends (including bg workers)\nInitProcess() is the better place. If we were to do it for both\nbackends and auxiliary processes, BaseInit() is best.\n\n> The auxiliary processes are one\n> portion of the problem, and getting stack traces for backend processes\n> would be useful on its own. Another suggestion that I'd propose to\n> simplify the patch would be to focus solely on backends for now, and\n> do something for auxiliary process later on. 
If you do that, the\n> strange layer with BackendOrAuxproc() is not a problem anymore, as it\n> would be left out for now.\n\n+1 to keep it simple for now; that is, log backtraces of only backends\nleaving auxiliary processes aside.\n\n> +<screen>\n> +logging current backtrace of process with PID 3499242:\n> +postgres: ubuntu postgres [local] INSERT(+0x61a355)[0x5559b94de355]\n> +postgres: ubuntu postgres [local] INSERT(procsignal_sigusr1_handler+0x9e)[0x5559b94de4ef]\n>\n> This is IMO too much details for the documentation, and could be\n> confusing depending on how the code evolves in the future. I'd\n> suggest to keep it minimal, cutting that to a few lines. I don't see\n> a need to mention ubuntu, either.\n\nWell, that 'ubuntu' is the default username there, I've changed it now\nand kept the output short.\n\nI've simplified the tests, now we don't need two separate output files\nfor tests. Please see the attached v27 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 7 Feb 2024 14:04:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Feb 07, 2024 at 02:04:39PM +0530, Bharath Rupireddy wrote:\n> Well, that 'ubuntu' is the default username there, I've changed it now\n> and kept the output short.\n\nI would keep it just at two or three lines, with a \"For example, with\nlines like\":\n\n> I've simplified the tests, now we don't need two separate output files\n> for tests. Please see the attached v27 patch.\n\n+ proname => 'pg_log_backtrace', provolatile => 'v', prorettype => 'bool', \n\nHmm. Would it be better to be in line with memory contexts logging\nand use pg_log_backend_backtrace()? One thing I was wondering is that\nthere may be a good point in splitting the backtrace support into two\nfunctions (backends and auxiliary processes) that could be split with\ntwo execution ACLs across different roles. \n\n+ PROCSIG_LOG_BACKTRACE, /* ask backend to log the current backtrace */\n\nIncorrect order.\n\n+-- Backtrace is not logged for auxiliary processes \n\nNot sure there's a point in keeping that in the tests for now.\n\n+ * XXX: It might be worth implementing it for auxiliary processes.\n\nSame, I would remove that.\n\n+static volatile sig_atomic_t backtrace_functions_loaded = false;\n\nHmm, so you need that because of the fact that it would be called in a\nsignal as backtrace(3) says:\n\"If you need certain calls to these two functions to not allocate\nmemory (in signal handlers, for example), you need to make sure libgcc\nis loaded beforehand\".\n\nTrue that it is not interesting to only log something when having a\nCFI, this needs to be dynamic to get a precise state of things.\n--\nMichael",
"msg_date": "Wed, 7 Feb 2024 18:27:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Feb 7, 2024 at 2:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 07, 2024 at 02:04:39PM +0530, Bharath Rupireddy wrote:\n> > Well, that 'ubuntu' is the default username there, I've changed it now\n> > and kept the output short.\n>\n> I would keep it just at two or three lines, with a \"For example, with\n> lines like\":\n\nDone.\n\n> > I've simplified the tests, now we don't need two separate output files\n> > for tests. Please see the attached v27 patch.\n>\n> + proname => 'pg_log_backtrace', provolatile => 'v', prorettype => 'bool',\n>\n> Hmm. Would it be better to be in line with memory contexts logging\n> and use pg_log_backend_backtrace()?\n\n+1.\n\n> One thing I was wondering is that\n> there may be a good point in splitting the backtrace support into two\n> functions (backends and auxiliary processes) that could be split with\n> two execution ACLs across different roles.\n\n-1 for that unless we have any requests. I mean, backtrace is a common\nthing for all postgres processes, why different controls are needed?\nI'd go with what pg_log_backend_memory_contexts does - it supports\nboth backends and auxiliary processes.\n\n> + PROCSIG_LOG_BACKTRACE, /* ask backend to log the current backtrace */\n>\n> Incorrect order.\n\nPROCSIG_XXX aren't alphabetically ordered, no?\n\n> +-- Backtrace is not logged for auxiliary processes\n>\n> Not sure there's a point in keeping that in the tests for now.\n>\n> + * XXX: It might be worth implementing it for auxiliary processes.\n>\n> Same, I would remove that.\n\nDone.\n\n> +static volatile sig_atomic_t backtrace_functions_loaded = false;\n>\n> Hmm, so you need that because of the fact that it would be called in a\n> signal as backtrace(3) says:\n> \"If you need certain calls to these two functions to not allocate\n> memory (in signal handlers, for example), you need to make sure libgcc\n> is loaded beforehand\".\n>\n> True that it is not interesting to only log something when 
having a\n> CFI, this needs to be dynamic to get a precise state of things.\n\nRight.\n\nI've also fixed some test failures. Please see the attached v28 patch\nset. 0002 extends pg_log_backend_backtrace to auxiliary processes,\njust like pg_log_backend_memory_contexts (not focused on PID\nde-duplication code yet).\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 7 Feb 2024 21:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Wed, Feb 7, 2024 at 9:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Feb 7, 2024 at 2:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Feb 07, 2024 at 02:04:39PM +0530, Bharath Rupireddy wrote:\n> > > Well, that 'ubuntu' is the default username there, I've changed it now\n> > > and kept the output short.\n> >\n> > I would keep it just at two or three lines, with a \"For example, with\n> > lines like\":\n>\n> Done.\n>\n> > > I've simplified the tests, now we don't need two separate output files\n> > > for tests. Please see the attached v27 patch.\n> >\n> > + proname => 'pg_log_backtrace', provolatile => 'v', prorettype => 'bool',\n> >\n> > Hmm. Would it be better to be in line with memory contexts logging\n> > and use pg_log_backend_backtrace()?\n>\n> +1.\n>\n> > One thing I was wondering is that\n> > there may be a good point in splitting the backtrace support into two\n> > functions (backends and auxiliary processes) that could be split with\n> > two execution ACLs across different roles.\n>\n> -1 for that unless we have any requests. 
I mean, backtrace is a common\n> thing for all postgres processes, why different controls are needed?\n> I'd go with what pg_log_backend_memory_contexts does - it supports\n> both backends and auxiliary processes.\n>\n> > + PROCSIG_LOG_BACKTRACE, /* ask backend to log the current backtrace */\n> >\n> > Incorrect order.\n>\n> PROCSIG_XXX aren't alphabetically ordered, no?\n>\n> > +-- Backtrace is not logged for auxiliary processes\n> >\n> > Not sure there's a point in keeping that in the tests for now.\n> >\n> > + * XXX: It might be worth implementing it for auxiliary processes.\n> >\n> > Same, I would remove that.\n>\n> Done.\n>\n> > +static volatile sig_atomic_t backtrace_functions_loaded = false;\n> >\n> > Hmm, so you need that because of the fact that it would be called in a\n> > signal as backtrace(3) says:\n> > \"If you need certain calls to these two functions to not allocate\n> > memory (in signal handlers, for example), you need to make sure libgcc\n> > is loaded beforehand\".\n> >\n> > True that it is not interesting to only log something when having a\n> > CFI, this needs to be dynamic to get a precise state of things.\n>\n> Right.\n>\n> I've also fixed some test failures. Please see the attached v28 patch\n> set. 0002 extends pg_log_backend_backtrace to auxiliary processes,\n> just like pg_log_backend_memory_contexts (not focused on PID\n> de-duplication code yet).\n\nI've missed adding LoadBacktraceFunctions() in InitAuxiliaryProcess\nfor 0002 patch. Please find the attached v29 patch set. Sorry for the\nnoise.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 8 Feb 2024 00:30:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Feb 08, 2024 at 12:30:00AM +0530, Bharath Rupireddy wrote:\n> I've missed adding LoadBacktraceFunctions() in InitAuxiliaryProcess\n> for 0002 patch. Please find the attached v29 patch set. Sorry for the\n> noise.\n\nI've been torturing the patch with \\watch and loops calling the\nfunction while doing a sequential scan of pg_stat_activity, and that\nwas stable while doing a pgbench and an installcheck-world in\nparallel, with some infinite loops and some spinlocks I should not\nhave taken.\n\n+ if (backtrace_functions_loaded)\n+ return;\n\nI was really wondering about this point, particularly regarding the\nfact that this would load libgcc for all the backends when they start,\nunconditionally. One thing could be to hide that behind a postmaster\nGUC disabled by default, but then we'd come back to using gdb on a\nlive server, which is no fun on a customer environment because of the\nextra dependencies, which may not, or just cannot, be installed. So\nyeah, I think that I'm OK about that.\n\n+ * procsignal.h\n+ * Backtrace-related routines\n\nThis one is incorrect.\n\nIn HandleLogBacktraceInterrupt(), we don't use backtrace_symbols() and\nrely on backtrace_symbols_fd() to avoid doing malloc() in the signal\nhandler as mentioned in [1] back in 2022. Perhaps the part about the\nfact that we don't use backtrace_symbols() should be mentioned\nexplicitely in a comment rather than silently implied? That's\na very important point.\n\nEchoing with upthread, and we've been more lax with superuser checks\nand assignment of custom roles in recent years, I agree with the\napproach of the patch to make that superuser by default. Then users\ncan force their own policy as they want with an EXECUTE ACL on the SQL\nfunction.\n\nAs a whole, I'm pretty excited about being able to rely on that\nwithout the need to use gdb to get a live trace. Does anybody have\nobjections and/or comments, particularly about the superuser and the\nload part at backend startup? 
This thread has been going on for so\nlong that it would be good to let 1 week for folks to react before\ndoing anything. See v29 for references.\n\n[1]: https://www.postgresql.org/message-id/CALj2ACUNZVB0cQovvKBd53-upsMur8j-5_K=-fg86uAa+WYEWg@mail.gmail.com\n--\nMichael",
"msg_date": "Thu, 8 Feb 2024 12:25:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Thu, Feb 08, 2024 at 12:25:18PM +0900, Michael Paquier wrote:\n> In HandleLogBacktraceInterrupt(), we don't use backtrace_symbols() and\n> rely on backtrace_symbols_fd() to avoid doing malloc() in the signal\n> handler as mentioned in [1] back in 2022. Perhaps the part about the\n> fact that we don't use backtrace_symbols() should be mentioned\n> explicitely in a comment rather than silently implied? That's\n> a very important point.\n\nThis has been itching me, so I have spent more time reading about\nthat, and while browsing signal(7) and signal-safety(7), I've first\nnoticed that this is not safe in the patch: \n+ write_stderr(\"logging current backtrace of process with PID %d:\\n\",\n+ MyProcPid);\n\nNote that there's a write_stderr_signal_safe().\n\nAnyway, I've been digging around the signal-safety of backtrace(3)\n(even looking a bit at some GCC code, brrr), and I am under the\nimpression that backtrace() is just by nature not safe and also\ndangerous in signal handlers. One example of issue I've found:\nhttps://github.com/gperftools/gperftools/issues/838\n\nThis looks like enough ground to me to reject the patch.\n--\nMichael",
"msg_date": "Fri, 9 Feb 2024 17:13:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On 2024-Feb-09, Michael Paquier wrote:\n\n> Anyway, I've been digging around the signal-safety of backtrace(3)\n> (even looking a bit at some GCC code, brrr), and I am under the\n> impression that backtrace() is just by nature not safe and also\n> dangerous in signal handlers. One example of issue I've found:\n> https://github.com/gperftools/gperftools/issues/838\n> \n> This looks like enough ground to me to reject the patch.\n\nHmm, but the backtrace() manpage says\n\n • backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n itly, but they are part of libgcc, which gets loaded dynamically\n when first used. Dynamic loading usually triggers a call to mal‐\n loc(3). If you need certain calls to these two functions to not\n allocate memory (in signal handlers, for example), you need to make\n sure libgcc is loaded beforehand.\n\nand the patch ensures that libgcc is loaded by calling a dummy\nbacktrace() at the start of the process.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)\n\n\n",
"msg_date": "Fri, 9 Feb 2024 09:48:36 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Feb 9, 2024 at 2:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Feb-09, Michael Paquier wrote:\n>\n> > Anyway, I've been digging around the signal-safety of backtrace(3)\n> > (even looking a bit at some GCC code, brrr), and I am under the\n> > impression that backtrace() is just by nature not safe and also\n> > dangerous in signal handlers. One example of issue I've found:\n> > https://github.com/gperftools/gperftools/issues/838\n> >\n> > This looks like enough ground to me to reject the patch.\n>\n> Hmm, but the backtrace() manpage says\n>\n> • backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n> itly, but they are part of libgcc, which gets loaded dynamically\n> when first used. Dynamic loading usually triggers a call to mal‐\n> loc(3). If you need certain calls to these two functions to not\n> allocate memory (in signal handlers, for example), you need to make\n> sure libgcc is loaded beforehand.\n>\n> and the patch ensures that libgcc is loaded by calling a dummy\n> backtrace() at the start of the process.\n>\n\nWe defer actual action triggered by a signal till CHECK_FOR_INTERRUPTS\nis called. I understand that we can't do that here since we want to\ncapture the backtrace at that moment and can't wait till next CFI. But\nprinting the backend can surely wait till next CFI right?\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 9 Feb 2024 14:27:26 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Feb 09, 2024 at 02:27:26PM +0530, Ashutosh Bapat wrote:\n> On Fri, Feb 9, 2024 at 2:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Hmm, but the backtrace() manpage says\n>>\n>> • backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n>> itly, but they are part of libgcc, which gets loaded dynamically\n>> when first used. Dynamic loading usually triggers a call to mal‐\n>> loc(3). If you need certain calls to these two functions to not\n>> allocate memory (in signal handlers, for example), you need to make\n>> sure libgcc is loaded beforehand.\n>>\n>> and the patch ensures that libgcc is loaded by calling a dummy\n>> backtrace() at the start of the process.\n\nFWIW, anything I am reading about the matter freaks me out, including\nthe dlopen() part in all the backends:\nhttps://www.gnu.org/software/libc/manual/html_node/Backtraces.html\n\nSo I really question whether it is a good idea to assume if this will\nalways be safe depending on the version of libgcc dealt with,\nincreasing the impact area. Perhaps that's worrying too much, but it\nlooks like one of these things where we'd better be really careful.\n\n> We defer actual action triggered by a signal till CHECK_FOR_INTERRUPTS\n> is called. I understand that we can't do that here since we want to\n> capture the backtrace at that moment and can't wait till next CFI. But\n> printing the backend can surely wait till next CFI right?\n\nDelaying the call of backtrace() to happen during a CFI() would be\nsafe, yes, and writing data to stderr would not really be an issue as\nat least the data would be sent somewhere. That's less useful, but\nwe do that for memory contexts.\n--\nMichael",
"msg_date": "Sat, 10 Feb 2024 09:06:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Sat, Feb 10, 2024 at 5:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Feb 09, 2024 at 02:27:26PM +0530, Ashutosh Bapat wrote:\n> > On Fri, Feb 9, 2024 at 2:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >> Hmm, but the backtrace() manpage says\n> >>\n> >> • backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n> >> itly, but they are part of libgcc, which gets loaded dynamically\n> >> when first used. Dynamic loading usually triggers a call to mal‐\n> >> loc(3). If you need certain calls to these two functions to not\n> >> allocate memory (in signal handlers, for example), you need to make\n> >> sure libgcc is loaded beforehand.\n> >>\n> >> and the patch ensures that libgcc is loaded by calling a dummy\n> >> backtrace() at the start of the process.\n>\n> FWIW, anything I am reading about the matter freaks me out, including\n> the dlopen() part in all the backends:\n> https://www.gnu.org/software/libc/manual/html_node/Backtraces.html\n>\n> So I really question whether it is a good idea to assume if this will\n> always be safe depending on the version of libgcc dealt with,\n> increasing the impact area. Perhaps that's worrying too much, but it\n> looks like one of these things where we'd better be really careful.\n\nI agree. We don't want a call to backtrace printing mechanism to make\nthings worse.\n\n>\n> > We defer actual action triggered by a signal till CHECK_FOR_INTERRUPTS\n> > is called. I understand that we can't do that here since we want to\n> > capture the backtrace at that moment and can't wait till next CFI. But\n> > printing the backend can surely wait till next CFI right?\n>\n> Delaying the call of backtrace() to happen during a CFI() would be\n> safe, yes, and writing data to stderr would not really be an issue as\n> at least the data would be sent somewhere. 
That's less useful, but\n> we do that for memory contexts.\n\nMemory contexts do not change much till the next CFI, but stack\ntraces do. So I am not sure whether it is desirable to wait to capture\nthe backtrace till the next CFI. Given that the user cannot time a call to\npg_log_backend() exactly, whether it captures the backtrace exactly\nwhen the interrupt happens or at the next CFI may not matter much in\npractice.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 12 Feb 2024 06:52:24 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Feb 12, 2024 at 6:52 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> >\n> > > We defer actual action triggered by a signal till CHECK_FOR_INTERRUPTS\n> > > is called. I understand that we can't do that here since we want to\n> > > capture the backtrace at that moment and can't wait till next CFI. But\n> > > printing the backend can surely wait till next CFI right?\n> >\n> > Delaying the call of backtrace() to happen during a CFI() would be\n> > safe, yes, and writing data to stderr would not really be an issue as\n> > at least the data would be sent somewhere. That's less useful, but\n> > we do that for memory contexts.\n>\n> Memory contexts do not change more or less till next CFI, but stack\n> traces do. So I am not sure whether it is desirable to wait to capture\n> backtrace till next CFI. Given that the user can not time a call to\n> pg_log_backend() exactly, so whether it captures the backtrace exactly\n> at when interrupt happens or at the next CFI may not matter much in\n> practice.\n>\n\nThinking more about this I have following thoughts/questions:\n\n1. Whether getting a backtrace at CFI is good enough?\nBacktrace is required to know what a process is doing when it's stuck\nor is behaviour unexpected etc. PostgreSQL code has CFIs sprinkled in\nalmost all the tight loops. Whether those places are enough to cover\nmost of the cases that the user of this feature would care about?\n\n2. tools like gdb, strace can be used to get the stack trace of any\nprocess, so do we really need this tool?\nMost of the OSes provide such tools but may be there are useful in\nkubernetes like environment, I am not sure.\n\n3. tools like gdb and strace are able to capture stack trace at any\npoint during execution. Can we use the same mechanism instead of\nrelying on CFI?\n\n4. tools like gdb and strace can capture more than just stack trace\ne.g. variable values, values of registers etc. Are we planning to add\nthose facilities as well? 
OR whether this feature will be useful\nwithout those facilities?\n\nMay the feature be more useful if it can provide PostgreSQL specific\ndetails which an external tool can not.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 13 Feb 2024 19:32:28 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Re: Michael Paquier\n> >> • backtrace() and backtrace_symbols_fd() don't call malloc() explic‐\n> >> itly, but they are part of libgcc, which gets loaded dynamically\n> >> when first used. Dynamic loading usually triggers a call to mal‐\n> >> loc(3). If you need certain calls to these two functions to not\n> >> allocate memory (in signal handlers, for example), you need to make\n> >> sure libgcc is loaded beforehand.\n> >>\n> >> and the patch ensures that libgcc is loaded by calling a dummy\n> >> backtrace() at the start of the process.\n> \n> FWIW, anything I am reading about the matter freaks me out, including\n> the dlopen() part in all the backends:\n> https://www.gnu.org/software/libc/manual/html_node/Backtraces.html\n\nI'd be concerned about the cost of doing that as part of the startup\nof every single backend process. Shouldn't this rather be done within\nthe postmaster so it's automatically inherited by forked backends?\n(EXEC_BACKEND systems probably don't have libgcc I guess.)\n\nChristoph\n\n\n",
"msg_date": "Fri, 23 Feb 2024 16:39:47 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 04:39:47PM +0100, Christoph Berg wrote:\n> I'd be concerned about the cost of doing that as part of the startup\n> of every single backend process. Shouldn't this rather be done within\n> the postmaster so it's automatically inherited by forked backends?\n> (EXEC_BACKEND systems probably don't have libgcc I guess.)\n\nSomething like this can be measured with a bunch of concurrent\nconnections attempting connections and a very high rate, like pgbench\nwith an empty script and -C, for local connections.\n--\nMichael",
"msg_date": "Sat, 24 Feb 2024 08:38:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "Re: Michael Paquier\n> Something like this can be measured with a bunch of concurrent\n> connections attempting connections and a very high rate, like pgbench\n> with an empty script and -C, for local connections.\n\nI tried that now. Mind that I'm not a benchmarking expert, and there's\nbeen quite some jitter in the results, but I think there's a clear\ntrend.\n\nCurrent head without and with the v28 patchset.\nCommand line:\npgbench -n -C -c 20 -j 20 -f empty.sql -T 30 --progress=2\nempty.sql just contains a \";\"\nmodel name: 13th Gen Intel(R) Core(TM) i7-13700H\n\nhead:\ntps = 2211.289863 (including reconnection times)\ntps = 2113.907588 (including reconnection times)\ntps = 2200.406877 (including reconnection times)\naverage: 2175\n\nv28:\ntps = 1873.472459 (including reconnection times)\ntps = 2068.094383 (including reconnection times)\ntps = 2196.890897 (including reconnection times)\naverage: 2046\n\n2046 / 2175 = 0.941\n\nEven if we regard the 1873 as an outlier, I've seen many vanilla runs\nwith 22xx tps, and not a single v28 run with 22xx tps. Other numbers I\ncollected suggested a cost of at least 3% for the feature.\n\nChristoph\n\n\n",
"msg_date": "Mon, 26 Feb 2024 16:05:05 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 04:05:05PM +0100, Christoph Berg wrote:\n> I tried that now. Mind that I'm not a benchmarking expert, and there's\n> been quite some jitter in the results, but I think there's a clear\n> trend.\n> \n> Even if we regard the 1873 as an outlier, I've seen many vanilla runs\n> with 22xx tps, and not a single v28 run with 22xx tps. Other numbers I\n> collected suggested a cost of at least 3% for the feature.\n\nThanks for the numbers. Yes, that's annoying and I suspect could be\nnoticeable for a lot of users..\n--\nMichael",
"msg_date": "Tue, 27 Feb 2024 11:46:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing backtrace of postgres processes"
}
] |
[
{
"msg_contents": "OSX implements sysv shmem support via a mach wrapper, however the mach\nsysv shmem wrapper has some severe restrictions that prevent us from\nallocating enough memory segments in some cases.\n\nThese limits appear to be due to the way the wrapper itself is\nimplemented and not mach.\n\nFor example when running a test suite that spins up many postgres\ninstances I commonly see this issue:\nDETAIL: Failed system call was shmget(key=5432002, size=56, 03600).\n\nIn order to avoid hitting these limits we can bypass the wrapper layer\nand just use mach directly.\n\nThis is roughly how all major browsers seem to implement shared memory\nsupport on OSX.\n\nThis still needs a lot of cleanup but seems to compile and basic tests\nseem to pass.\n---\n configure | 15 +-\n configure.ac | 11 +-\n src/backend/port/mach_shmem.c | 797 ++++++++++++++++++++++++++++++++++\n src/include/pg_config.h.in | 3 +\n src/include/postgres.h | 14 +\n 5 files changed, 831 insertions(+), 9 deletions(-)\n create mode 100644 src/backend/port/mach_shmem.c\n\ndiff --git a/configure b/configure\nindex dd64692345..5a2eb14dea 100755\n--- a/configure\n+++ b/configure\n@@ -18043,16 +18043,21 @@ fi\n \n \n # Select shared-memory implementation type.\n-if test \"$PORTNAME\" != \"win32\"; then\n+if test \"$PORTNAME\" = \"win32\"; then\n \n-$as_echo \"#define USE_SYSV_SHARED_MEMORY 1\" >>confdefs.h\n+$as_echo \"#define USE_WIN32_SHARED_MEMORY 1\" >>confdefs.h\n \n- SHMEM_IMPLEMENTATION=\"src/backend/port/sysv_shmem.c\"\n+ SHMEM_IMPLEMENTATION=\"src/backend/port/win32_shmem.c\"\n+elif test \"$PORTNAME\" = \"darwin\"; then\n+\n+$as_echo \"#define USE_MACH_SHARED_MEMORY 1\" >>confdefs.h\n+\n+ SHMEM_IMPLEMENTATION=\"src/backend/port/mach_shmem.c\"\n else\n \n-$as_echo \"#define USE_WIN32_SHARED_MEMORY 1\" >>confdefs.h\n+$as_echo \"#define USE_SYSV_SHARED_MEMORY 1\" >>confdefs.h\n \n- SHMEM_IMPLEMENTATION=\"src/backend/port/win32_shmem.c\"\n+ SHMEM_IMPLEMENTATION=\"src/backend/port/sysv_shmem.c\"\n 
fi\n \n # Select random number source. If a TLS library is used then it will be the\ndiff --git a/configure.ac b/configure.ac\nindex 748fb50236..1bbcd5dc78 100644\n--- a/configure.ac\n+++ b/configure.ac\n@@ -2144,12 +2144,15 @@ fi\n \n \n # Select shared-memory implementation type.\n-if test \"$PORTNAME\" != \"win32\"; then\n- AC_DEFINE(USE_SYSV_SHARED_MEMORY, 1, [Define to select SysV-style shared memory.])\n- SHMEM_IMPLEMENTATION=\"src/backend/port/sysv_shmem.c\"\n-else\n+if test \"$PORTNAME\" = \"win32\"; then\n AC_DEFINE(USE_WIN32_SHARED_MEMORY, 1, [Define to select Win32-style shared memory.])\n SHMEM_IMPLEMENTATION=\"src/backend/port/win32_shmem.c\"\n+elif test \"$PORTNAME\" = \"darwin\"; then\n+ AC_DEFINE(USE_MACH_SHARED_MEMORY, 1, [Define to select Mach-style shared memory.])\n+ SHMEM_IMPLEMENTATION=\"src/backend/port/mach_shmem.c\"\n+else\n+ AC_DEFINE(USE_SYSV_SHARED_MEMORY, 1, [Define to select SysV-style shared memory.])\n+ SHMEM_IMPLEMENTATION=\"src/backend/port/sysv_shmem.c\"\n fi\n \n # Select random number source. If a TLS library is used then it will be the\ndiff --git a/src/backend/port/mach_shmem.c b/src/backend/port/mach_shmem.c\nnew file mode 100644\nindex 0000000000..2fe42ed770\n--- /dev/null\n+++ b/src/backend/port/mach_shmem.c\n@@ -0,0 +1,797 @@\n+/*-------------------------------------------------------------------------\n+ *\n+ * mach_shmem.c\n+ *\t Implement shared memory using mach facilities\n+ *\n+ * These routines used to be a fairly thin layer on top of mach shared\n+ * memory functionality. With the addition of anonymous-shmem logic,\n+ * they're a bit fatter now. 
We still require a mach shmem block to\n+ * exist, though, because mmap'd shmem provides no way to find out how\n+ * many processes are attached, which we need for interlocking purposes.\n+ *\n+ * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n+ * Portions Copyright (c) 1994, Regents of the University of California\n+ *\n+ * IDENTIFICATION\n+ *\t src/backend/port/mach_shmem.c\n+ *\n+ *-------------------------------------------------------------------------\n+ */\n+#include \"postgres.h\"\n+\n+#include <unistd.h>\n+#include <sys/mman.h>\n+#include <sys/stat.h>\n+#ifdef HAVE_SYS_IPC_H\n+#include <sys/ipc.h>\n+#endif\n+\n+#include \"miscadmin.h\"\n+#include \"portability/mem.h\"\n+#include \"storage/dsm.h\"\n+#include \"storage/ipc.h\"\n+#include \"storage/pg_shmem.h\"\n+#include \"utils/pidfile.h\"\n+\n+#include <mach/port.h>\n+#include <mach/mach.h>\n+#include <mach/mach_vm.h>\n+\n+\n+/*\n+ * As of PostgreSQL 9.3, we normally allocate only a very small amount of\n+ * System V shared memory, and only for the purposes of providing an\n+ * interlock to protect the data directory. The real shared memory block\n+ * is allocated using mmap(). This works around the problem that many\n+ * systems have very low limits on the amount of System V shared memory\n+ * that can be allocated. Even a limit of a few megabytes will be enough\n+ * to run many copies of PostgreSQL without needing to adjust system settings.\n+ *\n+ * We assume that no one will attempt to run PostgreSQL 9.3 or later on\n+ * systems that are ancient enough that anonymous shared memory is not\n+ * supported, such as pre-2.4 versions of Linux. 
If that turns out to be\n+ * false, we might need to add compile and/or run-time tests here and do this\n+ * only if the running kernel supports it.\n+ *\n+ * However, we must always disable this logic in the EXEC_BACKEND case, and\n+ * fall back to the old method of allocating the entire segment using System V\n+ * shared memory, because there's no way to attach an anonymous mmap'd segment\n+ * to a process after exec(). Since EXEC_BACKEND is intended only for\n+ * developer use, this shouldn't be a big problem. Because of this, we do\n+ * not worry about supporting anonymous shmem in the EXEC_BACKEND cases below.\n+ *\n+ * As of PostgreSQL 12, we regained the ability to use a large System V shared\n+ * memory region even in non-EXEC_BACKEND builds, if shared_memory_type is set\n+ * to sysv (though this is not the default).\n+ */\n+\n+\n+typedef key_t IpcMemoryKey;\t\t/* shared memory key passed to shmget(2) */\n+typedef int IpcMemoryId;\t\t/* shared memory ID returned by shmget(2) */\n+\n+/*\n+ * How does a given IpcMemoryId relate to this PostgreSQL process?\n+ *\n+ * One could recycle unattached segments of different data directories if we\n+ * distinguished that case from other SHMSTATE_FOREIGN cases. Doing so would\n+ * cause us to visit less of the key space, making us less likely to detect a\n+ * SHMSTATE_ATTACHED key. It would also complicate the concurrency analysis,\n+ * in that postmasters of different data directories could simultaneously\n+ * attempt to recycle a given key. 
We'll waste keys longer in some cases, but\n+ * avoiding the problems of the alternative justifies that loss.\n+ */\n+typedef enum\n+{\n+\tSHMSTATE_ANALYSIS_FAILURE,\t/* unexpected failure to analyze the ID */\n+\tSHMSTATE_ATTACHED,\t\t\t/* pertinent to DataDir, has attached PIDs */\n+\tSHMSTATE_ENOENT,\t\t\t/* no segment of that ID */\n+\tSHMSTATE_FOREIGN,\t\t\t/* exists, but not pertinent to DataDir */\n+\tSHMSTATE_UNATTACHED\t\t\t/* pertinent to DataDir, no attached PIDs */\n+} IpcMemoryState;\n+\n+\n+unsigned long UsedShmemSegID = 0;\n+void\t *UsedShmemSegAddr = NULL;\n+\n+static Size AnonymousShmemSize;\n+static void *AnonymousShmem = NULL;\n+memory_object_size_t mo_size = 0;\n+\n+static void *InternalIpcMemoryCreate(IpcMemoryKey memKey, Size size);\n+static void IpcMemoryDetach(int status, Datum shmaddr);\n+static void IpcMemoryDelete(int status, Datum shmId);\n+static IpcMemoryState PGSharedMemoryAttach(IpcMemoryId shmId,\n+\t\t\t\t\t\t\t\t\t\tvoid *attachAt,\n+\t\t\t\t\t\t\t\t\t\tPGShmemHeader **addr);\n+\n+\n+/*\n+ *\tInternalIpcMemoryCreate(memKey, size)\n+ *\n+ * Attempt to create a new shared memory segment with the specified key.\n+ * Will fail (return NULL) if such a segment already exists. If successful,\n+ * attach the segment to the current process and return its attached address.\n+ * On success, callbacks are registered with on_shmem_exit to detach and\n+ * delete the segment when on_shmem_exit is called.\n+ *\n+ * If we fail with a failure code other than collision-with-existing-segment,\n+ * print out an error and abort. 
Other types of errors are not recoverable.\n+ */\n+static void *\n+InternalIpcMemoryCreate(IpcMemoryKey memKey, Size size)\n+{\n+\tmem_entry_name_port_t MachPort = MACH_PORT_NULL;\n+\tkern_return_t kr;\n+\tmach_vm_address_t requestedAddress;\n+\tvoid\t *memAddress;\n+\tmo_size = round_page(size);\n+\n+\t/*\n+\t * Normally we just pass requestedAddress = NULL to shmat(), allowing the\n+\t * system to choose where the segment gets mapped. But in an EXEC_BACKEND\n+\t * build, it's possible for whatever is chosen in the postmaster to not\n+\t * work for backends, due to variations in address space layout. As a\n+\t * rather klugy workaround, allow the user to specify the address to use\n+\t * via setting the environment variable PG_SHMEM_ADDR. (If this were of\n+\t * interest for anything except debugging, we'd probably create a cleaner\n+\t * and better-documented way to set it, such as a GUC.)\n+\t */\n+#ifdef EXEC_BACKEND\n+\t{\n+\t\tchar\t *pg_shmem_addr = getenv(\"PG_SHMEM_ADDR\");\n+\n+\t\tif (pg_shmem_addr)\n+\t\t\trequestedAddress = (void *) strtoul(pg_shmem_addr, NULL, 0);\n+\t}\n+#endif\n+\n+\t/* OK, should be able to attach to the segment */\n+\tkr = mach_make_memory_entry_64(mach_task_self(),\n+\t\t\t\t\t\t\t\t &mo_size,\n+\t\t\t\t\t\t\t\t memKey,\n+\t\t\t\t\t\t\t\t MAP_MEM_NAMED_CREATE | VM_PROT_DEFAULT,\n+\t\t\t\t\t\t\t\t &MachPort,\n+\t\t\t\t\t\t\t\t MACH_PORT_NULL);\n+\n+\t/* Register on-exit routine to delete the new segment */\n+\ton_shmem_exit(IpcMemoryDelete, Int32GetDatum(MachPort));\n+\n+\tif (kr != KERN_SUCCESS)\n+\t\telog(FATAL,\n+\t\t\t \"mach_make_memory_entry_64(\"\n+\t\t\t \"target_task=%d, \"\n+\t\t\t \"size=%llu, \"\n+\t\t\t \"offset=%d, \"\n+\t\t\t \"permission=0x%x, \"\n+\t\t\t \"object_handle=%d, \"\n+\t\t\t \"parent_entry=%d) failed: \"\n+\t\t\t \"%d\",\n+\t\t\t mach_task_self(),\n+\t\t\t mo_size,\n+\t\t\t memKey,\n+\t\t\t MAP_MEM_NAMED_CREATE | VM_PROT_DEFAULT,\n+\t\t\t MachPort,\n+\t\t\t MACH_PORT_NULL,\n+\t\t\t kr);\n+\n+\tkr = 
mach_vm_map(mach_task_self(),\n+\t\t\t\t\t &requestedAddress,\n+\t\t\t\t\t mo_size,\n+\t\t\t\t\t 0,\n+\t\t\t\t\t VM_FLAGS_ANYWHERE,\n+\t\t\t\t\t MachPort,\n+\t\t\t\t\t 0,\n+\t\t\t\t\t FALSE,\n+\t\t\t\t\t VM_PROT_DEFAULT,\n+\t\t\t\t\t VM_PROT_DEFAULT,\n+\t\t\t\t\t VM_INHERIT_NONE);\n+\n+\tif (kr != KERN_SUCCESS)\n+\t\telog(FATAL,\n+\t\t\t \"mach_vm_map(\"\n+\t\t\t \"target_task=%d, \"\n+\t\t\t \"address=%p, \"\n+\t\t\t \"size=%llu, \"\n+\t\t\t \"mask=%d, \"\n+\t\t\t \"flags=0x%x, \"\n+\t\t\t \"object=%d, \"\n+\t\t\t \"offset=%d, \"\n+\t\t\t \"copy=%d, \"\n+\t\t\t \"cur_protection=0x%x, \"\n+\t\t\t \"max_protection=0x%x, \"\n+\t\t\t \"inheritance=0x%x) failed: \"\n+\t\t\t \"%d\",\n+\t\t\t mach_task_self(),\n+\t\t\t &requestedAddress,\n+\t\t\t mo_size,\n+\t\t\t 0,\n+\t\t\t VM_FLAGS_ANYWHERE,\n+\t\t\t MachPort,\n+\t\t\t 0,\n+\t\t\t FALSE,\n+\t\t\t VM_PROT_DEFAULT,\n+\t\t\t VM_PROT_DEFAULT,\n+\t\t\t VM_INHERIT_NONE,\n+\t\t\t kr);\n+\tmemAddress = (void *)requestedAddress;\n+\n+\t/* Register on-exit routine to detach new segment before deleting */\n+\ton_shmem_exit(IpcMemoryDetach, AddressGetDatum(requestedAddress));\n+\n+\t/*\n+\t * Store shmem key and ID in data directory lockfile. 
Format to try to\n+\t * keep it the same length always (trailing junk in the lockfile won't\n+\t * hurt, but might confuse humans).\n+\t */\n+\t{\n+\t\tchar\t\tline[64];\n+\n+\t\tsprintf(line, \"%9lu %9lu\",\n+\t\t\t\t(unsigned long) memKey, (unsigned long) MachPort);\n+\t\tAddToDataDirLockFile(LOCK_FILE_LINE_SHMEM_KEY, line);\n+\t}\n+\n+\treturn memAddress;\n+}\n+\n+/****************************************************************************/\n+/*\tIpcMemoryDetach(status, shmaddr)\tremoves a shared memory segment\t\t*/\n+/*\t\t\t\t\t\t\t\t\t\tfrom process' address space\t\t\t*/\n+/*\t(called as an on_shmem_exit callback, hence funny argument list)\t\t*/\n+/****************************************************************************/\n+static void\n+IpcMemoryDetach(int status, Datum shmaddr)\n+{\n+\t/* Detach System V shared memory block. */\n+\tkern_return_t kr = mach_vm_deallocate(mach_task_self(), *DatumGetAddress(shmaddr), mo_size);\n+\tif (kr != KERN_SUCCESS)\n+\t\telog(LOG, \"mach_vm_deallocate(target=%d, address=%p, size=%llu) failed: %d\",\n+\t\t\t mach_task_self(), DatumGetAddress(shmaddr), mo_size, kr);\n+}\n+\n+/****************************************************************************/\n+/*\tIpcMemoryDelete(status, shmId)\t\tdeletes a shared memory segment\t\t*/\n+/*\t(called as an on_shmem_exit callback, hence funny argument list)\t\t*/\n+/****************************************************************************/\n+static void\n+IpcMemoryDelete(int status, Datum shmId)\n+{\n+\tkern_return_t kr = mach_port_deallocate(mach_task_self(), DatumGetInt32(shmId));\n+\tif (kr != KERN_SUCCESS)\n+\t\telog(FATAL, \"mach_port_deallocate(task=%d, name=%d) failed: %d\",\n+\t\t\t mach_task_self(), DatumGetInt32(shmId), kr);\n+}\n+\n+/*\n+ * PGSharedMemoryIsInUse\n+ *\n+ * Is a previously-existing shmem segment still existing and in use?\n+ *\n+ * The point of this exercise is to detect the case where a prior postmaster\n+ * crashed, but it left child backends 
that are still running. Therefore\n+ * we only care about shmem segments that are associated with the intended\n+ * DataDir. This is an important consideration since accidental matches of\n+ * shmem segment IDs are reasonably common.\n+ */\n+bool\n+PGSharedMemoryIsInUse(unsigned long id1, unsigned long id2)\n+{\n+\tkern_return_t kr;\n+\tmach_msg_type_number_t pCount = VM_PAGE_INFO_BASIC_COUNT;\n+\tvm_page_info_basic_data_t pInfo;\n+\n+\tkr = mach_vm_page_info(mach_task_self(), id2, VM_PAGE_INFO_BASIC, (vm_page_info_t)&pInfo, &pCount);\n+\tif (kr != KERN_SUCCESS)\n+\t\telog(LOG, \"mach_vm_page_info(target_task=%d, address=%lu, flavor=0x%x, info=%p, infoCnt=%d) failed: %d\",\n+\t\t\t mach_task_self(), id2, VM_PAGE_INFO_BASIC, &pInfo, pCount, kr);\n+\n+\tif (pInfo.disposition & (VM_PAGE_QUERY_PAGE_PRESENT | VM_PAGE_QUERY_PAGE_REF | VM_PAGE_QUERY_PAGE_DIRTY))\n+\t\treturn true;\n+\treturn false;\n+}\n+\n+/*\n+ * Test for a segment with id shmId; see comment at IpcMemoryState.\n+ *\n+ * If the segment exists, we'll attempt to attach to it, using attachAt\n+ * if that's not NULL (but it's best to pass NULL if possible).\n+ *\n+ * *addr is set to the segment memory address if we attached to it, else NULL.\n+ */\n+static IpcMemoryState\n+PGSharedMemoryAttach(IpcMemoryId shmId,\n+\t\t\t\t\t void *attachAt,\n+\t\t\t\t\t PGShmemHeader **addr)\n+{\n+\tmem_entry_name_port_t MachPort;\n+\tkern_return_t kr;\n+\tstruct stat statbuf;\n+\tPGShmemHeader *hdr;\n+\n+\t*addr = NULL;\n+\n+\t/*\n+\t * First, try to stat the shm segment ID, to see if it exists at all.\n+\t */\n+\tmach_msg_type_number_t pCount = VM_PAGE_INFO_BASIC_COUNT;\n+\tvm_page_info_basic_data_t pInfo;\n+\tkr = mach_vm_page_info(mach_task_self(), shmId, VM_PAGE_INFO_BASIC, (vm_page_info_t)&pInfo, &pCount);\n+\tif (kr != KERN_SUCCESS)\n+\t\telog(LOG, \"mach_vm_page_info(target_task=%d, address=%d, flavor=0x%x, info=%p, infoCnt=%d) failed: %d\",\n+\t\t\t mach_task_self(), shmId, VM_PAGE_INFO_BASIC, &pInfo, pCount, 
kr);\n+\n+\tif (pInfo.disposition & (VM_PAGE_QUERY_PAGE_PRESENT | VM_PAGE_QUERY_PAGE_REF | VM_PAGE_QUERY_PAGE_DIRTY))\n+\t\treturn SHMSTATE_ATTACHED;\n+\n+\tif (kr != KERN_SUCCESS)\n+\t{\n+\t\t/*\n+\t\t * EINVAL actually has multiple possible causes documented in the\n+\t\t * shmctl man page, but we assume it must mean the segment no longer\n+\t\t * exists.\n+\t\t */\n+\t\tif (kr == KERN_INVALID_ADDRESS)\n+\t\t\treturn SHMSTATE_ENOENT;\n+\n+\t\t/*\n+\t\t * EACCES implies we have no read permission, which means it is not a\n+\t\t * Postgres shmem segment (or at least, not one that is relevant to\n+\t\t * our data directory).\n+\t\t */\n+\t\tif (kr == KERN_PROTECTION_FAILURE)\n+\t\t\treturn SHMSTATE_FOREIGN;\n+\n+\t\t/*\n+\t\t * Some Linux kernel versions (in fact, all of them as of July 2007)\n+\t\t * sometimes return EIDRM when EINVAL is correct. The Linux kernel\n+\t\t * actually does not have any internal state that would justify\n+\t\t * returning EIDRM, so we can get away with assuming that EIDRM is\n+\t\t * equivalent to EINVAL on that platform.\n+\t\t */\n+\n+\t\t/*\n+\t\t * Otherwise, we had better assume that the segment is in use. 
The\n+\t\t * only likely case is (non-Linux, assumed spec-compliant) EIDRM,\n+\t\t * which implies that the segment has been IPC_RMID'd but there are\n+\t\t * still processes attached to it.\n+\t\t */\n+\t\treturn SHMSTATE_ANALYSIS_FAILURE;\n+\t}\n+\n+\t/*\n+\t * Try to attach to the segment and see if it matches our data directory.\n+\t * This avoids any risk of duplicate-shmem-key conflicts on machines that\n+\t * are running several postmasters under the same userid.\n+\t *\n+\t * (When we're called from PGSharedMemoryCreate, this stat call is\n+\t * duplicative; but since this isn't a high-traffic case it's not worth\n+\t * trying to optimize.)\n+\t */\n+\tif (stat(DataDir, &statbuf) < 0)\n+\t\treturn SHMSTATE_ANALYSIS_FAILURE;\t/* can't stat; be conservative */\n+\n+\tMachPort = DatumGetInt32(shmId);\n+\tkr = mach_vm_map(mach_task_self(),\n+\t\t\t\t\t attachAt,\n+\t\t\t\t\t mo_size,\n+\t\t\t\t\t 0,\n+\t\t\t\t\t VM_FLAGS_ANYWHERE,\n+\t\t\t\t\t MachPort,\n+\t\t\t\t\t 0,\n+\t\t\t\t\t FALSE,\n+\t\t\t\t\t VM_PROT_DEFAULT,\n+\t\t\t\t\t VM_PROT_DEFAULT,\n+\t\t\t\t\t VM_INHERIT_NONE);\n+\n+\tif (kr != KERN_SUCCESS)\n+\t{\n+\t\t/*\n+\t\t * Attachment failed. The cases we're interested in are the same as\n+\t\t * for the shmctl() call above. 
In particular, note that the owning\n+\t\t * postmaster could have terminated and removed the segment between\n+\t\t * shmctl() and shmat().\n+\t\t *\n+\t\t * If attachAt isn't NULL, it's possible that EINVAL reflects a\n+\t\t * problem with that address not a vanished segment, so it's best to\n+\t\t * pass NULL when probing for conflicting segments.\n+\t\t */\n+\t\telog(LOG,\n+\t\t\t \"mach_vm_map(\"\n+\t\t\t \"target_task=%d, \"\n+\t\t\t \"address=%p, \"\n+\t\t\t \"size=%llu, \"\n+\t\t\t \"mask=%d, \"\n+\t\t\t \"flags=0x%x, \"\n+\t\t\t \"object=%d, \"\n+\t\t\t \"offset=%d, \"\n+\t\t\t \"copy=%d, \"\n+\t\t\t \"cur_protection=0x%x, \"\n+\t\t\t \"max_protection=0x%x, \"\n+\t\t\t \"inheritance=0x%x) failed: \"\n+\t\t\t \"%d\",\n+\t\t\t mach_task_self(),\n+\t\t\t attachAt,\n+\t\t\t (mach_vm_size_t)sizeof(PGShmemHeader),\n+\t\t\t 0,\n+\t\t\t VM_FLAGS_ANYWHERE,\n+\t\t\t MachPort,\n+\t\t\t 0,\n+\t\t\t FALSE,\n+\t\t\t VM_PROT_DEFAULT,\n+\t\t\t VM_PROT_DEFAULT,\n+\t\t\t VM_INHERIT_NONE,\n+\t\t\t kr);\n+\n+\t\tif (kr == KERN_INVALID_ADDRESS)\n+\t\t\treturn SHMSTATE_ENOENT; /* segment disappeared */\n+\t\tif (kr == KERN_PROTECTION_FAILURE)\n+\t\t\treturn SHMSTATE_FOREIGN;\t/* must be non-Postgres */\n+\t\t/* Otherwise, be conservative. */\n+\t\treturn SHMSTATE_ANALYSIS_FAILURE;\n+\t}\n+\thdr = (void *)attachAt;\n+\t*addr = hdr;\n+\n+\tif (hdr->magic != PGShmemMagic ||\n+\t\thdr->device != statbuf.st_dev ||\n+\t\thdr->inode != statbuf.st_ino)\n+\t{\n+\t\t/*\n+\t\t * It's either not a Postgres segment, or not one for my data\n+\t\t * directory.\n+\t\t */\n+\t\treturn SHMSTATE_FOREIGN;\n+\t}\n+\n+\t/*\n+\t * It does match our data directory, so now test whether any processes are\n+\t * still attached to it. (We are, now, but the shm_nattch result is from\n+\t * before we attached to it.)\n+\t */\n+\treturn pInfo.ref_count == 0 ? 
SHMSTATE_UNATTACHED : SHMSTATE_ATTACHED;\n+}\n+\n+/*\n+ * Creates an anonymous mmap()ed shared memory segment.\n+ *\n+ * Pass the requested size in *size. This function will modify *size to the\n+ * actual size of the allocation, if it ends up allocating a segment that is\n+ * larger than requested.\n+ */\n+static void *\n+CreateAnonymousSegment(Size *size)\n+{\n+\tSize\t\tallocsize = *size;\n+\tvoid\t *ptr = MAP_FAILED;\n+\tint\t\t\tmmap_errno = 0;\n+\n+\tif (ptr == MAP_FAILED && huge_pages != HUGE_PAGES_ON)\n+\t{\n+\t\t/*\n+\t\t * Use the original size, not the rounded-up value, when falling back\n+\t\t * to non-huge pages.\n+\t\t */\n+\t\tallocsize = *size;\n+\t\tptr = mmap(NULL, allocsize, PROT_READ | PROT_WRITE,\n+\t\t\t\t PG_MMAP_FLAGS, -1, 0);\n+\t\tmmap_errno = errno;\n+\t}\n+\n+\tif (ptr == MAP_FAILED)\n+\t{\n+\t\terrno = mmap_errno;\n+\t\tereport(FATAL,\n+\t\t\t\t(errmsg(\"could not map anonymous shared memory: %m\"),\n+\t\t\t\t (mmap_errno == ENOMEM) ?\n+\t\t\t\t errhint(\"This error usually means that PostgreSQL's request \"\n+\t\t\t\t\t\t \"for a shared memory segment exceeded available memory, \"\n+\t\t\t\t\t\t \"swap space, or huge pages. To reduce the request size \"\n+\t\t\t\t\t\t \"(currently %zu bytes), reduce PostgreSQL's shared \"\n+\t\t\t\t\t\t \"memory usage, perhaps by reducing shared_buffers or \"\n+\t\t\t\t\t\t \"max_connections.\",\n+\t\t\t\t\t\t allocsize) : 0));\n+\t}\n+\n+\t*size = allocsize;\n+\treturn ptr;\n+}\n+\n+/*\n+ * AnonymousShmemDetach --- detach from an anonymous mmap'd block\n+ * (called as an on_shmem_exit callback, hence funny argument list)\n+ */\n+static void\n+AnonymousShmemDetach(int status, Datum arg)\n+{\n+\t/* Release anonymous shared memory block, if any. 
*/\n+\tif (AnonymousShmem != NULL)\n+\t{\n+\t\tif (munmap(AnonymousShmem, AnonymousShmemSize) < 0)\n+\t\t\telog(LOG, \"munmap(%p, %zu) failed: %m\",\n+\t\t\t\t AnonymousShmem, AnonymousShmemSize);\n+\t\tAnonymousShmem = NULL;\n+\t}\n+}\n+\n+/*\n+ * PGSharedMemoryCreate\n+ *\n+ * Create a shared memory segment of the given size and initialize its\n+ * standard header. Also, register an on_shmem_exit callback to release\n+ * the storage.\n+ *\n+ * Dead Postgres segments pertinent to this DataDir are recycled if found, but\n+ * we do not fail upon collision with foreign shmem segments. The idea here\n+ * is to detect and re-use keys that may have been assigned by a crashed\n+ * postmaster or backend.\n+ */\n+PGShmemHeader *\n+PGSharedMemoryCreate(Size size,\n+\t\t\t\t\t PGShmemHeader **shim)\n+{\n+\tkern_return_t kr;\n+\tIpcMemoryKey NextShmemSegID;\n+\tmach_vm_address_t *memAddress;\n+\tPGShmemHeader *hdr;\n+\tstruct stat statbuf;\n+\tSize\t\tsysvsize;\n+\n+\t/*\n+\t * We use the data directory's ID info (inode and device numbers) to\n+\t * positively identify shmem segments associated with this data dir, and\n+\t * also as seeds for searching for a free shmem key.\n+\t */\n+\tif (stat(DataDir, &statbuf) < 0)\n+\t\tereport(FATAL,\n+\t\t\t\t(errcode_for_file_access(),\n+\t\t\t\t errmsg(\"could not stat data directory \\\"%s\\\": %m\",\n+\t\t\t\t\t\tDataDir)));\n+\n+\t/* Room for a header? */\n+\tAssert(size > MAXALIGN(sizeof(PGShmemHeader)));\n+\n+\tif (shared_memory_type == SHMEM_TYPE_MMAP)\n+\t{\n+\t\tAnonymousShmem = CreateAnonymousSegment(&size);\n+\t\tAnonymousShmemSize = size;\n+\n+\t\t/* Register on-exit routine to unmap the anonymous segment */\n+\t\ton_shmem_exit(AnonymousShmemDetach, (Datum) 0);\n+\n+\t\t/* Now we need only allocate a minimal-sized SysV shmem block. */\n+\t\tsysvsize = sizeof(PGShmemHeader);\n+\t}\n+\telse\n+\t\tsysvsize = size;\n+\n+\t/*\n+\t * Loop till we find a free IPC key. 
Trust CreateDataDirLockFile() to\n+\t * ensure no more than one postmaster per data directory can enter this\n+\t * loop simultaneously. (CreateDataDirLockFile() does not entirely ensure\n+\t * that, but prefer fixing it over coping here.)\n+\t */\n+\tNextShmemSegID = statbuf.st_ino;\n+\n+\tfor (;;)\n+\t{\n+\t\tPGShmemHeader *oldhdr;\n+\t\tIpcMemoryState state;\n+\n+\t\t/* Try to create new segment */\n+\t\tmemAddress = InternalIpcMemoryCreate(NextShmemSegID, sysvsize);\n+\t\tif (memAddress)\n+\t\t\tbreak;\t\t\t\t/* successful create and attach */\n+\n+\t\t/* Check shared memory and possibly remove and recreate */\n+\n+\t\t/*\n+\t\t * shmget() failure is typically EACCES, hence SHMSTATE_FOREIGN.\n+\t\t * ENOENT, a narrow possibility, implies SHMSTATE_ENOENT, but one can\n+\t\t * safely treat SHMSTATE_ENOENT like SHMSTATE_FOREIGN.\n+\t\t */\n+\t\tmach_msg_type_number_t pCount = VM_PAGE_INFO_BASIC_COUNT;\n+\t\tvm_page_info_basic_data_t pInfo;\n+\t\tkr = mach_vm_page_info(mach_task_self(), *memAddress, VM_PAGE_INFO_BASIC, (vm_page_info_t)&pInfo, &pCount);\n+\t\tif (kr != KERN_SUCCESS)\n+\t\t\telog(LOG, \"mach_vm_page_info(target_task=%d, address=%d, flavor=0x%x, info=%p, infoCnt=%d) failed: %d\",\n+\t\t\t\t mach_task_self(), NextShmemSegID, VM_PAGE_INFO_BASIC, &pInfo, pCount, kr);\n+\n+\t\tif (kr != KERN_SUCCESS)\n+\t\t{\n+\t\t\toldhdr = NULL;\n+\t\t\tstate = SHMSTATE_FOREIGN;\n+\t\t}\n+\t\telse\n+\t\t\tstate = PGSharedMemoryAttach(pInfo.object_id, NULL, &oldhdr);\n+\n+\t\tswitch (state)\n+\t\t{\n+\t\t\tcase SHMSTATE_ANALYSIS_FAILURE:\n+\t\t\tcase SHMSTATE_ATTACHED:\n+\t\t\t\tereport(FATAL,\n+\t\t\t\t\t(errcode(ERRCODE_LOCK_FILE_EXISTS),\n+\t\t\t\t\t errmsg(\"pre-existing shared memory block (key %lu, ID %lu) is still in use\",\n+\t\t\t\t\t\t(unsigned long) NextShmemSegID,\n+\t\t\t\t\t\t(unsigned long) pInfo.object_id),\n+\t\t\t\t\t errhint(\"Terminate any old server processes associated with data directory 
\\\"%s\\\".\",\n+\t\t\t\t\t\tDataDir)));\n+\t\t\t\tbreak;\n+\t\t\tcase SHMSTATE_ENOENT:\n+\n+\t\t\t\t/*\n+\t\t\t\t * To our surprise, some other process deleted since our last\n+\t\t\t\t * InternalIpcMemoryCreate(). Moments earlier, we would have\n+\t\t\t\t * seen SHMSTATE_FOREIGN. Try that same ID again.\n+\t\t\t\t */\n+\t\t\t\telog(LOG,\n+\t\t\t\t\t \"shared memory block (key %lu, ID %lu) deleted during startup\",\n+\t\t\t\t\t (unsigned long) NextShmemSegID,\n+\t\t\t\t\t (unsigned long) pInfo.object_id);\n+\t\t\t\tbreak;\n+\t\t\tcase SHMSTATE_FOREIGN:\n+\t\t\t\tNextShmemSegID++;\n+\t\t\t\tbreak;\n+\t\t\tcase SHMSTATE_UNATTACHED:\n+\n+\t\t\t\t/*\n+\t\t\t\t * The segment pertains to DataDir, and every process that had\n+\t\t\t\t * used it has died or detached. Zap it, if possible, and any\n+\t\t\t\t * associated dynamic shared memory segments, as well. This\n+\t\t\t\t * shouldn't fail, but if it does, assume the segment belongs\n+\t\t\t\t * to someone else after all, and try the next candidate.\n+\t\t\t\t * Otherwise, try again to create the segment. That may fail\n+\t\t\t\t * if some other process creates the same shmem key before we\n+\t\t\t\t * do, in which case we'll try the next key.\n+\t\t\t\t */\n+\t\t\t\tif (oldhdr->dsm_control != 0)\n+\t\t\t\t\tdsm_cleanup_using_control_segment(oldhdr->dsm_control);\n+\t\t\t\tif (mach_port_deallocate(mach_task_self(), pInfo.object_id) != KERN_SUCCESS)\n+\t\t\t\t\tNextShmemSegID++;\n+\t\t\t\tbreak;\n+\t\t}\n+\n+\t\tif (oldhdr) {\n+\t\t\tkr = mach_vm_deallocate(mach_task_self(), (mach_vm_address_t)oldhdr, mo_size);\n+\t\t\tif (kr != KERN_SUCCESS)\n+\t\t\t\telog(LOG, \"mach_vm_deallocate(target=%d, address=%p, size=%llu) failed: %d\",\n+\t\t\t\t\t mach_task_self(), oldhdr, mo_size, kr);\n+\t\t}\n+\t}\n+\n+\t/* Initialize new segment. 
*/\n+\thdr = (PGShmemHeader *) memAddress;\n+\thdr->creatorPID = getpid();\n+\thdr->magic = PGShmemMagic;\n+\thdr->dsm_control = 0;\n+\n+\t/* Fill in the data directory ID info, too */\n+\thdr->device = statbuf.st_dev;\n+\thdr->inode = statbuf.st_ino;\n+\n+\t/*\n+\t * Initialize space allocation status for segment.\n+\t */\n+\thdr->totalsize = size;\n+\thdr->freeoffset = MAXALIGN(sizeof(PGShmemHeader));\n+\t*shim = hdr;\n+\n+\t/* Save info for possible future use */\n+\tUsedShmemSegAddr = memAddress;\n+\tUsedShmemSegID = (unsigned long) NextShmemSegID;\n+\n+\t/*\n+\t * If AnonymousShmem is NULL here, then we're not using anonymous shared\n+\t * memory, and should return a pointer to the System V shared memory\n+\t * block. Otherwise, the System V shared memory block is only a shim, and\n+\t * we must return a pointer to the real block.\n+\t */\n+\tif (AnonymousShmem == NULL)\n+\t\treturn hdr;\n+\tmemcpy(AnonymousShmem, hdr, sizeof(PGShmemHeader));\n+\treturn (PGShmemHeader *) AnonymousShmem;\n+}\n+\n+#ifdef EXEC_BACKEND\n+\n+/*\n+ * PGSharedMemoryReAttach\n+ *\n+ * This is called during startup of a postmaster child process to re-attach to\n+ * an already existing shared memory segment. This is needed only in the\n+ * EXEC_BACKEND case; otherwise postmaster children inherit the shared memory\n+ * segment attachment via fork().\n+ *\n+ * UsedShmemSegID and UsedShmemSegAddr are implicit parameters to this\n+ * routine. 
The caller must have already restored them to the postmaster's\n+ * values.\n+ */\n+void\n+PGSharedMemoryReAttach(void)\n+{\n+\tIpcMemoryId shmid;\n+\tPGShmemHeader *hdr;\n+\tIpcMemoryState state;\n+\tvoid\t *origUsedShmemSegAddr = UsedShmemSegAddr;\n+\n+\tAssert(UsedShmemSegAddr != NULL);\n+\tAssert(IsUnderPostmaster);\n+\n+\telog(DEBUG3, \"attaching to %p\", UsedShmemSegAddr);\n+\tshmid = shmget(UsedShmemSegID, sizeof(PGShmemHeader), 0);\n+\tif (shmid < 0)\n+\t\tstate = SHMSTATE_FOREIGN;\n+\telse\n+\t\tstate = PGSharedMemoryAttach(shmid, UsedShmemSegAddr, &hdr);\n+\tif (state != SHMSTATE_ATTACHED)\n+\t\telog(FATAL, \"could not reattach to shared memory (key=%d, addr=%p): %m\",\n+\t\t\t (int) UsedShmemSegID, UsedShmemSegAddr);\n+\tif (hdr != origUsedShmemSegAddr)\n+\t\telog(FATAL, \"reattaching to shared memory returned unexpected address (got %p, expected %p)\",\n+\t\t\t hdr, origUsedShmemSegAddr);\n+\tdsm_set_control_handle(hdr->dsm_control);\n+\n+\tUsedShmemSegAddr = hdr;\t\t/* probably redundant */\n+}\n+\n+/*\n+ * PGSharedMemoryNoReAttach\n+ *\n+ * This is called during startup of a postmaster child process when we choose\n+ * *not* to re-attach to the existing shared memory segment. We must clean up\n+ * to leave things in the appropriate state. This is not used in the non\n+ * EXEC_BACKEND case, either.\n+ *\n+ * The child process startup logic might or might not call PGSharedMemoryDetach\n+ * after this; make sure that it will be a no-op if called.\n+ *\n+ * UsedShmemSegID and UsedShmemSegAddr are implicit parameters to this\n+ * routine. The caller must have already restored them to the postmaster's\n+ * values.\n+ */\n+void\n+PGSharedMemoryNoReAttach(void)\n+{\n+\tAssert(UsedShmemSegAddr != NULL);\n+\tAssert(IsUnderPostmaster);\n+\n+\t/* For cleanliness, reset UsedShmemSegAddr to show we're not attached. */\n+\tUsedShmemSegAddr = NULL;\n+\t/* And the same for UsedShmemSegID. 
*/\n+\tUsedShmemSegID = 0;\n+}\n+\n+#endif\t\t\t\t\t\t\t/* EXEC_BACKEND */\n+\n+/*\n+ * PGSharedMemoryDetach\n+ *\n+ * Detach from the shared memory segment, if still attached. This is not\n+ * intended to be called explicitly by the process that originally created the\n+ * segment (it will have on_shmem_exit callback(s) registered to do that).\n+ * Rather, this is for subprocesses that have inherited an attachment and want\n+ * to get rid of it.\n+ *\n+ * UsedShmemSegID and UsedShmemSegAddr are implicit parameters to this\n+ * routine, also AnonymousShmem and AnonymousShmemSize.\n+ */\n+void\n+PGSharedMemoryDetach(void)\n+{\n+\tkern_return_t kr;\n+\tif (UsedShmemSegAddr != NULL)\n+\t{\n+\t\tkr = mach_vm_deallocate(mach_task_self(), (mach_vm_address_t)UsedShmemSegAddr, mo_size);\n+\t\tif (kr != KERN_SUCCESS)\n+\t\t\telog(LOG, \"mach_vm_deallocate(target=%d, address=%p, size=%llu) failed: %d\",\n+\t\t\t\t mach_task_self(), UsedShmemSegAddr, mo_size, kr);\n+\t\tUsedShmemSegAddr = NULL;\n+\t}\n+\n+\tif (AnonymousShmem != NULL)\n+\t{\n+\t\tif (munmap(AnonymousShmem, AnonymousShmemSize) < 0)\n+\t\t\telog(LOG, \"munmap(%p, %zu) failed: %m\",\n+\t\t\t\t AnonymousShmem, AnonymousShmemSize);\n+\t\tAnonymousShmem = NULL;\n+\t}\n+}\ndiff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in\nindex de8f838e53..5105934d01 100644\n--- a/src/include/pg_config.h.in\n+++ b/src/include/pg_config.h.in\n@@ -878,6 +878,9 @@\n /* Define to 1 to build with LLVM based JIT support. (--with-llvm) */\n #undef USE_LLVM\n \n+/* Define to select Mach-style shared memory. */\n+#undef USE_MACH_SHARED_MEMORY\n+\n /* Define to select named POSIX semaphores. 
*/\n #undef USE_NAMED_POSIX_SEMAPHORES\n \ndiff --git a/src/include/postgres.h b/src/include/postgres.h\nindex c48f47e930..8044b3dff3 100644\n--- a/src/include/postgres.h\n+++ b/src/include/postgres.h\n@@ -555,6 +555,20 @@ typedef struct NullableDatum\n \n #define PointerGetDatum(X) ((Datum) (X))\n \n+/*\n+ * DatumGetAddress\n+ *\t\tReturns mach_vm_address_t value of a datum.\n+ */\n+\n+#define DatumGetAddress(X) ((mach_vm_address_t *) DatumGetPointer(X))\n+\n+/*\n+ * AddressGetDatum\n+ *\t\tReturns datum representation for a mach_vm_address_t.\n+ */\n+\n+#define AddressGetDatum(X) ((Datum) (X))\n+\n /*\n * DatumGetCString\n *\t\tReturns C string (null-terminated string) value of a datum.\n-- \n2.29.2\n\n\n\n",
"msg_date": "Sun, 22 Nov 2020 01:19:30 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> OSX implements sysv shmem support via a mach wrapper, however the mach\n> sysv shmem wrapper has some severe restrictions that prevent us from\n> allocating enough memory segments in some cases.\n> ...\n> In order to avoid hitting these limits we can bypass the wrapper layer\n> and just use mach directly.\n\nI wanted to review this, but it's impossible because the kernel calls\nyou're using have you've-got-to-be-kidding documentation like this:\n\nhttps://developer.apple.com/documentation/kernel/1402504-mach_vm_page_info?language=objc\n\nGoogle finds the source code too, but that's equally bereft of useful\ndocumentation. So I wonder where you obtained the information necessary\nto write this patch.\n\nI did notice however that mach_shmem.c seems to be 95% copy-and-paste from\nsysv_shmem.c, including a lot of comments that don't feel particularly\ntrue or relevant here. I wonder whether we need a separate file as\nopposed to a few #ifdef's around the kernel calls.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jan 2021 19:29:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > OSX implements sysv shmem support via a mach wrapper, however the mach\n> > sysv shmem wrapper has some severe restrictions that prevent us from\n> > allocating enough memory segments in some cases.\n> > ...\n> > In order to avoid hitting these limits we can bypass the wrapper layer\n> > and just use mach directly.\n>\n> I wanted to review this, but it's impossible because the kernel calls\n> you're using have you've-got-to-be-kidding documentation like this:\n>\n> https://developer.apple.com/documentation/kernel/1402504-mach_vm_page_info?language=objc\nYeah, the documentation situation with Apple tends to be some degree of terrible\nin general. Ultimately I was referencing the sysv shmem wrapper code\nitself a lot:\nhttps://github.com/apple/darwin-xnu/blob/master/bsd/kern/sysv_shm.c\n\nI also used the browser shmem implementations as references:\nhttps://github.com/chromium/chromium/blob/master/base/memory/platform_shared_memory_region_mac.cc\nhttps://github.com/mozilla/gecko-dev/blob/master/ipc/glue/SharedMemoryBasic_mach.mm\n>\n> Google finds the source code too, but that's equally bereft of useful\n> documentation. So I wonder where you obtained the information necessary\n> to write this patch.\nMostly trial and error while using the above references.\n>\n> I did notice however that mach_shmem.c seems to be 95% copy-and-paste from\n> sysv_shmem.c, including a lot of comments that don't feel particularly\n> true or relevant here. I wonder whether we need a separate file as\n> opposed to a few #ifdef's around the kernel calls.\nWell I basically started with copying sysv_shmem.c and then started\nreplacing the sysv\nshmem calls with their mach equivalents. 
I kept it as a separate file\nsince the mach\ncalls use different semantics and that's also how the other projects\nseem to separate\nout their mach shared memory handlers. I left a number of the original\ncomments since\nI wasn't entirely sure what best to replace them with.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jan 2021 19:12:58 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > OSX implements sysv shmem support via a mach wrapper, however the mach\n> > sysv shmem wrapper has some severe restrictions that prevent us from\n> > allocating enough memory segments in some cases.\n> > ...\n> > In order to avoid hitting these limits we can bypass the wrapper layer\n> > and just use mach directly.\n>\n> I wanted to review this, but it's impossible because the kernel calls\n> you're using have you've-got-to-be-kidding documentation like this:\n>\n> https://developer.apple.com/documentation/kernel/1402504-mach_vm_page_info?language=objc\nFYI there's a good chance I'm not using some of these info API's correctly,\nfrom my understanding due to mach semantics a number of the sanity checks\nwe are doing for sysv shmem are probably unnecessary when using mach\nAPI's due to the mach port task bindings(mach_task_self()) effectively\nensuring our maps are already task bound and won't conflict with other tasks.\n\nThe mach vm API for our use case does appear to be superior to the shmem API\non top of not being bound by the sysv wrapper limits due to it being\nnatively task\nbound as opposed to normal sysv shared memory maps which appears to be\nsystem wide global maps and thus requires lots of key conflict checks which\nshould be entirely unnecessary when using task bound mach vm maps.\n\nFor example PGSharedMemoryIsInUse() from my understanding should always\nreturn false for the mach vm version as it is already task scoped.\n\nI'm not 100% sure here as I'm not very familiar with all the ways postgres uses\nshared memory however. Is it true that postgresql shared memory is strictly used\nby child processes fork()'d from the parent postmaster? 
If that is the\ncase I think\nas long as we ensure all our map operations are bound to the parent postmaster\nmach_task_self() port then we should never conflict with another postmaster's\nprocess tree's memory maps.\n\nI tried to largely copy existing sysv shmem semantics in this first\nproof of concept as\nI'm largely unfamiliar with postgresql internals but I suspect we can\nsafely remove\nmuch of the map key conflict checking code entirely.\n\nHere's a few additional references I found which may be helpful.\n\nThis one is more oriented for kernel programming, however much of the info is\nrelevant to the userspace interface as well it would appear:\nhttps://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/vm/vm.html#//apple_ref/doc/uid/TP30000905-CH210-TPXREF109\n\nThis book gives an overview of the mach userspace interface as well, although\nI don't think it's a fully complete API reference, but it covers some\nof the basic use\ncases:\nhttps://flylib.com/books/en/3.126.1.89/1/\n\nApparently the hurd kernel uses a virtual memory interface that is based on\nthe mach kernel, and it has more detailed API docs, note that these tend to\nnot have the \"mach_\" prefix but appear to largely function the same:\nhttps://www.gnu.org/software/hurd/gnumach-doc/Virtual-Memory-Interface.html\n\nThere is additional documentation from the OSFMK kernel(which was combined with\nXNU by apple from my understanding) here related to the virtual memory\ninterface,\nlike the herd docs these tend to not have the \"mach_\" prefix:\nhttp://web.mit.edu/darwin/src/modules/xnu/osfmk/man/\n>\n> Google finds the source code too, but that's equally bereft of useful\n> documentation. So I wonder where you obtained the information necessary\n> to write this patch.\n>\n> I did notice however that mach_shmem.c seems to be 95% copy-and-paste from\n> sysv_shmem.c, including a lot of comments that don't feel particularly\n> true or relevant here. 
I wonder whether we need a separate file as\n> opposed to a few #ifdef's around the kernel calls.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jan 2021 22:20:19 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> from my understanding due to mach semantics a number of the sanity checks\n> we are doing for sysv shmem are probably unnecessary when using mach\n> API's due to the mach port task bindings(mach_task_self()) effectively\n> ensuring our maps are already task bound and won't conflict with other tasks.\n\nReally? If so, this whole patch is pretty much dead in the water,\nbecause the fact that sysv shmem is globally accessible is exactly\nwhy we use it (well, that and the fact that you can find out how many\nprocesses are attached to it). It's been a long time since we cared\nabout sysv shmem as a source of shared storage. What we really use\nit for is as a form of mutual exclusion, i.e. being able to detect\nwhether any live children remain from a dead postmaster. That is,\nPGSharedMemoryIsInUse is not some minor ancillary check, it's the main\nreason this module exists at all. If POSIX-style mmap'd shmem had the\nsame capability we'd likely have dropped sysv altogether long ago.\n\nI've occasionally wondered if we should take another look at file locking\nas a mechanism for ensuring only one postmaster+children process group\ncan access a data directory. Back in the day it was too untrustworthy,\nbut maybe that has changed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jan 2021 01:12:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 11:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > from my understanding due to mach semantics a number of the sanity checks\n> > we are doing for sysv shmem are probably unnecessary when using mach\n> > API's due to the mach port task bindings(mach_task_self()) effectively\n> > ensuring our maps are already task bound and won't conflict with other tasks.\n>\n> Really? If so, this whole patch is pretty much dead in the water,\n> because the fact that sysv shmem is globally accessible is exactly\n> why we use it (well, that and the fact that you can find out how many\n> processes are attached to it). It's been a long time since we cared\n> about sysv shmem as a source of shared storage. What we really use\n> it for is as a form of mutual exclusion, i.e. being able to detect\n> whether any live children remain from a dead postmaster. That is,\n> PGSharedMemoryIsInUse is not some minor ancillary check, it's the main\n> reason this module exists at all. If POSIX-style mmap'd shmem had the\n> same capability we'd likely have dropped sysv altogether long ago.\nOh, I had figured the mutual exclusion check was just to ensure that\none didn't accidentally overlap an existing shared memory map.\n\nIf it's just an additional sanity check maybe we should make it non-fatal for\nENOSPC returns as it is impossible to increase the sysv shm segment limits\non OSX from my understanding.\n\nBut there should be a way to do this with mach, I see a few options, the task\nbound map behavior from my understanding is what you get when using\nmach_task_self(), however you can also look up a task using pid_for_task()\nfrom my understanding(there may be security checks here that could cause\nissues). It also may make sense to write the postmaster task ID and use that\ninstead of the PID. 
I'll try and rework this.\n\nBy the way is PGSharedMemoryIsInUse only using the pid(id2) for looking up\nthe shmem segment key or is something else needed? I noticed there is an\nunused id1 variable as well that gets passed to it.\n>\n> I've occasionally wondered if we should take another look at file locking\n> as a mechanism for ensuring only one postmaster+children process group\n> can access a data directory. Back in the day it was too untrustworthy,\n> but maybe that has changed.\nYeah, there's probably some ugly edge cases there still.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jan 2021 04:50:04 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "On 2020-11-22 09:19, James Hilliard wrote:\n> OSX implements sysv shmem support via a mach wrapper, however the mach\n> sysv shmem wrapper has some severe restrictions that prevent us from\n> allocating enough memory segments in some cases.\n> \n> These limits appear to be due to the way the wrapper itself is\n> implemented and not mach.\n> \n> For example when running a test suite that spins up many postgres\n> instances I commonly see this issue:\n> DETAIL: Failed system call was shmget(key=5432002, size=56, 03600).\n\nThis is the first I've heard in years of shared memory limits being a \nproblem on macOS. What settings or what levels of concurrency do you \nuse to provoke these errors?\n\n\n\n",
"msg_date": "Wed, 20 Jan 2021 07:10:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This is the first I've heard in years of shared memory limits being a \n> problem on macOS. What settings or what levels of concurrency do you \n> use to provoke these errors?\n\nI suppose it wouldn't be too hard to run into shmmni with aggressive\nparallel testing; the default is just 32. But AFAIK it still works\nto set the shm limits via /etc/sysctl.conf. (I wonder if you have\nto disable SIP to mess with that, though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jan 2021 01:30:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 9:19 PM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n> In order to avoid hitting these limits we can bypass the wrapper layer\n> and just use mach directly.\n\nFWIW I looked into using mach_vm_allocate() years ago because I\nwanted to be able to use its VM_FLAGS_SUPERPAGE_SIZE_2MB flag to\nimplement PostgreSQL's huge_pages option for the main shared memory\narea, but I ran into some difficulty getting such mapping to be\ninherited by fork() children. There may be some way to get past that,\nbut it seems the current crop of Apple Silicon has only 4KB and 16KB\npages and I don't know if that's interesting enough. On the other\nhand, I just saw a claim that \"running an arm64 Debian VM on Apple M1,\nusing a 16K page size instead of 4K reduces this kernel build time by\n*16%*\".\n\n[1] https://twitter.com/AtTheHackOfDawn/status/1333895115174187011\n\n\n",
"msg_date": "Thu, 21 Jan 2021 15:47:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
},
{
"msg_contents": "On 1/19/21 6:50 AM, James Hilliard wrote:\n> On Mon, Jan 18, 2021 at 11:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> James Hilliard <james.hilliard1@gmail.com> writes:\n>>> from my understanding due to mach semantics a number of the sanity checks\n>>> we are doing for sysv shmem are probably unnecessary when using mach\n>>> API's due to the mach port task bindings(mach_task_self()) effectively\n>>> ensuring our maps are already task bound and won't conflict with other tasks.\n>>\n>> Really? If so, this whole patch is pretty much dead in the water,\n>> because the fact that sysv shmem is globally accessible is exactly\n>> why we use it (well, that and the fact that you can find out how many\n>> processes are attached to it). It's been a long time since we cared\n>> about sysv shmem as a source of shared storage. What we really use\n>> it for is as a form of mutual exclusion, i.e. being able to detect\n>> whether any live children remain from a dead postmaster. That is,\n>> PGSharedMemoryIsInUse is not some minor ancillary check, it's the main\n>> reason this module exists at all. If POSIX-style mmap'd shmem had the\n>> same capability we'd likely have dropped sysv altogether long ago.\n >\n> Oh, I had figured the mutual exclusion check was just to ensure that\n> one didn't accidentally overlap an existing shared memory map.\n\nAt the very least, this patch does not seem to be ready for review, so I \nhave marked it Waiting on Author.\n\nSince it appears the approach needs to be reconsidered, my \nrecommendation is to mark it Returned with Feedback at the end of the CF \nso I'll do that unless there are objections.\n\nThanks,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 25 Mar 2021 08:24:36 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH 1/1] Initial mach based shared memory support."
}
]
[
{
"msg_contents": "Hi,\n\n\nI have a question about parallel plans. I also posted it on the general list but perhaps it's a question for hackers. Here is my test case :\n\n\nselect version();\nversion\n\n\n----------------------------------------------------------------------------------------------------------------------------------\nPostgreSQL 13.1 (Ubuntu 13.1-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\n\n\ncreate unlogged table drop_me as select generate_series(1,7e7) n1;\nSELECT 70000000\n\n\nexplain\nselect count(*)\nfrom (select\nn1\nfrom drop_me\n) s;\n\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------\nFinalize Aggregate (cost=675319.13..675319.14 rows=1 width=8)\n-> Gather (cost=675318.92..675319.13 rows=2 width=8)\nWorkers Planned: 2\n-> Partial Aggregate (cost=674318.92..674318.93 rows=1 width=8)\n-> Parallel Seq Scan on drop_me (cost=0.00..601402.13 rows=29166713 width=0)\nJIT:\nFunctions: 4\nOptions: Inlining true, Optimization true, Expressions true, Deforming true\n\n\nParallel plan, 1s\n\n\nexplain\nselect count(*)\nfrom (select\nn1\nfrom drop_me\nunion all\nselect\nn1\nfrom drop_me) ua;\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------\nFinalize Aggregate (cost=1640315.00..1640315.01 rows=1 width=8)\n-> Gather (cost=1640314.96..1640314.99 rows=2 width=8)\nWorkers Planned: 2\n-> Partial Aggregate (cost=1640304.96..1640304.97 rows=1 width=8)\n-> Parallel Append (cost=0.00..1494471.40 rows=58333426 width=0)\n-> Parallel Seq Scan on drop_me (cost=0.00..601402.13 rows=29166713 width=0)\n-> Parallel Seq Scan on drop_me drop_me_1 (cost=0.00..601402.13 rows=29166713 width=0)\nJIT:\nFunctions: 6\nOptions: Inlining true, Optimization true, Expressions true, Deforming true\n\n\nParallel plan, 2s2\n\n\nexplain\nselect count(*)\nfrom (select\nn1\nfrom drop_me\nunion 
all\nvalues(1)) ua;\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------\nAggregate (cost=2934739.24..2934739.25 rows=1 width=8)\n-> Append (cost=0.00..2059737.83 rows=70000113 width=32)\n-> Seq Scan on drop_me (cost=0.00..1009736.12 rows=70000112 width=6)\n-> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=32)\n-> Result (cost=0.00..0.01 rows=1 width=4)\nJIT:\nFunctions: 4\nOptions: Inlining true, Optimization true, Expressions true, Deforming true\n\n\nNo parallel plan, 2s6\n\n\nI read the documentation but I don't get the reason of the \"noparallel\" seq scan of drop_me in the last case ?\n\n\nBest regards,\n\nPhil",
"msg_date": "Sun, 22 Nov 2020 12:50:55 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 11:51 PM Phil Florent <philflorent@hotmail.com> wrote:\n>\n>\n> Hi,\n>\n>\n> I have a question about parallel plans. I also posted it on the general list but perhaps it's a question for hackers. Here is my test case :\n>\n>\n> explain\n> select count(*)\n> from (select\n> n1\n> from drop_me\n> union all\n> values(1)) ua;\n>\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Aggregate (cost=2934739.24..2934739.25 rows=1 width=8)\n> -> Append (cost=0.00..2059737.83 rows=70000113 width=32)\n> -> Seq Scan on drop_me (cost=0.00..1009736.12 rows=70000112 width=6)\n> -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=32)\n> -> Result (cost=0.00..0.01 rows=1 width=4)\n> JIT:\n> Functions: 4\n> Options: Inlining true, Optimization true, Expressions true, Deforming true\n>\n>\n> No parallel plan, 2s6\n>\n>\n> I read the documentation but I don't get the reason of the \"noparallel\" seq scan of drop_me in the last case ?\n>\n\nWithout debugging this, it looks to me that the UNION type resolution\nisn't working as well as it possibly could in this case, for the\ngeneration of a parallel plan. 
I found that with a minor tweak to your\nSQL, either for the table creation or query, it will produce a\nparallel plan.\n\nNoting that currently you're creating the drop_me table with a\n\"numeric\" column, you can either:\n\n(1) Change the table creation\n\nFROM:\n create unlogged table drop_me as select generate_series(1,7e7) n1;\nTO:\n create unlogged table drop_me as select generate_series(1,7e7)::int n1;\n\n\nOR\n\n\n(2) Change the query\n\nFROM:\n explain\n select count(*)\n from (select\n n1\n from drop_me\n union all\n values(1)) ua;\n\nTO:\n\n explain\n select count(*)\n from (select\n n1\n from drop_me\n union all\n values(1::numeric)) ua;\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=821152.71..821152.72 rows=1 width=8)\n -> Gather (cost=821152.50..821152.71 rows=2 width=8)\n Workers Planned: 2\n -> Partial Aggregate (cost=820152.50..820152.51 rows=1 width=8)\n -> Parallel Append (cost=0.00..747235.71 rows=29166714 width=0)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Parallel Seq Scan on drop_me\n(cost=0.00..601402.13 rows=29166713 width=0)\n(7 rows)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 23 Nov 2020 16:04:45 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "Hi Greg,\n\nThe implicit conversion was the cause of the non parallel plan, thanks for the explanation and the workarounds. It can cause a huge difference in terms of performance, I will give the information to our developers.\n\nRegards,\n\nPhil\n\n\n\n________________________________\nDe : Greg Nancarrow <gregn4422@gmail.com>\nEnvoyé : lundi 23 novembre 2020 06:04\nÀ : Phil Florent <philflorent@hotmail.com>\nCc : pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nObjet : Re: Parallel plans and \"union all\" subquery\n\nOn Sun, Nov 22, 2020 at 11:51 PM Phil Florent <philflorent@hotmail.com> wrote:\n>\n>\n> Hi,\n>\n>\n> I have a question about parallel plans. I also posted it on the general list but perhaps it's a question for hackers. Here is my test case :\n>\n>\n> explain\n> select count(*)\n> from (select\n> n1\n> from drop_me\n> union all\n> values(1)) ua;\n>\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Aggregate (cost=2934739.24..2934739.25 rows=1 width=8)\n> -> Append (cost=0.00..2059737.83 rows=70000113 width=32)\n> -> Seq Scan on drop_me (cost=0.00..1009736.12 rows=70000112 width=6)\n> -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=32)\n> -> Result (cost=0.00..0.01 rows=1 width=4)\n> JIT:\n> Functions: 4\n> Options: Inlining true, Optimization true, Expressions true, Deforming true\n>\n>\n> No parallel plan, 2s6\n>\n>\n> I read the documentation but I don't get the reason of the \"noparallel\" seq scan of drop_me in the last case ?\n>\n\nWithout debugging this, it looks to me that the UNION type resolution\nisn't working as well as it possibly could in this case, for the\ngeneration of a parallel plan. 
I found that with a minor tweak to your\nSQL, either for the table creation or query, it will produce a\nparallel plan.\n\nNoting that currently you're creating the drop_me table with a\n\"numeric\" column, you can either:\n\n(1) Change the table creation\n\nFROM:\n create unlogged table drop_me as select generate_series(1,7e7) n1;\nTO:\n create unlogged table drop_me as select generate_series(1,7e7)::int n1;\n\n\nOR\n\n\n(2) Change the query\n\nFROM:\n explain\n select count(*)\n from (select\n n1\n from drop_me\n union all\n values(1)) ua;\n\nTO:\n\n explain\n select count(*)\n from (select\n n1\n from drop_me\n union all\n values(1::numeric)) ua;\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=821152.71..821152.72 rows=1 width=8)\n -> Gather (cost=821152.50..821152.71 rows=2 width=8)\n Workers Planned: 2\n -> Partial Aggregate (cost=820152.50..820152.51 rows=1 width=8)\n -> Parallel Append (cost=0.00..747235.71 rows=29166714 width=0)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Parallel Seq Scan on drop_me\n(cost=0.00..601402.13 rows=29166713 width=0)\n(7 rows)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia",
"msg_date": "Mon, 23 Nov 2020 12:17:17 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On 23-11-2020 13:17, Phil Florent wrote:\n> Hi Greg,\n> \n> The implicit conversion was the cause of the non parallel plan, thanks \n> for the explanation and the workarounds. It can cause a huge difference \n> in terms of performance, I will give the information to our developers.\n> \n> Regards,\n> \n> Phil\n> \n> \n> \n> ------------------------------------------------------------------------\n> *De :* Greg Nancarrow <gregn4422@gmail.com>\n> *Envoyé :* lundi 23 novembre 2020 06:04\n> *À :* Phil Florent <philflorent@hotmail.com>\n> *Cc :* pgsql-hackers@lists.postgresql.org \n> <pgsql-hackers@lists.postgresql.org>\n> *Objet :* Re: Parallel plans and \"union all\" subquery\n> On Sun, Nov 22, 2020 at 11:51 PM Phil Florent <philflorent@hotmail.com> \n> wrote:\n>>\n>>\n>> Hi,\n>>\n>>\n>> I have a question about parallel plans. I also posted it on the general list but perhaps it's a question for hackers. Here is my test case :\n>>\n>>\n>> explain\n>> select count(*)\n>> from (select\n>> n1\n>> from drop_me\n>> union all\n>> values(1)) ua;\n>>\n>>\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------\n>> Aggregate (cost=2934739.24..2934739.25 rows=1 width=8)\n>> -> Append (cost=0.00..2059737.83 rows=70000113 width=32)\n>> -> Seq Scan on drop_me (cost=0.00..1009736.12 rows=70000112 width=6)\n>> -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=32)\n>> -> Result (cost=0.00..0.01 rows=1 width=4)\n>> JIT:\n>> Functions: 4\n>> Options: Inlining true, Optimization true, Expressions true, Deforming true\n>>\n>>\n>> No parallel plan, 2s6\n>>\n>>\n>> I read the documentation but I don't get the reason of the \"noparallel\" seq scan of drop_me in the last case ?\n>>\n> \n> Without debugging this, it looks to me that the UNION type resolution\n> isn't working as well as it possibly could in this case, for the\n> generation of a parallel plan. 
I found that with a minor tweak to your\n> SQL, either for the table creation or query, it will produce a\n> parallel plan.\n> \n> Noting that currently you're creating the drop_me table with a\n> \"numeric\" column, you can either:\n> \n> (1) Change the table creation\n> \n> FROM:\n> create unlogged table drop_me as select generate_series(1,7e7) n1;\n> TO:\n> create unlogged table drop_me as select generate_series(1,7e7)::int n1;\n> \n> \n> OR\n> \n> \n> (2) Change the query\n> \n> FROM:\n> explain\n> select count(*)\n> from (select\n> n1\n> from drop_me\n> union all\n> values(1)) ua;\n> \n> TO:\n> \n> explain\n> select count(*)\n> from (select\n> n1\n> from drop_me\n> union all\n> values(1::numeric)) ua;\n> \n> \n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------\n> Finalize Aggregate (cost=821152.71..821152.72 rows=1 width=8)\n> -> Gather (cost=821152.50..821152.71 rows=2 width=8)\n> Workers Planned: 2\n> -> Partial Aggregate (cost=820152.50..820152.51 rows=1 width=8)\n> -> Parallel Append (cost=0.00..747235.71 rows=29166714 \n> width=0)\n> -> Result (cost=0.00..0.01 rows=1 width=0)\n> -> Parallel Seq Scan on drop_me\n> (cost=0.00..601402.13 rows=29166713 width=0)\n> (7 rows)\n> \n> \n> Regards,\n> Greg Nancarrow\n> Fujitsu Australia\n\nHi,\n\nFor this problem there is a patch I created, which is registered under \nhttps://commitfest.postgresql.org/30/2787/ that should fix this without \nany workarounds. Maybe someone can take a look at it?\n\nRegards,\nLuc\nSwarm64\n\n\n",
"msg_date": "Mon, 23 Nov 2020 16:34:02 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 2:34 AM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> Hi,\n>\n> For this problem there is a patch I created, which is registered under\n> https://commitfest.postgresql.org/30/2787/ that should fix this without\n> any workarounds. Maybe someone can take a look at it?\n>\n\nI tried your patch with the latest PG source code (24/11), but\nunfortunately a non-parallel plan was still produced in this case.\n\ntest=# explain\nselect count(*)\nfrom (select\nn1\nfrom drop_me\nunion all\nvalues(1)) ua;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Aggregate (cost=1889383.54..1889383.55 rows=1 width=8)\n -> Append (cost=0.00..1362834.03 rows=42123961 width=32)\n -> Seq Scan on drop_me (cost=0.00..730974.60 rows=42123960 width=32)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=32)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n(5 rows)\n\n\nThat's not to say your patch doesn't have merit - but maybe just not a\nfix for this particular case.\n\nAs before, if the SQL is tweaked to align the types for the UNION, you\nget a parallel plan:\n\ntest=# explain\nselect count(*)\nfrom (select\nn1\nfrom drop_me\nunion all\nvalues(1::numeric)) ua;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=821152.71..821152.72 rows=1 width=8)\n -> Gather (cost=821152.50..821152.71 rows=2 width=8)\n Workers Planned: 2\n -> Partial Aggregate (cost=820152.50..820152.51 rows=1 width=8)\n -> Parallel Append (cost=0.00..747235.71 rows=29166714 width=0)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Parallel Seq Scan on drop_me\n(cost=0.00..601402.13 rows=29166713 width=0)\n(7 rows)\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 24 Nov 2020 11:44:48 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On 24-11-2020 01:44, Greg Nancarrow wrote:\n> On Tue, Nov 24, 2020 at 2:34 AM Luc Vlaming <luc@swarm64.com> wrote:\n>>\n>> Hi,\n>>\n>> For this problem there is a patch I created, which is registered under\n>> https://commitfest.postgresql.org/30/2787/ that should fix this without\n>> any workarounds. Maybe someone can take a look at it?\n>>\n> \n> I tried your patch with the latest PG source code (24/11), but\n> unfortunately a non-parallel plan was still produced in this case.\n> \n> test=# explain\n> select count(*)\n> from (select\n> n1\n> from drop_me\n> union all\n> values(1)) ua;\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Aggregate (cost=1889383.54..1889383.55 rows=1 width=8)\n> -> Append (cost=0.00..1362834.03 rows=42123961 width=32)\n> -> Seq Scan on drop_me (cost=0.00..730974.60 rows=42123960 width=32)\n> -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.02 rows=1 width=32)\n> -> Result (cost=0.00..0.01 rows=1 width=4)\n> (5 rows)\n> \n> \n> That's not to say your patch doesn't have merit - but maybe just not a\n> fix for this particular case.\n> \n> As before, if the SQL is tweaked to align the types for the UNION, you\n> get a parallel plan:\n> \n> test=# explain\n> select count(*)\n> from (select\n> n1\n> from drop_me\n> union all\n> values(1::numeric)) ua;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------\n> Finalize Aggregate (cost=821152.71..821152.72 rows=1 width=8)\n> -> Gather (cost=821152.50..821152.71 rows=2 width=8)\n> Workers Planned: 2\n> -> Partial Aggregate (cost=820152.50..820152.51 rows=1 width=8)\n> -> Parallel Append (cost=0.00..747235.71 rows=29166714 width=0)\n> -> Result (cost=0.00..0.01 rows=1 width=0)\n> -> Parallel Seq Scan on drop_me\n> (cost=0.00..601402.13 rows=29166713 width=0)\n> (7 rows)\n> \n> \n> Regards,\n> Greg Nancarrow\n> Fujitsu Australia\n> \n\nHi,\n\nYou're completely right, 
sorry for my error. I was too quick on assuming \nmy patch would work for this specific case too; I should have tested \nthat before replying. It looked very similar but turns out to not work \nbecause of the upper rel not being considered parallel.\n\nI would like to extend my patch to support this, or create a second \npatch. This would however be significantly more involved because it \nwould require that we (always?) consider two paths whenever we process a \nsubquery: the best parallel plan and the best serial plan. Before I \nembark on such a journey I would like some input on whether this would \nbe a very bad idea. Thoughts?\n\nRegards,\nLuc\nSwarm64\n\n\n",
"msg_date": "Wed, 25 Nov 2020 08:43:03 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 6:43 PM Luc Vlaming <luc@swarm64.com> wrote:\n>\n>\n> You're completely right, sorry for my error. I was too quick on assuming\n> my patch would work for this specific case too; I should have tested\n> that before replying. It looked very similar but turns out to not work\n> because of the upper rel not being considered parallel.\n>\n> I would like to extend my patch to support this, or create a second\n> patch. This would however be significantly more involved because it\n> would require that we (always?) consider two paths whenever we process a\n> subquery: the best parallel plan and the best serial plan. Before I\n> emback on such a journey I would like some input on whether this would\n> be a very bad idea. Thoughts?\n>\n\nHi,\n\nI must admit, your intended approach isn't what immediately came to mind\nwhen I saw this issue. Have you analyzed and debugged this to know exactly\nwhat is going on?\nI haven't had time to debug this and see exactly where the code paths\ndiverge for the use of \"values(1)\" verses \"values(1::numeric)\" in this\ncase, but that would be one of the first steps.\n\nWhat I wondered (and I may well be wrong) was how come the documented type\nresolution algorithm (\nhttps://www.postgresql.org/docs/13/typeconv-union-case.html) doesn't seem\nto be working quite right here, at least to the point of creating the\nsame/similar parse tree as when I change \"values(1)\" to\n\"values(1::numeric)\" or even just \"values(1.)\"? So shouldn't then the use\nof \"values(1)\" in this case (a constant, convertible to numeric - the\npreferred type ) result in the same (parallel) plan as when\n\"values(1::numeric)\" is used? 
Perhaps this isn't happening because the code\nis treating these as generalised expressions when their types aren't the\nsame, and this then affects parsing/planning?\nMy natural thought was that there seems to be a minor issue in the code,\nwhich should be reasonably easy to fix, at least for this fairly simple\ncase.\n\nHowever, I claim no expertise in the area of parser/analyzer/planner, I\nonly know certain areas of that code, but enough to appreciate it is\ncomplex and intricate, and easily broken.\nPerhaps one of the major contributors to this area of the code, who\nprobably know this code very well, like maybe Tom Lane or Robert Haas (to\nname two) might like to comment on whether what we're looking at is indeed\na bug/deficiency and worth fixing, and whether Luc is correct in his\nexpressed approach on what would be required to fix it?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia",
"msg_date": "Thu, 26 Nov 2020 00:54:39 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On 25-11-2020 14:54, Greg Nancarrow wrote:\n> \n> \n> On Wed, Nov 25, 2020 at 6:43 PM Luc Vlaming <luc@swarm64.com \n> <mailto:luc@swarm64.com>> wrote:\n> >\n> >\n> > You're completely right, sorry for my error. I was too quick on assuming\n> > my patch would work for this specific case too; I should have tested\n> > that before replying. It looked very similar but turns out to not work\n> > because of the upper rel not being considered parallel.\n> >\n> > I would like to extend my patch to support this, or create a second\n> > patch. This would however be significantly more involved because it\n> > would require that we (always?) consider two paths whenever we process a\n> > subquery: the best parallel plan and the best serial plan. Before I\n> > emback on such a journey I would like some input on whether this would\n> > be a very bad idea. Thoughts?\n> >\n> \n> Hi,\n> \n> I must admit, your intended approach isn't what immediately came to mind \n> when I saw this issue. Have you analyzed and debugged this to know \n> exactly what is going on?\n> I haven't had time to debug this and see exactly where the code paths \n> diverge for the use of \"values(1)\" verses \"values(1::numeric)\" in this \n> case, but that would be one of the first steps.\n> \n> What I wondered (and I may well be wrong) was how come the documented \n> type resolution algorithm \n> (https://www.postgresql.org/docs/13/typeconv-union-case.html \n> <https://www.postgresql.org/docs/13/typeconv-union-case.html>) doesn't \n> seem to be working quite right here, at least to the point of creating \n> the same/similar parse tree as when I change \"values(1)\" to \n> \"values(1::numeric)\" or even just \"values(1.)\"? So shouldn't then the \n> use of \"values(1)\" in this case (a constant, convertible to numeric - \n> the preferred type ) result in the same (parallel) plan as when \n> \"values(1::numeric)\" is used? 
Perhaps this isn't happening because the \n> code is treating these as generalised expressions when their types \n> aren't the same, and this then affects parsing/planning?\n> My natural thought was that there seems to be a minor issue in the code, \n> which should be reasonably easy to fix, at least for this fairly simple \n> case.\n> \n> However, I claim no expertise in the area of parser/analyzer/planner, I \n> only know certain areas of that code, but enough to appreciate it is \n> complex and intricate, and easily broken.\n> Perhaps one of the major contributors to this area of the code, who \n> probably know this code very well, like maybe Tom Lane or Robert Haas \n> (to name two) might like to comment on whether what we're looking at is \n> indeed a bug/deficiency and worth fixing, and whether Luc is correct in \n> his expressed approach on what would be required to fix it?\n> \n> Regards,\n> Greg Nancarrow\n> Fujitsu Australia\n\nSo from what I recall from building the patch is that the difference is \nthat when all types are identical, then flatten_simple_union_all simply \nflattens all union-all operations into an append relation.\nIf you don't have identical types then the situation has to be handled \nby the code in prepunion.c which doesn't always keep a parallel path \naround. The patch I had posted fixes this for a relatively simple issue \nand not the case described here.\nIf interesting I can make a draft of what this would look like if this \nmakes it easier to discuss?\n\nRegards,\nLuc\nSwarm64\n\n\n",
"msg_date": "Thu, 26 Nov 2020 08:11:22 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 6:11 PM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> If interesting I can make a draft of what this would look like if this\n> makes it easier to discuss?\n>\n\nSure, that would help clarify it.\n\nI did debug this a bit, but it seems my gut feeling was wrong, even\nthough it knows a type coercion is required and can be done, the\nparse/analyze code doesn't actually modify the nodes in place \"for\nfear of changing the semantics\", so when the types don't exactly match\nit's all left up to the planner, but for this parse tree it fails to\nproduce a parallel plan.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 27 Nov 2020 14:14:48 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
},
{
"msg_contents": "On 27-11-2020 04:14, Greg Nancarrow wrote:\n> On Thu, Nov 26, 2020 at 6:11 PM Luc Vlaming <luc@swarm64.com> wrote:\n>>\n>> If interesting I can make a draft of what this would look like if this\n>> makes it easier to discuss?\n>>\n> \n> Sure, that would help clarify it.\nOkay. I will try to build an example but this will take a few weeks as \nvacations and such are coming up too.\n\n> \n> I did debug this a bit, but it seems my gut feeling was wrong, even\n> though it knows a type coercion is required and can be done, the\n> parse/analyze code doesn't actually modify the nodes in place \"for\n> fear of changing the semantics\", so when the types don't exactly match\n> it's all left up to the planner, but for this parse tree it fails to\n> produce a parallel plan.\n> \n\nYes. However I think here also lies an opportunity, because to me it \nseems much more appealing to have the planner being able to deal \ncorrectly with all the situations rather than having things like \nflatten_simple_union_all() that provide a solution for the ideal case.\n\n> Regards,\n> Greg Nancarrow\n> Fujitsu Australia\n> \n\nRegards,\nLuc\nSwarm64\n\n\n",
"msg_date": "Fri, 27 Nov 2020 07:56:19 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel plans and \"union all\" subquery"
}
] |
[
{
"msg_contents": "Hi,\n\nAfter my commit c532d15ddd to split up copy.c, buildfarm animal \"prion\" \nfailed in pg_upgrade \n(https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-11-23%2009%3A23%3A16):\n\n> Upgrade Complete\n> ----------------\n> Optimizer statistics are not transferred by pg_upgrade so,\n> once you start the new server, consider running:\n> /home/ec2-user/bf/root/HEAD/pgsql.build/tmp_install/home/ec2-user/bf/root/HEAD/inst/bin/vacuumdb --all --analyze-in-stages\n> \n> Running this script will delete the old cluster's data files:\n> ./delete_old_cluster.sh\n> waiting for server to start.... done\n> server started\n> pg_dump: error: query failed: ERROR: missing chunk number 0 for toast value 14334 in pg_toast_2619\n> pg_dump: error: query was: SELECT\n> a.attnum,\n> a.attname,\n> a.atttypmod,\n> a.attstattarget,\n> a.attstorage,\n> t.typstorage,\n> a.attnotnull,\n> a.atthasdef,\n> a.attisdropped,\n> a.attlen,\n> a.attalign,\n> a.attislocal,\n> pg_catalog.format_type(t.oid, a.atttypmod) AS atttypname,\n> array_to_string(a.attoptions, ', ') AS attoptions,\n> CASE WHEN a.attcollation <> t.typcollation THEN a.attcollation ELSE 0 END AS attcollation,\n> pg_catalog.array_to_string(ARRAY(SELECT pg_catalog.quote_ident(option_name) || ' ' || pg_catalog.quote_literal(option_value) FROM pg_catalog.pg_options_to_table(attfdwoptions) ORDER BY option_name), E',\n> ') AS attfdwoptions,\n> a.attidentity,\n> CASE WHEN a.atthasmissing AND NOT a.attisdropped THEN a.attmissingval ELSE null END AS attmissingval,\n> a.attgenerated\n> FROM pg_catalog.pg_attribute a LEFT JOIN pg_catalog.pg_type t ON a.atttypid = t.oid\n> WHERE a.attrelid = '35221'::pg_catalog.oid AND a.attnum > 0::pg_catalog.int2\n> ORDER BY a.attnum\n> pg_dumpall: error: pg_dump failed on database \"regression\", exiting\n> waiting for server to shut down.... 
done\n> server stopped\n> pg_dumpall of post-upgrade database cluster failed\n> make: *** [check] Error 1\n\nI don't think this is related to commit c532d15ddd, as there is no COPY \ninvolved here. I have no clue what might might be going on here, though. \nAny ideas?\n\nGoogling around, I found one report that looks somewhat similar: \nhttps://www.postgresql.org/message-id/20190830142655.GA14011%40telsasoft.com. \nThat was with CLUSTER/VACUUM FULL, while pg_upgrade performs ANALYZE.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 23 Nov 2020 12:10:31 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "prion failed with ERROR: missing chunk number 0 for toast value 14334\n in pg_toast_2619"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> After my commit c532d15ddd to split up copy.c, buildfarm animal \"prion\" \n> failed in pg_upgrade \n> (https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-11-23%2009%3A23%3A16):\n\nprion's continued to fail, rarely, in this same way:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-02-09%2009%3A13%3A16\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-02-15%2004%3A08%3A06\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-05-12%2018%3A43%3A14\n\nThe failures are remarkably identical, and they also look a lot like\nfield reports we've been seeing off and on for years. I do not know\nwhy it always seems to be pg_toast_2619 (i.e. pg_statistic) that's\naffected, but the pattern is pretty undeniable by now.\n\nWhat I do have that's new is that *I can reproduce it*, at long last.\nFor me, installing the attached patch and running pg_upgrade's\n\"make check\" fails, pretty much every time, with symptoms identical\nto prion's.\n\nThe patch consists of\n\n(1) 100ms delay just before detoasting, when loading a pg_statistic\ncatcache entry that has toasted datums\n\n(2) provisions to mark such catcache entries dead immediately\n(effectively, CATCACHE_FORCE_RELEASE for only these tuples);\nthis is to force us through (1) as often as possible\n\n(3) some quick-hack debugging aids added to HeapTupleSatisfiesToast,\nplus convert the missing-chunk error to PANIC to get a stack\ntrace.\n\nIf it doesn't reproduce for you, try adjusting the delay. 100ms\nwas the first value I tried, though, so I think it's probably\nnot too sensitive.\n\nThe trace I'm getting shows pretty positively that autovacuum\nhas fired on pg_statistic, and removed the needed toast entries,\njust before the failure. 
So something is wrong with our logic\nabout when toast entries can be removed.\n\nI do not have a lot of idea why, but I see something that is\nprobably a honkin' big clue:\n\n2021-05-15 17:28:05.965 EDT [2469444] LOG: HeapTupleSatisfiesToast: xmin 2 t_infomask 0x0b02\n\nThat is, the toast tuples in question are not just frozen, but\nactually have xmin = FrozenTransactionId.\n\nI do not think that is normal --- at least, it's not the state\nimmediately after initdb, and I can't make it happen with\n\"vacuum freeze pg_statistic\". A plausible theory is that pg_upgrade\ncaused this to happen (but how?) and then vacuuming of toast rows\ngoes off the rails because of it.\n\nAnyway, I have no more time to poke at this right now, so I'm\nposting the reproducer in case someone else wants to look at it.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 15 May 2021 18:21:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "I wrote:\n> What I do have that's new is that *I can reproduce it*, at long last.\n\nI see what's happening, but I'm not quite sure where we should fix it.\n\n> I do not have a lot of idea why, but I see something that is\n> probably a honkin' big clue:\n> 2021-05-15 17:28:05.965 EDT [2469444] LOG: HeapTupleSatisfiesToast: xmin 2 t_infomask 0x0b02\n> That is, the toast tuples in question are not just frozen, but\n> actually have xmin = FrozenTransactionId.\n\nNah, that was a mistake: I was printing xmin using HeapTupleHeaderGetXmin\nwhere I should have used HeapTupleHeaderGetRawXmin.\n\nAnyway, the detailed sequence of events is that a background autoanalyze\nreplaces a pg_statistic entry, including outdating a toasted field, and\ncommits. The foreground pg_dump session has already fetched that entry,\nand in the race condition in question, it doesn't try to read the toast\nrows till after the commit. The reason it fails to find them is that\nas part of the normal indexscan sequence for reading the toast table,\nwe execute heap_page_prune, and *it decides those rows are DEAD and\nprunes them away* before we can fetch them.\n\nThe reason heap_page_prune thinks they're dead is that\nGlobalVisTestIsRemovableXid told it so.\n\nThe reason GlobalVisTestIsRemovableXid thinks that is that\nGlobalVisCatalogRels.maybe_needed is insane:\n\n(gdb) p GlobalVisCatalogRels\n$2 = {definitely_needed = {value = 16614}, maybe_needed = {\n value = 18446744071709568230}}\n\nThe reason GlobalVisCatalogRels.maybe_needed is insane is that\n(per pg_controldata) our latest checkpoint has\n\nLatest checkpoint's NextXID: 0:16614\nLatest checkpoint's oldestXID: 2294983910\n\nso when GetSnapshotData computes\n\n\t\toldestfxid = FullXidRelativeTo(latest_completed, oldestxid);\n\nit gets an insane value:\n\noldestfxid 18446744071709568230, latest_completed 16613, oldestxid 2294983910\n\nAnd the reason oldestXID contains that is that pg_upgrade applied\npg_resetwal, which does 
this:\n\t \n /*\n * For the moment, just set oldestXid to a value that will force\n * immediate autovacuum-for-wraparound. It's not clear whether adding\n * user control of this is useful, so let's just do something that's\n * reasonably safe. The magic constant here corresponds to the\n * maximum allowed value of autovacuum_freeze_max_age.\n */\n ControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;\n if (ControlFile.checkPointCopy.oldestXid < FirstNormalTransactionId)\n ControlFile.checkPointCopy.oldestXid += FirstNormalTransactionId;\n\nSo it seems like we should do some combination of these things:\n\n1. Fix FullXidRelativeTo to be a little less trusting. It'd\nprobably be sane to make it return FirstNormalTransactionId\nwhen it'd otherwise produce a wrapped-around FullXid, but is\nthere any situation where we'd want it to throw an error instead?\n\n2. Change pg_resetwal to not do the above. It's not entirely\napparent to me what business it has trying to force\nautovacuum-for-wraparound anyway, but if it does need to do that,\ncan we devise a less klugy method?\n\nIt also seems like some assertions in procarray.c would be a\ngood idea. With the attached patch, we get through core\nregression just fine, but the pg_upgrade test fails immediately\nafter the \"Resetting WAL archives\" step.\n\nBTW, it looks like the failing code all came in with dc7420c2c\n(snapshot scalability), so this fails to explain the similar-looking\nfield reports we've seen. But it's clearly a live bug in v14.\nI shall go add it to the open items.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 16 May 2021 16:23:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
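The arithmetic Tom describes can be checked outside PostgreSQL. Below is a rough Python model (an illustration of the behavior under discussion, not the actual C code in procarray.c or pg_resetwal.c) of FullXidRelativeTo's unsigned 64-bit computation, fed the exact numbers from the failing cluster (latest_completed = 16613, oldestXID = 2294983910); it reproduces the same "insane" value, and also shows how pg_resetwal's magic constant produced that oldestXID from NextXID = 16614:

```python
# Rough model of FullXidRelativeTo (assumed semantics, not the real C code):
# result = rel + (int32)(xid - rel_xid), computed with unsigned 64-bit wraparound.
def full_xid_relative_to(rel: int, xid: int) -> int:
    rel_xid = rel & 0xFFFFFFFF                 # low 32 bits of the full xid
    diff = (xid - rel_xid) & 0xFFFFFFFF        # 32-bit subtraction
    if diff >= 0x80000000:                     # reinterpret as signed int32
        diff -= 0x100000000
    return (rel + diff) & 0xFFFFFFFFFFFFFFFF   # unsigned 64-bit addition

# pg_resetwal's heuristic: oldestXid = set_xid - 2000000000 (mod 2^32).
set_xid = 16614                                # "Latest checkpoint's NextXID"
oldest_xid = (set_xid - 2000000000) & 0xFFFFFFFF
print(oldest_xid)                              # 2294983910, the reported oldestXID

# With the xid epoch still at 0, mapping that oldestXid back to a full
# xid underflows, yielding the huge maybe_needed value from the trace:
latest_completed = 16613
print(full_xid_relative_to(latest_completed, oldest_xid))
# 18446744071709568230
```

This matches the gdb output in the message above: the "wrapped-around" full xid is simply 2^64 minus the 1999983386 by which the computation underflowed.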
{
"msg_contents": "On Sun, May 16, 2021 at 1:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> it gets an insane value:\n>\n> oldestfxid 18446744071709568230, latest_completed 16613, oldestxid 2294983910\n>\n> And the reason oldestXID contains that is that pg_upgrade applied\n> pg_resetwal, which does this:\n>\n> /*\n> * For the moment, just set oldestXid to a value that will force\n> * immediate autovacuum-for-wraparound. It's not clear whether adding\n> * user control of this is useful, so let's just do something that's\n> * reasonably safe. The magic constant here corresponds to the\n> * maximum allowed value of autovacuum_freeze_max_age.\n> */\n> ControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;\n> if (ControlFile.checkPointCopy.oldestXid < FirstNormalTransactionId)\n> ControlFile.checkPointCopy.oldestXid += FirstNormalTransactionId;\n\nThis same pg_resetwal code has probably caused quite a few problems on\npg_upgrade'd databases:\n\nhttps://postgr.es/m/20210423234256.hwopuftipdmp3okf@alap3.anarazel.de\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 16 May 2021 13:34:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, May 16, 2021 at 1:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> And the reason oldestXID contains that is that pg_upgrade applied\n>> pg_resetwal, which does this:\n>> * For the moment, just set oldestXid to a value that will force\n>> * immediate autovacuum-for-wraparound.\n\n> This same pg_resetwal code has probably caused quite a few problems on\n> pg_upgrade'd databases:\n> https://postgr.es/m/20210423234256.hwopuftipdmp3okf@alap3.anarazel.de\n\nHm, yeah. I'm not sure if transferring the value forward from the\nold cluster is entirely safe, but if it is, that seems like a\npromising route to a fix. (We should still have more sanity checking\naround the GlobalVis code, though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 May 2021 18:21:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-16 16:23:02 -0400, Tom Lane wrote:\n> And the reason oldestXID contains that is that pg_upgrade applied\n> pg_resetwal, which does this:\n> \t \n> /*\n> * For the moment, just set oldestXid to a value that will force\n> * immediate autovacuum-for-wraparound. It's not clear whether adding\n> * user control of this is useful, so let's just do something that's\n> * reasonably safe. The magic constant here corresponds to the\n> * maximum allowed value of autovacuum_freeze_max_age.\n> */\n> ControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;\n> if (ControlFile.checkPointCopy.oldestXid < FirstNormalTransactionId)\n> ControlFile.checkPointCopy.oldestXid += FirstNormalTransactionId;\n\nYea - this is causing quite a few problems... See\nhttps://www.postgresql.org/message-id/20210423234256.hwopuftipdmp3okf%40alap3.anarazel.de\n\n\n> So it seems like we should do some combination of these things:\n> \n> 1. Fix FullXidRelativeTo to be a little less trusting. It'd\n> probably be sane to make it return FirstNormalTransactionId\n> when it'd otherwise produce a wrapped-around FullXid, but is\n> there any situation where we'd want it to throw an error instead?\n\nI'm wondering whether we should *always* make it an error, and fix the\nplaces where that causes problems.\n\n\n> 2. Change pg_resetwal to not do the above. It's not entirely\n> apparent to me what business it has trying to force\n> autovacuum-for-wraparound anyway, but if it does need to do that,\n> can we devise a less klugy method?\n\nYes, see the above email. I think we really need to transport accurate oldest\nxid + epoch for pg_upgrade.\n\n\n> It also seems like some assertions in procarray.c would be a\n> good idea. With the attached patch, we get through core\n> regression just fine, but the pg_upgrade test fails immediately\n> after the \"Resetting WAL archives\" step.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 16 May 2021 15:27:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-16 18:21:21 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Sun, May 16, 2021 at 1:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> And the reason oldestXID contains that is that pg_upgrade applied\n> >> pg_resetwal, which does this:\n> >> * For the moment, just set oldestXid to a value that will force\n> >> * immediate autovacuum-for-wraparound.\n>\n> > This same pg_resetwal code has probably caused quite a few problems on\n> > pg_upgrade'd databases:\n> > https://postgr.es/m/20210423234256.hwopuftipdmp3okf@alap3.anarazel.de\n>\n> Hm, yeah. I'm not sure if transferring the value forward from the\n> old cluster is entirely safe, but if it is, that seems like a\n> promising route to a fix. (We should still have more sanity checking\n> around the GlobalVis code, though.)\n\nWhy would it not be safe? Or at least safer than what we're doing right\nnow? It definitely isn't safe to use a newer value than what the old\ncluster used - tables might have older tuples. And an older value than\nthe old cluster's means that we can either accidentally wrap around\nwithout being protected or that a cluster might shut down to prevent a\nwraparound. And after the pg_resetwal we can't assign xids that are\nolder than set_xid - so it won't become inaccurate?\n\nI think we should remove the heuristic thing from pg_resetwal entirely,\nand error out if next-xid is set to something too far away from oldest\nxid, unless oldexid is also specified.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 16 May 2021 15:35:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-16 18:21:21 -0400, Tom Lane wrote:\n>> Hm, yeah. I'm not sure if transferring the value forward from the\n>> old cluster is entirely safe, but if it is, that seems like a\n>> promising route to a fix. (We should still have more sanity checking\n>> around the GlobalVis code, though.)\n\n> Why would it not be safe?\n\nI'm just wondering about the catalog tuples set up by pg_upgrade\nitself. If they're all frozen then they probably don't matter to\nthis, but it might take some thought.\n\n> I think we should remove the heuristic thing from pg_resetwal entirely,\n> and error out if next-xid is set to something too far away from oldest\n> xid, unless oldexid is also specified.\n\nSeems plausible ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 May 2021 18:42:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-16 18:42:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Why would it not be safe?\n> \n> I'm just wondering about the catalog tuples set up by pg_upgrade\n> itself. If they're all frozen then they probably don't matter to\n> this, but it might take some thought.\n\nThere shouldn't be any catalog objects (vs tuples) set up by pg_upgrade\nat the time of the resetwal, as far as I can see. copy_xact_xlog_xid(),\nwhich includes the resetwal calls, is done before any new objects are\ncreated/restored.\n\nThe only thing that happens before copy_xact_xlog_xid() is\nprepare_new_cluster(), which analyzes/freezes the catalog of the new\ncluster. Of course that does create new stats tuples for catalog tables,\nbut if the freezing of those doesn't work, we'd be in deep trouble\nregardless of which concrete oldestXid value we choose - that happens\nwith xids as they are in a freshly initdb' cluster, which might be in\nthe future in the old cluster, or might have aborted. Their pg_xact will\nbe overwritten in copy_xact_xlog_xid().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 16 May 2021 16:28:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "On Sun, May 16, 2021 at 04:23:02PM -0400, Tom Lane wrote:\n> And the reason oldestXID contains that is that pg_upgrade applied\n> pg_resetwal, which does this:\n> \t \n> /*\n> * For the moment, just set oldestXid to a value that will force\n> * immediate autovacuum-for-wraparound. It's not clear whether adding\n> * user control of this is useful, so let's just do something that's\n> * reasonably safe. The magic constant here corresponds to the\n> * maximum allowed value of autovacuum_freeze_max_age.\n> */\n> ControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;\n> if (ControlFile.checkPointCopy.oldestXid < FirstNormalTransactionId)\n> ControlFile.checkPointCopy.oldestXid += FirstNormalTransactionId;\n> \n> So it seems like we should do some combination of these things:\n> \n> 1. Fix FullXidRelativeTo to be a little less trusting. It'd\n> probably be sane to make it return FirstNormalTransactionId\n> when it'd otherwise produce a wrapped-around FullXid, but is\n> there any situation where we'd want it to throw an error instead?\n> \n> 2. Change pg_resetwal to not do the above. It's not entirely\n> apparent to me what business it has trying to force\n> autovacuum-for-wraparound anyway, but if it does need to do that,\n> can we devise a less klugy method?\n\nSorry, I wish I could help with this pg_upgrade problem, but I have no\nidea why pg_resetwal is doing that. Pg_upgrade is supposed to be\nturning off autovacuum, though I think we have had some cases where it\ncould not be fully turned off and consumption of many xids caused\nautovacuum to run and break pg_upgrade.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sun, 16 May 2021 23:21:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "Hi,\n\nOn 5/17/21 1:28 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-05-16 18:42:53 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Why would it not be safe?\n>> I'm just wondering about the catalog tuples set up by pg_upgrade\n>> itself. If they're all frozen then they probably don't matter to\n>> this, but it might take some thought.\n> There shouldn't be any catalog objects (vs tuples) set up by pg_upgrade\n> at the time of the resetwal, as far as I can see. copy_xact_xlog_xid(),\n> which includes the resetwal calls, is done before any new objects are\n> created/restored.\n>\n> The only thing that happens before copy_xact_xlog_xid() is\n> prepare_new_cluster(), which analyzes/freezes the catalog of the new\n> cluster. Of course that does create new stats tuples for catalog tables,\n> but if the freezing of those doesn't work, we'd be in deep trouble\n> regardless of which concrete oldestXid value we choose - that happens\n> with xids as they are in a freshly initdb' cluster, which might be in\n> the future in the old cluster, or might have aborted. Their pg_xact will\n> be overwritten in copy_xact_xlog_xid().\n\n\nFWIW a patch proposal to copy the oldest unfrozen XID during pg_upgrade \n(it adds a new (-u) parameter to pg_resetwal) has been submitted a \ncouple of weeks ago, see: https://commitfest.postgresql.org/33/3105/\n\nI was also wondering if:\n\n * We should keep the old behavior in case pg_resetwal -x is being used\n   without -u? (The proposed patch does not set an arbitrary oldestXID\n   anymore in case -x is used)\n * We should ensure that the xid provided with -x or -u is\n   >= FirstNormalTransactionId (Currently the only check is that it is\n   != 0)?\n\nThanks\n\nBertrand",
"msg_date": "Mon, 17 May 2021 20:14:40 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in\n pg_toast_2619"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-17 20:14:40 +0200, Drouvot, Bertrand wrote:\n> FWIW a patch proposal to copy the oldest unfrozen XID during pg_upgrade (it\n> adds a new (-u) parameter to pg_resetwal) has been submitted a couple of\n> weeks ago, see: https://commitfest.postgresql.org/33/3105/\n\nI'll try to look at it soon.\n\n\n> I was also wondering if:\n> \n> * We should keep the old behavior in case pg_resetwal -x is being used\n>   without -u? (The proposed patch does not set an arbitrary oldestXID\n>   anymore in case -x is used)\n\nI don't think we should. I don't see anything in the old behaviour worth\nmaintaining.\n\n\n> * We should ensure that the xid provided with -x or -u is\n>   >= FirstNormalTransactionId (Currently the only check is that it is\n>   != 0)?\n\nApplying TransactionIdIsNormal() seems like a good idea. I think it's\nimportant to verify that the xid provided with -x is within a reasonable\nrange of the oldest xid.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 May 2021 11:56:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
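The distinction Andres draws between "not 0" and TransactionIdIsNormal() can be illustrated with a small sketch (the reserved-xid constants are taken from PostgreSQL's transam.h; the function itself is an illustrative model, not the actual macro): xids 0 through 2 are reserved special values, so a nonzero user-supplied -x/-u argument can still be unusable.

```python
# Reserved transaction IDs, per PostgreSQL's transam.h:
INVALID_XID = 0       # InvalidTransactionId
BOOTSTRAP_XID = 1     # BootstrapTransactionId
FROZEN_XID = 2        # FrozenTransactionId
FIRST_NORMAL_XID = 3  # FirstNormalTransactionId
MAX_XID = 0xFFFFFFFF

# Illustrative model of TransactionIdIsNormal: a "normal" xid is anything
# at or above FirstNormalTransactionId (within the 32-bit xid space).
def transaction_id_is_normal(xid: int) -> bool:
    return FIRST_NORMAL_XID <= xid <= MAX_XID

# Values that pass a bare "xid != 0" check but are still reserved:
for xid in (INVALID_XID, BOOTSTRAP_XID, FROZEN_XID, FIRST_NORMAL_XID):
    print(xid, xid != 0, transaction_id_is_normal(xid))
```

Only 3 and above pass the stricter check; 1 and 2 slip through the existing "not 0" validation.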
{
"msg_contents": "Hi,\n\nOn 5/17/21 8:56 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-05-17 20:14:40 +0200, Drouvot, Bertrand wrote:\n>> FWIW a patch proposal to copy the oldest unfrozen XID during pg_upgrade (it\n>> adds a new (-u) parameter to pg_resetwal) has been submitted a couple of\n>> weeks ago, see: https://commitfest.postgresql.org/33/3105/\n> I'll try to look at it soon.\n\nThanks!\n\n>\n>> I was also wondering if:\n>>\n>> * We should keep the old behavior in case pg_resetwal -x is being used\n>>   without -u? (The proposed patch does not set an arbitrary oldestXID\n>>   anymore in case -x is used)\n> I don't think we should. I don't see anything in the old behaviour worth\n> maintaining.\n>\n>\n>> * We should ensure that the xid provided with -x or -u is\n>>   >= FirstNormalTransactionId (Currently the only check is that it is\n>>   != 0)?\n> Applying TransactionIdIsNormal() seems like a good idea. I think it's\n> important to verify that the xid provided with -x is within a reasonable\n> range of the oldest xid.\n\nI'll copy/paste this feedback (+ an updated patch to make use of \nTransactionIdIsNormal() checks) to the thread [1] that is linked to the \ncommitfest entry.\n\nThanks\n\nBertrand\n\n[1]: \nhttps://www.postgresql.org/message-id/fe006d56-85f1-5f1e-98e7-05b53dff4f51@amazon.com\n\n\n\n",
"msg_date": "Tue, 18 May 2021 13:04:18 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in\n pg_toast_2619"
},
{
"msg_contents": "On Tue, May 18, 2021 at 01:04:18PM +0200, Drouvot, Bertrand wrote:\n> On 5/17/21 8:56 PM, Andres Freund wrote:\n>> On 2021-05-17 20:14:40 +0200, Drouvot, Bertrand wrote:\n>>> I was also wondering if:\n>>> \n>>> * We should keep the old behavior in case pg_resetwal -x is being used\n>>> without -u? (The proposed patch does not set an arbitrary oldestXID\n>>> anymore in case -x is used)\n>> I don't think we should. I don't see anything in the old behaviour worth\n>> maintaining.\n\nSo, pg_resetwal logic with the oldest XID assignment is causing some\nproblem here. This open item is opened for some time now and it is\nidle for a couple of weeks. It looks that we have some solution\ndrafted, to be able to move forward, with the following things (no\npatches yet): \n- More robustness safety checks in procarray.c.\n- A rework of oldestXid in pg_resetwal.\n\nIs there somebody working on that?\n--\nMichael",
"msg_date": "Wed, 30 Jun 2021 15:29:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 03:29:41PM +0900, Michael Paquier wrote:\n> On Tue, May 18, 2021 at 01:04:18PM +0200, Drouvot, Bertrand wrote:\n> > On 5/17/21 8:56 PM, Andres Freund wrote:\n> >> On 2021-05-17 20:14:40 +0200, Drouvot, Bertrand wrote:\n> >>> I was also wondering if:\n> >>> \n> >>> * We should keep the old behavior in case pg_resetwal -x is being used\n> >>> without -u? (The proposed patch does not set an arbitrary oldestXID\n> >>> anymore in case -x is used)\n> >> I don't think we should. I don't see anything in the old behaviour worth\n> >> maintaining.\n> \n> So, pg_resetwal logic with the oldest XID assignment is causing some\n> problem here. This open item is opened for some time now and it is\n> idle for a couple of weeks. It looks that we have some solution\n> drafted, to be able to move forward, with the following things (no\n> patches yet): \n> - More robustness safety checks in procarray.c.\n> - A rework of oldestXid in pg_resetwal.\n> \n> Is there somebody working on that?\n\nBertrand sent a patch which is awaiting review.\nThis allows both of Tom's reproducers to pass pg_upgrade (with assertions and\ndelays).\n\nhttp://cfbot.cputube.org/bertrand-drouvot.html\n\nI re-arranged the pg_upgrade output of that patch: it was in the middle of the\ntwo halves: \"Setting next transaction ID and epoch for new cluster\"\n\n-- \nJustin",
"msg_date": "Tue, 6 Jul 2021 14:06:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "On Sun, May 16, 2021 at 04:23:02PM -0400, Tom Lane wrote:\n> 1. Fix FullXidRelativeTo to be a little less trusting. It'd\n> probably be sane to make it return FirstNormalTransactionId\n> when it'd otherwise produce a wrapped-around FullXid, but is\n> there any situation where we'd want it to throw an error instead?\n> \n> 2. Change pg_resetwal to not do the above. It's not entirely\n> apparent to me what business it has trying to force\n> autovacuum-for-wraparound anyway, but if it does need to do that,\n> can we devise a less klugy method?\n> \n> It also seems like some assertions in procarray.c would be a\n> good idea. With the attached patch, we get through core\n> regression just fine, but the pg_upgrade test fails immediately\n> after the \"Resetting WAL archives\" step.\n\n#2 is done as of 74cf7d46a.\n\nIs there a plan to include Tom's procarray assertions ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 15 Aug 2021 09:44:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "On Sun, Aug 15, 2021 at 09:44:55AM -0500, Justin Pryzby wrote:\n> On Sun, May 16, 2021 at 04:23:02PM -0400, Tom Lane wrote:\n> > 1. Fix FullXidRelativeTo to be a little less trusting. It'd\n> > probably be sane to make it return FirstNormalTransactionId\n> > when it'd otherwise produce a wrapped-around FullXid, but is\n> > there any situation where we'd want it to throw an error instead?\n> > \n> > 2. Change pg_resetwal to not do the above. It's not entirely\n> > apparent to me what business it has trying to force\n> > autovacuum-for-wraparound anyway, but if it does need to do that,\n> > can we devise a less klugy method?\n> > \n> > It also seems like some assertions in procarray.c would be a\n> > good idea. With the attached patch, we get through core\n> > regression just fine, but the pg_upgrade test fails immediately\n> > after the \"Resetting WAL archives\" step.\n> \n> #2 is done as of 74cf7d46a.\n> \n> Is there a plan to include Tom's procarray assertions ?\n\nI'm confused about the state of this patch/thread.\n\nmake check causes autovacuum crashes (but then the regression tests succeed\nanyway).\n\nI notice now that Tom was referring to failures in pg_upgrade, so maybe didn't\nnotice this part:\n\n$ grep -c BACKTRACE src/test/regress/log/postmaster.log\n10\n\n2021-10-17 16:30:42.623 CDT autovacuum worker[13490] BACKTRACE: \n postgres: autovacuum worker regression(errbacktrace+0x4e) [0x5650e5ea08ee]\n postgres: autovacuum worker regression(HeapTupleSatisfiesVisibility+0xd93) [0x5650e5a85283]\n postgres: autovacuum worker regression(heap_hot_search_buffer+0x2a5) [0x5650e5a7ec05]\n postgres: autovacuum worker regression(+0x1b573c) [0x5650e5a8073c]\n postgres: autovacuum worker regression(index_fetch_heap+0x5d) [0x5650e5a9345d]\n postgres: autovacuum worker regression(index_getnext_slot+0x5b) [0x5650e5a934fb]\n postgres: autovacuum worker regression(systable_getnext_ordered+0x26) [0x5650e5a92716]\n postgres: autovacuum worker regression(heap_fetch_toast_slice+0x33d) [0x5650e5a866ed]\n postgres: autovacuum worker regression(+0x162f61) [0x5650e5a2df61]\n postgres: autovacuum worker regression(toast_flatten_tuple+0xef) [0x5650e5a85dbf]\n postgres: autovacuum worker regression(+0x5b770d) [0x5650e5e8270d]\n postgres: autovacuum worker regression(+0x5b7d85) [0x5650e5e82d85]\n postgres: autovacuum worker regression(SearchCatCache3+0x1a9) [0x5650e5e83dc9]\n postgres: autovacuum worker regression(+0x29e4f5) [0x5650e5b694f5]\n postgres: autovacuum worker regression(+0x2a012e) [0x5650e5b6b12e]\n postgres: autovacuum worker regression(analyze_rel+0x1d1) [0x5650e5b6c851]\n postgres: autovacuum worker regression(vacuum+0x5c0) [0x5650e5bd4480]\n postgres: autovacuum worker regression(+0x40ce91) [0x5650e5cd7e91]\n postgres: autovacuum worker regression(+0x40df16) [0x5650e5cd8f16]\n postgres: autovacuum worker regression(AutoVacuumUpdateDelay+0) [0x5650e5cd9020]\n postgres: autovacuum worker regression(+0x41cccb) [0x5650e5ce7ccb]\n /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890) [0x7f1dd1d26890]\n /lib/x86_64-linux-gnu/libc.so.6(__select+0x17) [0x7f1dd128fff7]\n postgres: autovacuum worker regression(+0x41d11e) [0x5650e5ce811e]\n postgres: autovacuum worker regression(PostmasterMain+0xd1c) [0x5650e5ce9c1c]\n postgres: autovacuum worker regression(main+0x220) [0x5650e5a1f4f0]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f1dd119ab97]\n postgres: autovacuum worker regression(_start+0x2a) [0x5650e5a1f83a]\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 17 Oct 2021 16:43:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
},
{
"msg_contents": "On Sun, Oct 17, 2021 at 04:43:15PM -0500, Justin Pryzby wrote:\n> On Sun, Aug 15, 2021 at 09:44:55AM -0500, Justin Pryzby wrote:\n> > On Sun, May 16, 2021 at 04:23:02PM -0400, Tom Lane wrote:\n> > > 1. Fix FullXidRelativeTo to be a little less trusting. It'd\n> > > probably be sane to make it return FirstNormalTransactionId\n> > > when it'd otherwise produce a wrapped-around FullXid, but is\n> > > there any situation where we'd want it to throw an error instead?\n> > > \n> > > 2. Change pg_resetwal to not do the above. It's not entirely\n> > > apparent to me what business it has trying to force\n> > > autovacuum-for-wraparound anyway, but if it does need to do that,\n> > > can we devise a less klugy method?\n> > > \n> > > It also seems like some assertions in procarray.c would be a\n> > > good idea. With the attached patch, we get through core\n> > > regression just fine, but the pg_upgrade test fails immediately\n> > > after the \"Resetting WAL archives\" step.\n> > \n> > #2 is done as of 74cf7d46a.\n> > \n> > Is there a plan to include Tom's procarray assertions ?\n> \n> I'm confused about the state of this patch/thread.\n> \n> make check causes autovacuum crashes (but then the regression tests succeed\n> anyway).\n\nSorry, I was confused here. autovacuum is not crashing as I said; the\nBACKTRACE lines from the LOG added by Tom's debugging patch:\n\n+ if (trace_toast_visibility)\n+ ereport(LOG,\n+ errmsg(\"HeapTupleSatisfiesToast: xmin %u t_infomask 0x%04x\",\n+ HeapTupleHeaderGetXmin(tuple),\n+ tuple->t_infomask),\n+ debug_query_string ? 0 : errbacktrace());\n\n\n2021-10-17 22:56:57.066 CDT autovacuum worker[19601] LOG: HeapTupleSatisfiesToast: xmin 2 t_infomask 0x0b02\n2021-10-17 22:56:57.066 CDT autovacuum worker[19601] BACKTRACE: \n...\n\nI see that the pg_statistic problem can still occur on v14. I still don't have\na recipe to reproduce it, though, other than running VACUUM FULL in a loop.\nCan I provide anything useful to debug it? xmin, infomask, core, and\nlog_autovacuum_min_duration=0 ??\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 17 Oct 2021 23:21:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: prion failed with ERROR: missing chunk number 0 for toast value\n 14334 in pg_toast_2619"
}
] |
[
{
"msg_contents": " From a report by user \"kes\" on irc, I constructed this test case:\n\ncreate table marktst (id integer, val numrange, exclude using gist (val with &&));\ninsert into marktst select i, numrange(i, i+1, '[)') from generate_series(1,1000) i;\nvacuum marktst;\nanalyze marktst;\n\nselect * from (select val from marktst where val && numrange(10,11)) s1 full join (values (1)) v(a) on true;\nERROR: function ammarkpos is not defined for index marktst_val_excl\n\nfor which the query plan is:\n QUERY PLAN \n--------------------------------------------------------------------------------------------\n Merge Full Join (cost=0.14..4.21 rows=2 width=20)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> Index Only Scan using marktst_val_excl on marktst (cost=0.14..4.18 rows=2 width=16)\n Index Cond: (val && '[10,11)'::numrange)\n(4 rows)\n\nThe problem is that the planner calls ExecSupportsMarkRestore to find\nout whether a Materialize node is needed, and that function looks no\nfurther than the Path type of T_Index[Only]Path in order to return true,\neven though in this case it's a GiST index which does not support\nmark/restore.\n\n(Usually this can't be a problem because the merge join would need\nsorted input, thus the index scan would be a btree; but a merge join\nthat doesn't actually have any sort keys could take unsorted input from\nany index type.)\n\nGoing forward, this looks like IndexOptInfo needs another am* boolean\nfield, but that's probably not appropriate for the back branches; maybe\nas a workaround, ExecSupportsMarkRestore should just check for btree?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 23 Nov 2020 19:48:29 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "mark/restore failures on unsorted merge joins"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> The problem is that the planner calls ExecSupportsMarkRestore to find\n> out whether a Materialize node is needed, and that function looks no\n> further than the Path type of T_Index[Only]Path in order to return true,\n> even though in this case it's a GiST index which does not support\n> mark/restore.\n\n> (Usually this can't be a problem because the merge join would need\n> sorted input, thus the index scan would be a btree; but a merge join\n> that doesn't actually have any sort keys could take unsorted input from\n> any index type.)\n\nSounds like the right analysis.\n\n> Going forward, this looks like IndexOptInfo needs another am* boolean\n> field, but that's probably not appropriate for the back branches; maybe\n> as a workaround, ExecSupportsMarkRestore should just check for btree?\n\nUh, why would you not just look to see if the ammarkpos/amrestrpos fields\nare non-null?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 14:54:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: mark/restore failures on unsorted merge joins"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> The problem is that the planner calls ExecSupportsMarkRestore to\n >> find out whether a Materialize node is needed, and that function\n >> looks no further than the Path type of T_Index[Only]Path in order to\n >> return true, even though in this case it's a GiST index which does\n >> not support mark/restore.\n\n >> (Usually this can't be a problem because the merge join would need\n >> sorted input, thus the index scan would be a btree; but a merge join\n >> that doesn't actually have any sort keys could take unsorted input\n >> from any index type.)\n\n Tom> Sounds like the right analysis.\n\n >> Going forward, this looks like IndexOptInfo needs another am*\n >> boolean field, but that's probably not appropriate for the back\n >> branches; maybe as a workaround, ExecSupportsMarkRestore should just\n >> check for btree?\n\n Tom> Uh, why would you not just look to see if the ammarkpos/amrestrpos\n Tom> fields are non-null?\n\nWe don't (in the back branches) seem to have a pointer to the\nIndexAmRoutine handy, only the oid? Obviously we can look it up from the\noid, but is that more overhead than we want in a join cost function,\ngiven that this will be called for all potential mergejoins considered,\nnot just JOIN_FULL? Or is the overhead not worth bothering about?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 24 Nov 2020 18:07:59 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: mark/restore failures on unsorted merge joins"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Uh, why would you not just look to see if the ammarkpos/amrestrpos\n> Tom> fields are non-null?\n\n> We don't (in the back branches) seem to have a pointer to the\n> IndexAmRoutine handy, only the oid?\n\nOh, sorry, I misread your comment to be that you wanted to add a field\nto IndexAmRoutine. You're right, the real issue here is that\nExecSupportsMarkRestore lacks any convenient access to the needed info,\nand we need to add a bool to IndexOptInfo to fix that.\n\nI don't see any compelling reason why you couldn't add the field at\nthe end in the back branches; that's what we usually do to avoid\nABI breaks. Although actually (counts fields...) it looks like\nthere's at least one pad byte after amcanparallel, so you could\nadd a bool there without any ABI consequence, resulting in a\nreasonably natural field order in all branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Nov 2020 13:33:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: mark/restore failures on unsorted merge joins"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Oh, sorry, I misread your comment to be that you wanted to add a\n Tom> field to IndexAmRoutine. You're right, the real issue here is that\n Tom> ExecSupportsMarkRestore lacks any convenient access to the needed\n Tom> info, and we need to add a bool to IndexOptInfo to fix that.\n\n Tom> I don't see any compelling reason why you couldn't add the field\n Tom> at the end in the back branches; that's what we usually do to\n Tom> avoid ABI breaks. Although actually (counts fields...) it looks\n Tom> like there's at least one pad byte after amcanparallel, so you\n Tom> could add a bool there without any ABI consequence, resulting in a\n Tom> reasonably natural field order in all branches.\n\nI guess that's close enough; this should suffice then.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Tue, 24 Nov 2020 19:41:35 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: mark/restore failures on unsorted merge joins"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> I guess that's close enough; this should suffice then.\n\nLooks about right. Not sure if we need to bother with a regression test\ncase; once that's in, it'd be hard to break it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Nov 2020 14:48:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: mark/restore failures on unsorted merge joins"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> I guess that's close enough; this should suffice then.\n\n Tom> Looks about right. Not sure if we need to bother with a regression\n Tom> test case; once that's in, it'd be hard to break it.\n\nWe could check the EXPLAIN output (since the Materialize node would show\nup), but it's not easy to get stable plans since the choice of which\npath to put on the outside is not fixed. Based on what I found when\nactually testing the code, it probably wouldn't be worth the effort.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 24 Nov 2020 20:45:33 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: mark/restore failures on unsorted merge joins"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Looks about right. Not sure if we need to bother with a regression\n> Tom> test case; once that's in, it'd be hard to break it.\n\n> We could check the EXPLAIN output (since the Materialize node would show\n> up), but it's not easy to get stable plans since the choice of which\n> path to put on the outside is not fixed. Based on what I found when\n> actually testing the code, it probably wouldn't be worth the effort.\n\nIf it's not easy to test, I agree it's not worth it.\n\n(Given how long it took anyone to notice this, the difficulty of\nmaking a stable test case is unsurprising, perhaps.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Nov 2020 15:49:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: mark/restore failures on unsorted merge joins"
}
] |
[
{
"msg_contents": "Hello\n\nChloe Dives reported that sometimes a walsender would become stuck\nduring shutdown and *not* shutdown, thus preventing postmaster from\ncompleting the shutdown cycle. This has been observed to cause the\nservers to remain in such state for several hours.\n\nAfter a lengthy investigation and thanks to a handy reproducer by Chris\nWilson, we found that the problem is that WalSndDone wants to avoid\nshutting down until everything has been sent and acknowledged; but this\ntest is coded in a way that ignores the possibility that we have never\nreceived anything from the other end. In that case, both\nMyWalSnd->flush and MyWalSnd->write are InvalidRecPtr, so the condition\nin WalSndDone to terminate the loop is never fulfilled. So the\nwalsender is looping forever and never terminates, blocking shutdown of\nthe whole instance.\n\nThe attached patch fixes the problem by testing for the problematic\ncondition.\n\nApparently this problem has existed forever. Fujii-san almost patched\nfor it in 5c6d9fc4b2b8 (2014!), but missed it by a zillionth of an inch.\n\n-- \nÁlvaro Herrera",
"msg_date": "Mon, 23 Nov 2020 17:52:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "walsender bug: stuck during shutdown"
},
{
"msg_contents": "\n\nOn 2020/11/24 5:52, Alvaro Herrera wrote:\n> Hello\n> \n> Chloe Dives reported that sometimes a walsender would become stuck\n> during shutdown and *not* shutdown, thus preventing postmaster from\n> completing the shutdown cycle. This has been observed to cause the\n> servers to remain in such state for several hours.\n> \n> After a lengthy investigation and thanks to a handy reproducer by Chris\n> Wilson, we found that the problem is that WalSndDone wants to avoid\n> shutting down until everything has been sent and acknowledged; but this\n> test is coded in a way that ignores the possibility that we have never\n> received anything from the other end. In that case, both\n> MyWalSnd->flush and MyWalSnd->write are InvalidRecPtr, so the condition\n> in WalSndDone to terminate the loop is never fulfilled. So the\n> walsender is looping forever and never terminates, blocking shutdown of\n> the whole instance.\n> \n> The attached patch fixes the problem by testing for the problematic\n> condition.\n> \n> Apparently this problem has existed forever. Fujii-san almost patched\n> for it in 5c6d9fc4b2b8 (2014!), but missed it by a zillionth of an inch.\n\nThanks for working on this!\nCould you tell me the discussion thread where Chloe Dives reported the issue to?\nSorry I could not find that..\nI'd like to see the procedure to reproduce the issue.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 24 Nov 2020 16:35:14 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender bug: stuck during shutdown"
},
{
"msg_contents": "Hello,\n\nOn 2020-Nov-24, Fujii Masao wrote:\n\n> Thanks for working on this!\n> Could you tell me the discussion thread where Chloe Dives reported the issue to?\n> Sorry I could not find that..\n\nIt was not public -- sorry I didn't make that clear.\n\n> I'd like to see the procedure to reproduce the issue.\n\nHere's the script.\n\n\nThanks!",
"msg_date": "Tue, 24 Nov 2020 12:07:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: walsender bug: stuck during shutdown"
},
{
"msg_contents": "\n\nOn 2020/11/25 0:07, Alvaro Herrera wrote:\n> Hello,\n> \n> On 2020-Nov-24, Fujii Masao wrote:\n> \n>> Thanks for working on this!\n>> Could you tell me the discussion thread where Chloe Dives reported the issue to?\n>> Sorry I could not find that..\n> \n> It was not public -- sorry I didn't make that clear.\n> \n>> I'd like to see the procedure to reproduce the issue.\n> \n> Here's the script.\n\nThanks!\n\nBut whether MyWalSnd->write is InvalidRecPtr or not, if it's behind sentPtr,\nwalsender should keep waiting for the ack to all the sent message to be\nreplied, i.e., isn't this expected behavior of normal shutdown? That is,\nif we want to shutdown walsender even when the client side doesn't\nreply message, immediate shutdown should be used or the client side\nshould be terminated, instead?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 25 Nov 2020 16:32:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender bug: stuck during shutdown"
},
{
"msg_contents": "On 2020-Nov-25, Fujii Masao wrote:\n\n> But whether MyWalSnd->write is InvalidRecPtr or not, if it's behind sentPtr,\n> walsender should keep waiting for the ack to all the sent message to be\n> replied, i.e., isn't this expected behavior of normal shutdown? That is,\n> if we want to shutdown walsender even when the client side doesn't\n> reply message, immediate shutdown should be used or the client side\n> should be terminated, instead?\n\nI don't think \"waiting forever\" can be considered the expected behavior;\nthis has caused what are nominally production outages several times\nalready, since we sent a shutdown signal to the server and it never\ncompleted shutting down.\n\nIf you want to propose a better patch to address the issue, feel free,\nbut keeping things as they are seems a bad idea to me.\n\n(Hmm, maybe another idea would be to have WalSndDone cause a keepalive\nmessage to be sent to the client and complete shut down when we get a\nreply to that.)\n\n\n",
"msg_date": "Wed, 25 Nov 2020 15:45:46 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: walsender bug: stuck during shutdown"
},
{
"msg_contents": "\n\nOn 2020/11/26 3:45, Alvaro Herrera wrote:\n> On 2020-Nov-25, Fujii Masao wrote:\n> \n>> But whether MyWalSnd->write is InvalidRecPtr or not, if it's behind sentPtr,\n>> walsender should keep waiting for the ack to all the sent message to be\n>> replied, i.e., isn't this expected behavior of normal shutdown? That is,\n>> if we want to shutdown walsender even when the client side doesn't\n>> reply message, immediate shutdown should be used or the client side\n>> should be terminated, instead?\n> \n> I don't think \"waiting forever\" can be considered the expected behavior;\n> this has caused what are nominally production outages several times\n> already, since we sent a shutdown signal to the server and it never\n> completed shutting down.\n\nOn the second thought, walsender doesn't wait forever unless\nwal_sender_timeout is disabled, even in the case in discussion?\nOr if there is the case where wal_sender_timeout doesn't work expectedly,\nwe might need to fix that at first.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 26 Nov 2020 10:47:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender bug: stuck during shutdown"
},
{
"msg_contents": "On 2020-Nov-26, Fujii Masao wrote:\n\n> On the second thought, walsender doesn't wait forever unless\n> wal_sender_timeout is disabled, even in the case in discussion?\n> Or if there is the case where wal_sender_timeout doesn't work expectedly,\n> we might need to fix that at first.\n\nHmm, no, it doesn't wait forever in that sense; tracing with the\ndebugger shows that the process is looping continuously.\n\n\n",
"msg_date": "Wed, 25 Nov 2020 23:45:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: walsender bug: stuck during shutdown"
},
{
"msg_contents": "\n\nOn 2020/11/26 11:45, Alvaro Herrera wrote:\n> On 2020-Nov-26, Fujii Masao wrote:\n> \n>> On the second thought, walsender doesn't wait forever unless\n>> wal_sender_timeout is disabled, even in the case in discussion?\n>> Or if there is the case where wal_sender_timeout doesn't work expectedly,\n>> we might need to fix that at first.\n> \n> Hmm, no, it doesn't wait forever in that sense; tracing with the\n> debugger shows that the process is looping continuously.\n\nYes, so the problem here is that walsender goes into the busy loop\nin that case. Seems this happens only in logical replication walsender.\nIn physical replication walsender, WaitLatchOrSocket() in WalSndLoop()\nseems to work as expected and prevent it from entering into busy loop\neven in that case.\n\n\t\t/*\n\t\t * If postmaster asked us to stop, don't wait anymore.\n\t\t *\n\t\t * It's important to do this check after the recomputation of\n\t\t * RecentFlushPtr, so we can send all remaining data before shutting\n\t\t * down.\n\t\t */\n\t\tif (got_STOPPING)\n\t\t\tbreak;\n\nThe above code in WalSndWaitForWal() seems to cause this issue. But I've\nnot come up with idea about how to fix yet.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 26 Nov 2020 16:53:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender bug: stuck during shutdown"
},
{
"msg_contents": "On 2020-Nov-26, Fujii Masao wrote:\n\n> Yes, so the problem here is that walsender goes into the busy loop\n> in that case. Seems this happens only in logical replication walsender.\n> In physical replication walsender, WaitLatchOrSocket() in WalSndLoop()\n> seems to work as expected and prevent it from entering into busy loop\n> even in that case.\n> \n> \t\t/*\n> \t\t * If postmaster asked us to stop, don't wait anymore.\n> \t\t *\n> \t\t * It's important to do this check after the recomputation of\n> \t\t * RecentFlushPtr, so we can send all remaining data before shutting\n> \t\t * down.\n> \t\t */\n> \t\tif (got_STOPPING)\n> \t\t\tbreak;\n> \n> The above code in WalSndWaitForWal() seems to cause this issue. But I've\n> not come up with idea about how to fix yet.\n\nWith DEBUG1 I observe that walsender is getting a lot of 'r' messages\n(standby reply) with all zeroes:\n\n2020-12-01 21:01:24.100 -03 [15307] DEBUG: write 0/0 flush 0/0 apply 0/0\n\nHowever, while doing that I also observed that if I do send some\nactivity to the logical replication stream, with the provided program,\nit will *still* have the 'write' pointer set to 0/0, and the 'flush'\npointer has moved forward to what was sent. I'm not clear on what\ncauses the write pointer to move forward in logical replication.\n\nStill, the previously proposed patch does resolve the problem in either\ncase.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 15:27:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: walsender bug: stuck during shutdown"
}
] |
[
{
"msg_contents": "It was only needed between these:\n\ncommit a8677e3ff6bb8ef78a9ba676faa647bba237b1c4\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: Fri Apr 13 17:06:28 2018 -0400\n\n Support named and default arguments in CALL\n\ncommit f09346a9c6218dd239fdf3a79a729716c0d305bd\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Jan 29 15:48:51 2019 -0500\n\n Refactor planner's header files.\n\nI noticed while looking at \"what includes what\" and wondered if some of these\nare kind of \"modularity violations\". \n\n$ find src/include/ -name '*.h' -print0 |xargs -r0 awk -F'[/\"]' 'NF>2 && /^#include \"/{split(FILENAME,a); print a[3],\"-\",$2 }' |awk '$1!=$3 && !/\\.h|statistics|partitioning|bootstrap|tsearch|foreign|jit|regex|lib|common/' |sort |uniq -c |sort -nr |awk '$1==1'\n 1 utils - rewrite\n 1 utils - port\n 1 utils - parser\n 1 tcop - storage\n 1 tcop - executor\n 1 tcop - catalog\n 1 tcop - access\n 1 storage - postmaster\n 1 rewrite - catalog\n 1 rewrite - access\n 1 replication - port\n 1 replication - datatype\n 1 postmaster - datatype\n 1 parser - catalog\n 1 nodes - commands\n 1 executor - portability\n 1 executor - port\n 1 commands - datatype\n 1 catalog - port\n 1 catalog - parser\n 1 access - tcop\n 1 access - replication\n 1 access - postmaster\n 1 access - executor\n\npryzbyj@pryzbyj:~/src/postgres$ find src/backend/ -name '*.c' -print0 |xargs -r0 awk -F'[/\"]' 'NF>2 && /^#include \"/{split(FILENAME,a); print a[3],\"-\",$2 }' |awk '$1!=$3 && !/\\.h/&&!/common|utils|tsearch|main|foreign|port|regex|bootstrap|jit/' |sort |uniq -c |sort -nr |awk '$1==1'\n 1 storage - libpq\n 1 statistics - postmaster\n 1 statistics - commands\n 1 rewrite - tcop\n 1 rewrite - optimizer\n 1 replication - syncrep_scanner.c\n 1 replication - rewrite\n 1 replication - repl_scanner.c\n 1 replication - optimizer\n 1 postmaster - mb\n 1 partitioning - rewrite\n 1 partitioning - commands\n 1 nodes - mb\n 1 libpq - postmaster\n 1 libpq - mb\n 1 libpq - commands\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 23 Nov 2020 14:55:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "optimizer/clauses.h needn't include access/htup.h"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> It was only needed between these:\n> commit a8677e3ff6bb8ef78a9ba676faa647bba237b1c4\n> commit f09346a9c6218dd239fdf3a79a729716c0d305bd\n\nHm, you're right. Removed.\n\n> I noticed while looking at \"what includes what\" and wondered if some of these\n> are kind of \"modularity violations\". \n\nYeah. I've ranted before that we ought to have some clearer idea of\nmodule layering within the backend, and avoid cross-header inclusions\nthat would break the layering. This particular case didn't really\ndo so, I suppose, since htup.h would surely be on a lower level than\nthe optimizer. But it still seems nicer to not have that inclusion.\n\nAnyway, if you're feeling motivated to explore a more wide-ranging\nrefactoring, by all means have a go at it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 17:00:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: optimizer/clauses.h needn't include access/htup.h"
},
{
"msg_contents": "On 2020-Nov-23, Tom Lane wrote:\n\n> Anyway, if you're feeling motivated to explore a more wide-ranging\n> refactoring, by all means have a go at it.\n\nI was contemplating commands/trigger.c this morning (after Heikki split\ncopy.c) thinking about the three pieces embedded in there -- one\ncatalog/pg_trigger.c, one in executor (execTrigger.c?) and what seems a\nvery small piece to remain where it is.\n\nJust thinking out loud ...\n\n\n",
"msg_date": "Mon, 23 Nov 2020 19:44:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: optimizer/clauses.h needn't include access/htup.h"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-23 19:44:37 -0300, Alvaro Herrera wrote:\n> I was contemplating commands/trigger.c this morning (after Heikki split\n> copy.c) thinking about the three pieces embedded in there -- one\n> catalog/pg_trigger.c, one in executor (execTrigger.c?) and what seems a\n> very small piece to remain where it is.\n\nOne thing that's not clear enough in general - at least in my view - is\nwhat belongs into executor/ and what should be somewhere more\ngeneral. E.g. a lot of what I assume you would move to execTrigger.c is\nnot just used within the real executor, but also from e.g. copy. There are\nplenty of pre-existing examples of that (e.g. tuple slots), but I wonder\nif we should try to come up with a better split at some point.\n\n\nOh, and definitely +1 on splitting trigger.c. Wonder if the trigger\nqueue stuff and the directly executor-interfacing functions should be\nsplit again? It seems to me the trigger queue details are isolated\nenough that that could come out clean enough.\n\n- Andres\n\n\n",
"msg_date": "Sun, 20 Dec 2020 13:38:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: optimizer/clauses.h needn't include access/htup.h"
}
]
[
{
"msg_contents": "Hi Hackers,\n\nMy company (NTT Comware) and NTT OSS Center did some verification of\npartitioned tables on PG14dev, and we hit a problem of huge memory\nconsumption when we created a foreign key constraint on a partitioned\ntable with 500 partitions and inserted some data.\n\nWe investigated it a little using the \"pg_backend_memory_contexts\"\nview and realized that the \"CachedPlan\" context of backends grew dramatically.\nSee below:\n\nWithout FKs\n==============================\nCachedPlan 0kB\n\nWith FKs (the problem is here)\n==============================\nCachedPlan 710MB\n\n\nPlease find the attached file to reproduce the problem.\n\nWe know two things so far:\n - Each backend consumes the full size of the CachedPlan\n - The more partitions there are, the more CachedPlan memory is\n consumed\n\nIf there are many backends, total consumption is about the size of the\nCachedPlan x the number of backends. It may cause a memory shortage and\ninvoke the OOM killer.\nWe think the affected versions are PG12 or later. I believe it would be\nbetter to fix the problem. Any thoughts?\n\nRegards,\nTatsuro Yamada",
"msg_date": "Tue, 24 Nov 2020 16:46:28 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi Hackers,\n\nI analyzed the problem and created a patch to resolve it.\n\n# Problem 1\n\nWhen you create a foreign key to a partitioned table, a referential\nintegrity check plan is created for each partition.\nInternally, SPI_prepare() creates a plan and SPI_keepplan()\ncaches the plan.\n\nThe more partitions in the referencing table, the more plans will\nbe cached.\n\n# Problem 2\n\nWhen the referenced table is a partitioned table, the larger the number\nof partitions, the larger the plan size to be cached.\n\nThe actual plan executed is simple and small if pruning is\nenabled. However, the cached plan will include all partition\ninformation.\n\nThe more partitions in the referenced table, the larger the plan\nsize to be cached.\n\n# Idea for solution\n\nConstraints with the same pg_constraint.conparentid can be combined\ninto one plan if we can guarantee that\nall their contents are the same.\n\nProblem 1 can then be solved\nand significant memory bloat can be avoided.\nCachedPlan: 710MB -> 1466kB\n\nSolving Problem 2 could also reduce memory,\nbut I don't have a good idea for it.\n\nCurrently, DISCARD ALL does not discard a CachedPlan created via SPI as in\nthis case. It may be possible to modify DISCARD ALL to discard such\nCachedPlans and run it periodically. 
However, we think the burden\non the user is high.\n\n# result with patch(PG14 HEAD(e522024b) + patch)\n\n name | bytes | pg_size_pretty\n------------------+---------+----------------\n CachedPlanQuery | 12912 | 13 kB\n CachedPlanSource | 17448 | 17 kB\n CachedPlan | 1501192 | 1466 kB\n\nCachedPlan * 1 Record\n\npostgres=# SELECT count(*) FROM pg_backend_memory_contexts WHERE name\n= 'CachedPlan' AND ident LIKE 'SELECT 1 FROM%';\n count\n-------\n 1\n\npostgres=# SELECT * FROM pg_backend_memory_contexts WHERE name =\n'CachedPlan' AND ident LIKE 'SELECT 1 FROM%';\n-[ RECORD 1 ]-+--------------------------------------------------------------------------------------\nname | CachedPlan\nident | SELECT 1 FROM \"public\".\"ps\" x WHERE \"c1\"\nOPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x\nparent | CacheMemoryContext\nlevel | 2\ntotal_bytes | 2101248\ntotal_nblocks | 12\nfree_bytes | 613256\nfree_chunks | 1\nused_bytes | 1487992\n(1 record)\n\n# result without patch(PG14 HEAD(e522024b))\n\n name | bytes | pg_size_pretty\n------------------+-----------+----------------\n CachedPlanQuery | 1326280 | 1295 kB\n CachedPlanSource | 1474528 | 1440 kB\n CachedPlan | 744009200 | 710 MB\n\nCachedPlan * 500 Records\n\npostgres=# SELECT count(*) FROM pg_backend_memory_contexts WHERE name\n= 'CachedPlan' AND ident LIKE 'SELECT 1 FROM%';\n count\n-------\n 500\n\nSELECT * FROM pg_backend_memory_contexts WHERE name = 'CachedPlan' AND\nident LIKE 'SELECT 1 FROM%';\nname | CachedPlan\nident | SELECT 1 FROM \"public\".\"ps\" x WHERE \"c1\"\nOPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x\nparent | CacheMemoryContext\nlevel | 2\ntotal_bytes | 2101248\ntotal_nblocks | 12\nfree_bytes | 613256\nfree_chunks | 1\nused_bytes | 1487992\n...(500 same records)\n\nBest Regards,\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com",
"msg_date": "Thu, 26 Nov 2020 09:59:28 +0900",
"msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Thu, 26 Nov 2020 09:59:28 +0900, Keisuke Kuroda <keisuke.kuroda.3862@gmail.com> wrote in \n> Hi Hackers,\n> \n> Analyzed the problem and created a patch to resolve it.\n> \n> # Problem 1\n> \n> When you create a foreign key to a partitioned table, referential\n> integrity function is created for the number of partitions.\n> Internally, SPI_prepare() creates a plan and SPI_keepplan()\n> caches the plan.\n> \n> The more partitions in the referencing table, the more plans will\n> be cached.\n> \n> # Problem 2\n> \n> When referenced table is partitioned table, the larger the number\n> of partitions, the larger the plan size to be cached.\n> \n> The actual plan processed is simple and small if pruning is\n> enabled. However, the cached plan will include all partition\n> information.\n> \n> The more partitions in the referenced table, the larger the plan\n> size to be cached.\n> \n> # Idea for solution\n> \n> Constraints with the same pg_constraint.parentid can be combined\n> into one plan with the same comparentid if we can guarantee that\n> all their contents are the same.\n\nThe memory reduction this patch gives seems quite high with a small\nfootprint.\n\nThis shares RI_ConstraintInfo cache by constraints that shares the\nsame parent constraints. 
But you forgot that the cache contains some\nmembers that can differ among partitions.\n\nConsider the case of attaching a partition that has experienced a\ncolumn deletion.\n\ncreate table t (a int primary key);\ncreate table p (a int, r int references t(a)) partition by range(a);\ncreate table c1 partition of p for values from (0) to (10);\ncreate table c2 (a int, r int);\nalter table c2 drop column r;\nalter table c2 add column r int;\nalter table p attach partition c2 for values from (10) to (20);\n\nIn that case riinfo->fk_attnums has different values from other\npartitions.\n\n=# select oid, conrelid::regclass, confrelid::regclass, conparentid, conname, conkey from pg_constraint where confrelid = 't'::regclass;\n\n oid | conrelid | confrelid | conparentid | conname | conkey \n-------+----------+-----------+-------------+----------+--------\n 16620 | p | t | 0 | p_r_fkey | {2}\n 16626 | c1 | t | 16620 | p_r_fkey | {2}\n 16632 | c2 | t | 16620 | p_r_fkey | {3}\n(3 rows)\n\nconkey is copied onto riinfo->fk_attnums.\n\n> Problem 1 can be solved\n> and significant memory bloat can be avoided.\n> CachedPlan: 710MB -> 1466kB\n> \n> Solving Problem 2 could also reduce memory,\n> but I don't have a good idea.\n> \n> Currently, DISCARD ALL does not discard CachedPlan by SPI as in\n> this case. It may be possible to modify DISCARD ALL to discard\n> CachedPlan and run it periodically. However, we think the burden\n> on the user is high.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 26 Nov 2020 12:18:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On 2020-Nov-26, Kyotaro Horiguchi wrote:\n\n> This shares RI_ConstraintInfo cache by constraints that shares the\n> same parent constraints. But you forgot that the cache contains some\n> members that can differ among partitions.\n> \n> Consider the case of attaching a partition that have experienced a\n> column deletion.\n\nI think this can be solved easily in the patch, by having\nri_BuildQueryKey() compare the parent's fk_attnums to the parent; if\nthey are equal then use the parent's constraint_id, otherwise use the\nchild constraint. That way, the cache entry is reused in the common\ncase where they are identical.\n\nI would embed all this knowledge in ri_BuildQueryKey though, without\nadding the new function ri_GetParentConstOid. I don't think that\nfunction has meaningful abstraction value, and instead it would make what I\nsuggest more difficult.\n\n\n",
"msg_date": "Mon, 30 Nov 2020 21:03:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Mon, 30 Nov 2020 21:03:45 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2020-Nov-26, Kyotaro Horiguchi wrote:\n> \n> > This shares RI_ConstraintInfo cache by constraints that shares the\n> > same parent constraints. But you forgot that the cache contains some\n> > members that can differ among partitions.\n> > \n> > Consider the case of attaching a partition that have experienced a\n> > column deletion.\n> \n> I think this can be solved easily in the patch, by having\n> ri_BuildQueryKey() compare the parent's fk_attnums to the parent; if\n> they are equal then use the parent's constaint_id, otherwise use the\n> child constraint. That way, the cache entry is reused in the common\n> case where they are identical.\n\n*I* think it's the direction. After an off-list discussion, we\n confirmed that even in that case the patch works as is because\n fk_attnum (or contuple.conkey) always stores key attnums compatible\n to the topmost parent when conparent has a valid value (assuming the\n current usage of fk_attnum), but I still feel uneasy to rely on that\n unclear behavior.\n\n> I would embed all this knowledge in ri_BuildQueryKey though, without\n> adding the new function ri_GetParentConstOid. I don't think that\n> function meaningful abstraction value, and instead it would make what I\n> suggest more difficult.\n\nIt seems to me reasonable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 01 Dec 2020 09:40:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": ">\n> I think this can be solved easily in the patch, by having\n> ri_BuildQueryKey() compare the parent's fk_attnums to the parent; if\n> they are equal then use the parent's constaint_id, otherwise use the\n> child constraint. That way, the cache entry is reused in the common\n> case where they are identical.\n>\n\nSomewhat of a detour, but in reviewing the patch for Statement-Level RI\nchecks, Andres and I observed that SPI made for a large portion of the RI\noverhead.\n\nGiven that we're already looking at these checks, I was wondering if this\nmight be the time to consider implementing these checks by directly\nscanning the constraint index.\n\nI think this can be solved easily in the patch, by having\nri_BuildQueryKey() compare the parent's fk_attnums to the parent; if\nthey are equal then use the parent's constaint_id, otherwise use the\nchild constraint. That way, the cache entry is reused in the common\ncase where they are identical.Somewhat of a detour, but in reviewing the patch for Statement-Level RI checks, Andres and I observed that SPI made for a large portion of the RI overhead.Given that we're already looking at these checks, I was wondering if this might be the time to consider implementing these checks by directly scanning the constraint index.",
"msg_date": "Mon, 30 Nov 2020 21:38:05 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> Given that we're already looking at these checks, I was wondering if this\n> might be the time to consider implementing these checks by directly\n> scanning the constraint index.\n\nYeah, maybe. Certainly ri_triggers is putting a huge amount of effort\ninto working around the SPI/parser/planner layer, to not a lot of gain.\n\nHowever, it's not clear to me that that line of thought will work well\nfor the statement-level-trigger approach. In that case you might be\ndealing with enough tuples to make a different plan advisable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 21:48:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > Given that we're already looking at these checks, I was wondering if this\n> > might be the time to consider implementing these checks by directly\n> > scanning the constraint index.\n>\n> Yeah, maybe. Certainly ri_triggers is putting a huge amount of effort\n> into working around the SPI/parser/planner layer, to not a lot of gain.\n>\n> However, it's not clear to me that that line of thought will work well\n> for the statement-level-trigger approach. In that case you might be\n> dealing with enough tuples to make a different plan advisable.\n>\n> regards, tom lane\n>\n\nBypassing SPI would probably mean that we stay with row level triggers, and\nthe cached query plan would go away, perhaps replaced by an\nalready-looked-up-this-tuple hash sorta like what the cached nested loops\neffort is doing.\n\nI've been meaning to give this a try when I got some spare time. This may\ninspire me to try again.\n\nOn Mon, Nov 30, 2020 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Corey Huinker <corey.huinker@gmail.com> writes:\n> Given that we're already looking at these checks, I was wondering if this\n> might be the time to consider implementing these checks by directly\n> scanning the constraint index.\n\nYeah, maybe. Certainly ri_triggers is putting a huge amount of effort\ninto working around the SPI/parser/planner layer, to not a lot of gain.\n\nHowever, it's not clear to me that that line of thought will work well\nfor the statement-level-trigger approach. 
In that case you might be\ndealing with enough tuples to make a different plan advisable.\n\n regards, tom laneBypassing SPI would probably mean that we stay with row level triggers, and the cached query plan would go away, perhaps replaced by an already-looked-up-this-tuple hash sorta like what the cached nested loops effort is doing.I've been meaning to give this a try when I got some spare time. This may inspire me to try again.",
"msg_date": "Mon, 30 Nov 2020 22:25:30 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hello,\n\nOn Tue, Dec 1, 2020 at 9:40 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 30 Nov 2020 21:03:45 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > On 2020-Nov-26, Kyotaro Horiguchi wrote:\n> >\n> > > This shares RI_ConstraintInfo cache by constraints that shares the\n> > > same parent constraints. But you forgot that the cache contains some\n> > > members that can differ among partitions.\n> > >\n> > > Consider the case of attaching a partition that have experienced a\n> > > column deletion.\n> >\n> > I think this can be solved easily in the patch, by having\n> > ri_BuildQueryKey() compare the parent's fk_attnums to the parent; if\n> > they are equal then use the parent's constaint_id, otherwise use the\n> > child constraint. That way, the cache entry is reused in the common\n> > case where they are identical.\n>\n> *I* think it's the direction. After an off-list discussion, we\n> confirmed that even in that case the patch works as is because\n> fk_attnum (or contuple.conkey) always stores key attnums compatible\n> to the topmost parent when conparent has a valid value (assuming the\n> current usage of fk_attnum), but I still feel uneasy to rely on that\n> unclear behavior.\n\nHmm, I don't see what the problem is here, because it's not\nRI_ConstraintInfo that is being shared among partitions of the same\nparent, but the RI query (and the SPIPlanPtr for it) that is issued\nthrough SPI. The query (and the plan) turns out to be the same no\nmatter what partition's RI_ConstraintInfo is first used to generate\nit. 
What am I missing?\n\nBTW, one problem with Kuroda-san's patch is that it's using\nconparentid as the shared key, which can still lead to multiple\nqueries being generated when using multi-level partitioning, because\nthere would be many (intermediate) parent constraints in that case.\nWe should really be using the \"root\" constraint OID as the key,\nbecause there would only be one root from which all constraints in a\ngiven partition hierarchy would've originated. Actually, I had\nwritten a patch a few months back to do exactly that, a polished\nversion of which I've attached with this email. Please take a look.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 2 Dec 2020 22:02:50 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 00:03, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> cache entry is reused in the common case where they are identical.\n\nDoes a similar situation exist for partition statistics accessed\nduring planning? Or planning itself?\n\nIt would be useful to avoid repeated access to similar statistics and\nrepeated planning of sub-plans for similar partitions.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 2 Dec 2020 13:46:04 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Wed, 2 Dec 2020 22:02:50 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> Hello,\n> \n> On Tue, Dec 1, 2020 at 9:40 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 30 Nov 2020 21:03:45 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > > On 2020-Nov-26, Kyotaro Horiguchi wrote:\n> > >\n> > > > This shares RI_ConstraintInfo cache by constraints that shares the\n> > > > same parent constraints. But you forgot that the cache contains some\n> > > > members that can differ among partitions.\n> > > >\n> > > > Consider the case of attaching a partition that have experienced a\n> > > > column deletion.\n> > >\n> > > I think this can be solved easily in the patch, by having\n> > > ri_BuildQueryKey() compare the parent's fk_attnums to the parent; if\n> > > they are equal then use the parent's constaint_id, otherwise use the\n> > > child constraint. That way, the cache entry is reused in the common\n> > > case where they are identical.\n> >\n> > *I* think it's the direction. After an off-list discussion, we\n> > confirmed that even in that case the patch works as is because\n> > fk_attnum (or contuple.conkey) always stores key attnums compatible\n> > to the topmost parent when conparent has a valid value (assuming the\n> > current usage of fk_attnum), but I still feel uneasy to rely on that\n> > unclear behavior.\n> \n> Hmm, I don't see what the problem is here, because it's not\n> RI_ConstraintInfo that is being shared among partitions of the same\n> parent, but the RI query (and the SPIPlanPtr for it) that is issued\n> through SPI. The query (and the plan) turns out to be the same no\n> matter what partition's RI_ConstraintInfo is first used to generate\n> it. What am I missing?\n\nI don't think you're missing something. As I (tried to..) mentioned in\nanother branch of this thread, you're right. 
On the otherhand\nfk_attnums is actually used to generate the query, thus *issue*\npotentially affects the query. Although that might be paranoic, that\npossibility is eliminated by checking the congruency(?) of fk_attnums\n(or other members). We might even be able to share a riinfo among\nsuch children.\n\n> BTW, one problem with Kuroda-san's patch is that it's using\n> conparentid as the shared key, which can still lead to multiple\n> queries being generated when using multi-level partitioning, because\n> there would be many (intermediate) parent constraints in that case.\n> We should really be using the \"root\" constraint OID as the key,\n> because there would only be one root from which all constraints in a\n> given partition hierarchy would've originated. Actually, I had\n> written a patch a few months back to do exactly that, a polished\n> version of which I've attached with this email. Please take a look.\n\nI don't like that the function self-recurses but\nri_GetParentConstrOid() actually climbs up to the root constraint, so\nthe patch covers the multilevel partitioning cases.\n\nAbout your patch, it calculates the root constrid at the time an\nriinfo is created, but when the root-partition is further attached to\nanother partitioned-table after the riinfo creation,\nconstraint_root_id gets stale. Of course that dones't matter\npractically, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Dec 2020 10:14:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi Hackers,\n\n>> I would embed all this knowledge in ri_BuildQueryKey though, without\n>> adding the new function ri_GetParentConstOid. I don't think that\n>> function meaningful abstraction value, and instead it would make what I\n>> suggest more difficult.\n\n> It seems to me reasonable.\n\nIndeed, I tried to get the conparentid with ri_BuildQueryKey.\n(v2_reduce_ri_SPI-plan-hash.patch)\n\n> Somewhat of a detour, but in reviewing the patch for\n> Statement-Level RI checks, Andres and I observed that SPI\n> made for a large portion of the RI overhead.\n\nIf this can be resolved by reviewing the Statement-Level RI checks,\nthen I think it would be better.\n\n> BTW, one problem with Kuroda-san's patch is that it's using\n> conparentid as the shared key, which can still lead to multiple\n> queries being generated when using multi-level partitioning, because\n> there would be many (intermediate) parent constraints in that case.\n\nAs Horiguchi-san also mentioned,\nin my patch ri_GetParentConstOid iterates on getting\nthe constraint OID until conparentid == InvalidOid.\nThis should allow us to get the \"root constraint OID\".\n\nHowever, the names \"conparentid\" and \"ri_GetParentConstOid\"\nare confusing. We should use names like \"constraint_root_id\",\nas in Amit-san's patch.\n\nBest Regards,\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com",
"msg_date": "Thu, 3 Dec 2020 10:29:35 +0900",
"msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hello,\n\nOn Thu, Dec 3, 2020 at 10:15 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 2 Dec 2020 22:02:50 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > Hello,\n> >\n> > On Tue, Dec 1, 2020 at 9:40 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Mon, 30 Nov 2020 21:03:45 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > > > On 2020-Nov-26, Kyotaro Horiguchi wrote:\n> > > >\n> > > > > This shares RI_ConstraintInfo cache by constraints that shares the\n> > > > > same parent constraints. But you forgot that the cache contains some\n> > > > > members that can differ among partitions.\n> > > > >\n> > > > > Consider the case of attaching a partition that have experienced a\n> > > > > column deletion.\n> > > >\n> > > > I think this can be solved easily in the patch, by having\n> > > > ri_BuildQueryKey() compare the parent's fk_attnums to the parent; if\n> > > > they are equal then use the parent's constaint_id, otherwise use the\n> > > > child constraint. That way, the cache entry is reused in the common\n> > > > case where they are identical.\n> > >\n> > > *I* think it's the direction. After an off-list discussion, we\n> > > confirmed that even in that case the patch works as is because\n> > > fk_attnum (or contuple.conkey) always stores key attnums compatible\n> > > to the topmost parent when conparent has a valid value (assuming the\n> > > current usage of fk_attnum), but I still feel uneasy to rely on that\n> > > unclear behavior.\n> >\n> > Hmm, I don't see what the problem is here, because it's not\n> > RI_ConstraintInfo that is being shared among partitions of the same\n> > parent, but the RI query (and the SPIPlanPtr for it) that is issued\n> > through SPI. The query (and the plan) turns out to be the same no\n> > matter what partition's RI_ConstraintInfo is first used to generate\n> > it. What am I missing?\n>\n> I don't think you're missing something. As I (tried to..) 
mentioned in\n> another branch of this thread, you're right. On the otherhand\n> fk_attnums is actually used to generate the query, thus *issue*\n> potentially affects the query. Although that might be paranoic, that\n> possibility is eliminated by checking the congruency(?) of fk_attnums\n> (or other members). We might even be able to share a riinfo among\n> such children.\n\nJust to be sure I've not misunderstood you, let me mention why I think\nthe queries used by different partitions all turn out to be the same\ndespite attribute number differences among partitions. The places\nthat uses fk_attnums when generating a query are these among others:\n\nOid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n...\nOid fk_coll = RIAttCollation(fk_rel, riinfo->fk_attnums[i]);\n...\n quoteOneName(attname,\n RIAttName(fk_rel, riinfo->fk_attnums[i]));\n\nFor the queries on the referencing side (\"check\" side),\ntype/collation/attribute name determined using the above are going to\nbe the same for all partitions in a given tree irrespective of the\nattribute number, because they're logically the same column. 
On the\nreferenced side (\"restrict\", \"cascade\", \"set\" side), as you already\nmentioned, fk_attnums refers to the top parent table of the\nreferencing side, so no possibility of they being different in the\nvarious referenced partitions' RI_ConstraintInfos.\n\nOn the topic of how we'd be able to share even the RI_ConstraintInfos\namong partitions, that would indeed look a bit more elaborate than the\npatch we have right now.\n\n> > BTW, one problem with Kuroda-san's patch is that it's using\n> > conparentid as the shared key, which can still lead to multiple\n> > queries being generated when using multi-level partitioning, because\n> > there would be many (intermediate) parent constraints in that case.\n> > We should really be using the \"root\" constraint OID as the key,\n> > because there would only be one root from which all constraints in a\n> > given partition hierarchy would've originated. Actually, I had\n> > written a patch a few months back to do exactly that, a polished\n> > version of which I've attached with this email. Please take a look.\n>\n> I don't like that the function self-recurses but\n> ri_GetParentConstrOid() actually climbs up to the root constraint, so\n> the patch covers the multilevel partitioning cases.\n\nSorry, I got too easily distracted by the name Kuroda-san chose for\nthe field in RI_ConstraintInfo and the function.\n\n> About your patch, it calculates the root constrid at the time an\n> riinfo is created, but when the root-partition is further attached to\n> another partitioned-table after the riinfo creation,\n> constraint_root_id gets stale. Of course that dones't matter\n> practically, though.\n\nMaybe we could also store the hash value of the root constraint OID as\nrootHashValue and check for that one too in\nInvalidateConstraintCacheCallBack(). That would take care of this\nunless I'm missing something.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 12:27:53 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Thu, 3 Dec 2020 12:27:53 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> Hello,\n> \n> On Thu, Dec 3, 2020 at 10:15 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Wed, 2 Dec 2020 22:02:50 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > > Hmm, I don't see what the problem is here, because it's not\n> > > RI_ConstraintInfo that is being shared among partitions of the same\n> > > parent, but the RI query (and the SPIPlanPtr for it) that is issued\n> > > through SPI. The query (and the plan) turns out to be the same no\n> > > matter what partition's RI_ConstraintInfo is first used to generate\n> > > it. What am I missing?\n> >\n> > I don't think you're missing something. As I (tried to..) mentioned in\n> > another branch of this thread, you're right. On the otherhand\n> > fk_attnums is actually used to generate the query, thus *issue*\n> > potentially affects the query. Although that might be paranoic, that\n> > possibility is eliminated by checking the congruency(?) of fk_attnums\n> > (or other members). We might even be able to share a riinfo among\n> > such children.\n> \n> Just to be sure I've not misunderstood you, let me mention why I think\n> the queries used by different partitions all turn out to be the same\n> despite attribute number differences among partitions. The places\n> that uses fk_attnums when generating a query are these among others:\n> \n> Oid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n> ...\n> Oid fk_coll = RIAttCollation(fk_rel, riinfo->fk_attnums[i]);\n> ...\n> quoteOneName(attname,\n> RIAttName(fk_rel, riinfo->fk_attnums[i]));\n> \n> For the queries on the referencing side (\"check\" side),\n> type/collation/attribute name determined using the above are going to\n> be the same for all partitions in a given tree irrespective of the\n> attribute number, because they're logically the same column. 
On the\n\nYes, I know that, which is what I meant by \"practically\" or\n\"actually\", but it is not explicitly defined AFAICS.\n\nThus that would no longer be an issue if we explicitly define that\n\"When conparentid stores a valid value, each element of fk_attnums\npoints to logically the same attribute as in the RI_ConstraintInfo for\nthe parent constraint.\" Or I'd be happy if we have such a comment\nthere instead.\n\n> referenced side (\"restrict\", \"cascade\", \"set\" side), as you already\n> mentioned, fk_attnums refers to the top parent table of the\n> referencing side, so no possibility of they being different in the\n> various referenced partitions' RI_ConstraintInfos.\n\nRight. (I'm not sure I have mentioned that here, though :p)\n\n> On the topic of how we'd be able to share even the RI_ConstraintInfos\n> among partitions, that would indeed look a bit more elaborate than the\n> patch we have right now.\n\nMaybe just letting the hash entry for the child riinfo point to the\nparent riinfo if all members (other than constraint_id, of course)\nshare exactly the same values. No need to count references since\nwe aren't going to remove riinfos.\n\n> > > BTW, one problem with Kuroda-san's patch is that it's using\n> > > conparentid as the shared key, which can still lead to multiple\n> > > queries being generated when using multi-level partitioning, because\n> > > there would be many (intermediate) parent constraints in that case.\n> > > We should really be using the \"root\" constraint OID as the key,\n> > > because there would only be one root from which all constraints in a\n> > > given partition hierarchy would've originated. Actually, I had\n> > > written a patch a few months back to do exactly that, a polished\n> > > version of which I've attached with this email. 
Please take a look.\n> >\n> > I don't like that the function self-recurses but\n> > ri_GetParentConstrOid() actually climbs up to the root constraint, so\n> > the patch covers the multilevel partitioning cases.\n> \n> Sorry, I got too easily distracted by the name Kuroda-san chose for\n> the field in RI_ConstraintInfo and the function.\n\nI thought the same at first look at the function.\n\n> > About your patch, it calculates the root constrid at the time an\n> > riinfo is created, but when the root-partition is further attached to\n> > another partitioned-table after the riinfo creation,\n> > constraint_root_id gets stale. Of course that doesn't matter\n> > practically, though.\n> \n> Maybe we could also store the hash value of the root constraint OID as\n> rootHashValue and check for that one too in\n> InvalidateConstraintCacheCallBack(). That would take care of this\n> unless I'm missing something.\n\nSeems sound.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Dec 2020 14:28:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 2:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 3 Dec 2020 12:27:53 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > On Thu, Dec 3, 2020 at 10:15 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > I don't think you're missing something. As I (tried to..) mentioned in\n> > > another branch of this thread, you're right. On the otherhand\n> > > fk_attnums is actually used to generate the query, thus *issue*\n> > > potentially affects the query. Although that might be paranoic, that\n> > > possibility is eliminated by checking the congruency(?) of fk_attnums\n> > > (or other members). We might even be able to share a riinfo among\n> > > such children.\n> >\n> > Just to be sure I've not misunderstood you, let me mention why I think\n> > the queries used by different partitions all turn out to be the same\n> > despite attribute number differences among partitions. The places\n> > that uses fk_attnums when generating a query are these among others:\n> >\n> > Oid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n> > ...\n> > Oid fk_coll = RIAttCollation(fk_rel, riinfo->fk_attnums[i]);\n> > ...\n> > quoteOneName(attname,\n> > RIAttName(fk_rel, riinfo->fk_attnums[i]));\n> >\n> > For the queries on the referencing side (\"check\" side),\n> > type/collation/attribute name determined using the above are going to\n> > be the same for all partitions in a given tree irrespective of the\n> > attribute number, because they're logically the same column. On the\n>\n> Yes, I know that, which is what I meant by \"practically\" or\n> \"actually\", but it is not explicitly defined AFAICS.\n\nWell, I think it's great that we don't have to worry *in this part of\nthe code* about partition's fk_attnums not being congruent with the\nroot parent's, because ensuring that is the responsibility of the\nother parts of the system such as DDL. 
If we have any problems in\nthis area, they should be dealt with by ensuring that there are no\nbugs in those other parts.\n\n> Thus that would be no longer an issue if we explicitly define that\n> \"When conpraentid stores a valid value, each element of fk_attnums\n> points to logically the same attribute with the RI_ConstraintInfo for\n> the parent constraint.\" Or I'd be happy if we have such a comment\n> there instead.\n\nI saw a comment in Kuroda-san's v2 patch that is perhaps meant to\naddress this point, but the placement needs to be reconsidered:\n\n@@ -366,6 +368,14 @@ RI_FKey_check(TriggerData *trigdata)\n querysep = \"WHERE\";\n for (int i = 0; i < riinfo->nkeys; i++)\n {\n+\n+ /*\n+ * We share the same plan among all relations in a partition\n+ * hierarchy. The plan is guaranteed to be compatible since all of\n+ * the member relations are guaranteed to have the equivalent set\n+ * of foreign keys in fk_attnums[].\n+ */\n+\n Oid pk_type = RIAttType(pk_rel, riinfo->pk_attnums[i]);\n Oid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n\nA more appropriate place for this kind of comment would be where\nfk_attnums is defined or in ri_BuildQueryKey() that is shared by\ndifferent RI query issuing functions.\n\n> > referenced side (\"restrict\", \"cascade\", \"set\" side), as you already\n> > mentioned, fk_attnums refers to the top parent table of the\n> > referencing side, so no possibility of they being different in the\n> > various referenced partitions' RI_ConstraintInfos.\n>\n> Right. 
(I'm not sure I have mention that here, though:p)A\n\nMaybe I misread but I think you did in your email dated Dec 1 where you said:\n\n\"After an off-list discussion, we confirmed that even in that case the\npatch works as is because fk_attnum (or contuple.conkey) always stores\nkey attnums compatible to the topmost parent when conparent has a\nvalid value (assuming the current usage of fk_attnum), but I still\nfeel uneasy to rely on that unclear behavior.\"\n\n> > On the topic of how we'd be able to share even the RI_ConstraintInfos\n> > among partitions, that would indeed look a bit more elaborate than the\n> > patch we have right now.\n>\n> Maybe just letting the hash entry for the child riinfo point to the\n> parent riinfo if all members (other than constraint_id, of course)\n> share the exactly the same values. No need to count references since\n> we don't going to remove riinfos.\n\nAh, something maybe worth trying. Although the memory we'd save by\nsharing the RI_ConstraintInfos would not add that much to the savings\nwe're having by sharing the plan, because it's the plans that are a\nmemory hog AFAIK.\n\n> > > About your patch, it calculates the root constrid at the time an\n> > > riinfo is created, but when the root-partition is further attached to\n> > > another partitioned-table after the riinfo creation,\n> > > constraint_root_id gets stale. Of course that dones't matter\n> > > practically, though.\n> >\n> > Maybe we could also store the hash value of the root constraint OID as\n> > rootHashValue and check for that one too in\n> > InvalidateConstraintCacheCallBack(). That would take care of this\n> > unless I'm missing something.\n>\n> Seems to be sound.\n\nOkay, thanks.\n\nI have attached a patch in which I've tried to merge the ideas from\nboth my patch and Kuroda-san's. I liked that his patch added\nconparentid to RI_ConstraintInfo because that saves a needless\nsyscache lookup for constraints that don't have a parent. 
I've kept\nmy idea to compute the root constraint id only once in\nri_LoadConstraint(), not on every invocation of ri_BuildQueryKey().\nKuroda-san, anything you'd like to add to that?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 3 Dec 2020 16:41:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Thu, 3 Dec 2020 16:41:45 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> On Thu, Dec 3, 2020 at 2:29 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 3 Dec 2020 12:27:53 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > > On Thu, Dec 3, 2020 at 10:15 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > For the queries on the referencing side (\"check\" side),\n> > > type/collation/attribute name determined using the above are going to\n> > > be the same for all partitions in a given tree irrespective of the\n> > > attribute number, because they're logically the same column. On the\n> >\n> > Yes, I know that, which is what I meant by \"practically\" or\n> > \"actually\", but it is not explicitly defined AFAICS.\n> \n> Well, I think it's great that we don't have to worry *in this part of\n> the code* about partition's fk_attnums not being congruent with the\n> root parent's, because ensuring that is the responsibility of the\n> other parts of the system such as DDL. If we have any problems in\n> this area, they should be dealt with by ensuring that there are no\n> bugs in those other parts.\n\nAgreed.\n\n> > Thus that would be no longer an issue if we explicitly define that\n> > \"When conpraentid stores a valid value, each element of fk_attnums\n> > points to logically the same attribute with the RI_ConstraintInfo for\n> > the parent constraint.\" Or I'd be happy if we have such a comment\n> > there instead.\n> \n> I saw a comment in Kuroda-san's v2 patch that is perhaps meant to\n> address this point, but the placement needs to be reconsidered:\n\nAh, yes, that comes from my proposal.\n\n> @@ -366,6 +368,14 @@ RI_FKey_check(TriggerData *trigdata)\n> querysep = \"WHERE\";\n> for (int i = 0; i < riinfo->nkeys; i++)\n> {\n> +\n> + /*\n> + * We share the same plan among all relations in a partition\n> + * hierarchy. 
The plan is guaranteed to be compatible since all of\n> + * the member relations are guaranteed to have the equivalent set\n> + * of foreign keys in fk_attnums[].\n> + */\n> +\n> Oid pk_type = RIAttType(pk_rel, riinfo->pk_attnums[i]);\n> Oid fk_type = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n> \n> A more appropriate place for this kind of comment would be where\n> fk_attnums is defined or in ri_BuildQueryKey() that is shared by\n> different RI query issuing functions.\n\nYeah, I wanted a more appropriate place for the comment. That place\nseems reasonable.\n\n> > > referenced side (\"restrict\", \"cascade\", \"set\" side), as you already\n> > > mentioned, fk_attnums refers to the top parent table of the\n> > > referencing side, so no possibility of they being different in the\n> > > various referenced partitions' RI_ConstraintInfos.\n> >\n> > Right. (I'm not sure I have mentioned that here, though :p)\n> \n> Maybe I misread but I think you did in your email dated Dec 1 where you said:\n> \n> \"After an off-list discussion, we confirmed that even in that case the\n> patch works as is because fk_attnum (or contuple.conkey) always stores\n> key attnums compatible to the topmost parent when conparent has a\n> valid value (assuming the current usage of fk_attnum), but I still\n> feel uneasy to rely on that unclear behavior.\"\n\nfk_attnums *doesn't* refer to the top parent table of the referencing\nside. It refers to attributes of the partition that are compatible with\nthe same element of fk_attnums of the topmost parent. Maybe I'm\nmisreading.\n\n\n> > > On the topic of how we'd be able to share even the RI_ConstraintInfos\n> > > among partitions, that would indeed look a bit more elaborate than the\n> > > patch we have right now.\n> >\n> > Maybe just letting the hash entry for the child riinfo point to the\n> > parent riinfo if all members (other than constraint_id, of course)\n> > share exactly the same values. 
No need to count references since\n> > we aren't going to remove riinfos.\n> \n> Ah, something maybe worth trying. Although the memory we'd save by\n> sharing the RI_ConstraintInfos would not add that much to the savings\n> we're having by sharing the plan, because it's the plans that are a\n> memory hog AFAIK.\n\nI agree that plans are rather large, but the sharable part of the\nRI_ConstraintInfos is 536 bytes; I'm not sure that is small enough\ncompared to the plans. But that has a somewhat large footprint. (See\nthe attached.)\n\n> > > > About your patch, it calculates the root constrid at the time an\n> > > > riinfo is created, but when the root-partition is further attached to\n> > > > another partitioned-table after the riinfo creation,\n> > > > constraint_root_id gets stale. Of course that doesn't matter\n> > > > practically, though.\n> > >\n> > > Maybe we could also store the hash value of the root constraint OID as\n> > > rootHashValue and check for that one too in\n> > > InvalidateConstraintCacheCallBack(). That would take care of this\n> > > unless I'm missing something.\n> >\n> > Seems to be sound.\n> \n> Okay, thanks.\n> \n> I have attached a patch in which I've tried to merge the ideas from\n> both my patch and Kuroda-san's. I liked that his patch added\n> conparentid to RI_ConstraintInfo because that saves a needless\n> syscache lookup for constraints that don't have a parent. I've kept\n> my idea to compute the root constraint id only once in\n> ri_LoadConstraint(), not on every invocation of ri_BuildQueryKey().\n> Kuroda-san, anything you'd like to add to that?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 03 Dec 2020 17:13:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Thu, 03 Dec 2020 17:13:16 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> I agree that plans are rather large but the sharable part of the\nme> RI_ConstraintInfos is 536 bytes, I'm not sure it is small enough\nme> comparing to the plans. But that has somewhat large footprint.. (See\nme> the attached)\n\n0001 contains a bug related to query_key, and get_ri_constraint_root (from\nyour patch) is not needed there, but the core part is 0002, so please\nignore them.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Dec 2020 17:27:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 5:13 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 3 Dec 2020 16:41:45 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > Maybe I misread but I think you did in your email dated Dec 1 where you said:\n> >\n> > \"After an off-list discussion, we confirmed that even in that case the\n> > patch works as is because fk_attnum (or contuple.conkey) always stores\n> > key attnums compatible to the topmost parent when conparent has a\n> > valid value (assuming the current usage of fk_attnum), but I still\n> > feel uneasy to rely on that unclear behavior.\"\n>\n> fk_attnums *doesn't* refers to to top parent talbe of the referencing\n> side. it refers to attributes of the partition that is compatible with\n> the same element of fk_attnums of the topmost parent. Maybe I'm\n> misreading.\n\nYeah, no I am confused. Reading what I wrote, it seems I implied that\nthe referenced (PK relation's) partitions have RI_ConstraintInfo which\nmakes no sense, although there indeed is one pg_constraint entry that\nis defined on the FK root table for every PK partition with its OID as\nconfrelid, which is in addition to an entry containing the root PK\ntable's OID as confrelid. I confused those PK-partition-referencing\nentries as belonging to the partitions themselves. Although in my\ndefence, all of those entries' conkey contains the FK root table's\nattributes, so at least that much holds. :)\n\n> > > > On the topic of how we'd be able to share even the RI_ConstraintInfos\n> > > > among partitions, that would indeed look a bit more elaborate than the\n> > > > patch we have right now.\n> > >\n> > > Maybe just letting the hash entry for the child riinfo point to the\n> > > parent riinfo if all members (other than constraint_id, of course)\n> > > share the exactly the same values. No need to count references since\n> > > we don't going to remove riinfos.\n> >\n> > Ah, something maybe worth trying. 
Although the memory we'd save by\n> > sharing the RI_ConstraintInfos would not add that much to the savings\n> > we're having by sharing the plan, because it's the plans that are a\n> > memory hog AFAIK.\n>\n> I agree that plans are rather large but the sharable part of the\n> RI_ConstraintInfos is 536 bytes, I'm not sure it is small enough\n> comparing to the plans. But that has somewhat large footprint.. (See\n> the attached)\n\nThanks for the patch.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 21:40:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 12:25 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> On Mon, Nov 30, 2020 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Corey Huinker <corey.huinker@gmail.com> writes:\n>> > Given that we're already looking at these checks, I was wondering if this\n>> > might be the time to consider implementing these checks by directly\n>> > scanning the constraint index.\n>>\n>> Yeah, maybe. Certainly ri_triggers is putting a huge amount of effort\n>> into working around the SPI/parser/planner layer, to not a lot of gain.\n>>\n>> However, it's not clear to me that that line of thought will work well\n>> for the statement-level-trigger approach. In that case you might be\n>> dealing with enough tuples to make a different plan advisable.\n>\n> Bypassing SPI would probably mean that we stay with row level triggers, and the cached query plan would go away, perhaps replaced by an already-looked-up-this-tuple hash sorta like what the cached nested loops effort is doing.\n>\n> I've been meaning to give this a try when I got some spare time. This may inspire me to try again.\n\n+1 for this line of work.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 21:41:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hello\n\nI haven't followed this thread's latest posts, but I'm unclear on the\nlifetime of the new struct that's being allocated in TopMemoryContext.\nAt what point are those structs freed?\n\nAlso, the comment that was in RI_ConstraintInfo now appears in\nRI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\nundocumented. What is the relationship between those two structs? I\nsee that they have pointers to each other, but I think the relationship\nshould be documented more clearly.\n\nThanks!\n\n\n\n",
"msg_date": "Thu, 3 Dec 2020 10:22:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Thu, 3 Dec 2020 21:40:29 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> On Thu, Dec 3, 2020 at 5:13 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 3 Dec 2020 16:41:45 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > > Maybe I misread but I think you did in your email dated Dec 1 where you said:\n> > >\n> > > \"After an off-list discussion, we confirmed that even in that case the\n> > > patch works as is because fk_attnum (or contuple.conkey) always stores\n> > > key attnums compatible to the topmost parent when conparent has a\n> > > valid value (assuming the current usage of fk_attnum), but I still\n> > > feel uneasy to rely on that unclear behavior.\"\n> >\n> > fk_attnums *doesn't* refers to to top parent talbe of the referencing\n> > side. it refers to attributes of the partition that is compatible with\n> > the same element of fk_attnums of the topmost parent. Maybe I'm\n> > misreading.\n> \n> Yeah, no I am confused. Reading what I wrote, it seems I implied that\n> the referenced (PK relation's) partitions have RI_ConstraintInfo which\n> makes no sense, although there indeed is one pg_constraint entry that\n> is defined on the FK root table for every PK partition with its OID as\n> confrelid, which is in addition to an entry containing the root PK\n> table's OID as confrelid. I confused those PK-partition-referencing\n> entries as belonging to the partitions themselves. Although in my\n> defence, all of those entries' conkey contains the FK root table's\n> attributes, so at least that much holds. :)\n\nYes. 
I think that that confusion doesn't hurt the correctness of the\ndiscussion. :)\n\n> > > > > On the topic of how we'd be able to share even the RI_ConstraintInfos\n> > > > > among partitions, that would indeed look a bit more elaborate than the\n> > > > > patch we have right now.\n> > > >\n> > > > Maybe just letting the hash entry for the child riinfo point to the\n> > > > parent riinfo if all members (other than constraint_id, of course)\n> > > > share exactly the same values. No need to count references since\n> > > > we aren't going to remove riinfos.\n> > >\n> > > Ah, something maybe worth trying. Although the memory we'd save by\n> > > sharing the RI_ConstraintInfos would not add that much to the savings\n> > > we're having by sharing the plan, because it's the plans that are a\n> > > memory hog AFAIK.\n> >\n> > I agree that plans are rather large but the sharable part of the\n> > RI_ConstraintInfos is 536 bytes, I'm not sure it is small enough\n> > comparing to the plans. But that has somewhat large footprint.. (See\n> > the attached)\n> \n> Thanks for the patch.\n\nThat's only to show what that looks like.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Dec 2020 09:17:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi Amit,\n\n> I have attached a patch in which I've tried to merge the ideas from\n> both my patch and Kuroda-san's. I liked that his patch added\n> conparentid to RI_ConstraintInfo because that saves a needless\n> syscache lookup for constraints that don't have a parent. I've kept\n> my idea to compute the root constraint id only once in\n> ri_LoadConstraint(), not on every invocation of ri_BuildQueryKey().\n> Kuroda-san, anything you'd like to add to that?\n\nThank you for the merge! It looks good to me.\nI think a fix for InvalidateConstraintCacheCallBack() is also good.\n\nI also confirmed that the patch passed make check-world.\n\nBest Regards,\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com\n\n\n",
"msg_date": "Fri, 4 Dec 2020 12:00:09 +0900",
"msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Thanks, but sorry for the confusion.\n\nI intended just to show what it looks like if we share\nRI_ConstraintInfo among partition relations.\n\nAt Thu, 3 Dec 2020 10:22:47 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> Hello\n> \n> I haven't followed this thread's latest posts, but I'm unclear on the\n> lifetime of the new struct that's being allocated in TopMemoryContext.\n> At what point are those structs freed?\n\nThe choice of memory context is tentative, made in order to shrink the\npatch's footprint. I think we don't use CurrentDynaHashCxt for the\nadditional struct, so a context for this use is needed.\n\nThe struct is freed only when the parent struct (RI_ConstraintInfo) is\nfound to be able to share the child struct (RI_ConstraintParam) with\nthe parent constraint. It seems inefficient (or tends to make\n\"holes\" in the heap area) but I chose it just to shrink the footprint.\n\nWe could create the new RI_ConstraintInfo on the stack and then copy it\nto the cache after we find that the RI_ConstraintInfo needs its own\nRI_ConstraintParam.\n\n> Also, the comment that was in RI_ConstraintInfo now appears in\n> RI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\n> undocumented. What is the relationship between those two structs? I\n> see that they have pointers to each other, but I think the relationship\n> should be documented more clearly.\n\nI'm not sure the footprint of this patch is worth it, but here is a bit\nmore polished version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 04 Dec 2020 12:05:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Fri, 4 Dec 2020 12:00:09 +0900, Keisuke Kuroda <keisuke.kuroda.3862@gmail.com> wrote in \n> Hi Amit,\n> \n> > I have attached a patch in which I've tried to merge the ideas from\n> > both my patch and Kuroda-san's. I liked that his patch added\n> > conparentid to RI_ConstraintInfo because that saves a needless\n> > syscache lookup for constraints that don't have a parent. I've kept\n> > my idea to compute the root constraint id only once in\n> > ri_LoadConstraint(), not on every invocation of ri_BuildQueryKey().\n> > Kuroda-san, anything you'd like to add to that?\n> \n> Thank you for the merge! It looks good to me.\n> I think a fix for InvalidateConstraintCacheCallBack() is also good.\n> \n> I also confirmed that the patch passed the make check-world.\n\nIt's fine that constraint_root_id overrides constraint_id, but how\nabout having constraint_root_id store constraint_id if it is not a\npartition? That change makes the patch a bit simpler.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Dec 2020 14:48:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 2:48 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Fri, 4 Dec 2020 12:00:09 +0900, Keisuke Kuroda <keisuke.kuroda.3862@gmail.com> wrote in\n> > Hi Amit,\n> >\n> > > I have attached a patch in which I've tried to merge the ideas from\n> > > both my patch and Kuroda-san's. I liked that his patch added\n> > > conparentid to RI_ConstraintInfo because that saves a needless\n> > > syscache lookup for constraints that don't have a parent. I've kept\n> > > my idea to compute the root constraint id only once in\n> > > ri_LoadConstraint(), not on every invocation of ri_BuildQueryKey().\n> > > Kuroda-san, anything you'd like to add to that?\n> >\n> > Thank you for the merge! It looks good to me.\n> > I think a fix for InvalidateConstraintCacheCallBack() is also good.\n> >\n> > I also confirmed that the patch passed the make check-world.\n>\n> It's fine that constraint_rood_id overrides constraint_id, but how\n> about that constraint_root_id stores constraint_id if it is not a\n> partition? That change makes the patch a bit simpler.\n\nMy patch was like that before posting to this thread, but keeping\nconstraint_id and constraint_root_id separate looked better for\ndocumenting the partitioning case as working differently from the\nregular table case. I guess a comment in ri_BuildQueryKey is enough\nfor that though and it's not like we're using constraint_root_id in\nany other place to make matters confusing, so I changed it as you\nsuggest. Updated patch attached.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 4 Dec 2020 15:32:32 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 12:05 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > Also, the comment that was in RI_ConstraintInfo now appears in\n> > RI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\n> > undocumented. What is the relationship between those two structs? I\n> > see that they have pointers to each other, but I think the relationship\n> > should be documented more clearly.\n>\n> I'm not sure the footprint of this patch worth doing but here is a bit\n> more polished version.\n\nI noticed that the foreign_key test fails and it may have to do with\nthe fact that a partition's param info remains attached to the\nparent's RI_ConstraintInfo even after it's detached from the parent\ntable using DETACH PARTITION.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Dec 2020 23:01:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On 2020-Dec-07, Amit Langote wrote:\n\n> On Fri, Dec 4, 2020 at 12:05 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > Also, the comment that was in RI_ConstraintInfo now appears in\n> > > RI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\n> > > undocumented. What is the relationship between those two structs? I\n> > > see that they have pointers to each other, but I think the relationship\n> > > should be documented more clearly.\n> >\n> > I'm not sure the footprint of this patch worth doing but here is a bit\n> > more polished version.\n> \n> I noticed that the foreign_key test fails and it may have to do with\n> the fact that a partition's param info remains attached to the\n> parent's RI_ConstraintInfo even after it's detached from the parent\n> table using DETACH PARTITION.\n\nI think this bit about splitting the struct is a distraction. Let's get\na patch that solves the bug first, and then we can discuss what further\nrefinements we want to do. I think we should get your patch in\nCA+HiwqEOrfN9b=f3sDmySPGc4gO-L_VMFHXLLxVmmdP34e64+w@mail.gmail.com\ncommitted (which I have not read yet). Do you agree with this plan?\n\n\n",
"msg_date": "Mon, 7 Dec 2020 11:48:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Mon, Dec 7, 2020 at 23:48 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2020-Dec-07, Amit Langote wrote:\n>\n> > On Fri, Dec 4, 2020 at 12:05 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > > Also, the comment that was in RI_ConstraintInfo now appears in\n> > > > RI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\n> > > > undocumented. What is the relationship between those two structs? I\n> > > > see that they have pointers to each other, but I think the\n> relationship\n> > > > should be documented more clearly.\n> > >\n> > > I'm not sure the footprint of this patch worth doing but here is a bit\n> > > more polished version.\n> >\n> > I noticed that the foreign_key test fails and it may have to do with\n> > the fact that a partition's param info remains attached to the\n> > parent's RI_ConstraintInfo even after it's detached from the parent\n> > table using DETACH PARTITION.\n>\n> I think this bit about splitting the struct is a distraction. Let's get\n> a patch that solves the bug first, and then we can discuss what further\n> refinements we want to do. I think we should get your patch in\n> CA+HiwqEOrfN9b=f3sDmySPGc4gO-L_VMFHXLLxVmmdP34e64+w@mail.gmail.com\n> committed (which I have not read yet.) Do you agree with this plan?\n\n\nYeah, I agree.\n\n- Amit\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 8 Dec 2020 01:16:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "At Tue, 8 Dec 2020 01:16:00 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> Hi Alvaro,\n> \n> On Mon, Dec 7, 2020 at 23:48 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> > On 2020-Dec-07, Amit Langote wrote:\n> >\n> > > On Fri, Dec 4, 2020 at 12:05 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > > Also, the comment that was in RI_ConstraintInfo now appears in\n> > > > > RI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\n> > > > > undocumented. What is the relationship between those two structs? I\n> > > > > see that they have pointers to each other, but I think the\n> > relationship\n> > > > > should be documented more clearly.\n> > > >\n> > > > I'm not sure the footprint of this patch worth doing but here is a bit\n> > > > more polished version.\n> > >\n> > > I noticed that the foreign_key test fails and it may have to do with\n> > > the fact that a partition's param info remains attached to the\n> > > parent's RI_ConstraintInfo even after it's detached from the parent\n> > > table using DETACH PARTITION.\n> >\n> > I think this bit about splitting the struct is a distraction. Let's get\n> > a patch that solves the bug first, and then we can discuss what further\n> > refinements we want to do. I think we should get your patch in\n> > CA+HiwqEOrfN9b=f3sDmySPGc4gO-L_VMFHXLLxVmmdP34e64+w@mail.gmail.com\n> > committed (which I have not read yet.) Do you agree with this plan?\n> \n> \n> Yeah, I agree.\n\nOr https://www.postgresql.org/message-id/CA+HiwqGrr2YOO6voBM6m_OAc9w-WMxe1gOuQ-UyDPin6zJtyZw@mail.gmail.com ?\n\n+1 from me to either one.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 08 Dec 2020 12:04:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 12:04 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Tue, 8 Dec 2020 01:16:00 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > Hi Alvaro,\n> >\n> > On Mon, Dec 7, 2020 at 23:48 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > > On 2020-Dec-07, Amit Langote wrote:\n> > >\n> > > > On Fri, Dec 4, 2020 at 12:05 PM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > > > Also, the comment that was in RI_ConstraintInfo now appears in\n> > > > > > RI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\n> > > > > > undocumented. What is the relationship between those two structs? I\n> > > > > > see that they have pointers to each other, but I think the\n> > > relationship\n> > > > > > should be documented more clearly.\n> > > > >\n> > > > > I'm not sure the footprint of this patch worth doing but here is a bit\n> > > > > more polished version.\n> > > >\n> > > > I noticed that the foreign_key test fails and it may have to do with\n> > > > the fact that a partition's param info remains attached to the\n> > > > parent's RI_ConstraintInfo even after it's detached from the parent\n> > > > table using DETACH PARTITION.\n> > >\n> > > I think this bit about splitting the struct is a distraction. Let's get\n> > > a patch that solves the bug first, and then we can discuss what further\n> > > refinements we want to do. I think we should get your patch in\n> > > CA+HiwqEOrfN9b=f3sDmySPGc4gO-L_VMFHXLLxVmmdP34e64+w@mail.gmail.com\n> > > committed (which I have not read yet.) Do you agree with this plan?\n> >\n> >\n> > Yeah, I agree.\n>\n> Or https://www.postgresql.org/message-id/CA+HiwqGrr2YOO6voBM6m_OAc9w-WMxe1gOuQ-UyDPin6zJtyZw@mail.gmail.com ?\n>\n> +1 from me to either one.\n\nOh, I hadn't actually checked the actual message that Alvaro\nmentioned, but yeah I too am fine with either that one or the latest\none.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Dec 2020 12:59:16 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 11:01 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Dec 4, 2020 at 12:05 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > Also, the comment that was in RI_ConstraintInfo now appears in\n> > > RI_ConstraintParam, and the new struct (RI_ConstraintInfo) is now\n> > > undocumented. What is the relationship between those two structs? I\n> > > see that they have pointers to each other, but I think the relationship\n> > > should be documented more clearly.\n> >\n> > I'm not sure the footprint of this patch worth doing but here is a bit\n> > more polished version.\n>\n> I noticed that the foreign_key test fails and it may have to do with\n> the fact that a partition's param info remains attached to the\n> parent's RI_ConstraintInfo even after it's detached from the parent\n> table using DETACH PARTITION.\n\nJust for (maybe not so distant) future reference, I managed to fix the\nfailure with this:\n\ndiff --git a/src/backend/utils/adt/ri_triggers.c\nb/src/backend/utils/adt/ri_triggers.c\nindex 187884f..c67f2a6 100644\n--- a/src/backend/utils/adt/ri_triggers.c\n+++ b/src/backend/utils/adt/ri_triggers.c\n@@ -1932,7 +1932,7 @@ ri_BuildQueryKey(RI_QueryKey *key, const\nRI_ConstraintInfo *riinfo,\n * We assume struct RI_QueryKey contains no padding bytes, else we'd need\n * to use memset to clear them.\n */\n- key->constr_id = riinfo->param->query_key;\n+ key->constr_id = riinfo->constraint_id;\n key->constr_queryno = constr_queryno;\n }\n\nIt seems the shareable part (param) is not properly invalidated after\na partition's constraint is detached, apparently leaving query_key in\nit set to the parent's constraint id.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Dec 2020 18:15:53 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi Amit-san,\n\nI noticed that this patch\n(v3-0001-ri_triggers.c-Use-root-constraint-OID-as-key-to-r.patch)\nis not registered in the commitfest. I think it needs to be registered for\ncommit, is that correct?\n\nI have confirmed that this patch can be applied to HEAD (master),\nand that check-world PASS.\n\nBest Regards,\n\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com\n\n\n",
"msg_date": "Fri, 8 Jan 2021 13:02:38 +0900",
"msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Kuroda-san,\n\nOn Fri, Jan 8, 2021 at 1:02 PM Keisuke Kuroda\n<keisuke.kuroda.3862@gmail.com> wrote:\n> Hi Amit-san,\n>\n> I noticed that this patch\n> (v3-0001-ri_triggers.c-Use-root-constraint-OID-as-key-to-r.patch)\n> is not registered in the commitfest. I think it needs to be registered for\n> commit, is that correct?\n>\n> I have confirmed that this patch can be applied to HEAD (master),\n> and that check-world PASS.\n\nThanks for checking. Indeed, it should have been added to the January\ncommit-fest. I've added it to the March one:\n\nhttps://commitfest.postgresql.org/32/2930/\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Jan 2021 15:49:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi Amit-san,\n\n> Thanks for checking. Indeed, it should have been added to the January\n> commit-fest. I've added it to the March one:\n>\n> https://commitfest.postgresql.org/32/2930/\n\nThank you for your quick response!\n\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com\n\n\n",
"msg_date": "Tue, 12 Jan 2021 09:48:59 +0900",
"msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On 12/7/20 10:59 PM, Amit Langote wrote:\n> On Tue, Dec 8, 2020 at 12:04 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> At Tue, 8 Dec 2020 01:16:00 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n>>> On Mon, Dec 7, 2020 at 23:48 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>>>\n>>>> I think this bit about splitting the struct is a distraction. Let's get\n>>>> a patch that solves the bug first, and then we can discuss what further\n>>>> refinements we want to do. I think we should get your patch in\n>>>> CA+HiwqEOrfN9b=f3sDmySPGc4gO-L_VMFHXLLxVmmdP34e64+w@mail.gmail.com\n>>>> committed (which I have not read yet.) Do you agree with this plan?\n>>>\n>>>\n>>> Yeah, I agree.\n>>\n>> Or https://www.postgresql.org/message-id/CA+HiwqGrr2YOO6voBM6m_OAc9w-WMxe1gOuQ-UyDPin6zJtyZw@mail.gmail.com ?\n>>\n>> +1 from me to either one.\n> \n> Oh, I hadn't actually checked the actual message that Alvaro\n> mentioned, but yeah I too am fine with either that one or the latest\n> one.\n\nAny progress on the decision of how to handle this bug? Not sure if that \nwas handled elsewhere but it appears to be key to making progress on \nthis patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 3 Mar 2021 08:21:37 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi David,\n\nOn Wed, Mar 3, 2021 at 10:21 PM David Steele <david@pgmasters.net> wrote:\n> On 12/7/20 10:59 PM, Amit Langote wrote:\n> > On Tue, Dec 8, 2020 at 12:04 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> At Tue, 8 Dec 2020 01:16:00 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> >>> On Mon, Dec 7, 2020 at 23:48 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>>>\n> >>>> I think this bit about splitting the struct is a distraction. Let's get\n> >>>> a patch that solves the bug first, and then we can discuss what further\n> >>>> refinements we want to do. I think we should get your patch in\n> >>>> CA+HiwqEOrfN9b=f3sDmySPGc4gO-L_VMFHXLLxVmmdP34e64+w@mail.gmail.com\n> >>>> committed (which I have not read yet.) Do you agree with this plan?\n> >>>\n> >>>\n> >>> Yeah, I agree.\n> >>\n> >> Or https://www.postgresql.org/message-id/CA+HiwqGrr2YOO6voBM6m_OAc9w-WMxe1gOuQ-UyDPin6zJtyZw@mail.gmail.com ?\n> >>\n> >> +1 from me to either one.\n> >\n> > Oh, I hadn't actually checked the actual message that Alvaro\n> > mentioned, but yeah I too am fine with either that one or the latest\n> > one.\n>\n> Any progress on the decision of how to handle this bug? Not sure if that\n> was handled elsewhere but it appears to be key to making progress on\n> this patch.\n\nThe v3 patch posted on Dec 4 is still what is being proposed to fix this.\n\nI don't know of any unaddressed comments on the patch, so I've marked\nthe entry Ready for Committer.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Mar 2021 22:59:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On 2021-Mar-03, Amit Langote wrote:\n\n> I don't know of any unaddressed comments on the patch, so I've marked\n> the entry Ready for Committer.\n\nThanks, I'll look at it later this week.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n",
"msg_date": "Wed, 3 Mar 2021 12:00:35 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Updated patch attached.\n\nThis claim seems false on its face:\n\n> All child constraints of a given foreign key constraint can use the\n> same RI query and the resulting plan, that is, no need to create as\n> many copies of the query and the plan as there are partitions, as\n> happens now due to the child constraint OID being used in the key\n> for ri_query_cache.\n\nWhat if the child tables don't have the same physical column numbers\nas the parent? The comment claiming that it's okay if riinfo->fk_attnums\ndoesn't match seems quite off point, because the query plan is still\ngoing to need to use the correct column numbers. Even if column numbers\nare the same, the plan would also contain table and index OIDs that could\nonly be right for one partition.\n\nI could imagine translating a parent plan to apply to a child instead of\nbuilding it from scratch, but that would take a lot of code we don't have\n(there's no rewriteHandler infrastructure for plan nodes).\n\nMaybe I'm missing something, because I see that the cfbot claims\nthis is passing, and I'd sure like to think that our test coverage\nis not so thin that it'd fail to detect probing the wrong partition\nfor foreign key matches. But that's what it looks like this patch\nwill do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 16:00:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Updated patch attached.\n>\n> This claim seems false on its face:\n>\n> > All child constraints of a given foreign key constraint can use the\n> > same RI query and the resulting plan, that is, no need to create as\n> > many copies of the query and the plan as there are partitions, as\n> > happens now due to the child constraint OID being used in the key\n> > for ri_query_cache.\n>\n> What if the child tables don't have the same physical column numbers\n> as the parent? The comment claiming that it's okay if riinfo->fk_attnums\n> doesn't match seems quite off point, because the query plan is still\n> going to need to use the correct column numbers. Even if column numbers\n> are the same, the plan would also contain table and index OIDs that could\n> only be right for one partition.\n>\n> I could imagine translating a parent plan to apply to a child instead of\n> building it from scratch, but that would take a lot of code we don't have\n> (there's no rewriteHandler infrastructure for plan nodes).\n>\n> Maybe I'm missing something, because I see that the cfbot claims\n> this is passing, and I'd sure like to think that our test coverage\n> is not so thin that it'd fail to detect probing the wrong partition\n> for foreign key matches. But that's what it looks like this patch\n> will do.\n\nThe quoted comment could have been written to be clearer about this,\nbut it's not talking about the table that is to be queried, but the\ntable whose RI trigger is being executed. 
In all the cases except one\n(mentioned below), the table that is queried is the same irrespective\nof which partition's trigger is being executed, so we're basically\ncreating the same plan separately for each partition.\n\ncreate table foo (a int primary key) partition by range (a);\ncreate table foo1 partition of foo for values from (minvalue) to (1);\ncreate table foo2 partition of foo for values from (1) to (maxvalue);\ncreate table bar (a int references foo) partition by range (a);\ncreate table bar1 partition of bar for values from (minvalue) to (1);\ncreate table bar2 partition of bar for values from (1) to (maxvalue);\n\ninsert into foo values (0), (1);\ninsert into bar values (0), (1);\n\nFor the 2nd insert statement, RI_FKey_check() issues the following\nquery irrespective of whether it is called from bar1's or bar2's\ninsert check RI trigger:\n\nSELECT 1 FROM foo WHERE foo.a = $1;\n\nLikewise for:\n\ndelete from foo;\n\nri_restrict() issues the following query irrespective of whether\ncalled from foo1's or foo2's delete action trigger:\n\nSELECT 1 FROM bar WHERE bar.a = $1;\n\nThe one case I found in which the queried table is the same as the\ntable whose trigger has been invoked is ri_Check_Pk_Match(), in which\ncase, it's actually wrong for partitions to share the plan, so the\npatch was wrong in that case. 
Apparently, we didn't test that case\nwith partitioning, so I added one as follows:\n\n+-- test that ri_Check_Pk_Match() scans the correct partition for a deferred\n+-- ON DELETE/UPDATE NO ACTION constraint\n+CREATE SCHEMA fkpart10\n+ CREATE TABLE tbl1(f1 int PRIMARY KEY) PARTITION BY RANGE(f1)\n+ CREATE TABLE tbl1_p1 PARTITION OF tbl1 FOR VALUES FROM (minvalue) TO (1)\n+ CREATE TABLE tbl1_p2 PARTITION OF tbl1 FOR VALUES FROM (1) TO (maxvalue)\n+ CREATE TABLE tbl2(f1 int REFERENCES tbl1 DEFERRABLE INITIALLY DEFERRED);\n+INSERT INTO fkpart10.tbl1 VALUES (0), (1);\n+INSERT INTO fkpart10.tbl2 VALUES (0), (1);\n+BEGIN;\n+DELETE FROM fkpart10.tbl1 WHERE f1 = 0;\n+UPDATE fkpart10.tbl1 SET f1 = 2 WHERE f1 = 1;\n+INSERT INTO fkpart10.tbl1 VALUES (0), (1);\n+COMMIT;\n+DROP SCHEMA fkpart10 CASCADE;\n+NOTICE: drop cascades to 2 other objects\n+DETAIL: drop cascades to table fkpart10.tbl1\n+drop cascades to table fkpart10.tbl2\n\nWith the v3 patch, the test case fails with the following error on COMMIT:\n\n+ERROR: update or delete on table \"tbl1_p2\" violates foreign key\nconstraint \"tbl2_f1_fkey2\" on table \"tbl2\"\n+DETAIL: Key (f1)=(1) is still referenced from table \"tbl2\".\n\nThat's because in ri_Check_Pk_Match(), partition tbl2 ends up using\nthe same cached plan as tbl1 and so scans tbl1 to check whether the\nrow deleted from tbl2 has reappeared, which is of course wrong. I\nhave fixed that by continuing to use the individual partition's\nconstraint's OID as the query key for this particular query.\n\nI have also removed the somewhat confusing comment about fk_attnums.\nThe point it was trying to make is that fk_attnums being possibly\ndifferent among partitions does not make a difference to what any\ngiven shareable query's text ultimately looks like. 
Those attribute\nnumbers are only used by the macros RIAttName(), RIAttType(),\nRIAttCollation() when generating the query text and they'd all return\nthe same values no matter which partition's attribute numbers are\nused.\n\nUpdated patch attached.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 8 Mar 2021 14:27:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 5:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Updated patch attached.\n>\n> This claim seems false on its face:\n>\n> > All child constraints of a given foreign key constraint can use the\n> > same RI query and the resulting plan, that is, no need to create as\n> > many copies of the query and the plan as there are partitions, as\n> > happens now due to the child constraint OID being used in the key\n> > for ri_query_cache.\n>\n> What if the child tables don't have the same physical column numbers\n> as the parent?\n\n\nMy point below is a bit off-topic, but I want to share it here. Since\nwe implement a partitioned table in PG with the inherited class, it has much\nmore flexibility than other databases. Like in PG, we allow different\npartitions\nhave different physical order, different indexes, maybe different index\nstates.\nthat would cause our development work hard in many places and cause some\nruntime issues as well (like catalog memory usage), have we discussed\nlimiting some flexibility so that we can have better coding/running\nexperience?\nI want to do some research in this direction, but it would be better that I\ncan\nlisten to any advice from others. More specifically, I want to reduce the\nmemory\nusage of Partitioned table/index as the first step. In my testing, each\nIndexOptInfo\nwill use 2kB memory in each backend.\n\n\n> The comment claiming that it's okay if riinfo->fk_attnums\n> doesn't match seems quite off point, because the query plan is still\n> going to need to use the correct column numbers. 
Even if column numbers\n> are the same, the plan would also contain table and index OIDs that could\n> only be right for one partition.\n>\n> I could imagine translating a parent plan to apply to a child instead of\n> building it from scratch, but that would take a lot of code we don't have\n> (there's no rewriteHandler infrastructure for plan nodes).\n>\n> Maybe I'm missing something, because I see that the cfbot claims\n> this is passing, and I'd sure like to think that our test coverage\n> is not so thin that it'd fail to detect probing the wrong partition\n> for foreign key matches. But that's what it looks like this patch\n> will do.\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Mon, 8 Mar 2021 15:43:36 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 3:43 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Fri, Mar 5, 2021 at 5:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Amit Langote <amitlangote09@gmail.com> writes:\n>> > Updated patch attached.\n>>\n>> This claim seems false on its face:\n>>\n>> > All child constraints of a given foreign key constraint can use the\n>> > same RI query and the resulting plan, that is, no need to create as\n>> > many copies of the query and the plan as there are partitions, as\n>> > happens now due to the child constraint OID being used in the key\n>> > for ri_query_cache.\n>>\n>> What if the child tables don't have the same physical column numbers\n>> as the parent?\n>\n>\n> My point below is a bit off-topic, but I want to share it here. Since\n> we implement a partitioned table in PG with the inherited class, it has\n> much\n> more flexibility than other databases. Like in PG, we allow different\n> partitions\n> have different physical order, different indexes, maybe different index\n> states.\n> that would cause our development work hard in many places and cause some\n> runtime issues as well (like catalog memory usage), have we discussed\n> limiting some flexibility so that we can have better coding/running\n> experience?\n> I want to do some research in this direction, but it would be better that\n> I can\n> listen to any advice from others. More specifically, I want to reduce the\n> memory\n> usage of Partitioned table/index as the first step. In my testing, each\n> IndexOptInfo\n> will use 2kB memory in each backend.\n>\n\nAs for the compatible issue, will it be ok to introduce a new concept\nlike \"\nCREATE TABLE p (a int) partitioned by list(a) RESTRICTED\". We can add these\nlimitation to restricted partitioned relation only.\n\n\n>\nThe comment claiming that it's okay if riinfo->fk_attnums\n>> doesn't match seems quite off point, because the query plan is still\n>> going to need to use the correct column numbers. 
Even if column numbers\n>> are the same, the plan would also contain table and index OIDs that could\n>> only be right for one partition.\n>>\n>> I could imagine translating a parent plan to apply to a child instead of\n>> building it from scratch, but that would take a lot of code we don't have\n>> (there's no rewriteHandler infrastructure for plan nodes).\n>>\n>> Maybe I'm missing something, because I see that the cfbot claims\n>> this is passing, and I'd sure like to think that our test coverage\n>> is not so thin that it'd fail to detect probing the wrong partition\n>> for foreign key matches. But that's what it looks like this patch\n>> will do.\n>>\n>> regards, tom lane\n>>\n>>\n>>\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Mon, 8 Mar 2021 19:39:28 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi Andy,\n\nOn Mon, Mar 8, 2021 at 8:39 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Mon, Mar 8, 2021 at 3:43 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> My point below is a bit off-topic, but I want to share it here. Since\n>> we implement a partitioned table in PG with the inherited class, it has much\n>> more flexibility than other databases. Like in PG, we allow different partitions\n>> have different physical order, different indexes, maybe different index states.\n>> that would cause our development work hard in many places and cause some\n>> runtime issues as well (like catalog memory usage), have we discussed\n>> limiting some flexibility so that we can have better coding/running experience?\n>> I want to do some research in this direction, but it would be better that I can\n>> listen to any advice from others. More specifically, I want to reduce the memory\n>> usage of Partitioned table/index as the first step. In my testing, each IndexOptInfo\n>> will use 2kB memory in each backend.\n>\n>\n> As for the compatible issue, will it be ok to introduce a new concept like \"\n> CREATE TABLE p (a int) partitioned by list(a) RESTRICTED\". We can add these\n> limitation to restricted partitioned relation only.\n\nI think you'd agree that the topics you want to discuss deserve a\nseparate discussion thread. You may refer to this discussion in that\nnew thread if you think that your proposals can solve the problem\nbeing discussed here more generally, which would of course be great.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Mar 2021 21:40:46 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 8:42 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Andy,\n>\n> On Mon, Mar 8, 2021 at 8:39 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > On Mon, Mar 8, 2021 at 3:43 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> >> My point below is a bit off-topic, but I want to share it here. Since\n> >> we implement a partitioned table in PG with the inherited class, it has\n> much\n> >> more flexibility than other databases. Like in PG, we allow different\n> partitions\n> >> have different physical order, different indexes, maybe different\n> index states.\n> >> that would cause our development work hard in many places and cause some\n> >> runtime issues as well (like catalog memory usage), have we discussed\n> >> limiting some flexibility so that we can have better coding/running\n> experience?\n> >> I want to do some research in this direction, but it would be better\n> that I can\n> >> listen to any advice from others. More specifically, I want to reduce\n> the memory\n> >> usage of Partitioned table/index as the first step. In my testing,\n> each IndexOptInfo\n> >> will use 2kB memory in each backend.\n> >\n> >\n> > As for the compatible issue, will it be ok to introduce a new concept\n> like \"\n> > CREATE TABLE p (a int) partitioned by list(a) RESTRICTED\". We can add\n> these\n> > limitation to restricted partitioned relation only.\n>\n> I think you'd agree that the topics you want to discuss deserve a\n> separate discussion thread. 
You may refer to this discussion in that\n> new thread if you think that your proposals can solve the problem\n> being discussed here more generally, which would of course be great.\n>\n>\nSure, I can prepare more data and start a new thread for this.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Mon, 8 Mar 2021 20:53:33 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 9:53 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Mon, Mar 8, 2021 at 8:42 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Mon, Mar 8, 2021 at 8:39 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> > On Mon, Mar 8, 2021 at 3:43 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> >> My point below is a bit off-topic, but I want to share it here. Since\n>> >> we implement a partitioned table in PG with the inherited class, it has much\n>> >> more flexibility than other databases. Like in PG, we allow different partitions\n>> >> have different physical order, different indexes, maybe different index states.\n>> >> that would cause our development work hard in many places and cause some\n>> >> runtime issues as well (like catalog memory usage), have we discussed\n>> >> limiting some flexibility so that we can have better coding/running experience?\n>> >> I want to do some research in this direction, but it would be better that I can\n>> >> listen to any advice from others. More specifically, I want to reduce the memory\n>> >> usage of Partitioned table/index as the first step. In my testing, each IndexOptInfo\n>> >> will use 2kB memory in each backend.\n>> >\n>> >\n>> > As for the compatible issue, will it be ok to introduce a new concept like \"\n>> > CREATE TABLE p (a int) partitioned by list(a) RESTRICTED\". We can add these\n>> > limitation to restricted partitioned relation only.\n>>\n>> I think you'd agree that the topics you want to discuss deserve a\n>> separate discussion thread. You may refer to this discussion in that\n>> new thread if you think that your proposals can solve the problem\n>> being discussed here more generally, which would of course be great.\n>\n> Sure, I can prepare more data and start a new thread for this.\n\nGreat, thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Mar 2021 21:58:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Mar 5, 2021 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This claim seems false on its face:\n>>> All child constraints of a given foreign key constraint can use the\n>>> same RI query and the resulting plan, that is, no need to create as\n>>> many copies of the query and the plan as there are partitions, as\n>>> happens now due to the child constraint OID being used in the key\n>>> for ri_query_cache.\n\n> The quoted comment could have been written to be clearer about this,\n> but it's not talking about the table that is to be queried, but the\n> table whose RI trigger is being executed. In all the cases except one\n> (mentioned below), the table that is queried is the same irrespective\n> of which partition's trigger is being executed, so we're basically\n> creating the same plan separately for each partition.\n\nHmm. So, the key point is that the values coming from the partitioned\nchild table are injected into the test query as parameters, not as\ncolumn references, thus it doesn't matter *to the test query* what\nnumbers the referencing columns have in that child. We just have to\nbe sure we pass the right parameter values. But ... 
doesn't the code\nuse riinfo->fk_attnums[] to pull out the values to be passed?\n\nIOW, I now get the point about being able to share the SPI plans,\nbut I'm still dubious about sharing the RI_ConstraintInfo cache entries.\n\nIt looks to me like the v4 patch is now actually not sharing the\ncache entries, ie their hash key is just the child constraint OID\nsame as before; but the comments are pretty confused about this.\n\nIt might be simpler if you add just one new field which is the OID of\nthe constraint that we're building the SPI query from, which might be\neither equal to constraint_id, or the OID of some parent constraint.\nIn particular it's not clear to me why we need both constraint_parent\nand constraint_root_id.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 18:37:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 8:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Fri, Mar 5, 2021 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> This claim seems false on its face:\n> >>> All child constraints of a given foreign key constraint can use the\n> >>> same RI query and the resulting plan, that is, no need to create as\n> >>> many copies of the query and the plan as there are partitions, as\n> >>> happens now due to the child constraint OID being used in the key\n> >>> for ri_query_cache.\n>\n> > The quoted comment could have been written to be clearer about this,\n> > but it's not talking about the table that is to be queried, but the\n> > table whose RI trigger is being executed. In all the cases except one\n> > (mentioned below), the table that is queried is the same irrespective\n> > of which partition's trigger is being executed, so we're basically\n> > creating the same plan separately for each partition.\n>\n> Hmm. So, the key point is that the values coming from the partitioned\n> child table are injected into the test query as parameters, not as\n> column references, thus it doesn't matter *to the test query* what\n> numbers the referencing columns have in that child. We just have to\n> be sure we pass the right parameter values.\n\nRight.\n\n> But ... 
doesn't the code\n> use riinfo->fk_attnums[] to pull out the values to be passed?\n\nYes, from a slot that belongs to the child table.\n\n> IOW, I now get the point about being able to share the SPI plans,\n> but I'm still dubious about sharing the RI_ConstraintInfo cache entries.\n\nThere was actually a proposal upthread about sharing the\nRI_ConstraintInfo too, but we decided to not pursue that for now.\n\n> It looks to me like the v4 patch is now actually not sharing the\n> cache entries, ie their hash key is just the child constraint OID\n> same as before;\n\nYeah, you may see that we're only changing ri_BuildQueryKey() in the\npatch affecting only ri_query_cache, but not ri_LoadConstraintInfo()\nwhich leaves ri_constraint_cache unaffected.\n\n> but the comments are pretty confused about this.\n\nI've tried improving the comment in ri_BuildQueryKey() a bit to make\nclear what is and what is not being shared between partitions.\n\n> It might be simpler if you add just one new field which is the OID of\n> the constraint that we're building the SPI query from, which might be\n> either equal to constraint_id, or the OID of some parent constraint.\n> In particular it's not clear to me why we need both constraint_parent\n> and constraint_root_id.\n\nYeah, I think constraint_parent is a leftover from some earlier\nhacking. I have removed it.\n\nAttached updated patch.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 11 Mar 2021 00:44:53 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Mar 10, 2021 at 8:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm. So, the key point is that the values coming from the partitioned\n>> child table are injected into the test query as parameters, not as\n>> column references, thus it doesn't matter *to the test query* what\n>> numbers the referencing columns have in that child. We just have to\n>> be sure we pass the right parameter values.\n\n> Right.\n\nI did some cosmetic fooling with this (mostly, rewriting the comments\nYA time) and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Mar 2021 14:24:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 4:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Wed, Mar 10, 2021 at 8:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm. So, the key point is that the values coming from the partitioned\n> >> child table are injected into the test query as parameters, not as\n> >> column references, thus it doesn't matter *to the test query* what\n> >> numbers the referencing columns have in that child. We just have to\n> >> be sure we pass the right parameter values.\n>\n> > Right.\n>\n> I did some cosmetic fooling with this (mostly, rewriting the comments\n> YA time) and pushed it.\n\nPerfect. Thanks for your time on this.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Mar 2021 09:39:27 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Hi hackers,\n\n> >\n> > I did some cosmetic fooling with this (mostly, rewriting the comments\n> > YA time) and pushed it.\n>\n> Perfect. Thanks for your time on this.\n>\n\nThank you for your help! I'm glad to solve it.\n\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com\n\n\n",
"msg_date": "Thu, 11 Mar 2021 10:06:57 +0900",
"msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On 2021/03/11 9:39, Amit Langote wrote:\n> On Thu, Mar 11, 2021 at 4:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Amit Langote <amitlangote09@gmail.com> writes:\n>>> On Wed, Mar 10, 2021 at 8:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Hmm. So, the key point is that the values coming from the partitioned\n>>>> child table are injected into the test query as parameters, not as\n>>>> column references, thus it doesn't matter *to the test query* what\n>>>> numbers the referencing columns have in that child. We just have to\n>>>> be sure we pass the right parameter values.\n>>\n>>> Right.\n>>\n>> I did some cosmetic fooling with this (mostly, rewriting the comments\n>> YA time) and pushed it.\n> \n> Perfect. Thanks for your time on this.\n\n\nThanks for fixing the problem! :-D\n\n\nRegards,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Thu, 11 Mar 2021 15:10:04 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> writes:\n> Thanks for fixing the problem! :-D\n\nHmm, I'm not sure we're done with this patch:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=husky&dt=2021-03-10%2021%3A09%3A32\n\nThe critical log extract is\n\n2021-03-11 05:10:13.012 CET [21574:1082] pg_regress/foreign_key LOG: statement: insert into fk_notpartitioned_pk (a, b)\n\t select 2048, x from generate_series(1,10) x;\n2021-03-11 05:10:13.104 CET [11830:368] LOG: server process (PID 21574) was terminated by signal 11: Segmentation fault\n2021-03-11 05:10:13.104 CET [11830:369] DETAIL: Failed process was running: insert into fk_notpartitioned_pk (a, b)\n\t select 2048, x from generate_series(1,10) x;\n\nNow, maybe it's a coincidence that husky failed on a\npartitioned-foreign-key test right after this patch went in, but I bet\nnot. Since husky runs CLOBBER_CACHE_ALWAYS, it looks to me like we've\noverlooked some cache-reset scenario or other.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Mar 2021 12:01:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "I wrote:\n> Now, maybe it's a coincidence that husky failed on a\n> partitioned-foreign-key test right after this patch went in, but I bet\n> not. Since husky runs CLOBBER_CACHE_ALWAYS, it looks to me like we've\n> overlooked some cache-reset scenario or other.\n\nAfter reproducing it here, that *is* a coincidence. I shall now go beat\nup on the correct blame-ee, instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Mar 2021 12:44:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
},
{
"msg_contents": "On 2021/03/12 2:44, Tom Lane wrote:\n> I wrote:\n>> Now, maybe it's a coincidence that husky failed on a\n>> partitioned-foreign-key test right after this patch went in, but I bet\n>> not. Since husky runs CLOBBER_CACHE_ALWAYS, it looks to me like we've\n>> overlooked some cache-reset scenario or other.\n> \n> After reproducing it here, that *is* a coincidence. I shall now go beat\n> up on the correct blame-ee, instead.\n\n\nI did \"make installcheck-parallel\" on 7bb97211a, just in case. It was successful. :-D\n\nRegards,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Fri, 12 Mar 2021 12:35:08 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Huge memory consumption on partitioned table with FKs"
}
] |
[
{
"msg_contents": "Hi\n I would like to know if it is possible to migrate Oracle multitenant database (with multiple PDB) to PostgreSQL ?\n\n Thanks in advance\n\nBest Regards\n\nD ROS\nEDF\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Tue, 24 Nov 2020 08:09:29 +0000",
"msg_from": "ROS Didier <didier.ros@edf.fr>",
"msg_from_op": true,
"msg_subject": "Migration Oracle multitenant database to PostgreSQL ?"
},
{
"msg_contents": "\nROS Didier schrieb am 24.11.2020 um 09:09:\n> I would like to know if it is possible to migrate Oracle multitenant\n> database (with multiple PDB) to PostgreSQL ?\nPostgres' databases are very similar to Oracle's PDBs.\n\nProbably the biggest difference is, that you can't shutdown\na single database as you can do with a PDB.\n\nDatabase users in Postgres are like Oracle's \"common users\", they are\nglobal for the whole instance (aka \"cluster\" in Postgres' terms). There\nare no database specific users.\n\nThomas\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 24 Nov 2020 09:22:26 +0100",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Migration Oracle multitenant database to PostgreSQL ?"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 09:22:26AM +0100, Thomas Kellerer wrote:\n> \n> ROS Didier schrieb am 24.11.2020 um 09:09:\n> > I would like to know if it is possible to migrate Oracle multitenant\n> > database (with multiple PDB) to PostgreSQL ?\n> Postgres' databases are very similar to Oracle's PDBs.\n> \n> Probably the biggest difference is, that you can't shutdown\n> a single database as you can do with a PDB.\n\nI guess you could lock users out of a single database by changing\npg_hba.conf and doing reload.\n\n> Database users in Postgres are like Oracle's \"common users\", they are\n> global for the whole instance (aka \"cluster\" in Postgres' terms). There\n> are no database specific users.\n\nGood to know.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 24 Nov 2020 09:09:06 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Migration Oracle multitenant database to PostgreSQL ?"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nI found that the cache_plan argument to ri_PlanCheck has already been removed since\n5b7ba75f7ff854003231e8099e3038c7e2eba875. I think we can remove the comments\nfor cache_plan from ri_PlanCheck.\n\ndiff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c\nindex 7e2b2e3dd6..02b1a3868f 100644\n--- a/src/backend/utils/adt/ri_triggers.c\n+++ b/src/backend/utils/adt/ri_triggers.c\n@@ -2130,9 +2130,6 @@ InvalidateConstraintCacheCallBack(Datum arg, int cacheid, uint32 hashvalue)\n\n /*\n * Prepare execution plan for a query to enforce an RI restriction\n- *\n- * If cache_plan is true, the plan is saved into our plan hashtable\n- * so that we don't need to plan it again.\n */\n static SPIPlanPtr\n ri_PlanCheck(const char *querystr, int nargs, Oid *argtypes,\n\n--\nBest regards\nJapin Li\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Tue, 24 Nov 2020 11:16:20 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Remove cache_plan argument comments to ri_PlanCheck"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 4:46 PM Li Japin <japinli@hotmail.com> wrote:\n>\n> Hi, hackers\n>\n> I found that the cache_plan argument to ri_PlanCheck already been remove since\n> 5b7ba75f7ff854003231e8099e3038c7e2eba875. I think we can remove the comments\n> tor cache_plan to ri_PlanCheck.\n>\n> diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c\n> index 7e2b2e3dd6..02b1a3868f 100644\n> --- a/src/backend/utils/adt/ri_triggers.c\n> +++ b/src/backend/utils/adt/ri_triggers.c\n> @@ -2130,9 +2130,6 @@ InvalidateConstraintCacheCallBack(Datum arg, int cacheid, uint32 hashvalue)\n>\n> /*\n> * Prepare execution plan for a query to enforce an RI restriction\n> - *\n> - * If cache_plan is true, the plan is saved into our plan hashtable\n> - * so that we don't need to plan it again.\n> */\n> static SPIPlanPtr\n> ri_PlanCheck(const char *querystr, int nargs, Oid *argtypes,\n>\n\nYour patch looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Nov 2020 19:26:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove cache_plan argument comments to ri_PlanCheck"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 7:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 24, 2020 at 4:46 PM Li Japin <japinli@hotmail.com> wrote:\n> >\n> > Hi, hackers\n> >\n> > I found that the cache_plan argument to ri_PlanCheck already been remove since\n> > 5b7ba75f7ff854003231e8099e3038c7e2eba875. I think we can remove the comments\n> > tor cache_plan to ri_PlanCheck.\n> >\n> > diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c\n> > index 7e2b2e3dd6..02b1a3868f 100644\n> > --- a/src/backend/utils/adt/ri_triggers.c\n> > +++ b/src/backend/utils/adt/ri_triggers.c\n> > @@ -2130,9 +2130,6 @@ InvalidateConstraintCacheCallBack(Datum arg, int cacheid, uint32 hashvalue)\n> >\n> > /*\n> > * Prepare execution plan for a query to enforce an RI restriction\n> > - *\n> > - * If cache_plan is true, the plan is saved into our plan hashtable\n> > - * so that we don't need to plan it again.\n> > */\n> > static SPIPlanPtr\n> > ri_PlanCheck(const char *querystr, int nargs, Oid *argtypes,\n> >\n>\n> Your patch looks good to me.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 25 Nov 2020 11:15:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove cache_plan argument comments to ri_PlanCheck"
}
] |
[
{
"msg_contents": "Hi:\n\nFor example, we added a new field in a node in primnodes.h\n\nstruct FuncExpr\n{\n\n + int newf;\n};\n\nthen we modified the copy/read/out functions for this node. In\n_readFuncExpr,\nwe probably add something like\n\nstatic FuncExpr\n_readFuncExpr(..)\n{\n..\n+ READ_INT_FIELD(newf);\n};\n\nThen we will get a compatibility issue if we create a view with the node in\nthe older version and access the view with the new binary. I think we can\nbypass this issue easily with something like\n\nREAD_INT_FIELD_UNMUST(newf, defaultvalue);\n\nHowever I didn't see any code like this in our code base. Does it not\nwork or is it something not worth doing?\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 24 Nov 2020 21:49:25 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "About adding a new field to a struct in primnodes.h"
},
{
"msg_contents": "On 2020-Nov-24, Andy Fan wrote:\n\n> then we modified the copy/read/out functions for this node. In\n> _readFuncExpr,\n> we probably add something like\n\n> [ ... ]\n\n> Then we will get a compatible issue if we create a view with the node in\n> the older version and access the view with the new binary.\n\nWhen nodes are modified, you have to increment CATALOG_VERSION_NO which\nmakes the new code incompatible with a datadir previously created -- for\nprecisely this reason.\n\n\n",
"msg_date": "Tue, 24 Nov 2020 12:11:35 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: About adding a new field to a struct in primnodes.h"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 11:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2020-Nov-24, Andy Fan wrote:\n>\n> > then we modified the copy/read/out functions for this node. In\n> > _readFuncExpr,\n> > we probably add something like\n>\n> > [ ... ]\n>\n> > Then we will get a compatible issue if we create a view with the node in\n> > the older version and access the view with the new binary.\n>\n> When nodes are modified, you have to increment CATALOG_VERSION_NO which\n> makes the new code incompatible with a datadir previously created\n\n\nThank you Alvaro. I just think this issue can be avoided without causing\nthe incompatible issue. IIUC, after the new binary isn't compatible with\nthe datadir, the user has to dump/restore the whole database or use\npg_upgrade. It is kind of extra work.\n\n\n-- for precisely this reason.\n>\n\nI probably didn't get the real point of this, sorry about that.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 25 Nov 2020 08:10:21 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: About adding a new field to a struct in primnodes.h"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 8:10 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Tue, Nov 24, 2020 at 11:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>\n>> On 2020-Nov-24, Andy Fan wrote:\n>>\n>> > then we modified the copy/read/out functions for this node. In\n>> > _readFuncExpr,\n>> > we probably add something like\n>>\n>> > [ ... ]\n>>\n>> > Then we will get a compatible issue if we create a view with the node in\n>> > the older version and access the view with the new binary.\n>>\n>> When nodes are modified, you have to increment CATALOG_VERSION_NO which\n>> makes the new code incompatible with a datadir previously created\n>\n>\n> Thank you Alvaro. I just think this issue can be avoided without causing\n> the incompatible issue. IIUC, after the new binary isn't compatible with\n> the datadir, the user has to dump/restore the whole database or use\n> pg_upgrade. It is kind of extra work.\n>\n>\n> -- for precisely this reason.\n>>\n>\n> I probably didn't get the real point of this, sorry about that.\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\n\nWhat I mean here is something like below.\n\ndiff --git a/src/backend/nodes/read.c b/src/backend/nodes/read.c\nindex 8c1e39044c..c3eba00639 100644\n--- a/src/backend/nodes/read.c\n+++ b/src/backend/nodes/read.c\n@@ -29,6 +29,7 @@\n\n /* Static state for pg_strtok */\n static const char *pg_strtok_ptr = NULL;\n+static const char *pg_strtok_extend(int *length, bool testonly);\n\n /* State flag that determines how readfuncs.c should treat location fields\n*/\n #ifdef WRITE_READ_PARSE_PLAN_TREES\n@@ -102,6 +103,20 @@ stringToNodeWithLocations(const char *str)\n #endif\n\n\n+const char*\n+pg_strtok(int *length)\n+{\n+ return pg_strtok_extend(length, false);\n+}\n+\n+/*\n+ * Just peak the next filed name without changing the global state.\n+ */\n+const char*\n+pg_peak_next_field(int *length)\n+{\n+ return pg_strtok_extend(length, true);\n+}\n 
/*****************************************************************************\n *\n * the lisp token parser\n@@ -149,7 +164,7 @@ stringToNodeWithLocations(const char *str)\n * as a single token.\n */\n const char *\n-pg_strtok(int *length)\n+pg_strtok_extend(int *length, bool testonly)\n {\n const char *local_str; /* working pointer to string */\n const char *ret_str; /* start of token to return */\n@@ -162,7 +177,8 @@ pg_strtok(int *length)\n if (*local_str == '\\0')\n {\n *length = 0;\n- pg_strtok_ptr = local_str;\n+ if (!testonly)\n+ pg_strtok_ptr = local_str;\n return NULL; /* no more tokens */\n }\n\n@@ -199,7 +215,8 @@ pg_strtok(int *length)\n if (*length == 2 && ret_str[0] == '<' && ret_str[1] == '>')\n *length = 0;\n\n- pg_strtok_ptr = local_str;\n+ if (!testonly)\n+ pg_strtok_ptr = local_str;\n\n return ret_str;\n }\n\n\n-- the below is a demo code to use it.\ndiff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c\nindex 76ab5ae8b7..c19cd45793 100644\n--- a/src/backend/nodes/readfuncs.c\n+++ b/src/backend/nodes/readfuncs.c\n@@ -689,13 +689,27 @@ _readFuncExpr(void)\n\n READ_OID_FIELD(funcid);\n READ_OID_FIELD(funcresulttype);\n- READ_BOOL_FIELD(funcretset);\n+ token = pg_peak_next_field(&length);\n+ if (memcmp(token, \":funcretset\", strlen(\":funcretset\")) == 0)\n+ {\n+ READ_BOOL_FIELD(funcretset);\n+ }\n+ else\n+ local_node->funcretset = false;\n+\n READ_BOOL_FIELD(funcvariadic);\n READ_ENUM_FIELD(funcformat, CoercionForm);\n- READ_OID_FIELD(funccollid);\n+ READ_OID_FIELD(funccollid);\n READ_OID_FIELD(inputcollid);\n READ_NODE_FIELD(args);\n- READ_LOCATION_FIELD(location);\n+\n+ token = pg_peak_next_field(&length);\n+ if (memcmp(token, \":location\", strlen(\":location\")) == 0)\n+ {\n+ READ_LOCATION_FIELD(location);\n+ }\n+ else\n+ local_node->location = -1;\n\n READ_DONE();\n }\n\n\nAfter writing it, I am feeling that this will waste a bit of performance\nsince we need to token a part of the string twice. 
But the overhead\nlooks acceptable to me, and it can be avoided if we refactor the code again with a\n\"read_fieldname_or_nomove(char *fieldname, int *length)\" function.\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 25 Nov 2020 09:17:37 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: About adding a new filed to a struct in primnodes.h"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> What I mean here is something like below.\n\nWhat exactly would be the value of that?\n\nThere is work afoot, or at least on people's to-do lists, to mechanize\ncreation of the outfuncs/readfuncs/etc code directly from the Node\nstruct declarations. So special cases for particular fields are going\nto be looked on with great disfavor, even if you can make a case that\nthere's some reason to do it. (Which I'm not seeing. We've certainly\nnever had to do it in the past.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Nov 2020 20:40:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: About adding a new filed to a struct in primnodes.h"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > What I mean here is something like below.\n>\n> What exactly would be the value of that?\n>\n> There is work afoot, or at least on people's to-do lists, to mechanize\n> creation of the outfuncs/readfuncs/etc code directly from the Node\n> struct declarations. So special cases for particular fields are going\n> to be looked on with great disfavor,\n\n\nI agree with this, but I still see value in my suggestion\nunless I missed something. Per my current understanding, the code\nmakes it too easy to render the datadir incompatible with the binary, which\nrequires the user to do some extra work, and my suggestion is to avoid that;\nthat is its value. I think it is possible to make this strategy work with\nthe \"mechanized creation of the outfuncs/readfuncs/etc node\", but there is\nno point in trying that unless we reach agreement on whether we should do it.\n\n\n> even if you can make a case that\n> there's some reason to do it. (Which I'm not seeing. We've certainly\n> never had to do it in the past.)\n>\n> regards, tom lane\n>\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 25 Nov 2020 10:31:00 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: About adding a new filed to a struct in primnodes.h"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> On Wed, Nov 25, 2020 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What exactly would be the value of that?\n>> ...\n\n> I agree with this, but I don't think there is no value in my suggestion\n> unless I missed something. Per my current understanding, the code\n> is too easy to make the datadir incompatible with binary, which requires\n> user to do some extra work and my suggestion is to avoid that and\n> it is the value.\n\nIt'd be fairly pointless to worry about this so far as ordinary users are\nconcerned, who are only going to encounter the situation in connection\nwith a major-version upgrade. There is no way that the only catalog\nincompatibility they'd face is an addition or removal of a field in some\nquery node. In practice, a major-version upgrade is also going to\ninvolve things like these:\n\n* addition (or, sometimes, removal) of entire system catalogs\n* addition, removal, or redefinition of columns within an existing catalog\n* addition or removal of standard entries in system catalogs\n* restructuring of stored rules/expressions in ways that are more\n complicated than simple addition/removal of fields\n\nThe second of those, in particular, is quite fatal to any idea of\nmaking a version-N backend executable work with version-not-N\ncatalogs. Catalog rowtypes are wired into the C code at a pretty\nbasic level.\n\nWe already sweat a great deal to make user table contents be upwards\ncompatible across major versions. I think that trying to take on\nsome similar guarantee with respect to system catalog contents would\nserve mostly to waste a lot of developer manpower that can be put to\nmuch better uses.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Nov 2020 22:54:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: About adding a new filed to a struct in primnodes.h"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 11:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > On Wed, Nov 25, 2020 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> What exactly would be the value of that?\n> >> ...\n>\n> > I agree with this, but I don't think there is no value in my suggestion\n> > unless I missed something. Per my current understanding, the code\n> > is too easy to make the datadir incompatible with binary, which requires\n> > user to do some extra work and my suggestion is to avoid that and\n> > it is the value.\n>\n> In practice, a major-version upgrade is also going to involve things like\n> these:\n>\n>\nActually I thought if this would be the main reason for this case, now you\nconfirmed that. Thank you.\n\n\n> We already sweat a great deal to make user table contents be upwards\n> compatible across major versions. I think that trying to take on\n> some similar guarantee with respect to system catalog contents would\n> serve mostly to waste a lot of developer manpower that can be put to\n> much better uses.\n>\n> Less experienced people know what to do, but real experienced people\nknow what not to do. Thanks for sharing the background in detail!\n\nActually I may struggle too much in a case where I port a feature with\nnode modification into a lower version which I have to consider the\ncompatible issue. The feature is inline-cte, We added a new field\nin CommonTableExpr which is used to hint optimizer if to think about\nthe \"inline\" or not. From the porting aspect, the new field will cause\nthe incompatible issue, however a hint would not. 
I know porting will\nnot be a concern of the community, and I would not argue about hints\nin this thread; I just share my little experience here for reference.\n\nThanks to all.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 25 Nov 2020 13:12:59 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: About adding a new filed to a struct in primnodes.h"
}
] |
[
{
"msg_contents": "Hi,\n\nwith this message, I want to draw attention to the RFC patches in the \ncurrent commitfest. It would be good if committers could take a look at \nthem.\n\nWhile doing a sweep through the CF, I have kicked a couple of entries \nback to Waiting on Author, so the list is now correct. We now have 17 \nentries.\n\nhttps://commitfest.postgresql.org/30/?status=3\n\nThe oldest RFC patches are listed below. These entries are quite large \nand complex. Still, they have received a good amount of review, and after \na long discussion they look to be in good shape.\n\nGeneric type subscripting <https://commitfest.postgresql.org/30/1062/>\n\nschema variables, LET command <https://commitfest.postgresql.org/30/1608/>\n\npgbench - add pseudo-random permutation function\n<https://commitfest.postgresql.org/30/2138/>\n\nAllow REINDEX, CLUSTER and VACUUM FULL to rebuild on new \nTABLESPACE/INDEX_TABLESPACE <https://commitfest.postgresql.org/30/2269/>\n\nrange_agg / multiranges <https://commitfest.postgresql.org/30/2112/>\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 24 Nov 2020 23:18:44 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Ready For Committer patches (CF 2020-11)"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nI've attached a patch to add a few new options to CHECKPOINT that\r\nmight be useful. Specifically, it adds the FORCE, IMMEDIATE, and WAIT\r\noptions. All of these options are already implemented internally, but\r\nit is not possible to adjust them for manually triggered checkpoints.\r\nThe bulk of this change is dedicated to expanding the syntax of\r\nCHECKPOINT and adding detailed documentation.\r\n\r\nI've mostly followed the pattern set forth by the options recently\r\nadded to VACUUM. With this patch, CHECKPOINT takes an optional set of\r\nparameters surrounded by parentheses. The new parameters are enabled by\r\ndefault but can be disabled by specifying FALSE, OFF, or 0.\r\n\r\nThe main purpose of this patch is to give users more control over\r\ntheir manually requested checkpoints or restartpoints. I suspect the\r\nmost useful option is IMMEDIATE, which can help avoid checkpoint-\r\nrelated IO spikes. However, I didn't see any strong reason to prevent\r\nusers from also adjusting FORCE and WAIT.\r\n\r\nNathan",
"msg_date": "Tue, 24 Nov 2020 22:23:34 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "A few new options for CHECKPOINT"
},
{
"msg_contents": "From: Bossart, Nathan <bossartn@amazon.com>\r\n> The main purpose of this patch is to give users more control over their manually\r\n> requested checkpoints or restartpoints. I suspect the most useful option is\r\n> IMMEDIATE, which can help avoid checkpoint- related IO spikes. However, I\r\n> didn't see any strong reason to prevent users from also adjusting FORCE and\r\n> WAIT.\r\n\r\nI think just IMMEDIATE would suffice, too. But could you tell us why you came to want to give users more control? Could you give us concrete example situations where users would want to perform CHECKPOINT with options?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 25 Nov 2020 00:03:31 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 11/24/20, 4:03 PM, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\r\n> From: Bossart, Nathan <bossartn@amazon.com>\r\n>> The main purpose of this patch is to give users more control over their manually\r\n>> requested checkpoints or restartpoints. I suspect the most useful option is\r\n>> IMMEDIATE, which can help avoid checkpoint- related IO spikes. However, I\r\n>> didn't see any strong reason to prevent users from also adjusting FORCE and\r\n>> WAIT.\r\n>\r\n> I think just IMMEDIATE would suffice, too. But could you tell us why you got to want to give users more control? Could we know concrete example situations where users want to perform CHECKPOINT with options?\r\n\r\nIt may be useful for backups taken with the \"consistent snapshot\"\r\napproach. As noted in the documentation [0], running CHECKPOINT\r\nbefore taking the snapshot can reduce recovery time. However, users\r\nmight wish to avoid the IO spike caused by an immediate checkpoint.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/backup-file.html\r\n\r\n",
"msg_date": "Wed, 25 Nov 2020 00:48:26 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "From: Bossart, Nathan <bossartn@amazon.com>\r\n> It may be useful for backups taken with the \"consistent snapshot\"\r\n> approach. As noted in the documentation [0], running CHECKPOINT\r\n> before taking the snapshot can reduce recovery time. However, users\r\n> might wish to avoid the IO spike caused by an immediate checkpoint.\r\n> \r\n> [0] https://www.postgresql.org/docs/devel/backup-file.html\r\n\r\nAh, understood. I agree that the slow or spread manual checkpoint is good to have.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 25 Nov 2020 01:07:47 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: A few new options for CHECKPOINT"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 01:07:47AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> From: Bossart, Nathan <bossartn@amazon.com>\n>> It may be useful for backups taken with the \"consistent snapshot\"\n>> approach. As noted in the documentation [0], running CHECKPOINT\n>> before taking the snapshot can reduce recovery time. However, users\n>> might wish to avoid the IO spike caused by an immediate checkpoint.\n>> \n>> [0] https://www.postgresql.org/docs/devel/backup-file.html\n> \n> Ah, understood. I agree that the slow or spread manual checkpoint is good to have.\n\nI can see the use case for IMMEDIATE, but I fail to see the use cases\nfor WAIT and FORCE. CHECKPOINT_FORCE is internally implied for the\nend-of-recovery and shutdown checkpoints. WAIT could be a dangerous\nthing if disabled, as clients could pile up requests to the\ncheckpointer for no real purpose.\n--\nMichael",
"msg_date": "Wed, 25 Nov 2020 13:47:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "\n\nOn 2020/11/25 13:47, Michael Paquier wrote:\n> On Wed, Nov 25, 2020 at 01:07:47AM +0000, tsunakawa.takay@fujitsu.com wrote:\n>> From: Bossart, Nathan <bossartn@amazon.com>\n>>> It may be useful for backups taken with the \"consistent snapshot\"\n>>> approach. As noted in the documentation [0], running CHECKPOINT\n>>> before taking the snapshot can reduce recovery time. However, users\n>>> might wish to avoid the IO spike caused by an immediate checkpoint.\n>>>\n>>> [0] https://www.postgresql.org/docs/devel/backup-file.html\n>>\n>> Ah, understood. I agree that the slow or spread manual checkpoint is good to have.\n> \n> I can see the use case for IMMEDIATE, but I fail to see the use cases\n> for WAIT and FORCE. CHECKPOINT_FORCE is internally implied for the\n> end-of-recovery and shutdown checkpoints. WAIT could be a dangerous\n> thing if disabled, as clients could pile up requests to the\n> checkpointer for no real purpose.\n\nWe may want to disable WAIT (or specify wait timeout?) when doing\ncheckpoint with IMMEDIATE disabled, to avoid long-running command.\nOTOH, if we support WAIT disabled, I'd like to have the feature to see\nwhether the checkpoint has been completed or not. We can do that\nby using log_checkpoints, but that's not convenient.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 25 Nov 2020 14:32:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On Wednesday, 25.11.2020 at 13:47 +0900, Michael Paquier wrote:\n> I can see the use case for IMMEDIATE, but I fail to see the use cases\n> for WAIT and FORCE. CHECKPOINT_FORCE is internally implied for the\n> end-of-recovery and shutdown checkpoints. WAIT could be a dangerous\n> thing if disabled, as clients could pile up requests to the\n> checkpointer for no real purpose.\n\nWouldn't it be more convenient to use \"FAST\" for an immediate checkpoint,\ndefaulting to \"FAST ON\"? That would be along the lines of the parameter used in the\nstreaming protocol command BASE_BACKUP, where \"FAST\" disables lazy\ncheckpointing.\n\nI agree that the other options don't seem reasonable to expose to\nSQL.\n\n\n-- \nThanks,\n\tBernd\n\n\n\n\n",
"msg_date": "Wed, 25 Nov 2020 11:41:19 +0100",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On Wed, 2020-11-25 at 11:41 +0100, Bernd Helmle wrote:\n> Am Mittwoch, den 25.11.2020, 13:47 +0900 schrieb Michael Paquier:\n> \n> > I can see the use case for IMMEDIATE, but I fail to see the use cases\n> > for WAIT and FORCE. CHECKPOINT_FORCE is internally implied for the\n> > end-of-recovery and shutdown checkpoints. WAIT could be a dangerous\n> > thing if disabled, as clients could pile up requests to the\n> > checkpointer for no real purpose.\n> \n> Wouldn't it be more convenient to use \"FAST\" for immediate checkpoint,\n> defaulting to \"FAST ON\"? That would be along the parameter used in the\n> streaming protocol command BASE_BACKUP, where \"FAST\" disables lazy\n> checkpointing.\n\n+1\n\nThat would also match pg_basebackup's \"-c fast|spread\".\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 25 Nov 2020 16:04:36 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 11/24/20, 4:03 PM, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\n> > From: Bossart, Nathan <bossartn@amazon.com>\n> >> The main purpose of this patch is to give users more control over their manually\n> >> requested checkpoints or restartpoints. I suspect the most useful option is\n> >> IMMEDIATE, which can help avoid checkpoint- related IO spikes. However, I\n> >> didn't see any strong reason to prevent users from also adjusting FORCE and\n> >> WAIT.\n> >\n> > I think just IMMEDIATE would suffice, too. But could you tell us why you got to want to give users more control? Could we know concrete example situations where users want to perform CHECKPOINT with options?\n> \n> It may be useful for backups taken with the \"consistent snapshot\"\n> approach. As noted in the documentation [0], running CHECKPOINT\n> before taking the snapshot can reduce recovery time. However, users\n> might wish to avoid the IO spike caused by an immediate checkpoint.\n\nI'm a bit confused by the idea here.. The whole point of running a\nCHECKPOINT is to get the immediate behavior with the IO spike to get\nthings flushed out to disk so that, on crash recovery, there's less\noutstanding WAL to replay.\n\nAvoiding the IO spike implies that you're waiting for a regular\ncheckpoint and that additional WAL is building up since that started and\ntherefore you're going to have to replay that WAL during crash recovery\nand so you won't end up reducing your recovery time, so I'm failing to\nsee the point..? I don't think you really get to have both.. pay the\nprice at backup time, or pay it during crash recovery.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 25 Nov 2020 10:51:44 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Thanks to all for the feedback. I've attached v2 of the patch. I've\r\nremoved the WAIT and FORCE options and renamed IMMEDIATE to FAST.\r\n\r\nOn 11/25/20, 7:52 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> I'm a bit confused by the idea here.. The whole point of running a\r\n> CHECKPOINT is to get the immediate behavior with the IO spike to get\r\n> things flushed out to disk so that, on crash recovery, there's less\r\n> outstanding WAL to replay.\r\n>\r\n> Avoiding the IO spike implies that you're waiting for a regular\r\n> checkpoint and that additional WAL is building up since that started and\r\n> therefore you're going to have to replay that WAL during crash recovery\r\n> and so you won't end up reducing your recovery time, so I'm failing to\r\n> see the point..? I don't think you really get to have both.. pay the\r\n> price at backup time, or pay it during crash recovery.\r\n\r\nIt's true that you might not lower recovery time as much as if you did\r\nan immediate checkpoint, but presumably there can still be some\r\nbenefit from doing a non-immediate checkpoint. I think a similar\r\nargument can be made for pg_start_backup(), which AFAICT is presently\r\nthe only way to manually request a non-immediate checkpoint.\r\n\r\nNathan",
"msg_date": "Wed, 25 Nov 2020 18:59:12 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> Thanks to all for the feedback. I've attached v2 of the patch. I've\n> removed the WAIT and FORCE options and renamed IMMEDIATE to FAST.\n> \n> On 11/25/20, 7:52 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > I'm a bit confused by the idea here.. The whole point of running a\n> > CHECKPOINT is to get the immediate behavior with the IO spike to get\n> > things flushed out to disk so that, on crash recovery, there's less\n> > outstanding WAL to replay.\n> >\n> > Avoiding the IO spike implies that you're waiting for a regular\n> > checkpoint and that additional WAL is building up since that started and\n> > therefore you're going to have to replay that WAL during crash recovery\n> > and so you won't end up reducing your recovery time, so I'm failing to\n> > see the point..? I don't think you really get to have both.. pay the\n> > price at backup time, or pay it during crash recovery.\n> \n> It's true that you might not lower recovery time as much as if you did\n> an immediate checkpoint, but presumably there can still be some\n> benefit from doing a non-immediate checkpoint. 
I think a similar\n> argument can be made for pg_start_backup(), which AFAICT is presently\n> the only way to manually request a non-immediate checkpoint.\n\nIf there isn't any actual outstanding WAL since the last checkpoint, the\nonly time that the fact that pg_start_backup includes CHECKPOINT_FORCE\nis relevant, then performing a checkpoint isn't going to actually reduce\nyour crash recovery.\n\nAlso note that, in all other cases (that is, when there *is* outstanding\nWAL since the last checkpoint), pg_start_backup actually just waits for\nthe existing checkpoint to complete- and while it's waiting for that to\nhappen, there'll be additional WAL building up since that checkpoint\nstarted that will have to be replayed as part of crash recovery, just as\nif you took a snapshot of the system at any other time.\n\nSo, either there won't be any WAL outstanding, in which case running a\nCHECKPOINT FORCE just ends up creating more WAL without actually being\nuseful, or there's WAL outstanding and the only thing this does is delay\nthe snapshot being taken but doesn't actually reduce the amount of WAL\nthat's going to end up being outstanding and which will have to be\nreplayed during crash recovery.\n\nMaybe there's a useful reason to have these options, but at least the\nstated one isn't it and I wouldn't want to encourage people who are\nusing snapshot-based backups to use these options since they aren't\ngoing to work the way that this thread is implying they would.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Nov 2020 11:28:20 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 11/27/20, 8:29 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> Also note that, in all other cases (that is, when there *is* outstanding\r\n> WAL since the last checkpoint), pg_start_backup actually just waits for\r\n> the existing checkpoint to complete- and while it's waiting for that to\r\n> happen, there'll be additional WAL building up since that checkpoint\r\n> started that will have to be replayed as part of crash recovery, just as\r\n> if you took a snapshot of the system at any other time.\r\n>\r\n> So, either there won't be any WAL outstanding, in which case running a\r\n> CHECKPOINT FORCE just ends up creating more WAL without actually being\r\n> useful, or there's WAL outstanding and the only thing this does is delay\r\n> the snapshot being taken but doesn't actually reduce the amount of WAL\r\n> that's going to end up being outstanding and which will have to be\r\n> replayed during crash recovery.\r\n>\r\n> Maybe there's a useful reason to have these options, but at least the\r\n> stated one isn't it and I wouldn't want to encourage people who are\r\n> using snapshot-based backups to use these options since they aren't\r\n> going to work the way that this thread is implying they would.\r\n\r\nI don't think it's true that pg_start_backup() just waits for the\r\nexisting checkpoint to complete. It calls RequestCheckpoint() with\r\nCHECKPOINT_WAIT, which should wait for a new checkpoint to start.\r\n\r\n /* Wait for a new checkpoint to start. 
*/\r\n ConditionVariablePrepareToSleep(&CheckpointerShmem->start_cv);\r\n for (;;)\r\n {\r\n\r\nI also tried running pg_start_backup() while an automatic checkpoint\r\nwas ongoing, and it seemed to create a new one.\r\n\r\n psql session:\r\n postgres=# SELECT now(); SELECT pg_start_backup('test'); SELECT now();\r\n now\r\n -------------------------------\r\n 2020-11-27 16:52:31.958124+00\r\n (1 row)\r\n\r\n pg_start_backup\r\n -----------------\r\n 0/D3D24F0\r\n (1 row)\r\n\r\n now\r\n -------------------------------\r\n 2020-11-27 16:52:50.113372+00\r\n (1 row)\r\n\r\n logs:\r\n 2020-11-27 16:52:20.129 UTC [16029] LOG: checkpoint starting: time\r\n 2020-11-27 16:52:35.121 UTC [16029] LOG: checkpoint complete...\r\n 2020-11-27 16:52:35.122 UTC [16029] LOG: checkpoint starting: force wait\r\n 2020-11-27 16:52:50.110 UTC [16029] LOG: checkpoint complete...\r\n\r\nThe patch I've submitted does the same thing.\r\n\r\n psql session:\r\n postgres=# SELECT now(); CHECKPOINT (FAST FALSE); SELECT now();\r\n now\r\n -------------------------------\r\n 2020-11-27 16:46:39.346131+00\r\n (1 row)\r\n\r\n CHECKPOINT\r\n now\r\n -------------------------------\r\n 2020-11-27 16:47:05.083944+00\r\n (1 row)\r\n\r\n logs:\r\n 2020-11-27 16:46:35.056 UTC [16029] LOG: checkpoint starting: time\r\n 2020-11-27 16:46:50.099 UTC [16029] LOG: checkpoint complete...\r\n 2020-11-27 16:46:50.099 UTC [16029] LOG: checkpoint starting: force wait\r\n 2020-11-27 16:47:05.083 UTC [16029] LOG: checkpoint complete...\r\n\r\nEven if it did simply wait for the existing checkpoint to complete,\r\nisn't it still preferable to take a snapshot right after a checkpoint\r\ncompletes, even if it is non-immediate? 
You'll need to replay WAL in\r\neither case, and it's true that you could need to replay less WAL if\r\nyou take an immediate checkpoint versus a non-immediate checkpoint.\r\nHowever, if you take a snapshot without a checkpoint, you might need\r\nto replay up to checkpoint_timeout + (time it takes for a non-\r\nimmediate checkpoint to complete) worth of WAL. For the logs just\r\nabove this paragraph, if I take a snapshot at 16:47:04, I'd need to\r\nreplay 29 seconds of WAL. However, if I take the snapshot at\r\n16:47:06, I only need to replay 16 seconds of WAL.\r\n\r\nI apologize if I'm missing something obvious here.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 27 Nov 2020 17:53:27 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
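The replay-window arithmetic at the end of the message above can be checked directly from the quoted log timestamps: crash recovery replays WAL from the start of the most recent checkpoint that *completed* before the snapshot. A minimal sketch (the timestamps are taken from the logs quoted in the message):

```python
from datetime import datetime

def replay_window_seconds(snapshot, completed_checkpoints):
    """Seconds of WAL to replay after restoring a snapshot: the time from
    the start of the most recent checkpoint that completed before the
    snapshot, up to the snapshot itself.
    completed_checkpoints is a list of (start, complete) timestamps."""
    fmt = "%H:%M:%S"
    snap = datetime.strptime(snapshot, fmt)
    # Only checkpoints that finished before the snapshot count; recovery
    # starts from the redo pointer of the last one of those.
    starts = [datetime.strptime(start, fmt)
              for start, complete in completed_checkpoints
              if datetime.strptime(complete, fmt) <= snap]
    return (snap - max(starts)).total_seconds()

# From the logs above: a "time" checkpoint 16:46:35 -> 16:46:50, then the
# requested "force wait" checkpoint 16:46:50 -> 16:47:05.
checkpoints = [("16:46:35", "16:46:50"), ("16:46:50", "16:47:05")]
print(replay_window_seconds("16:47:04", checkpoints))  # 29.0
print(replay_window_seconds("16:47:06", checkpoints))  # 16.0
```

This reproduces the 29-second vs. 16-second comparison in the message: snapshotting two seconds later, just after the requested checkpoint completes, shortens the replay window by the length of the earlier interval.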
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 11/27/20, 8:29 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > Also note that, in all other cases (that is, when there *is* outstanding\n> > WAL since the last checkpoint), pg_start_backup actually just waits for\n> > the existing checkpoint to complete- and while it's waiting for that to\n> > happen, there'll be additional WAL building up since that checkpoint\n> > started that will have to be replayed as part of crash recovery, just as\n> > if you took a snapshot of the system at any other time.\n> >\n> > So, either there won't be any WAL outstanding, in which case running a\n> > CHECKPOINT FORCE just ends up creating more WAL without actually being\n> > useful, or there's WAL outstanding and the only thing this does is delay\n> > the snapshot being taken but doesn't actually reduce the amount of WAL\n> > that's going to end up being outstanding and which will have to be\n> > replayed during crash recovery.\n> >\n> > Maybe there's a useful reason to have these options, but at least the\n> > stated one isn't it and I wouldn't want to encourage people who are\n> > using snapshot-based backups to use these options since they aren't\n> > going to work the way that this thread is implying they would.\n> \n> I don't think it's true that pg_start_backup() just waits for the\n> existing checkpoint to complete. It calls RequestCheckpoint() with\n> CHECKPOINT_WAIT, which should wait for a new checkpoint to start.\n\nerm... right? pg_start_backup waits for a new checkpoint to start. I'm\nconfused how that's different from \"waits for the existing checkpoint to\ncomplete\". The entire point is that it *doesn't* force a checkpoint to\nhappen immediately.\n\n> /* Wait for a new checkpoint to start. 
*/\n> ConditionVariablePrepareToSleep(&CheckpointerShmem->start_cv);\n> for (;;)\n> {\n> \n> I also tried running pg_start_backup() while an automatic checkpoint\n> was ongoing, and it seemed to create a new one.\n> \n> psql session:\n> postgres=# SELECT now(); SELECT pg_start_backup('test'); SELECT now();\n> now\n> -------------------------------\n> 2020-11-27 16:52:31.958124+00\n> (1 row)\n> \n> pg_start_backup\n> -----------------\n> 0/D3D24F0\n> (1 row)\n> \n> now\n> -------------------------------\n> 2020-11-27 16:52:50.113372+00\n> (1 row)\n> \n> logs:\n> 2020-11-27 16:52:20.129 UTC [16029] LOG: checkpoint starting: time\n> 2020-11-27 16:52:35.121 UTC [16029] LOG: checkpoint complete...\n> 2020-11-27 16:52:35.122 UTC [16029] LOG: checkpoint starting: force wait\n> 2020-11-27 16:52:50.110 UTC [16029] LOG: checkpoint complete...\n\nYes- pg_start_backup specifies 'force' because it wants a checkpoint to\nhappen even if there isn't any outstanding WAL, but it doesn't make the\nexisting checkpoint go faster, which is the point that I'm trying to\nmake here.\n\nIf you'd really like to test and see what happens, run a pgbench that\nloads the system up while doing pg_start_backup and see how long it\ntakes before pg_start_backup returns, and see how much outstanding WAL\nthere is from the starting checkpoint from pg_start_backup and the time\nit returns. To make it really clear, you should really also set\ncheckpoint completion timeout to 0.9 and make sure you max_wal_size is\nset to a pretty large value, and make sure that the pgbench is\ngenerating enough to have pretty large checkpoints while still allowing\nthem to happen due to 'time'.\n\n> The patch I've submitted does the same thing.\n\nYes, I'm not surprised by that. 
That doesn't change anything about the\npoint I'm trying to make..\n\n> Even if it did simply wait for the existing checkpoint to complete,\n> isn't it still preferable to take a snapshot right after a checkpoint\n> completes, even if it is non-immediate? You'll need to replay WAL in\n> either case, and it's true that you could need to replay less WAL if\n> you take an immediate checkpoint versus a non-immediate checkpoint.\n\nWhy is it preferable? Your argument here is that it's preferable\nbecause there'll be less outstanding WAL, but what I'm pointing out is\nthat that's not the case, so that isn't a reason for it to be preferable\nand so I'm asking: what other reason is it preferable..? I'm not aware\nof one..\n\n> However, if you take a snapshot without a checkpoint, you might need\n> to replay up to checkpoint_timeout + (time it takes for a non-\n> immediate checkpoint to complete) worth of WAL. For the logs just\n> above this paragraph, if I take a snapshot at 16:47:04, I'd need to\n> replay 29 seconds of WAL. However, if I take the snapshot at\n> 16:47:06, I only need to replay 16 seconds of WAL.\n\nThis is exactly the point I'm making- if you're using a deferred\ncheckpoint then you're still going to have up to checkpoint_timeout\nworth of WAL to replay. Again, these tests aren't really worth much\nbecause the system clearly isn't under any load..\n\n> I apologize if I'm missing something obvious here.\n\nIf you'd like to show that I'm wrong, and it's entirely possible that I\nam, then retry the above with actual load on the system, and also\nactually look at how much outstanding WAL you end up with given the\ndifferent scenarios which has to be replayed during crash recovery.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Nov 2020 13:57:43 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 11/27/20, 10:58 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> If you'd like to show that I'm wrong, and it's entirely possible that I\r\n> am, then retry the above with actual load on the system, and also\r\n> actually look at how much outstanding WAL you end up with given the\r\n> different scenarios which has to be replayed during crash recovery.\r\n\r\nI did a little experiment to show the behavior I'm referring to. I\r\nused these settings:\r\n\r\n checkpoint_completion_target = 0.9\r\n checkpoint_timeout = 30s\r\n max_wal_size = 20GB\r\n WAL segment size is 64MB\r\n\r\nI ran the following pgbench command for a few minutes before each\r\ntest:\r\n\r\n pgbench postgres -T 3600 -c 64 -j 64 -N\r\n\r\nFor the first test, I killed Postgres just before an automatic, non-\r\nimmediate checkpoint completed.\r\n\r\n 2020-11-28 00:31:57 UTC::@:[51770]:LOG: checkpoint complete...\r\n 2020-11-28 00:32:00 UTC::@:[51770]:LOG: checkpoint starting: time\r\n\r\n Killed Postgres at 00:32:26 UTC, 29 seconds after latest\r\n checkpoint completed.\r\n\r\n 2020-11-28 00:32:42 UTC::@:[77256]:LOG: redo starts at 3CF/FD6B8BD0\r\n 2020-11-28 00:32:56 UTC::@:[77256]:LOG: redo done at 3D0/C94D1D00\r\n\r\n Recovery took 14 seconds and replayed ~3.2 GB of WAL.\r\n\r\n postgres=> SELECT pg_wal_lsn_diff('3D0/C94D1D00', '3CF/FD6B8BD0');\r\n pg_wal_lsn_diff\r\n -----------------\r\n 3420557616\r\n (1 row)\r\n\r\nFor the second test, I killed Postgres just after an automatic, non-\r\nimmediate checkpoint completed.\r\n\r\n 2020-11-28 00:41:26 UTC::@:[77475]:LOG: checkpoint complete...\r\n\r\n Killed Postgres at 00:41:26 UTC, just after latest checkpoint\r\n completed.\r\n\r\n 2020-11-28 00:41:42 UTC::@:[8599]:LOG: redo starts at 3D3/152EDD78\r\n 2020-11-28 00:41:49 UTC::@:[8599]:LOG: redo done at 3D3/78358A40\r\n\r\n Recovery took 7 seconds and replayed ~1.5 GB of WAL.\r\n\r\n postgres=> SELECT pg_wal_lsn_diff('3D3/78358A40', '3D3/152EDD78');\r\n pg_wal_lsn_diff\r\n 
-----------------\r\n 1661381832\r\n (1 row)\r\n\r\nGranted, I used a rather aggressive checkpoint_timeout, but I think\r\nthis demonstrates that waiting for a non-immediate checkpoint to\r\ncomplete can lower the amount of WAL needed for recovery, even though\r\nit might not lower it as much as waiting for an immediate checkpoint\r\nwould.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 28 Nov 2020 01:50:54 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
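The `pg_wal_lsn_diff()` results quoted in the experiment above can be reproduced outside the server. This is a small standalone sketch (not PostgreSQL code) that parses the textual LSN format `XXX/YYYYYYYY` as high and low 32-bit halves of a 64-bit byte position and subtracts:

```python
def parse_lsn(lsn: str) -> int:
    """Parse a textual LSN 'XXX/YYYYYYYY' into a 64-bit byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def lsn_diff(a: str, b: str) -> int:
    """Bytes of WAL between two LSNs, like pg_wal_lsn_diff(a, b)."""
    return parse_lsn(a) - parse_lsn(b)

# First test: killed 29 s after the previous checkpoint completed
print(lsn_diff("3D0/C94D1D00", "3CF/FD6B8BD0"))  # 3420557616 (~3.2 GB)

# Second test: killed just after a checkpoint completed
print(lsn_diff("3D3/78358A40", "3D3/152EDD78"))  # 1661381832 (~1.5 GB)
```

Both values match the `pg_wal_lsn_diff` output shown in the message, confirming the ~3.2 GB vs. ~1.5 GB comparison.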
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 11/27/20, 10:58 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > If you'd like to show that I'm wrong, and it's entirely possible that I\n> > am, then retry the above with actual load on the system, and also\n> > actually look at how much outstanding WAL you end up with given the\n> > different scenarios which has to be replayed during crash recovery.\n> \n> I did a little experiment to show the behavior I'm referring to. I\n\nI'm rather confused why you didn't use your patch to show the actual\nbehavior that you'll get as a result of this change..? That seems like\nwhat would be meaningful here. I appreciate that the analysis you did\nmight correlate but I don't really get why we'd try to use a proxy for\nthis.\n\n> used these settings:\n> \n> checkpoint_completion_target = 0.9\n> checkpoint_timeout = 30s\n> max_wal_size = 20GB\n> WAL segment size is 64MB\n\nThat's an exceedingly short and very uncommon checkpoint timeout in my\nexperience.. If anything, folks increase checkpoint timeout from the\ndefault (in order to reduce WAL traffic and because they have a replica\nthey can flip to in the event of a crash, avoiding having to go through\nWAL replay on recovery) and so I'm not really sure that it's a sensible\nthing to look at? Even so though...\n\n> Recovery took 14 seconds and replayed ~3.2 GB of WAL.\n\n[ ... ]\n\n> Recovery took 7 seconds and replayed ~1.5 GB of WAL.\n\nThis is showing more-or-less what I expected: there's still a large\namount of outstanding WAL, even if you use a very low and unusual\ntimeout and attempt to time it perfectly. 
A question that is still not\nclear is what happens when you actually do an immediate checkpoint-\nthere would likely still be some outstanding WAL even in that case but\nI'd expect it to be a whole lot less, which is why that comment exists\nin the documentation in the first place.\n\n> Granted, I used a rather aggressive checkpoint_timeout, but I think\n> this demonstrates that waiting for a non-immediate checkpoint to\n> complete can lower the amount of WAL needed for recovery, even though\n> it might not lower it as much as waiting for an immediate checkpoint\n> would.\n\nThe difference here feels like order of magnitudes to me, between an\nimmediate checkpoint and a non-immediate one, vs. a much smaller\ndifference as you've shown here (though, still, kill'ing the postmaster\nisn't exactly the same as what your patch would be doing, so I don't\nreally like using this particular analysis to answer this question...).\n\nIf the use-case here was just that you wanted to add more options to the\nCHECKPOINT command because we have them internally and maybe they'd be\nuseful to expose then these things probably wouldn't matter so much, but\nto imply that this is really going to cut down on the amount of WAL\nreplay required a lot isn't really coming through even with these\nresults.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 28 Nov 2020 12:49:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 11/28/20, 9:50 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n>> Granted, I used a rather aggressive checkpoint_timeout, but I think\r\n>> this demonstrates that waiting for a non-immediate checkpoint to\r\n>> complete can lower the amount of WAL needed for recovery, even though\r\n>> it might not lower it as much as waiting for an immediate checkpoint\r\n>> would.\r\n>\r\n> The difference here feels like order of magnitudes to me, between an\r\n> immediate checkpoint and a non-immediate one, vs. a much smaller\r\n> difference as you've shown here (though, still, kill'ing the postmaster\r\n> isn't exactly the same as what your patch would be doing, so I don't\r\n> really like using this particular analysis to answer this question...).\r\n\r\nI agree that using an immediate checkpoint is the best for reducing\r\nrecovery time. IMO reducing the amount of WAL to recover by up to 50%\r\nfrom doing no checkpoint at all is also a reasonable use case,\r\nespecially if avoiding an IO spike is important.\r\n\r\n> If the use-case here was just that you wanted to add more options to the\r\n> CHECKPOINT command because we have them internally and maybe they'd be\r\n> useful to expose then these things probably wouldn't matter so much, but\r\n> to imply that this is really going to cut down on the amount of WAL\r\n> replay required a lot isn't really coming through even with these\r\n> results.\r\n\r\nThis is how I initially presented this patch. I only included this\r\nuse case because I was asked for concrete examples of how it might be\r\nuseful.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 28 Nov 2020 21:28:52 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 11/28/20, 9:50 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> >> Granted, I used a rather aggressive checkpoint_timeout, but I think\n> >> this demonstrates that waiting for a non-immediate checkpoint to\n> >> complete can lower the amount of WAL needed for recovery, even though\n> >> it might not lower it as much as waiting for an immediate checkpoint\n> >> would.\n> >\n> > The difference here feels like order of magnitudes to me, between an\n> > immediate checkpoint and a non-immediate one, vs. a much smaller\n> > difference as you've shown here (though, still, kill'ing the postmaster\n> > isn't exactly the same as what your patch would be doing, so I don't\n> > really like using this particular analysis to answer this question...).\n> \n> I agree that using an immediate checkpoint is the best for reducing\n> recovery time. IMO reducing the amount of WAL to recover by up to 50%\n> from doing no checkpoint at all is also a reasonable use case,\n> especially if avoiding an IO spike is important.\n\nCheckpoints are always happening though, that's kind of my point..?\nSure, you get lucky sometimes that the time you snapshot might have less\noutstanding WAL than at some other time, but I'm not convinced that this\npatch is really going to give a given user that much reduced amount of\nWAL that has to be replayed more than just randomly timing the\nsnapshot, and if it's not clearly better, always, then I don't think we\ncan reasonably document it as such or imply that this is how folks\nshould implement snapshot-based backups.\n\n> > If the use-case here was just that you wanted to add more options to the\n> > CHECKPOINT command because we have them internally and maybe they'd be\n> > useful to expose then these things probably wouldn't matter so much, but\n> > to imply that this is really going to cut down on the amount of WAL\n> > replay required a lot isn't really coming 
through even with these\n> > results.\n> \n> This is how I initially presented this patch. I only included this\n> use case because I was asked for concrete examples of how it might be\n> useful.\n\nDid you have other concrete examples that we could reference as to when\nit would be useful to use these options?\n\nThanks,\n\nStephen",
"msg_date": "Sun, 29 Nov 2020 22:21:06 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 11/29/20, 7:21 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> Checkpoints are always happening though, that's kind of my point..?\r\n> Sure, you get lucky sometimes that the time you snapshot might have less\r\n> outstanding WAL than at some other time, but I'm not convinced that this\r\n> patch is really going to give a given user that much reduced amount of\r\n> WAL that has to be replayed more than just randomly timing the\r\n> snapshot, and if it's not clearly better, always, then I don't think we\r\n> can reasonably document it as such or imply that this is how folks\r\n> should implement snapshot-based backups.\r\n\r\nI see your point. Using a non-immediate checkpoint might help you\r\nkeep your recovery time slightly more consistent, but you're right\r\nthat it's likely not going to be a dramatic improvement. Your\r\nrecovery time will be 1 minute versus 1-2 minutes or 2 minutes versus\r\n2-4 minutes, not 3 seconds versus 5 minutes.\r\n\r\n> Did you have other concrete examples that we could reference as to when\r\n> it would be useful to use these options?\r\n\r\nI don't have any at the moment. I figured that if we're going to\r\nallow users to manually trigger checkpoints, we might as well allow\r\nthem to configure it to avoid things like IO spikes.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 30 Nov 2020 17:17:41 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On the UI of this patch, you're proposing to add the option FAST. I'm\nnot a fan of this option name and propose that (if we have it) we use\nthe name SPREAD instead (defaults to false).\n\nNow we don't actually explain the term \"spread\" much in the documentation;\nwe just say \"the writes are spread\". But it seems more natural to build\non that adjective rather than \"fast/slow\".\n\n\nI think starting a spread checkpoint has some usefulness, if your\ncheckpoint interval is very large but your completion target is not very\nclose to 1. In that case, you're expressing that you want a checkpoint\nto start now and not impact production unduly, so that you know when it\nfinishes and therefore when is it a good time to start a backup. (You\nwill still have some WAL to replay, but it won't be as much as if you\njust ignored checkpoint considerations completely.)\n\n\nOn the subject of measuring replay times for backups taking while\npgbench is pounding the database, I think a realistic test does *not*\nhave pgbench running at top speed; rather you have some non-maximal\n\"-R xyz\" option. You would probably determine a value to use by running\nwithout -R, observing what's a typical transaction rate, and using some\nfraction (say, half) of that in the real run.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 18:46:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> I think starting a spread checkpoint has some usefulness, if your\n> checkpoint interval is very large but your completion target is not very\n> close to 1. In that case, you're expressing that you want a checkpoint\n> to start now and not impact production unduly, so that you know when it\n> finishes and therefore when is it a good time to start a backup. (You\n> will still have some WAL to replay, but it won't be as much as if you\n> just ignored checkpoint considerations completely.)\n\nYou could view an immediate checkpoint as more-or-less being a 'spread'\ncheckpoint with a checkpoint completion target approaching 0. In the\nend, it's all about how much time you're going to spend trying to get\nthe data written out, because the WAL that's generated during that time\nis what's going to have to get replayed.\n\nIf the goal is to not end up with an increase in IO from this, then you\nwant to spread things out as much as you can over as much time as you're\nable to- but that then means that you're going to have that much WAL to\nreplay. If you're alright with performing IO to get the amount of WAL\nto replay to be minimal, then you just run 'CHECKPOINT;' before your\nbackup and you're good to go (and is why that's in the documentation as\na way to reduce your WAL replay time- because it reduces it as much as\npossible given your IO capabilities).\n\nIf you don't mind the increased amount of IO and WAL, you could just\nreduce checkpoint_timeout and then crash recovery and snapshot-based\nbackup recovery will also be reduced, no matter when you actually take\nthe snapshot.\n\n> On the subject of measuring replay times for backups taking while\n> pgbench is pounding the database, I think a realistic test does *not*\n> have pgbench running at top speed; rather you have some non-maximal\n> \"-R xyz\" option. 
You would probably determine a value to use by running\n> without -R, observing what's a typical transaction rate, and using some\n> fraction (say, half) of that in the real run.\n\nThat'd halve the amount of WAL being generated per unit time, but I\ndon't think it really changes much when it comes to this particular\nanalysis..?\n\nIf you generate 16MB of WAL per minute, and the checkpoint timeout is 5\nminutes, with a checkpoint target of 0.9, then at more-or-less any point\nin time you've got ~5 minutes worth of WAL outstanding, or around 80MB.\nIf your completion target is 0.5 then, really, you might as well make it\n0.9 and have your timeout be 2.5m, so that you've got a steady-state of\naround 40MB of WAL outstanding.\n\nWhat I'm getting around to is that the only place this kind of thing\nmakes sense is where you're front-loading all your IO during the\ncheckpoint because your checkpoint completion target is less than 0.9\nand then, sure, there's a difference between snapshotting right when the\ncheckpoint completes vs. later- because if you wait around to snapshot,\nwe aren't actually doing IO during that time and just letting the WAL\nbuild up, but that's an argument to remove checkpoint completion target\nas an option that doesn't really make much sense in the first place,\nimv, and recommend folks tune checkpoint timeout for the amount of\noutstanding WAL they want to have when they are doing recovery (either\nfrom a crash or from a snapshot).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 4 Dec 2020 17:17:39 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
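The back-of-the-envelope numbers in the message above can be written out explicitly. This is a toy steady-state estimate following the argument as stated (the WAL rates are the assumed figures from the message, not measurements, and the model deliberately ignores the spread fraction, as the message does):

```python
def outstanding_wal_mb(wal_rate_mb_per_min: float,
                       checkpoint_timeout_min: float) -> float:
    """Rough steady-state estimate from the discussion above: at any
    arbitrary snapshot time, roughly one checkpoint interval's worth of
    WAL has accumulated since the checkpoint that recovery will replay
    from."""
    return wal_rate_mb_per_min * checkpoint_timeout_min

# 16 MB/min of WAL with a 5-minute checkpoint_timeout -> ~80 MB to replay
print(outstanding_wal_mb(16, 5))    # 80

# Same rate with a 2.5-minute timeout (the "might as well halve the
# timeout" point above) -> ~40 MB steady state
print(outstanding_wal_mb(16, 2.5))  # 40.0
```

The point the sketch makes concrete is that the replay window is governed by `checkpoint_timeout` and the WAL rate, not by when the snapshot happens to be taken relative to a checkpoint.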
{
"msg_contents": "On 12/4/20, 1:47 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On the UI of this patch, you're proposing to add the option FAST. I'm\r\n> not a fan of this option name and propose that (if we have it) we use\r\n> the name SPREAD instead (defaults to false).\r\n>\r\n> Now we don't actually explain the term \"spread\" much in the documentation;\r\n> we just say \"the writes are spread\". But it seems more natural to build\r\n> on that adjective rather than \"fast/slow\".\r\n\r\nHere is a version of the patch that uses SPREAD instead of FAST.\r\n\r\nNathan",
"msg_date": "Fri, 4 Dec 2020 23:21:23 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 2020-Dec-04, Bossart, Nathan wrote:\n\n> On 12/4/20, 1:47 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n> > On the UI of this patch, you're proposing to add the option FAST. I'm\n> > not a fan of this option name and propose that (if we have it) we use\n> > the name SPREAD instead (defaults to false).\n> >\n> > Now we don't actually explain the term \"spread\" much in the documentation;\n> > we just say \"the writes are spread\". But it seems more natural to build\n> > on that adjective rather than \"fast/slow\".\n> \n> Here is a version of the patch that uses SPREAD instead of FAST.\n\nWFM.\n\nInstead of adding checkpt_option_list, how about utility_option_list?\nIt seems intended for reuse.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 20:32:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 12/4/20, 3:33 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> Instead of adding checkpt_option_list, how about utility_option_list?\r\n> It seems intended for reuse.\r\n\r\nAh, good call. That simplifies the grammar changes quite a bit.\r\n\r\nNathan",
"msg_date": "Sat, 5 Dec 2020 00:11:13 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On Sat, Dec 05, 2020 at 12:11:13AM +0000, Bossart, Nathan wrote:\n> On 12/4/20, 3:33 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n>> Instead of adding checkpt_option_list, how about utility_option_list?\n>> It seems intended for reuse.\n\n+1. It is intended for reuse.\n\n> Ah, good call. That simplifies the grammar changes quite a bit.\n\n+CHECKPOINT;\n+CHECKPOINT (SPREAD);\n+CHECKPOINT (SPREAD FALSE);\n+CHECKPOINT (SPREAD ON);\n+CHECKPOINT (SPREAD 0);\n+CHECKPOINT (SPREAD 2);\n+ERROR: spread requires a Boolean value\n+CHECKPOINT (NONEXISTENT);\n+ERROR: unrecognized CHECKPOINT option \"nonexistent\"\n+LINE 1: CHECKPOINT (NONEXISTENT);\nTesting for negative cases like those two last ones is fine by me, but\nI don't like much the idea of running 5 checkpoints as part of the\nmain regression test suite (think installcheck with a large shared\nbuffer pool for example).\n\n--- a/src/include/postmaster/bgwriter.h\n+++ b/src/include/postmaster/bgwriter.h\n@@ -15,6 +15,8 @@\n #ifndef _BGWRITER_H\n #define _BGWRITER_H\n\n+#include \"nodes/parsenodes.h\"\n+#include \"parser/parse_node.h\"\nI don't think you need to include parsenodes.h here.\n\n+void\n+ExecCheckPointStmt(ParseState *pstate, CheckPointStmt *stmt)\n+{\nNit: perhaps this could just be ExecCheckPoint()? See the existing\nExecVacuum().\n\n+ flags = CHECKPOINT_WAIT |\n+ (RecoveryInProgress() ? 0 : CHECKPOINT_FORCE) |\n+ (spread ? 0 : CHECKPOINT_IMMEDIATE);\nThe handling done for CHECKPOINT_FORCE and CHECKPOINT_WAIT deserve\na comment.\n--\nMichael",
"msg_date": "Sat, 5 Dec 2020 10:43:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Thanks for reviewing.\r\n\r\nOn 12/4/20, 5:44 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Sat, Dec 05, 2020 at 12:11:13AM +0000, Bossart, Nathan wrote:\r\n>> On 12/4/20, 3:33 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n>>> Instead of adding checkpt_option_list, how about utility_option_list?\r\n>>> It seems intended for reuse.\r\n>\r\n> +1. It is intended for reuse.\r\n>\r\n>> Ah, good call. That simplifies the grammar changes quite a bit.\r\n>\r\n> +CHECKPOINT;\r\n> +CHECKPOINT (SPREAD);\r\n> +CHECKPOINT (SPREAD FALSE);\r\n> +CHECKPOINT (SPREAD ON);\r\n> +CHECKPOINT (SPREAD 0);\r\n> +CHECKPOINT (SPREAD 2);\r\n> +ERROR: spread requires a Boolean value\r\n> +CHECKPOINT (NONEXISTENT);\r\n> +ERROR: unrecognized CHECKPOINT option \"nonexistent\"\r\n> +LINE 1: CHECKPOINT (NONEXISTENT);\r\n> Testing for negative cases like those two last ones is fine by me, but\r\n> I don't like much the idea of running 5 checkpoints as part of the\r\n> main regression test suite (think installcheck with a large shared\r\n> buffer pool for example).\r\n>\r\n> --- a/src/include/postmaster/bgwriter.h\r\n> +++ b/src/include/postmaster/bgwriter.h\r\n> @@ -15,6 +15,8 @@\r\n> #ifndef _BGWRITER_H\r\n> #define _BGWRITER_H\r\n>\r\n> +#include \"nodes/parsenodes.h\"\r\n> +#include \"parser/parse_node.h\"\r\n> I don't think you need to include parsenodes.h here.\r\n>\r\n> +void\r\n> +ExecCheckPointStmt(ParseState *pstate, CheckPointStmt *stmt)\r\n> +{\r\n> Nit: perhaps this could just be ExecCheckPoint()? See the existing\r\n> ExecVacuum().\r\n>\r\n> + flags = CHECKPOINT_WAIT |\r\n> + (RecoveryInProgress() ? 0 : CHECKPOINT_FORCE) |\r\n> + (spread ? 0 : CHECKPOINT_IMMEDIATE);\r\n> The handling done for CHECKPOINT_FORCE and CHECKPOINT_WAIT deserve\r\n> a comment.\r\n\r\nThis all seems reasonable to me. I've attached a new version of the\r\npatch.\r\n\r\nNathan",
"msg_date": "Sat, 5 Dec 2020 05:18:54 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
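The flag selection quoted from the patch can be sketched in isolation. The bit values below are illustrative only — the real constants live in `src/include/access/xlog.h` and may differ; only the combination logic is the point:

```python
# Illustrative flag bits (NOT the actual xlog.h values).
CHECKPOINT_IMMEDIATE = 0x0004
CHECKPOINT_FORCE     = 0x0008
CHECKPOINT_WAIT      = 0x0010

def checkpoint_flags(in_recovery: bool, spread: bool) -> int:
    """Mirror of the patch's flag selection: always wait for completion;
    FORCE is skipped during recovery (a restartpoint is never forced);
    IMMEDIATE is dropped when a spread checkpoint was requested."""
    flags = CHECKPOINT_WAIT
    if not in_recovery:
        flags |= CHECKPOINT_FORCE
    if not spread:
        flags |= CHECKPOINT_IMMEDIATE
    return flags

# Plain CHECKPOINT on a primary: wait + force + immediate
assert checkpoint_flags(in_recovery=False, spread=False) == 0x001C
# CHECKPOINT (SPREAD) on a primary: wait + force, writes throttled
assert checkpoint_flags(in_recovery=False, spread=True) == 0x0018
# CHECKPOINT (SPREAD) on a standby: wait only (throttled restartpoint)
assert checkpoint_flags(in_recovery=True, spread=True) == 0x0010
```

This matches the one-line expression in the patch (`CHECKPOINT_WAIT | (RecoveryInProgress() ? 0 : CHECKPOINT_FORCE) | (spread ? 0 : CHECKPOINT_IMMEDIATE)`), just unrolled for readability.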
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> This all seems reasonable to me. I've attached a new version of the\n> patch.\n\ndiff --git a/doc/src/sgml/ref/checkpoint.sgml b/doc/src/sgml/ref/checkpoint.sgml\nindex 2afee6d7b5..2b1e56fbd7 100644\n--- a/doc/src/sgml/ref/checkpoint.sgml\n+++ b/doc/src/sgml/ref/checkpoint.sgml\n+ <para>\n+ Note that the server may consolidate concurrently requested checkpoints or\n+ restartpoints. Such consolidated requests will contain a combined set of\n+ options. For example, if one session requested a spread checkpoint and\n+ another session requested a fast checkpoint, the server may combine these\n+ requests and perform one fast checkpoint.\n+ </para>\n\n[ ... ]\n\n+ <refsect1>\n+ <title>Parameters</title>\n+\n+ <variablelist>\n+ <varlistentry>\n+ <term><literal>SPREAD</literal></term>\n+ <listitem>\n+ <para>\n+ Specifies that checkpoint activity should be throttled based on the\n+ setting for the <xref linkend=\"guc-checkpoint-completion-target\"/>\n+ parameter. If the option is turned off, <command>CHECKPOINT</command>\n+ creates a checkpoint or restartpoint as fast as possible. 
By default,\n+ <literal>SPREAD</literal> is turned off, and the checkpoint activity is\n+ not throttled.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nSo- just to be clear, CHECKPOINTs are more-or-less always happening in\nPG, and running this command might do something or might end up doing\nnothing depending on if a checkpoint is already in progress and this\nrequest just gets consolidated into an existing one, and it won't\nactually reduce the amount of WAL replay except in the case where\ncheckpoint completion target is set to make a checkpoint happen in less\ntime than checkpoint timeout, which ultimately isn't a great way to run\nthe system anyway.\n\nAssuming we actually want to do this, which I still generally don't\nagree with since it isn't really clear if it'll actually end up doing\nsomething, or not, wouldn't it make more sense to have a command that\njust sits and waits for the currently running (or next) checkpoint to\ncomplete..? For the use-case that was explained, at least, we don't\nactually need to cause another checkpoint to happen, we just want to\nknow when a checkpoint has completed, right?\n\nIs there some other reason for this that isn't being explained..?\n\nThanks,\n\nStephen",
"msg_date": "Sat, 5 Dec 2020 09:41:16 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 12/5/20, 6:41 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> Assuming we actually want to do this, which I still generally don't\r\n> agree with since it isn't really clear if it'll actually end up doing\r\n> something, or not, wouldn't it make more sense to have a command that\r\n> just sits and waits for the currently running (or next) checkpoint to\r\n> complete..? For the use-case that was explained, at least, we don't\r\n> actually need to cause another checkpoint to happen, we just want to\r\n> know when a checkpoint has completed, right?\r\n\r\nIf it's enough to just wait for the current checkpoint to complete or\r\nto wait for the next one to complete, I suppose you could just poll\r\npg_control_checkpoint(). I think the only downside is that you could\r\nend up sitting idle for a while, especially if checkpoint_timeout is\r\nhigh and checkpoint_completion_target is low. But, as you point out,\r\nthat may not be a typically recommended way to configure the system.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 5 Dec 2020 17:03:00 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 12/5/20, 6:41 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > Assuming we actually want to do this, which I still generally don't\n> > agree with since it isn't really clear if it'll actually end up doing\n> > something, or not, wouldn't it make more sense to have a command that\n> > just sits and waits for the currently running (or next) checkpoint to\n> > complete..? For the use-case that was explained, at least, we don't\n> > actually need to cause another checkpoint to happen, we just want to\n> > know when a checkpoint has completed, right?\n> \n> If it's enough to just wait for the current checkpoint to complete or\n> to wait for the next one to complete, I suppose you could just poll\n> pg_control_checkpoint(). I think the only downside is that you could\n> end up sitting idle for a while, especially if checkpoint_timeout is\n> high and checkpoint_completion_target is low. But, as you point out,\n> that may not be a typically recommended way to configure the system.\n\nMaybe I missed something, but aren't you going to be waiting a while\nwith this patch given that it's asking for a spread checkpoint too..?\n\nI agree that you could just monitor for the next checkpoint using\npg_control_checkpoint(), which is why I'm wondering again what the\npoint is behind this patch... I'm trying to understand why we'd be\nencouraging people to increase the number of checkpoints that are\nhappening when they're still going to be waiting around for that spread\n(in other words, non-immediate) checkpoint to happen (just as if they'd\njust waited until the next regular checkpoint), and they're still going\nto have a fair bit of WAL to replay because it'll be however much WAL\nhas been written since we started the spread/non-immediate checkpoint.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 5 Dec 2020 12:10:40 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 2020-Dec-05, Stephen Frost wrote:\n\n> So- just to be clear, CHECKPOINTs are more-or-less always happening in\n> PG, and running this command might do something or might end up doing\n> nothing depending on if a checkpoint is already in progress and this\n> request just gets consolidated into an existing one, and it won't\n> actually reduce the amount of WAL replay except in the case where\n> checkpoint completion target is set to make a checkpoint happen in less\n> time than checkpoint timeout, which ultimately isn't a great way to run\n> the system anyway.\n\nYou keep making this statement, and I don't necessarily disagree, but if\nthat is the case, please explain why don't we have\ncheckpoint_completion_target set to 0.9 by default? Should we change\nthat?\n\n> Assuming we actually want to do this, which I still generally don't\n> agree with since it isn't really clear if it'll actually end up doing\n> something, or not, wouldn't it make more sense to have a command that\n> just sits and waits for the currently running (or next) checkpoint to\n> complete..? For the use-case that was explained, at least, we don't\n> actually need to cause another checkpoint to happen, we just want to\n> know when a checkpoint has completed, right?\n\nYes, I agree that the use case for this is unclear.\n\n\n",
"msg_date": "Sat, 5 Dec 2020 22:33:46 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> On 2020-Dec-05, Stephen Frost wrote:\n> > So- just to be clear, CHECKPOINTs are more-or-less always happening in\n> > PG, and running this command might do something or might end up doing\n> > nothing depending on if a checkpoint is already in progress and this\n> > request just gets consolidated into an existing one, and it won't\n> > actually reduce the amount of WAL replay except in the case where\n> > checkpoint completion target is set to make a checkpoint happen in less\n> > time than checkpoint timeout, which ultimately isn't a great way to run\n> > the system anyway.\n> \n> You keep making this statement, and I don't necessarily disagree, but if\n> that is the case, please explain why don't we have\n> checkpoint_completion_target set to 0.9 by default? Should we change\n> that?\n\nYes, I do think we should change that.. In fact, I'd argue that we can\nprobably get rid of checkpoint_completion_target entirely as an option.\nThe main argument against that is that it could be annoying for people\nupgrading, but changing the default to 0.9 would definitely be an\nimprovement.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 6 Dec 2020 10:03:08 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On Sun, Dec 06, 2020 at 10:03:08AM -0500, Stephen Frost wrote:\n> * Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n>> You keep making this statement, and I don't necessarily disagree, but if\n>> that is the case, please explain why don't we have\n>> checkpoint_completion_target set to 0.9 by default? Should we change\n>> that?\n> \n> Yes, I do think we should change that..\n\nAgreed. FWIW, no idea for others, but it is one of those parameters I\nkeep telling to update after a default installation.\n\n> In fact, I'd argue that we can\n> probably get rid of checkpoint_completion_target entirely as an option.\n> The main argument against that is that it could be annoying for people\n> upgrading, but changing the default to 0.9 would definitely be an\n> improvement.\n\nNot sure there is enough ground to do that though.\n--\nMichael",
"msg_date": "Mon, 7 Dec 2020 11:22:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On 12/5/20, 9:11 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> * Bossart, Nathan (bossartn@amazon.com) wrote:\r\n>> On 12/5/20, 6:41 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n>> > Assuming we actually want to do this, which I still generally don't\r\n>> > agree with since it isn't really clear if it'll actually end up doing\r\n>> > something, or not, wouldn't it make more sense to have a command that\r\n>> > just sits and waits for the currently running (or next) checkpoint to\r\n>> > complete..? For the use-case that was explained, at least, we don't\r\n>> > actually need to cause another checkpoint to happen, we just want to\r\n>> > know when a checkpoint has completed, right?\r\n>> \r\n>> If it's enough to just wait for the current checkpoint to complete or\r\n>> to wait for the next one to complete, I suppose you could just poll\r\n>> pg_control_checkpoint(). I think the only downside is that you could\r\n>> end up sitting idle for a while, especially if checkpoint_timeout is\r\n>> high and checkpoint_completion_target is low. But, as you point out,\r\n>> that may not be a typically recommended way to configure the system.\r\n>\r\n> Maybe I missed something, but aren't you going to be waiting a while\r\n> with this patch given that it's asking for a spread checkpoint too..?\r\n>\r\n> I agree that you could just monitor for the next checkpoint using\r\n> pg_control_checkpoint(), which is why I'm wondering again what the\r\n> point is behind this patch... I'm trying to understand why we'd be\r\n> encouraging people to increase the number of checkpoints that are\r\n> happening when they're still going to be waiting around for that spread\r\n> (in other words, non-immediate) checkpoint to happen (just as if they'd\r\n> just waited until the next regular checkpoint), and they're still going\r\n> to have a fair bit of WAL to replay because it'll be however much WAL\r\n> has been written since we started the spread/non-immediate checkpoint.\r\n\r\nI plan to mark this patch as withdrawn after the next commitfest\r\nunless anyone objects.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 14 Dec 2020 21:53:45 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: A few new options for CHECKPOINT"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 11:22:01AM +0900, Michael Paquier wrote:\n> On Sun, Dec 06, 2020 at 10:03:08AM -0500, Stephen Frost wrote:\n> > * Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> >> You keep making this statement, and I don't necessarily disagree, but if\n> >> that is the case, please explain why don't we have\n> >> checkpoint_completion_target set to 0.9 by default? Should we change\n> >> that?\n> > \n> > Yes, I do think we should change that..\n> \n> Agreed. FWIW, no idea for others, but it is one of those parameters I\n> keep telling to update after a default installation.\n\n+1 for making it 0.9. FYI, Robert Haas has argued that our estimation\nof completion isn't great, which is why it defaults to 0.5.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 16 Dec 2020 17:39:18 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: A few new options for CHECKPOINT"
}
]
[
{
"msg_contents": "Hi Hackers,\n\nI noticed that there is a table access method API added starting from \nPG12. In other words, Postgresql open the door for developers to add \ntheir own access methods, for example, zheap, zedstore etc. However, the \ncurrent pgbench doesn't have an option to allow user to specify which \ntable access method to use during the initialization phase. If we can \nhave an option to help user address this issue, it would be great. For \nexample, if user want to do a performance benchmark to compare \"zheap\" \nwith \"heap\", then the commands can be something like below:\n\npgbench -d -i postgres --table-am=heap\n\npgbench -d -i postgres --table-am=zheap\n\nI know there is a parameter like below available in postgresql.conf:\n\n#default_table_access_method = 'heap'\n\nBut, providing another option for the end user may not be a bad idea, \nand it might make the tests easier at some points.\n\nThe attached file is quick patch for this.\n\nThoughts?\n\n\nThank you,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Tue, 24 Nov 2020 15:32:38 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Add table access method as an option to pgbench"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 03:32:38PM -0800, David Zhang wrote:\n> But, providing another option for the end user may not be a bad idea, and it\n> might make the tests easier at some points.\n\nMy first thought is that we have no need to complicate pgbench with\nthis option because there is a GUC able to do that, but we do that for\ntablespaces, so... No objections from here.\n\n> The attached file is quick patch for this.\n> \n> Thoughts?\n\nThis patch does not apply on HEAD, where you can just use\nappendPQExpBuffer() to append the new clause to the CREATE TABLE\nquery. This needs proper documentation.\n--\nMichael",
"msg_date": "Wed, 25 Nov 2020 12:41:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Thanks a lot for your comments, Michael.\n\nOn 2020-11-24 7:41 p.m., Michael Paquier wrote:\n> On Tue, Nov 24, 2020 at 03:32:38PM -0800, David Zhang wrote:\n>> But, providing another option for the end user may not be a bad idea, and it\n>> might make the tests easier at some points.\n> My first thought is that we have no need to complicate pgbench with\n> this option because there is a GUC able to do that, but we do that for\n> tablespaces, so... No objections from here.\n>\n>> The attached file is quick patch for this.\n>>\n>> Thoughts?\n> This patch does not apply on HEAD, where you can just use\n> appendPQExpBuffer() to append the new clause to the CREATE TABLE\n> query. This needs proper documentation.\nThe previous patch was based on branch \"REL_13_STABLE\". Now, the \nattached new patch v2 is based on master branch. I followed the new code \nstructure using appendPQExpBuffer to append the new clause \"using \nTABLEAM\" in a proper position and tested. In the meantime, I also \nupdated the pgbench.sgml file to reflect the changes.\n> --\n> Michael\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Wed, 25 Nov 2020 12:13:55 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 12:13:55PM -0800, David Zhang wrote:\n> The previous patch was based on branch \"REL_13_STABLE\". Now, the attached\n> new patch v2 is based on master branch. I followed the new code structure\n> using appendPQExpBuffer to append the new clause \"using TABLEAM\" in a proper\n> position and tested. In the meantime, I also updated the pgbench.sqml file\n> to reflect the changes.\n\n+ <varlistentry>\n+ <term><option>--table-am=<replaceable>TABLEAM</replaceable></option></term>\n+ <listitem>\n+ <para>\n+ Create all tables with specified table access method\n+ <replaceable>TABLEAM</replaceable>.\n+ If unspecified, default is <literal>heap</literal>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\nThis needs to be in alphabetical order. And what you say here is\nwrong. The default would not be heap if default_table_access_method\nis set to something else. I would suggest to use table_access_method\ninstead of TABLEAM, and keep the paragraph minimal, say:\n\"Create tables using the specified table access method, rather than\nthe default.\"\n\n--table-am is really the best option name? --table-access-method\nsounds a bit more consistent to me.\n\n+ if (tableam != NULL)\n+ {\n+ char *escape_tableam;\n+\n+ escape_tableam = PQescapeIdentifier(con, tableam, strlen(tableam));\n+ appendPQExpBuffer(&query, \" using %s\", escape_tableam);\n+ PQfreemem(escape_tableam);\n+ }\nThe order is wrong here, generating an unsupported grammar, see by\nyourself with this command:\npgbench --partition-method=hash --table-am=heap -i --partitions 2\n\nThis makes the patch trickier than it looks as the USING clause is\nlocated between PARTITION BY and WITH. Also, partitioned tables\ncannot use the USING clause so you need to be careful that\ncreatePartitions() also uses the correct table AM.\n\nThis stuff needs tests.\n--\nMichael",
"msg_date": "Thu, 26 Nov 2020 14:36:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Hello David,\n\n> The previous patch was based on branch \"REL_13_STABLE\". Now, the attached new \n> patch v2 is based on master branch. I followed the new code structure using \n> appendPQExpBuffer to append the new clause \"using TABLEAM\" in a proper \n> position and tested. In the meantime, I also updated the pgbench.sqml file to \n> reflect the changes.\n\nMy 0.02€: I'm fine with the added feature.\n\nThe patch lacks minimal coverage test. Consider adding something to \npgbench tap tests, including failures (ie non existing method).\n\nThe option in the help string is not at the right place.\n\nI would merge the tableam declaration to the previous one with an extended \ncomment, eg \"tablespace and access method selection\".\n\nescape_tableam -> escaped_tableam ?\n\n-- \nFabien.",
"msg_date": "Thu, 26 Nov 2020 08:29:22 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Thanks a lot again for the review and comments.\n\nOn 2020-11-25 9:36 p.m., Michael Paquier wrote:\n> On Wed, Nov 25, 2020 at 12:13:55PM -0800, David Zhang wrote:\n>> The previous patch was based on branch \"REL_13_STABLE\". Now, the attached\n>> new patch v2 is based on master branch. I followed the new code structure\n>> using appendPQExpBuffer to append the new clause \"using TABLEAM\" in a proper\n>> position and tested. In the meantime, I also updated the pgbench.sqml file\n>> to reflect the changes.\n> + <varlistentry>\n> + <term><option>--table-am=<replaceable>TABLEAM</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Create all tables with specified table access method\n> + <replaceable>TABLEAM</replaceable>.\n> + If unspecified, default is <literal>heap</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> This needs to be in alphabetical order.\nThe order issue is fixed in v3 patch.\n> And what you say here is\n> wrong. The default would not be heap if default_table_access_method\n> is set to something else.\nRight, if user change the default settings in GUC, then the default is \nnot `heap` any more.\n> I would suggest to use table_access_method\n> instead of TABLEAM,\nAll other options if values are required, the words are all capitalized, \nsuch as TABLESPACE. Therefore, I used TABLEACCESSMETHOD in stead of \ntable_access_method in v3 patch.\n> and keep the paragraph minimal, say:\n> \"Create tables using the specified table access method, rather than\n> the default.\"\nUpdated in v3 patch.\n>\n> --table-am is really the best option name? --table-access-method\n> sounds a bit more consistent to me.\nUpdated in v3 patch.\n>\n> + if (tableam != NULL)\n> + {\n> + char *escape_tableam;\n> +\n> + escape_tableam = PQescapeIdentifier(con, tableam, strlen(tableam));\n> + appendPQExpBuffer(&query, \" using %s\", escape_tableam);\n> + PQfreemem(escape_tableam);\n> + }\nThanks for pointing this out. The order I believe is fixed in v3 patch.\n> The order is wrong here, generating an unsupported grammar, see by\n> yourself with this command:\n> pgbench --partition-method=hash --table-am=heap -i --partitions 2\n\nTested in v3 patch, with a command like,\n\npgbench --partition-method=hash --table-access-method=heap -i --partitions 2\n\n>\n> This makes the patch trickier than it looks as the USING clause is\n> located between PARTITION BY and WITH. Also, partitioned tables\n> cannot use the USING clause so you need to be careful that\n> createPartitions() also uses the correct table AM.\n>\n> This stuff needs tests.\n3 test cases has been added the tap test, but honestly I am not sure if \nI am following the rules. Any comments will be great.\n> --\n> Michael\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Thu, 26 Nov 2020 18:25:28 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Thanks Fabien for the comments.\n\nOn 2020-11-25 11:29 p.m., Fabien COELHO wrote:\n>\n> Hello David,\n>\n>> The previous patch was based on branch \"REL_13_STABLE\". Now, the \n>> attached new patch v2 is based on master branch. I followed the new \n>> code structure using appendPQExpBuffer to append the new clause \n>> \"using TABLEAM\" in a proper position and tested. In the meantime, I \n>> also updated the pgbench.sqml file to reflect the changes.\n>\n> My 0.02€: I'm fine with the added feature.\n>\n> The patch lacks minimal coverage test. Consider adding something to \n> pgbench tap tests, including failures (ie non existing method).\nI added 3 test cases to the tap tests, but not sure if they are \nfollowing the rules. One question about the failures test, my thoughts \nis that it should be in the no_server test cases, but it is hard to \nverify the input as the table access method can be any name, such as \nzheap, zedstore or zheap2 etc. Any suggestion will be great.\n>\n> The option in the help string is not at the right ab place.\nFixed in v3 patch.\n>\n> I would merge the tableam declaration to the previous one with a \n> extended comments, eg \"tablespace and access method selection\".\nUpdated in v3 patch.\n>\n> escape_tableam -> escaped_tableam ?\n\nUpdated in v3 patch.\n\nBy the way, I saw the same style for other variables, such as \nescape_tablespace, should this be fixed as well?\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n\n",
"msg_date": "Thu, 26 Nov 2020 18:33:44 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "\nHello David,\n\nSome feedback about v3:\n\nIn the doc I find TABLEACCESSMETHOD quite hard to read. Use TABLEAM \ninstead? Also, the next entry uses lowercase (tablespace), why change the \nstyle?\n\nRemove space after comma in help string. I'd use the more readable TABLEAM \nin the help string rather than the mouthful.\n\nIt looks that the option is *silently* ignored when creating a \npartitioned table, which currently results in an error, and not passed to \npartitions, which would accept them. This is pretty weird.\n\nI'd suggest that the table am option should rather work by changing the \ndefault instead, so that it would apply to everything relevant implicitly \nwhen appropriate.\n\nAbout tests :\n\nThey should also trigger failures, eg \n\"--table-access-method=no-such-table-am\", to be added to the @errors list.\n\nI do not understand why you duplicated all possible option entries.\n\nTest with just table access method looks redundant if the feature is made \nto work orthogonally to partitions, which should be the case.\n\n> By the way, I saw the same style for other variables, such as \n> escape_tablespace, should this be fixed as well?\n\nNope, let it be.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 27 Nov 2020 10:04:44 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Thanks a lot for the comments, Fabien.\n> Some feedback about v3:\n>\n> In the doc I find TABLEACCESSMETHOD quite hard to read. Use TABLEAM \n> instead? \nYes, in this case, *TABLEAM* is easy to read, updated.\n> Also, the next entry uses lowercase (tablespace), why change the style?\nThe original style is not so consistent, for example, in doc, \n--partition-method using *NAME*, but --table-space using *tablespace*; \nin help, --partition-method using *(rang|hash)*, but --tablespace using \n*TABLESPACE*. To keep it more consistent, I would rather use *TABLEAM* \nin both doc and help.\n> Remove space after comma in help string. I'd use the more readable \n> TABLEAM in the help string rather than the mouthful.\nYes the *space* is removed after comma.\n>\n> It looks that the option is *silently* ignored when creating a \n> partitionned table, which currently results in an error, and not \n> passed to partitions, which would accept them. This is pretty weird.\nThe input check is added with an error message when both partitions and \ntable access method are used.\n>\n> I'd suggest that the table am option should rather by changing the \n> default instead, so that it would apply to everything relevant \n> implicitely when appropriate.\nI think this is a better idea, so in v4 patch I change it to something \nlike \"set default_table_access_method to *TABLEAM*\" instead of using the \n*using* clause.\n>\n> About tests :\n>\n> They should also trigger failures, eg \n> \"--table-access-method=no-such-table-am\", to be added to the @errors \n> list.\nNo sure how to address this properly, since the table access method \npotentially can be *any* name.\n>\n> I do not understand why you duplicated all possible option entry.\n>\n> Test with just table access method looks redundant if the feature is \n> make to work orthonogonally to partitions, which should be the case.\nOnly one positive test case added using *heap* as the table access \nmethod to verify the \ninitialization.\n>\n>> By the way, I saw the same style for other variables, such as \n>> escape_tablespace, should this be fixed as well?\n>\n> Nope, let it be.\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Tue, 1 Dec 2020 20:52:13 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "\nHello David,\n\nSome feedback about v4.\n\n>> It looks that the option is *silently* ignored when creating a partitionned \n>> table, which currently results in an error, and not passed to partitions, \n>> which would accept them. This is pretty weird.\n> The input check is added with an error message when both partitions and table \n> access method are used.\n\nHmmm. If you take the approach of resetting the default, I do not think \nthat you should have to test anything? AFAICT the access method is valid on \npartitions, although not on the partitioned table declaration. So I'd say \nthat you could remove the check.\n\n>> They should also trigger failures, eg \n>> \"--table-access-method=no-such-table-am\", to be added to the @errors list.\n> No sure how to address this properly, since the table access method \n> potentially can be *any* name.\n\nI think it is pretty unlikely that someone would chose the name \n\"no-such-table-am\" when developing a new storage engine for Postgres \ninside Postgres, so that it would make this internal test fail.\n\nThere are numerous such names used in tests, eg \"no-such-database\", \n\"no-such-command\".\n\nSo I'd suggest to add such a test to check for the expected failure.\n\n>> I do not understand why you duplicated all possible option entry.\n>> \n>> Test with just table access method looks redundant if the feature is make \n>> to work orthonogonally to partitions, which should be the case.\n> Only one positive test case added using *heap* as the table access method to \n> verify the initialization.\n\nYep, only \"heap\" is currently available for tables.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 2 Dec 2020 21:03:28 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Again, thanks a lot for the feedback.\n\nOn 2020-12-02 12:03 p.m., Fabien COELHO wrote:\n>\n> Hello David,\n>\n> Some feedback about v4.\n>\n>>> It looks that the option is *silently* ignored when creating a \n>>> partitionned table, which currently results in an error, and not \n>>> passed to partitions, which would accept them. This is pretty weird.\n>> The input check is added with an error message when both partitions \n>> and table access method are used.\n>\n> Hmmm. If you take the resetting the default, I do not think that you \n> should have to test anything? AFAICT the access method is valid on \n> partitions, although not on the partitioned table declaration. So I'd \n> say that you could remove the check.\nYes, it makes sense to remove the *blocking* check, and actually the \ntable access method interface does work with partitioned table. I tested \nthis/v5 by duplicating the heap access method with a different name. For \nthis reason, I removed the condition check as well when applying the \ntable access method.\n>\n>>> They should also trigger failures, eg \n>>> \"--table-access-method=no-such-table-am\", to be added to the @errors \n>>> list.\n>> No sure how to address this properly, since the table access method \n>> potentially can be *any* name.\n>\n> I think it is pretty unlikely that someone would chose the name \n> \"no-such-table-am\" when developing a new storage engine for Postgres \n> inside Postgres, so that it would make this internal test fail.\n>\n> There are numerous such names used in tests, eg \"no-such-database\", \n> \"no-such-command\".\n>\n> So I'd suggest to add such a test to check for the expected failure.\nThe test case \"no-such-table-am\" has been added to the errors list, and \nthe regression test is ok.\n>\n>>> I do not understand why you duplicated all possible option entry.\n>>>\n>>> Test with just table access method looks redundant if the feature is \n>>> make to work orthonogonally to partitions, which should be the case.\n>> Only one positive test case added using *heap* as the table access \n>> method to verify the \n>> initialization.\n>\n> Yep, only \"heap\" if currently available for tables.\n>\nThanks,\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Mon, 7 Dec 2020 13:53:33 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "> --- a/doc/src/sgml/ref/pgbench.sgml\n> +++ b/doc/src/sgml/ref/pgbench.sgml\n> @@ -359,6 +359,16 @@ pgbench <optional> <replaceable>options</replaceable> </optional> <replaceable>d\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><option>--table-access-method=<replaceable>TABLEAM</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Create tables using the specified table access method, rather than the default.\n> + tablespace.\n\ntablespace is an extraneous word ?\n\nAlso, I mention that the regression tests do this to get a 2nd AM:\n\nsrc/test/regress/sql/create_am.sql:CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\n...\nsrc/test/regress/sql/create_am.sql:DROP ACCESS METHOD heap2;\n\nDavid:\nIf you found pre-existing inconsistencies/errors/deficiencies, I'd suggest to\nfix them in a separate patch. Typically those are small, and you can make them\n0001 patch. Errors might be worth backpatching, so in addition to making the\n\"feature\" patches small, it's good if they're separate.\n\nLike writing NAME vs TABLESPACE. Or escape vs escapes.\n\nOr maybe using SET default_tablespace instead of modifying the CREATE sql.\nThat'd need to be done separately for indexes, and RESET after creating the\ntables, to avoid accidentally affecting indexes, too.\n\n@Fabien: I think the table should be created and populated within the same\ntransaction, to allow this optimization in v13 to apply during the\n\"initialization\" phase.\nc6b92041d Skip WAL for new relfilenodes, under wal_level=minimal.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 24 Dec 2020 21:24:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
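The create_am.sql trick Justin quotes can double as a quick smoke test for the patch. A hedged sketch (the table name is illustrative, not pgbench's actual DDL): it creates a second table access method backed by the built-in heap handler, creates a table with it, and checks the catalog to confirm the AM was honored.

```sql
-- Second AM over the built-in heap handler, as src/test/regress/sql/create_am.sql does:
CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;
CREATE TABLE am_smoke (aid int, abalance int) USING heap2;
-- Confirm which access method the table actually got:
SELECT a.amname
  FROM pg_class c JOIN pg_am a ON a.oid = c.relam
 WHERE c.relname = 'am_smoke';   -- expect: heap2
DROP TABLE am_smoke;
DROP ACCESS METHOD heap2;
```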
{
"msg_contents": "Hello Justin,\n\n> src/test/regress/sql/create_am.sql:CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\n> ...\n> src/test/regress/sql/create_am.sql:DROP ACCESS METHOD heap2;\n\n> Or maybe using SET default_tablespace instead of modifying the CREATE sql.\n> That'd need to be done separately for indexes, and RESET after creating the\n> tables, to avoid accidentally affecting indexes, too.\n\nWhy should it not affect indexes?\n\n> @Fabien: I think the table should be created and populated within the same\n> transaction, to allow this optimization in v13 to apply during the\n> \"initialization\" phase.\n> c6b92041d Skip WAL for new relfilenodes, under wal_level=minimal.\n\nPossibly.\n\nThis would be a change of behavior: currently only filling tables is under \nan explicit transaction.\n\nThat would be a matter for another patch, probably with an added option.\n\nAs VACUUM is not transaction compatible, it might be a little bit tricky \nto add such a feature. Also I'm not sure about some constraint \ndeclarations.\n\nISTM that I submitted a patch a long time ago to allow \"()\" to control \ntransaction from the command line, but somehow it got lost or rejected.\nI redeveloped it, see attached. I cannot see reliable performance \nimprovement by playing with (), though.\n\n-- \nFabien.",
"msg_date": "Sun, 27 Dec 2020 09:14:53 -0400 (AST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "On Sun, Dec 27, 2020 at 09:14:53AM -0400, Fabien COELHO wrote:\n> > src/test/regress/sql/create_am.sql:CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\n> > ...\n> > src/test/regress/sql/create_am.sql:DROP ACCESS METHOD heap2;\n> \n> > Or maybe using SET default_tablespace instead of modifying the CREATE sql.\n> > That'd need to be done separately for indexes, and RESET after creating the\n> > tables, to avoid accidentally affecting indexes, too.\n> \n> Why should it not affect indexes?\n\nI rarely use pgbench, and probably never looked at its source before, but I saw\nthat it has a separate --tablespace and --index-tablespace, and that\n--tablespace is *not* the default for indexes.\n\nSo if we changed it to use SET default_tablespace instead of amending the DDL\nsql, we'd need to make sure the SET applied only to the intended CREATE\ncommand, and not all following create commands. In the case that\n--index-tablespace is not specified, it would be buggy to do this:\n\nSET default_tablespace=foo;\nCREATE TABLE ...\nCREATE INDEX ...\nCREATE TABLE ...\nCREATE INDEX ...\n...\n\n-- \nJustin\n\nPS. Thanks for patching it to work with partitioned tables :)\n\n\n",
"msg_date": "Sun, 27 Dec 2020 11:39:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
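To make concrete why the quoted sequence would be buggy, here is a sketch of the scoping that would be needed instead (tablespace and table names are hypothetical): the SET has to be reverted before any CREATE INDEX that should fall back to the database default, and even then implicit constraint indexes are a trap.

```sql
SET default_tablespace = ts_tables;   -- applies to everything created below
CREATE TABLE pgbench_accounts (aid int PRIMARY KEY, abalance int);
-- Careful: the PRIMARY KEY index above was created while the GUC was still
-- set, so with this approach even implicit indexes land in ts_tables.
RESET default_tablespace;
CREATE INDEX ON pgbench_accounts (abalance);  -- database default again
```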
{
"msg_contents": ">> tablespace is an extraneous word ?\n\nThanks a lot for pointing this out. I will fix it in next patch once get all issues clarified.\n\n> On Sun, Dec 27, 2020 at 09:14:53AM -0400, Fabien COELHO wrote:\n>>> src/test/regress/sql/create_am.sql:CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\n>>> ...\n>>> src/test/regress/sql/create_am.sql:DROP ACCESS METHOD heap2;\n\nDo you suggest to add a new positive test case by using table access method *heap2* created using the existing heap_tableam_handler?\n\n>> If you found pre-existing inconsistencies/errors/deficiencies, I'd suggest to\n>> fix them in a separate patch. Typically those are small, and you can make them\n>> 0001 patch. Errors might be worth backpatching, so in addition to making the\n>> \"feature\" patches small, it's good if they're separate.\n\n>> Like writing NAME vs TABLESPACE. Or escape vs escapes.\nI will provide a separate small patch when fixing the *tablespace* typo mention above. Can you help to clarity which releases or branches need to be back patched?\n\n>>> Or maybe using SET default_tablespace instead of modifying the CREATE sql.\n>>> That'd need to be done separately for indexes, and RESET after creating the\n>>> tables, to avoid accidentally affecting indexes, too.\n>> Why should it not affect indexes?\n> I rarely use pgbench, and probably never looked at its source before, but I saw\n> that it has a separate --tablespace and --index-tablespace, and that\n> --tablespace is *not* the default for indexes.\n>\n> So if we changed it to use SET default_tablespace instead of amending the DDL\n> sql, we'd need to make sure the SET applied only to the intended CREATE\n> command, and not all following create commands. 
In the case that\n> --index-tablespace is not specified, it would be buggy to do this:\n>\n> SET default_tablespace=foo;\n> CREATE TABLE ...\n> CREATE INDEX ...\n> CREATE TABLE ...\n> CREATE INDEX ...\n> ...\n\nIf this `tablespace` issue also applies to `table-access-method`, yes, I agree it could be buggy too. But, the purpose of this patch is to provide an option for the end user to test a newly created table access method. If the *new* table access method hasn't fully implemented all the interfaces yet, then no matter whether the end user uses the option provided by this patch or the GUC parameter, the results will be the same.\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Fri, 8 Jan 2021 16:59:46 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": ">+\t\t \" create tables with using specified table access method,\\n\"\r\n\r\nIn my opinion, this sentence should be \"create tables with specified table access method\" or \"create tables using specified table access method\".\r\n\"create tables with specified table access method\" may be more consistent with other options.",
"msg_date": "Sat, 09 Jan 2021 13:44:03 +0000",
"msg_from": "youichi aramaki <zyake.mk4@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "On 2021-01-09 5:44 a.m., youichi aramaki wrote:\n\n>> +\t\t \" create tables with using specified table access method,\\n\"\n> In my opinion, this sentence should be \"create tables with specified table access method\" or \"create tables using specified table access method\".\n> \"create tables with specified table access method\" may be more consistent with other options.\n\nThank you Youichi. I will change it to \"create tables with specified \ntable access method\" in next patch.\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Thu, 14 Jan 2021 15:51:03 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Hi,\n\n`v6-0001-add-table-access-method-as-an-option-to-pgbench` fixed the \nwording problems for pgbench document and help as they were pointed out \nby Justin and Youichi.\n\n`0001-update-tablespace-to-keep-document-consistency` is trying to make \nthe *tablespace* to be more consistent in pgbench document.\n\nThank you,\n\nOn 2021-01-14 3:51 p.m., David Zhang wrote:\n> On 2021-01-09 5:44 a.m., youichi aramaki wrote:\n>\n>>> + \" create tables with using specified table access \n>>> method,\\n\"\n>> In my opinion, this sentence should be \"create tables with specified \n>> table access method\" or \"create tables using specified table access \n>> method\".\n>> \"create tables with specified table access method\" may be more \n>> consistent with other options.\n>\n> Thank you Youichi. I will change it to \"create tables with specified \n> table access method\" in next patch.\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Fri, 15 Jan 2021 12:10:25 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-25 12:41:25 +0900, Michael Paquier wrote:\n> On Tue, Nov 24, 2020 at 03:32:38PM -0800, David Zhang wrote:\n> > But, providing another option for the end user may not be a bad idea, and it\n> > might make the tests easier at some points.\n> \n> My first thought is that we have no need to complicate pgbench with\n> this option because there is a GUC able to do that, but we do that for\n> tablespaces, so... No objections from here.\n\nI think that objection is right. All that's needed to change this from\nthe client side is to do something like\nPGOPTIONS='-c default_table_access_method=foo' pgbench ...\n\nI don't think adding pgbench options for individual GUCs really is a\nuseful exercise?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jan 2021 13:22:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
{
"msg_contents": "On Fri, Jan 15, 2021 at 01:22:45PM -0800, Andres Freund wrote:\n> I think that objection is right. All that's needed to change this from\n> the client side is to do something like\n> PGOPTIONS='-c default_table_access_method=foo' pgbench ...\n> \n> I don't think adding pgbench options for individual GUCs really is a\n> useful exercise?\n\nYeah. Looking at the latest patch, it just uses SET\ndefault_table_access_method to achieve its goal and to bypass the fact\nthat partitions don't support directly the AM clause. So I agree to\nmark this patch as rejected and move on. One thing that looks like an\nissue to me is that PGOPTIONS is not listed in the section for \nenvironment variables on the docs of pgbench. 5aaa584 has plugged in\na lot of holes, but things could be improved more, for more clients,\nwhere it makes sense of course.\n--\nMichael",
"msg_date": "Sat, 16 Jan 2021 13:34:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add table access method as an option to pgbench"
},
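The GUC route Andres and Michael point to needs no pgbench change at all; at the SQL level it looks like this (the `heap2` access method name is illustrative):

```sql
SET default_table_access_method = 'heap2';
-- Every following CREATE TABLE without an explicit USING clause now uses
-- heap2, so pgbench's generated DDL picks up the setting unmodified.
CREATE TABLE t (a int);
RESET default_table_access_method;
```

From the shell, the same GUC can be injected per-session as Andres shows, e.g. `PGOPTIONS='-c default_table_access_method=heap2' pgbench -i ...`, which is why a dedicated pgbench option adds little.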
{
"msg_contents": "On 2021-01-15 1:22 p.m., Andres Freund wrote:\n\n> Hi,\n>\n> On 2020-11-25 12:41:25 +0900, Michael Paquier wrote:\n>> On Tue, Nov 24, 2020 at 03:32:38PM -0800, David Zhang wrote:\n>>> But, providing another option for the end user may not be a bad idea, and it\n>>> might make the tests easier at some points.\n>> My first thought is that we have no need to complicate pgbench with\n>> this option because there is a GUC able to do that, but we do that for\n>> tablespaces, so... No objections from here.\n> I think that objection is right. All that's needed to change this from\n> the client side is to do something like\n> PGOPTIONS='-c default_table_access_method=foo' pgbench ...\nYeah, this is a better solution for me too. Thanks a lot for all the \nfeedback.\n> I don't think adding pgbench options for individual GUCs really is a\n> useful exercise?\n>\n> Greetings,\n>\n> Andres Freund\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Tue, 19 Jan 2021 16:33:47 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Add table access method as an option to pgbench"
}
]
]
[
{
"msg_contents": "Hackers,\n\nPostgreSQL is a complex multi-process system, and we are periodically faced\nwith complicated concurrency issues. While the postgres community does a\ngreat job on investigating and fixing the problems, our ability to\nreproduce concurrency issues in the source code test suites is limited.\n\nI think we currently have two general ways to reproduce the concurrency\nissues.\n1. A text scenario for manual reproduction of the issue, which could\ninvolve psql sessions, gdb sessions etc. Couple of examples are [1] and\n[2]. This method provides reliable reproduction of concurrency issues. But\nit's hard to automate, because it requires external instrumentation\n(debugger) and it's not stable in terms of postgres code changes (that is\nparticular line numbers for breakpoints could be changed). I think this is\nwhy we currently don't have such scenarios among postgres test suites.\n2. Another way is to reproduce the concurrency issue without directly\ntouching the database internals using pgbench or other way to simulate the\nworkload (see [3] for example). This way is easier to automate, because it\ndoesn't need external instrumentation and it's not so sensitive to source\ncode changes. But at the same time this way is not reliable and is\nresource-consuming.\n\nIn the view of above, I'd like to propose a POC patch, which implements new\nbuiltin infrastructure for reproduction of concurrency issues in automated\ntest suites. The general idea is so-called \"stop events\", which are\nspecial places in the code, where the execution could be stopped on some\ncondition. Stop event also exposes a set of parameters, encapsulated into\njsonb value. The condition over stop event parameters is defined using\njsonpath language.\n\nFollowing functions control behavior –\n * pg_stopevent_set(stopevent_name, jsonpath_conditon) – sets condition for\nthe stop event. 
Once the function is executed, all the backends, which run\na given stop event with parameters satisfying the given jsonpath condition,\nwill be stopped.\n * pg_stopevent_reset(stopevent_name) – resets stop events. All the\nbackends previously stopped on a given stop event will continue the\nexecution.\n\nFor sure, evaluation of stop events causes a CPU overhead. This is why\nit's controlled by enable_stopevents GUC, which is off by default. I expect\nthe overhead with enable_stopevents = off shouldn't be observable. Even if\nit would be observable, we could enable stop events only by specific\nconfigure parameter. There is also trace_stopevents GUC, which traces all\nthe stop events to the log with debug2 level.\n\nIn the code stop events are defined using macro STOPEVENT(event_id,\nparams). The 'params' should be a function call, and it's evaluated only\nif stop events are enabled. pg_isolation_test_session_is_blocked() takes\nstop events into account. So, stop events are suitable for isolation tests.\n\nPOC patch comes with a sample isolation test in\nsrc/test/isolation/specs/gin-traverse-deleted-pages.spec, which reproduces\nthe issue described in [2] (gin scan steps to the page concurrently deleted\nby vacuum).\n\n From my point of view, stop events would open great possibilities to\nimprove coverage of concurrency issues. They allow us to reliably test\nconcurrency issues in both isolation and tap test suites. And such test\nsuites don't take extraordinary resources for execution. The main cost\nhere is maintaining a set of stop events among the codebase. But I think\nthis cost is justified by much better coverage of concurrency issues.\n\nThe feedback is welcome.\n\nLinks.\n1. 
https://www.postgresql.org/message-id/4E1DE580.1090905%40enterprisedb.com\n2.\nhttps://www.postgresql.org/message-id/CAPpHfdvMvsw-NcE5bRS7R1BbvA4BxoDnVVjkXC5W0Czvy9LVrg%40mail.gmail.com\n3.\nhttps://www.postgresql.org/message-id/BF9B38A4-2BFF-46E8-BA87-A2D00A8047A6%40hintbits.com\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 25 Nov 2020 17:10:54 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "POC: Better infrastructure for automated testing of concurrency\n issues"
},
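A sketch of how the proposed API could drive an isolation-test scenario like the GIN one mentioned above; the stop event name and the parameter field are illustrative, since the exact names come from the patch itself.

```sql
SET enable_stopevents = on;
-- Stop any backend that reaches this stop event while its parameters
-- (a jsonb value) satisfy the jsonpath predicate:
SELECT pg_stopevent_set('gin_traverse_deleted_page', '$.blkno == 42');
-- ... run the concurrent vacuum/scan permutation of the spec here ...
SELECT pg_stopevent_reset('gin_traverse_deleted_page');  -- release them
```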
{
"msg_contents": "On 2020-Nov-25, Alexander Korotkov wrote:\n\n> In the view of above, I'd like to propose a POC patch, which implements new\n> builtin infrastructure for reproduction of concurrency issues in automated\n> test suites. The general idea is so-called \"stop events\", which are\n> special places in the code, where the execution could be stopped on some\n> condition. Stop event also exposes a set of parameters, encapsulated into\n> jsonb value. The condition over stop event parameters is defined using\n> jsonpath language.\n\n+1 for the idea. I agree we have a need for something on this area;\nthere are *many* scenarios currently untested because of the lack of\nwhat you call \"stop points\". I don't know if jsonpath is the best way\nto implement it, but at least it is readily available and it seems a\ndecent way to go at it.\n\n\n\n",
"msg_date": "Fri, 4 Dec 2020 15:29:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 6:11 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> While the postgres community does a great job on investigating and fixing the problems, our ability to reproduce concurrency issues in the source code test suites is limited.\n\n+1. This seems really cool.\n\n> For sure, evaluation of stop events causes a CPU overhead. This is why it's controlled by enable_stopevents GUC, which is off by default. I expect the overhead with enable_stopevents = off shouldn't be observable. Even if it would be observable, we could enable stop events only by specific configure parameter. There is also trace_stopevents GUC, which traces all the stop events to the log with debug2 level.\n\nBut why even risk adding noticeable overhead when \"enable_stopevents =\noff \"? Even if it's a very small risk? We can still get most of the\nbenefit by enabling it only on certain builds and buildfarm animals.\nIt will be a bit annoying to not have stop events enabled in all\nbuilds, but it avoids the problem of even having to think about the\noverhead, now or in the future. I think that that trade-off is a good\none. Even if the performance trade-off is judged perfectly for the\nfirst few tests you add, what are the chances that it will stay that\nway as the infrastructure is used in more and more places? What if you\nneed to add a test to the back branches? Since we don't anticipate any\ndirect benefit for users (right?), I think that this question is\nsimple.\n\nI am not arguing for not enabling stop events on standard builds\nbecause the infrastructure isn't useful -- it's *very* useful. Useful\nenough that it would be nice to be able to use it extensively without\nreally thinking about the performance hit each time. I know that I'll\nbe *far* more likely to use it if I don't have to waste time and\nenergy on that aspect every single time.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 4 Dec 2020 10:57:13 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 9:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2020-Nov-25, Alexander Korotkov wrote:\n> > In the view of above, I'd like to propose a POC patch, which implements new\n> > builtin infrastructure for reproduction of concurrency issues in automated\n> > test suites. The general idea is so-called \"stop events\", which are\n> > special places in the code, where the execution could be stopped on some\n> > condition. Stop event also exposes a set of parameters, encapsulated into\n> > jsonb value. The condition over stop event parameters is defined using\n> > jsonpath language.\n>\n> +1 for the idea. I agree we have a need for something on this area;\n> there are *many* scenarios currently untested because of the lack of\n> what you call \"stop points\". I don't know if jsonpath is the best way\n> to implement it, but at least it is readily available and it seems a\n> decent way to go at it.\n\nThank you for your feedback. I agree with you regarding jsonpath. My\ninitial idea was to use the executor expressions. But executor\nexpressions require serialization/deserialization, while stop points\nneed to work cross-database or even with processes not connected to\nany database (such as checkpointer, background writer etc). That\nleads to difficulties, while jsonpath appears to be very easy for this\nuse-case.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 5 Dec 2020 00:15:15 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
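The convenience Alexander describes can be seen with the in-core jsonpath machinery that stop events would reuse: a stop event's parameter set is just a jsonb value, so evaluating a condition needs no executor state or (de)serialization. The field names and values here are illustrative:

```sql
SELECT jsonb_path_match('{"blkno": 42, "level": 0}'::jsonb,
                        '$.blkno == 42 && $.level == 0');
-- true: this backend's stop event would fire
SELECT jsonb_path_match('{"blkno": 7, "level": 0}'::jsonb,
                        '$.blkno == 42 && $.level == 0');
-- false: this backend continues normally
```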
{
"msg_contents": "On Fri, Dec 4, 2020 at 9:57 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Nov 25, 2020 at 6:11 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > While the postgres community does a great job on investigating and fixing the problems, our ability to reproduce concurrency issues in the source code test suites is limited.\n>\n> +1. This seems really cool.\n>\n> > For sure, evaluation of stop events causes a CPU overhead. This is why it's controlled by enable_stopevents GUC, which is off by default. I expect the overhead with enable_stopevents = off shouldn't be observable. Even if it would be observable, we could enable stop events only by specific configure parameter. There is also trace_stopevents GUC, which traces all the stop events to the log with debug2 level.\n>\n> But why even risk adding noticeable overhead when \"enable_stopevents =\n> off \"? Even if it's a very small risk? We can still get most of the\n> benefit by enabling it only on certain builds and buildfarm animals.\n> It will be a bit annoying to not have stop events enabled in all\n> builds, but it avoids the problem of even having to think about the\n> overhead, now or in the future. I think that that trade-off is a good\n> one. Even if the performance trade-off is judged perfectly for the\n> first few tests you add, what are the chances that it will stay that\n> way as the infrastructure is used in more and more places? What if you\n> need to add a test to the back branches? Since we don't anticipate any\n> direct benefit for users (right?), I think that this question is\n> simple.\n>\n> I am not arguing for not enabling stop events on standard builds\n> because the infrastructure isn't useful -- it's *very* useful. Useful\n> enough that it would be nice to be able to use it extensively without\n> really thinking about the performance hit each time. 
I know that I'll\n> be *far* more likely to use it if I don't have to waste time and\n> energy on that aspect every single time.\n\nThank you for your feedback. We probably can't think over everything\nin advance. We can start with configure option enabled for developers\nand some buildfarm animals. That causes no risk of overhead in\nstandard builds. After some time, we may reconsider to enable stop\nevents even in standard build if we see they cause no regression.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 5 Dec 2020 00:20:27 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 1:20 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Thank you for your feedback. We probably can't think over everything\n> in advance. We can start with configure option enabled for developers\n> and some buildfarm animals. That causes no risk of overhead in\n> standard builds. After some time, we may reconsider to enable stop\n> events even in standard build if we see they cause no regression.\n\nI'll start using the configure option for debug builds only as soon as\npossible. It will easily work with my existing workflow.\n\nI don't know about anyone else, but for me this is only a very small\ninconvenience. Whereas the convenience of not having to think about\nthe performance impact seems huge.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 4 Dec 2020 14:00:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Wed, 25 Nov 2020 at 22:11, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hackers,\n>\n> PostgreSQL is a complex multi-process system, and we are periodically\n> faced with complicated concurrency issues. While the postgres community\n> does a great job on investigating and fixing the problems, our ability to\n> reproduce concurrency issues in the source code test suites is limited.\n>\n> I think we currently have two general ways to reproduce the concurrency\n> issues.\n> 1. A text scenario for manual reproduction of the issue, which could\n> involve psql sessions, gdb sessions etc. Couple of examples are [1] and\n> [2]. This method provides reliable reproduction of concurrency issues. But\n> it's hard to automate, because it requires external instrumentation\n> (debugger) and it's not stable in terms of postgres code changes (that is\n> particular line numbers for breakpoints could be changed). I think this is\n> why we currently don't have such scenarios among postgres test suites.\n> 2. Another way is to reproduce the concurrency issue without directly\n> touching the database internals using pgbench or other way to simulate the\n> workload (see [3] for example). This way is easier to automate, because it\n> doesn't need external instrumentation and it's not so sensitive to source\n> code changes. But at the same time this way is not reliable and is\n> resource-consuming.\n>\n\nAgreed.\n\nFor a useful but limited set of cases there's (3) the isolation tester and\npg_isolation_regress. But IIRC the patches to teach it to support multiple\nupstream nodes never got in, so it's essentially useless for any\nreplication related testing.\n\nThere's also (4), write a TAP test that uses concurrent psql sessions via\nIPC::Run. 
Then play games with heavyweight or advisory lock waits to order\nevents, use instance starts/stops, change ports or connstrings to simulate\nnetwork issues, use SIGSTOP/SIGCONTs, add src/test/modules extensions that\ninject faults or provide custom blocking wait functions for the event you\nwant, etc. I've done that more than I'd care to, and I don't want to do it\nany more than I have to in future.\n\nIn some cases I've gone further and written tests that use systemtap in\n\"guru\" mode (read/write, with embedded C enabled) to twiddle the memory of\nthe target process(es) when a probe is hit, e.g. to modify a function\nargument or return value or inject a fault. Not exactly portable or\nconvenient, though very powerful.\n\n\nIn the view of above, I'd like to propose a POC patch, which implements new\n> builtin infrastructure for reproduction of concurrency issues in automated\n> test suites. The general idea is so-called \"stop events\", which are\n> special places in the code, where the execution could be stopped on some\n> condition. Stop event also exposes a set of parameters, encapsulated into\n> jsonb value. The condition over stop event parameters is defined using\n> jsonpath language.\n>\n\nThe patched PostgreSQL used by 2ndQuadrant internally has a feature called\nPROBE_POINT()s that is somewhat akin to this. Since it's not a customer\nfacing feature I'm sure I can discuss it here, though I'll need to seek\npermission before I can show code.\n\nTL;DR: PROBE_POINT()s let you inject ERRORs, sleeps, crashes, and various\nother behaviour at points in the code marked by name, using GUCs, hooks\nloaded from test extensions, or even systemtap scripts to control what\nfires and when. Performance impact is essentially zero when no probes are\ncurrently enabled at runtime, so they're fine for cassert builds.\n\nDetails:\n\nA PROBE_POINT() is a macro that works as a marker, a bit like a\nTRACE_POSTGRESQL_.... dtrace macro. 
But instead of the super lightweight\ntracepoint that SDT marker points emit, a PROBE_POINT tests an\nunlikely(probe_points_enabled) flag, and if true, it prepares arguments for\nthe probe handler: A probe name, probe action, sleep duration, and a hit\ncounter.\n\nThe default probe action and sleep duration come from GUCs. So your control\nof the probe is limited to the granularity you can easily manage GUCs at.\nThat's often sufficient\n\nBut if you want finer control for something, there are two ways to achieve\nit.\n\nAfter picking the default arguments to the handler, the probe point checks\nfor a hook. If defined, it calls it with the probe point name and pointers\nto the action and sleep duration values, so the hook function can modify\nthem per probe-point hit. That way you can use in src/test/modules\nextensions or your own test extensions first, with the probe point name as\nan argument and the action and sleep duration as out-params, as well as any\naccessible global state, custom GUCs you define in your test extension,\netc. That's usually enough to target a probe very specifically but it's a\nbit of a hassle.\n\nAnother option is to use a systemtap script. You can write your code in\nsystemtap with its language. When the systemtap marker for a probe point\nevent fires, decide if it's the one you want and twiddle the target process\nvariables that store the probe action and sleep duration from the systemtap\nscript. I find this much more convenient for day to day testing, but\nbecause of systemtap portability challenges I don't find it as useful for\nwriting regression tests for repeat use.\n\nA PROBE_POINT() actually emits dtrace/perf SDT markers if postgres was\ncompiled with --enable-dtrace too, so you can use them with perf,\nsystemtap, bpftrace or whatever for read-only use. Including optional\narguments to the probe point. 
Exactly as if it was a TRACE_POSTGRESQL_foo\npoint, but without needing to hack probes.d for each one.\n\nThe PROBE_POINT() implementation can fake signal delivery with signal\nactions, which has been handy too.\n\nI also have a version of the code that takes arguments to the PROBE_POINT()\nand passes them to the handler function as a va_list too, with a\ncompile-time-generated array of argument types inferred by C11 _Generic as\nthe first argument. So your handler function can be passed\nprobe-point-specific contextual info like the current xid being committed\nor whatever. This isn't currently deployed.\n\nThe advantage of the PROBE_POINT() approach has been that it's generally\nvery cheap to check whether a probe point should fire, and it's basically\nfree to skip them if there are no probe points enabled right now. If we\nhashed the probe point names for the initial comparisons it'd be faster\nstill.\n\nI will seek approval to share the relevant code.\n\nFollowing functions control behavior –\n> * pg_stopevent_set(stopevent_name, jsonpath_conditon) – sets condition\n> for the stop event. Once the function is executed, all the backends, which\n> run a given stop event with parameters satisfying the given jsonpath\n> condition, will be stopped.\n> * pg_stopevent_reset(stopevent_name) – resets stop events. All the\n> backends previously stopped on a given stop event will continue the\n> execution.\n>\n\nDoes that offer any way to affect early startup, late shutdown, servers in\nwarm standby, etc? 
Or for that matter, any way to manipulate bgworkers and\nauxprocs or the postmaster itself, things you can't run a query on directly?\n\nAlso, based on my experience using PROBE_POINT()s I would suggest that in\naddition to a stop or start \"event\", it's desirable to be able to\nelog(PANIC), elog(ERROR), elog(LOG), and/or sleep() for a certain duration.\nI've found all to be extremely useful.\n\nIn the code stop events are defined using macro STOPEVENT(event_id,\n> params). The 'params' should be a function call, and it's evaluated only\n> if stop events are enabled. pg_isolation_test_session_is_blocked() takes\n> stop events into account.\n>\n\nOooh, that I like.\n\nPROBE_POINT()s don't do that, and it's annoying.",
"msg_date": "Mon, 7 Dec 2020 14:09:35 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "Hi!\n\nOn Mon, Dec 7, 2020 at 9:10 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> On Wed, 25 Nov 2020 at 22:11, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> PostgreSQL is a complex multi-process system, and we are periodically faced with complicated concurrency issues. While the postgres community does a great job on investigating and fixing the problems, our ability to reproduce concurrency issues in the source code test suites is limited.\n>>\n>> I think we currently have two general ways to reproduce the concurrency issues.\n>> 1. A text scenario for manual reproduction of the issue, which could involve psql sessions, gdb sessions etc. Couple of examples are [1] and [2]. This method provides reliable reproduction of concurrency issues. But it's hard to automate, because it requires external instrumentation (debugger) and it's not stable in terms of postgres code changes (that is particular line numbers for breakpoints could be changed). I think this is why we currently don't have such scenarios among postgres test suites.\n>> 2. Another way is to reproduce the concurrency issue without directly touching the database internals using pgbench or other way to simulate the workload (see [3] for example). This way is easier to automate, because it doesn't need external instrumentation and it's not so sensitive to source code changes. But at the same time this way is not reliable and is resource-consuming.\n>\n> Agreed.\n>\n> For a useful but limited set of cases there's (3) the isolation tester and pg_isolation_regress. But IIRC the patches to teach it to support multiple upstream nodes never got in, so it's essentially useless for any replication related testing.\n>\n> There's also (4), write a TAP test that uses concurrent psql sessions via IPC::Run. 
Then play games with heavyweight or advisory lock waits to order events, use instance starts/stops, change ports or connstrings to simulate network issues, use SIGSTOP/SIGCONTs, add src/test/modules extensions that inject faults or provide custom blocking wait functions for the event you want, etc. I've done that more than I'd care to, and I don't want to do it any more than I have to in future.\n\nSure, there are isolation tester and TAP tests. I just meant the\nscenarios, where we can't reliably reproduce using either isolation\ntests or tap tests. Sorry for confusion.\n\n> In some cases I've gone further and written tests that use systemtap in \"guru\" mode (read/write, with embedded C enabled) to twiddle the memory of the target process(es) when a probe is hit, e.g. to modify a function argument or return value or inject a fault. Not exactly portable or convenient, though very powerful.\n\nExactly, systemtap is good, but we need something more portable and\nconvenient for builtin test suites.\n\n>> In the view of above, I'd like to propose a POC patch, which implements new builtin infrastructure for reproduction of concurrency issues in automated test suites. The general idea is so-called \"stop events\", which are special places in the code, where the execution could be stopped on some condition. Stop event also exposes a set of parameters, encapsulated into jsonb value. The condition over stop event parameters is defined using jsonpath language.\n>\n>\n> The patched PostgreSQL used by 2ndQuadrant internally has a feature called PROBE_POINT()s that is somewhat akin to this. Since it's not a customer facing feature I'm sure I can discuss it here, though I'll need to seek permission before I can show code.\n>\n> TL;DR: PROBE_POINT()s let you inject ERRORs, sleeps, crashes, and various other behaviour at points in the code marked by name, using GUCs, hooks loaded from test extensions, or even systemtap scripts to control what fires and when. 
Performance impact is essentially zero when no probes are currently enabled at runtime, so they're fine for cassert builds.\n>\n> Details:\n>\n> A PROBE_POINT() is a macro that works as a marker, a bit like a TRACE_POSTGRESQL_.... dtrace macro. But instead of the super lightweight tracepoint that SDT marker points emit, a PROBE_POINT tests an unlikely(probe_points_enabled) flag, and if true, it prepares arguments for the probe handler: A probe name, probe action, sleep duration, and a hit counter.\n>\n> The default probe action and sleep duration come from GUCs. So your control of the probe is limited to the granularity you can easily manage GUCs at. That's often sufficient\n>\n> But if you want finer control for something, there are two ways to achieve it.\n>\n> After picking the default arguments to the handler, the probe point checks for a hook. If defined, it calls it with the probe point name and pointers to the action and sleep duration values, so the hook function can modify them per probe-point hit. That way you can use in src/test/modules extensions or your own test extensions first, with the probe point name as an argument and the action and sleep duration as out-params, as well as any accessible global state, custom GUCs you define in your test extension, etc. That's usually enough to target a probe very specifically but it's a bit of a hassle.\n>\n> Another option is to use a systemtap script. You can write your code in systemtap with its language. When the systemtap marker for a probe point event fires, decide if it's the one you want and twiddle the target process variables that store the probe action and sleep duration from the systemtap script. 
I find this much more convenient for day to day testing, but because of systemtap portability challenges I don't find it as useful for writing regression tests for repeat use.\n>\n> A PROBE_POINT() actually emits dtrace/perf SDT markers if postgres was compiled with --enable-dtrace too, so you can use them with perf, systemtap, bpftrace or whatever for read-only use. Including optional arguments to the probe point. Exactly as if it was a TRACE_POSTGRESQL_foo point, but without needing to hack probes.d for each one.\n>\n> The PROBE_POINT() implementation can fake signal delivery with signal actions, which has been handy too.\n>\n> I also have a version of the code that takes arguments to the PROBE_POINT() and passes them to the handler function as a va_list too, with a compile-time-generated array of argument types inferred by C11 _Generic as the first argument. So your handler function can be passed probe-point-specific contextual info like the current xid being committed or whatever. This isn't currently deployed.\n>\n> The advantage of the PROBE_POINT() approach has been that it's generally very cheap to check whether a probe point should fire, and it's basically free to skip them if there are no probe points enabled right now. If we hashed the probe point names for the initial comparisons it'd be faster still.\n>\n> I will seek approval to share the relevant code.\n\nIt's nice to know that we've also worked in this direction. I was a\nbit surprised when I didn't find relevant patches published in the\nmailing lists. I hope you would be able to share the code, it would\nbe very nice to see.\n\n>> Following functions control behavior –\n>> * pg_stopevent_set(stopevent_name, jsonpath_conditon) – sets condition for the stop event. Once the function is executed, all the backends, which run a given stop event with parameters satisfying the given jsonpath condition, will be stopped.\n>> * pg_stopevent_reset(stopevent_name) – resets stop events. 
All the backends previously stopped on a given stop event will continue the execution.\n>\n>\n> Does that offer any way to affect early startup, late shutdown, servers in warm standby, etc? Or for that matter, any way to manipulate bgworkers and auxprocs or the postmaster itself, things you can't run a query on directly?\n\nUsing the current version of the patch you can manipulate bgworkers and\nauxprocs as soon as they're connected to the shmem. We can write\nqueries from another backend and the setting affects the whole\ncluster. I'm planning to add the ability to access the process\ninformation from the jsonpath condition. So, we would be able to\nchoose which process to stop on the stop event.\n\nEarly startup, late shutdown, servers in warm standby are not\nsupported yet. I think this could be done using GUCs and hooks +\ncustom extensions in a similar way to what you describe for\nPROBE_POINT().\n\nAlso, I don't think we need to support everything at once. It would\nbe nice to get something simple as soon as we have a clear roadmap of\nhow to add the rest of the features later.\n\n> Also, based on my experience using PROBE_POINT()s I would suggest that in addition to a stop or start \"event\", it's desirable to be able to elog(PANIC), elog(ERROR), elog(LOG), and/or sleep() for a certain duration. I've found all to be extremely useful.\n>\n>> In the code stop events are defined using macro STOPEVENT(event_id, params). The 'params' should be a function call, and it's evaluated only if stop events are enabled. pg_isolation_test_session_is_blocked() takes stop events into account.\n>\n>\n> Oooh, that I like.\n>\n> PROBE_POINT()s don't do that, and it's annoying.\n\nThank you for your feedback. I'm looking forward to seeing you publish\nthe PROBE_POINT() work.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 7 Dec 2020 20:31:21 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "Hi Alexander!\n\n> On 25 Nov 2020, at 19:10, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> \n> In the code stop events are defined using macro STOPEVENT(event_id, params). The 'params' should be a function call, and it's evaluated only if stop events are enabled. pg_isolation_test_session_is_blocked() takes stop events into account. So, stop events are suitable for isolation tests.\n\nThanks for this infrastructure. Looks like a really nice way to increase test coverage of most difficult things.\n\nCan we also somehow prove that test was deterministic? I.e. expect number of blocked backends (if known) or something like that.\nI'm not really sure it's useful, just an idea.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 8 Dec 2020 15:26:27 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 1:26 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 25 Nov 2020, at 19:10, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > In the code stop events are defined using macro STOPEVENT(event_id, params). The 'params' should be a function call, and it's evaluated only if stop events are enabled. pg_isolation_test_session_is_blocked() takes stop events into account. So, stop events are suitable for isolation tests.\n>\n> Thanks for this infrastructure. Looks like a really nice way to increase test coverage of most difficult things.\n>\n> Can we also somehow prove that test was deterministic? I.e. expect number of blocked backends (if known) or something like that.\n> I'm not really sure it's useful, just an idea.\n\nThank you for your feedback!\n\nI forgot to mention, patch comes with pg_stopevents() function which\nreturns rowset (stopevent text, condition jsonpath, waiters int[]).\nWaiters is an array of pids of waiting processes.\n\nAdditionally, isolation tester checks if a particular backend is\nwaiting using pg_isolation_test_session_is_blocked(), which works with\nstop events too.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 8 Dec 2020 13:41:56 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 2:42 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Thank you for your feedback!\n\nIt would be nice to use this patch to test things that are important\nbut untested inside vacuumlazy.c, such as the rare\nHEAPTUPLE_DEAD/tupgone case (grep for \"Ordinarily, DEAD tuples would\nhave been removed by...\"). Same is true of the closely related\nheap_prepare_freeze_tuple()/heap_tuple_needs_freeze() code.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 22 Feb 2021 16:08:51 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "Hi!\n\nOn Tue, Feb 23, 2021 at 3:09 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Dec 8, 2020 at 2:42 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > Thank you for your feedback!\n>\n> It would be nice to use this patch to test things that are important\n> but untested inside vacuumlazy.c, such as the rare\n> HEAPTUPLE_DEAD/tupgone case (grep for \"Ordinarily, DEAD tuples would\n> have been removed by...\"). Same is true of the closely related\n> heap_prepare_freeze_tuple()/heap_tuple_needs_freeze() code.\n\nI'll continue work on this patch. The rebased patch is attached. It\nimplements stop events as configure option (not runtime GUC option).\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 1 Sep 2022 03:51:45 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Tue, 23 Feb 2021 at 08:09, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Dec 8, 2020 at 2:42 AM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > Thank you for your feedback!\n>\n> It would be nice to use this patch to test things that are important\n> but untested inside vacuumlazy.c, such as the rare\n> HEAPTUPLE_DEAD/tupgone case (grep for \"Ordinarily, DEAD tuples would\n> have been removed by...\"). Same is true of the closely related\n> heap_prepare_freeze_tuple()/heap_tuple_needs_freeze() code.\n>\n\nThat's what the PROBE_POINT()s functionality I referenced is for, too.\n\nThe proposed stop events feature has finer grained control over when the\nevents fire than PROBE_POINT()s do. That's probably the main limitation in\nthe PROBE_POINT()s functionality right now - controlling it at runtime is a\nbit crude unless you opt for using a C test extension or a systemtap\nscript, and both those have other downsides.\n\nOn the other hand, PROBE_POINT()s are lighter weight when not actively\nturned on, to the point where they can be included in production builds to\nfacilitate support and runtime diagnostics. They interoperate very nicely\nwith static tracepoint markers (SDTs), the TRACE_POSTGRESQL_FOO(...) stuff,\nso there's no need for yet another separate set of debug markers scattered\nthrough the code. They can perform a wider set of actions useful for\ntesting and diagnostics - PANIC the current backend, self-deliver an\narbitrary signal, force a LOG message, introduce an interruptible or\nuninterruptible sleep, send a message to the client if any (handy for\nregress tests), or fire an extension-defined callback function.\n\nI'd like to find a way to get the best of both worlds if possible.\n\nRather than completely sidetrack the thread on this patch I posted the\nPROBE_POINT()s patch on a separate thread here.",
"msg_date": "Tue, 18 Oct 2022 09:06:14 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 20:52, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> I'll continue work on this patch. The rebased patch is attached. It\n> implements stop events as configure option (not runtime GUC option).\n\nIt looks like this patch isn't going to be ready this commitfest. And\nit hasn't received much discussion since August 2022. If I'm wrong say\nsomething but otherwise I'll mark it Returned With Feedback. It can be\nresurrected (and moved to the next commitfest) when you're free to\nwork on it again.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 28 Mar 2023 14:44:21 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 9:44 PM Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n> On Wed, 31 Aug 2022 at 20:52, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > I'll continue work on this patch. The rebased patch is attached. It\n> > implements stop events as configure option (not runtime GUC option).\n>\n> It looks like this patch isn't going to be ready this commitfest. And\n> it hasn't received much discussion since August 2022. If I'm wrong say\n> something but otherwise I'll mark it Returned With Feedback. It can be\n> resurrected (and moved to the next commitfest) when you're free to\n> work on it again.\n\nI'm good with that.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 28 Mar 2023 22:37:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: POC: Better infrastructure for automated testing of concurrency\n issues"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI wonder if it is considered as correct behavior that transaction \nconflict detection depends on presence of primary key:\n\ncreate table t(pk integer, val integer);\ninsert into t values (1,0),(2,0);\n\nSession1: begin isolation level serializable;\nSession2: begin isolation level serializable;\nSession1: update t set val=val+1 where pk=1;\nSession2: update t set val=val+1 where pk=2;\nSession1: commit;\nSession2: commit;\nERROR: could not serialize access due to read/write dependencies among \ntransactions\nDETAIL: Reason code: Canceled on identification as a pivot, during \ncommit attempt.\nHINT: The transaction might succeed if retried.\n\n\nNow let's repeat the same scenario but with \"pk\" declared as primary key:\n\ncreate table t(pk integer primary key, val integer);\ninsert into t values (1,0),(2,0);\n\nSession1: begin isolation level serializable;\nSession2: begin isolation level serializable;\nSession1: update t set val=val+1 where pk=1;\nSession2: update t set val=val+1 where pk=2;\nSession1: commit;\nSession2: commit;\n\nNow both transactions are succeeded.\nPlease notice, that even if it is expected behavior, hint in error \nmessage is not correct, because transaction is actually aborted and \nthere is no chance to retry it.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 25 Nov 2020 18:14:44 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Transaction isolation and table contraints"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 8:14 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n> Hi hackers,\n>\n> I wonder if it is considered as correct behavior that transaction\n> conflict detection depends on presence of primary key:\n>\n> create table t(pk integer, val integer);\n> insert into t values (1,0),(2,0);\n>\n> ERROR: could not serialize access due to read/write dependencies among\n> transactions\n> [...]\n> Now let's repeat the same scenario but with \"pk\" declared as primary key:\n>\n> create table t(pk integer primary key, val integer);\n> insert into t values (1,0),(2,0);\n>\n> Now both transactions are succeeded.\n>\n\n From the docs:\n\n\"A sequential scan will always necessitate a relation-level predicate lock.\nThis can result in an increased rate of serialization failures.\"\n\nThe two seem possibly related (I'm not experienced with using serializable)\n\nhttps://www.postgresql.org/docs/current/transaction-iso.html#XACT-SERIALIZABLE\n\n\n> Please notice, that even if it is expected behavior, hint in error\n> message is not correct, because transaction is actually aborted and\n> there is no chance to retry it.\n>\n>\nIt is technically correct, it just doesn't describe precisely how to\nperform said retry, leaving that up to the reader to glean from the\ndocumentation.\n\nThe hint assumes users of serializable isolation mode are familiar with\ntransaction mechanics. In particular, the application needs to be prepared\nto retry failed transactions, and part of that is knowing that PostgreSQL\nwill not automatically rollback a failed transaction for you, it is\nsomething that must be done as part of the error detection and recovery\ninfrastructure that is needed when writing applications that utilize\nserializable.\n\nDavid J.",
"msg_date": "Wed, 25 Nov 2020 09:21:35 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction isolation and table constraints"
}
] |
[
{
"msg_contents": "Improving spin-lock implementation on ARM.\n------------------------------------------------------------\n\n* Spin-Lock is known to have a significant effect on performance\n with increasing scalability.\n\n* Existing Spin-Lock implementation for ARM is sub-optimal due to\n use of TAS (test-and-set)\n\n* TAS is implemented on ARM as load-store, so even if the lock is not free,\n the store operation will execute to replace the same value.\n This redundant operation (mainly the store) is costly.\n\n* CAS is implemented on ARM as load-check-store-check, which means that if\n the lock is not free, the check after the load will cause the loop to\n return, thereby saving the costlier store operation. [1]\n\n* x86 uses an optimized xchg operation.\n ARM too started supporting it (using Large System Extensions) with\n ARM-v8.1, but since it is not supported on ARM-v8, GCC by default tends\n to emit the more generic load-store assembly code.\n\n* From gcc-9.4 onwards there is support for outline-atomics, which emits\n both variants of the code (load-store and cas/swp) and selects the proper\n variant based on the underlying architecture, but a lot of distros still\n don't ship GCC-9.4 as the default compiler.\n\n* In light of this, we would like to propose a CAS-based approach that,\n based on our local testing, has shown improvement in the range of 10-40%.\n (attaching graph).\n\n* The patch enables the CAS-based approach if CAS is supported, based on\n the existing compile flag HAVE_GCC__ATOMIC_INT32_CAS\n\n(Thanks to Amit Khandekar for rigorously performance testing this patch\nwith different combinations).\n\n[1]: https://godbolt.org/z/jqbEsa\n\nP.S: Sorry if I missed any standard pgsql protocol since I am just starting\nwith pgsql.\n\n---\nKrunal Bauskar\n#mysqlonarm\nHuawei Technologies",
"msg_date": "Thu, 26 Nov 2020 10:00:50 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 10:00:50AM +0530, Krunal Bauskar wrote:\n> (Thanks to Amit Khandekar for rigorously performance testing this patch\n> with different combinations).\n\nFor the simple-update and tpcb-like graphs, do you have any actual\nnumbers to share between 128 and 1024 connections? The blue lines\nlook like they are missing some measurements in-between, so it is hard\nto tell if this is an actual improvement or just some lack of data.\n--\nMichael",
"msg_date": "Thu, 26 Nov 2020 14:06:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "scalability baseline patched\n----------- --------- ----------\n update tpcb update tpcb\n--------------------------------------------------------------\n128 107932 78554 108081 78569\n256 82877 64682 101543 73774\n512 55174 46494 77886 61105\n1024 32267 27020 33170 30597\n\nconfiguration:\nhttps://github.com/mysqlonarm/benchmark-suites/blob/master/pgsql-pbench/conf/pgsql.cnf/postgresql.conf\n\n\nOn Thu, 26 Nov 2020 at 10:36, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Nov 26, 2020 at 10:00:50AM +0530, Krunal Bauskar wrote:\n> > (Thanks to Amit Khandekar for rigorously performance testing this patch\n> > with different combinations).\n>\n> For the simple-update and tpcb-like graphs, do you have any actual\n> numbers to share between 128 and 1024 connections? The blue lines\n> look like they are missing some measurements in-between, so it is hard\n> to tell if this is an actual improvement or just some lack of data.\n> --\n> Michael\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Thu, 26 Nov 2020 10:44:41 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Nov 26, 2020 at 10:00:50AM +0530, Krunal Bauskar wrote:\n>> (Thanks to Amit Khandekar for rigorously performance testing this patch\n>> with different combinations).\n\n> For the simple-update and tpcb-like graphs, do you have any actual\n> numbers to share between 128 and 1024 connections?\n\nAlso, exactly what hardware/software platform were these curves\nobtained on?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Nov 2020 00:20:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, 26 Nov 2020 at 10:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Thu, Nov 26, 2020 at 10:00:50AM +0530, Krunal Bauskar wrote:\n> >> (Thanks to Amit Khandekar for rigorously performance testing this patch\n> >> with different combinations).\n>\n> > For the simple-update and tpcb-like graphs, do you have any actual\n> > numbers to share between 128 and 1024 connections?\n>\n> Also, exactly what hardware/software platform were these curves\n> obtained on?\n>\n\nHardware: ARM Kunpeng 920 BareMetal Server 2.6 GHz. 64 cores (56 cores for\nserver and 8 for client) [2 numa nodes]\nStorage: 3.2 TB NVMe SSD\nOS: CentOS Linux release 7.6\nPGSQL: baseline = Release Tag 13.1\nInvocation suite:\nhttps://github.com/mysqlonarm/benchmark-suites/tree/master/pgsql-pbench (Uses\npgbench)\n\n\n\n> regards, tom lane\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Thu, 26 Nov 2020 10:54:22 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, 26 Nov 2020 at 10:55, Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> Hardware: ARM Kunpeng 920 BareMetal Server 2.6 GHz. 64 cores (56 cores for server and 8 for client) [2 numa nodes]\n> Storage: 3.2 TB NVMe SSD\n> OS: CentOS Linux release 7.6\n> PGSQL: baseline = Release Tag 13.1\n> Invocation suite: https://github.com/mysqlonarm/benchmark-suites/tree/master/pgsql-pbench (Uses pgbench)\n\nUsing the same hardware, attached are my improvement figures, which\nare pretty much in line with your figures. Except that, I did not run\nfor more than 400 number of clients. And, I am getting some\nimprovement even for select-only workloads, in case of 200-400\nclients. For read-write load, I had seen that the s_lock() contention\nwas caused when the XLogFlush() uses the spinlock. But for read-only\ncase, I have not analyzed where the improvement occurred.\n\nThe .png files in the attached tar have the graphs for head versus patch.\n\nThe GUCs that I changed :\n\nwork_mem=64MB\nshared_buffers=128GB\nmaintenance_work_mem = 1GB\nmin_wal_size = 20GB\nmax_wal_size = 100GB\ncheckpoint_timeout = 60min\ncheckpoint_completion_target = 0.9\nfull_page_writes = on\nsynchronous_commit = on\neffective_io_concurrency = 200\nlog_checkpoints = on\n\nFor backends, 64 CPUs were allotted (covering 2 NUMA nodes) , and for\npgbench clients a separate set of 28 CPUs were allotted on a different\nsocket. Server was pre_warmed().",
"msg_date": "Thu, 26 Nov 2020 12:18:12 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On 26/11/2020 06:30, Krunal Bauskar wrote:\n> Improving spin-lock implementation on ARM.\n> ------------------------------------------------------------\n> \n> * Spin-Lock is known to have a significant effect on performance\n> with increasing scalability.\n> \n> * Existing Spin-Lock implementation for ARM is sub-optimal due to\n> use of TAS (test and swap)\n> \n> * TAS is implemented on ARM as load-store so even if the lock is not free,\n> store operation will execute to replace the same value.\n> This redundant operation (mainly store) is costly.\n> \n> * CAS is implemented on ARM as load-check-store-check that means if the\n> lock is not free, check operation, post-load will cause the loop to\n> return there-by saving on costlier store operation. [1]\n\nCan you add some code comments to explain why CAS is cheaper than \nTAS on ARM?\n\nIs there some official ARM documentation, like a programmer's reference \nmanual or something like that, that would show a reference \nimplementation of a spinlock on ARM? It would be good to refer to an \nauthoritative source on this.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 26 Nov 2020 12:32:37 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 1:32 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 26/11/2020 06:30, Krunal Bauskar wrote:\n> > Improving spin-lock implementation on ARM.\n> > ------------------------------------------------------------\n> >\n> > * Spin-Lock is known to have a significant effect on performance\n> > with increasing scalability.\n> >\n> > * Existing Spin-Lock implementation for ARM is sub-optimal due to\n> > use of TAS (test and swap)\n> >\n> > * TAS is implemented on ARM as load-store so even if the lock is not free,\n> > store operation will execute to replace the same value.\n> > This redundant operation (mainly store) is costly.\n> >\n> > * CAS is implemented on ARM as load-check-store-check that means if the\n> > lock is not free, check operation, post-load will cause the loop to\n> > return there-by saving on costlier store operation. [1]\n>\n> Can you add some code comments to explain that why CAS is cheaper than\n> TAS on ARM?\n>\n> Is there some official ARM documentation, like a programmer's reference\n> manual or something like that, that would show a reference\n> implementation of a spinlock on ARM? It would be good to refer to an\n> authoritative source on this.\n\nLet me add my 2 cents.\n\nI've compared assembly output of gcc implementations of CAS and TAS.\nThe sample C-program is attached. I've compiled it on raspberry pi 4\nusing gcc 9.3.0.\n\nThe inner loop of CAS is as follows. So, if the value loaded by ldaxr\ndoesn't match expected value, then we immediately quit the loop.\n\n.L3:\n ldxr w3, [x0]\n cmp w3, w1\n bne .L4\n stlxr w4, w2, [x0]\n cbnz w4, .L3\n.L4:\n\nThe inner loop of TAS is as follows. So it really does \"stxr\"\nunconditionally. In principle, it could check if a new value matches\nthe observed value and there is nothing to do, but it doesn't.\nMoreover, stxr might fail, then we can spend multiple loops of\nldxr/stxr due to concurrent memory access. 
AFAIK, those concurrent\naccesses could reflect not only lock release, but also other\nunsuccessful lock attempts. So, good chance for extra loops to be\nuseless.\n\n.L7:\n ldxr w2, [x0]\n stxr w3, w1, [x0]\n cbnz w3, .L7\n\nI've also googled for spinlock implementation on arm and found a blog\npost about spinlock implementation in linux kernel [1]. Surprisingly\nit doesn't use the trick to skip stxr if the lock is busy. Instead,\nthey use some arm-specific power-saving option called WFE.\n\nSo, I think it's quite clear that switching from TAS to CAS on arm\nwould be a win. But there could be other options to do this even\nbetter.\n\nLinks\n1. https://linux-concepts.blogspot.com/2018/05/spinlock-implementation-in-arm.html\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 26 Nov 2020 14:31:29 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, 26 Nov 2020 at 16:02, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 26/11/2020 06:30, Krunal Bauskar wrote:\n> > Improving spin-lock implementation on ARM.\n> > ------------------------------------------------------------\n> >\n> > * Spin-Lock is known to have a significant effect on performance\n> > with increasing scalability.\n> >\n> > * Existing Spin-Lock implementation for ARM is sub-optimal due to\n> > use of TAS (test and swap)\n> >\n> > * TAS is implemented on ARM as load-store so even if the lock is not\n> free,\n> > store operation will execute to replace the same value.\n> > This redundant operation (mainly store) is costly.\n> >\n> > * CAS is implemented on ARM as load-check-store-check that means if the\n> > lock is not free, check operation, post-load will cause the loop to\n> > return there-by saving on costlier store operation. [1]\n>\n> Can you add some code comments to explain that why CAS is cheaper than\n> TAS on ARM?\n>\n\n1. As Alexey too pointed out in followup email\nCAS = load value -> check if the value is expected -> if yes then replace\n(store new value) -> else exit/break\nTAS = load value -> store new value\n\nThis means each TAS would be converted to 2 operations that are LOAD and\nSTORE and of-course\nSTORE is costlier in terms of latency. CAS ensures optimization in this\nregard with an early check.\n\nLet's look at some micro-benchmarking. 
I implemented a simple spin-loop\nusing both approaches and,\nas expected, with increasing scalability CAS continues to outperform TAS by\na margin of up to 50%.\n\n---- TAS ----\nRunning 128 parallelism\nElapsed time: 1.34271 s\nRunning 256 parallelism\nElapsed time: 3.6487 s\nRunning 512 parallelism\nElapsed time: 11.3725 s\nRunning 1024 parallelism\nElapsed time: 43.5376 s\n---- CAS ----\nRunning 128 parallelism\nElapsed time: 1.00131 s\nRunning 256 parallelism\nElapsed time: 2.53202 s\nRunning 512 parallelism\nElapsed time: 7.66829 s\nRunning 1024 parallelism\nElapsed time: 22.6294 s\n\nThis can also be observed from the perf profiling:\n\nTAS:\n 15.57 │44: ldxr w0, [x19]\n 83.93 │ stxr w1, w21, [x19]\n\nCAS:\n 81.29 │58: ↓ b.ne cc\n....\n 9.86 │cc: ldaxr w0, [x22]\n 8.84 │ cmp w0, #0x0\n │ ↑ b.ne 58\n │ stxr w1, w20, [x22]\n\n*In TAS: STORE is pretty costly.*\n\n2. I have added the needed comment in the patch. Updated patch attached.\n\n----------------------\nThanks for taking a look at this, and do let me know if any more info is\nneeded.\n\n\n>\n> Is there some official ARM documentation, like a programmer's reference\n> manual or something like that, that would show a reference\n> implementation of a spinlock on ARM? It would be good to refer to an\n> authoritative source on this.\n>\n> - Heikki\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Thu, 26 Nov 2020 18:10:31 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> On Thu, 26 Nov 2020 at 10:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also, exactly what hardware/software platform were these curves\n>> obtained on?\n\n> Hardware: ARM Kunpeng 920 BareMetal Server 2.6 GHz. 64 cores (56 cores for\n> server and 8 for client) [2 numa nodes]\n> Storage: 3.2 TB NVMe SSD\n> OS: CentOS Linux release 7.6\n> PGSQL: baseline = Release Tag 13.1\n\nHmm, might not be the sort of hardware ordinary mortals can get their\nhands on. What's likely to be far more common ARM64 hardware in the\nnear future is Apple's new gear. So I thought I'd try this on the new\nM1 mini I just got.\n\n... and, after retrieving my jaw from the floor, I present the\nattached. Apple's chips evidently like this style of spinlock a LOT\nbetter. The difference is so remarkable that I wonder if I made a\nmistake somewhere. Can anyone else replicate these results?\n\nTest conditions are absolutely brain dead:\n\nToday's HEAD (dcfff74fb), no special build options\n\nAll server parameters are out-of-the-box defaults, except\nI had to raise max_connections for the larger client counts\n\npgbench scale factor 100\n\nRead-only tests are like\n\tpgbench -S -T 60 -c 32 -j 16 bench\nQuoted figure is median of three runs; except for the lowest\nclient count, results were quite repeatable. (I speculate that\nat -c 4, the scheduler might've been doing something funny about\nsometimes using the slow cores instead of fast cores.)\n\nRead-write tests are like\n\tpgbench -T 300 -c 16 -j 8 bench\nI didn't have the patience to run three full repetitions,\nbut again the numbers seemed pretty repeatable.\n\nI used -j equal to half -c, except I could not get -j above 128\nto work, so the larger client counts have -j 128. Did not try\nto run down that problem yet, but I'm probably hitting some ulimit\nsomewhere. 
(I did have to raise \"ulimit -n\" to get these results.)\n\nAnyway, this seems to be a slam-dunk win on M1.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Nov 2020 17:55:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Thu, Nov 26, 2020 at 1:32 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> Is there some official ARM documentation, like a programmer's reference\n>> manual or something like that, that would show a reference\n>> implementation of a spinlock on ARM? It would be good to refer to an\n>> authoritative source on this.\n\n> I've compared assembly output of gcc implementations of CAS and TAS.\n\nFWIW, I see quite different assembly using Apple's clang on their M1\nprocessor. What I get for SpinLockAcquire on HEAD is (lock pointer\ninitially in x0):\n\n\tmov\tx19, x0\n\tmov\tw8, #1\n\tswpal\tw8, w8, [x0]\n\tcbz\tw8, LBB0_2\n\tadrp\tx1, l_.str@PAGE\n\tadd\tx1, x1, l_.str@PAGEOFF\n\tadrp\tx3, l___func__.foo@PAGE\n\tadd\tx3, x3, l___func__.foo@PAGEOFF\n\tmov\tx0, x19\n\tmov\tw2, #12\n\tbl\t_s_lock\nLBB0_2:\n\t... lock is acquired\n\nwhile SpinLockRelease is just\n\n\tstlr\twzr, [x19]\n\nWith the patch, I get\n\n\tmov\tx19, x0\n\tmov\tw8, #0\n\tmov\tw9, #1\n\tcasa\tw8, w9, [x0]\n\tcmp\tw8, #0 ; =0\n\tb.eq\tLBB0_2\n\tadrp\tx1, l_.str@PAGE\n\tadd\tx1, x1, l_.str@PAGEOFF\n\tadrp\tx3, l___func__.foo@PAGE\n\tadd\tx3, x3, l___func__.foo@PAGEOFF\n\tmov\tx0, x19\n\tmov\tw2, #12\n\tbl\t_s_lock\nLBB0_2:\n\t... lock is acquired\n\nand SpinLockRelease is the same.\n\nDon't know much of anything about ARM assembly, so I don't\nknow if these instructions are late-model-only.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Nov 2020 18:20:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 1:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> > On Thu, 26 Nov 2020 at 10:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Also, exactly what hardware/software platform were these curves\n> >> obtained on?\n>\n> > Hardware: ARM Kunpeng 920 BareMetal Server 2.6 GHz. 64 cores (56 cores for\n> > server and 8 for client) [2 numa nodes]\n> > Storage: 3.2 TB NVMe SSD\n> > OS: CentOS Linux release 7.6\n> > PGSQL: baseline = Release Tag 13.1\n>\n> Hmm, might not be the sort of hardware ordinary mortals can get their\n> hands on. What's likely to be far more common ARM64 hardware in the\n> near future is Apple's new gear. So I thought I'd try this on the new\n> M1 mini I just got.\n>\n> ... and, after retrieving my jaw from the floor, I present the\n> attached. Apple's chips evidently like this style of spinlock a LOT\n> better. The difference is so remarkable that I wonder if I made a\n> mistake somewhere. Can anyone else replicate these results?\n\nResults look very surprising to me. I didn't expect there would be\nany very busy spin-lock when the number of clients is as low as 4.\nEspecially in read-only pgbench.\n\nI don't have an M1 at hand. Could you do some profiling to identify\nthe source of such a huge difference.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 27 Nov 2020 10:23:09 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 2:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > On Thu, Nov 26, 2020 at 1:32 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> Is there some official ARM documentation, like a programmer's reference\n> >> manual or something like that, that would show a reference\n> >> implementation of a spinlock on ARM? It would be good to refer to an\n> >> authoritative source on this.\n>\n> > I've compared assembly output of gcc implementations of CAS and TAS.\n>\n> FWIW, I see quite different assembly using Apple's clang on their M1\n> processor. What I get for SpinLockAcquire on HEAD is (lock pointer\n> initially in x0):\n\nYep, arm v8.1 implements single-instruction atomic operations swpal\nand casa, which look much more like x86 atomic instructions than\nloops of ldxr/stlxr.\n\nSo, all the reasoning upthread shouldn't work here, but the advantage\nis even bigger.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 27 Nov 2020 10:27:08 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Fri, Nov 27, 2020 at 1:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... and, after retrieving my jaw from the floor, I present the\n>> attached. Apple's chips evidently like this style of spinlock a LOT\n>> better. The difference is so remarkable that I wonder if I made a\n>> mistake somewhere. Can anyone else replicate these results?\n\n> Results look very surprising to me. I didn't expect there would be\n> any very busy spin-lock when the number of clients is as low as 4.\n\nYeah, that wasn't making sense to me either. The most likely explanation\nseems to be that I messed up the test somehow ... but I don't see where.\nSo, again, I'm wondering if anyone else can replicate or refute this.\nI can't be the only geek around here who sprang for an M1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Nov 2020 02:50:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 02:50:30AM -0500, Tom Lane wrote:\n> Yeah, that wasn't making sense to me either. The most likely explanation\n> seems to be that I messed up the test somehow ... but I don't see where.\n> So, again, I'm wondering if anyone else can replicate or refute this.\n\nI do find your results extremely surprising not only for 4, but for\nall tests with connection numbers lower than 32. With a scale factor\nof 100 that's suspiciously a lot of difference.\n\n> I can't be the only geek around here who sprang for an M1.\n\nNot planning to buy one here, anything I have read on that tells that\nit is worth a performance study.\n--\nMichael",
"msg_date": "Fri, 27 Nov 2020 17:54:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 11:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Not planning to buy one here, anything I have read on that tells that\n> it is worth a performance study.\n\nAnother interesting area for experiments is AWS graviton2 instances.\nSpecification says it supports arm v8.2, so it should have swpal/casa\ninstructions as well.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 27 Nov 2020 12:07:26 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On 2020-11-26 23:55, Tom Lane wrote:\n> ... and, after retrieving my jaw from the floor, I present the\n> attached. Apple's chips evidently like this style of spinlock a LOT\n> better. The difference is so remarkable that I wonder if I made a\n> mistake somewhere. Can anyone else replicate these results?\n\nI tried this on a M1 MacBook Air. I cannot reproduce these results. \nThe unpatched numbers are about in the neighborhood of what you showed, \nbut the patched numbers are only about a few percent better, not the \n1.5x or 2x change that you showed.\n\n\n\n",
"msg_date": "Fri, 27 Nov 2020 14:57:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I tried this on a M1 MacBook Air. I cannot reproduce these results. \n> The unpatched numbers are about in the neighborhood of what you showed, \n> but the patched numbers are only about a few percent better, not the \n> 1.5x or 2x change that you showed.\n\nAfter redoing the test, I can't find any outside-the-noise difference\nat all between HEAD and the patch. So clearly, I screwed up yesterday.\nThe most likely theory is that I managed to measure an assert-enabled\nbuild of HEAD.\n\nIt might be that this hardware is capable of showing a difference with a\nbetter-tuned pgbench test, but with an untuned pgbench run, we just aren't\nsufficiently sensitive to the spinlock properties. (Which I guess is good\nnews, really.)\n\nOne thing that did hold up is that the thermal performance of this box\nis pretty ridiculous. After being beat on for a solid hour, the fan\nstill hasn't turned on to any noticeable level, and the enclosure is\nonly a little warm to the touch. Try that with Intel hardware ;-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Nov 2020 18:45:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "I wrote:\n> It might be that this hardware is capable of showing a difference with a\n> better-tuned pgbench test, but with an untuned pgbench run, we just aren't\n> sufficiently sensitive to the spinlock properties. (Which I guess is good\n> news, really.)\n\nIt occurred to me that if we don't insist on a semi-realistic test case,\nit's not that hard to just pound on a spinlock and see what happens.\nI made up a simple C function (attached) to repeatedly call\nXLogGetLastRemovedSegno, which is basically just a spinlock\nacquire/release. Using this as a \"transaction\":\n\n$ cat bench.sql\nselect drive_spinlocks(50000);\n\nI get this with HEAD:\n\n$ pgbench -f bench.sql -n -T 60 -c 1 bench\ntransaction type: bench.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 127597\nlatency average = 0.470 ms\ntps = 2126.479699 (including connections establishing)\ntps = 2126.595015 (excluding connections establishing)\n\n$ pgbench -f bench.sql -n -T 60 -c 2 bench\ntransaction type: bench.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 2\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 108979\nlatency average = 1.101 ms\ntps = 1816.051930 (including connections establishing)\ntps = 1816.150556 (excluding connections establishing)\n\n$ pgbench -f bench.sql -n -T 60 -c 4 bench\ntransaction type: bench.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 42862\nlatency average = 5.601 ms\ntps = 714.202152 (including connections establishing)\ntps = 714.237301 (excluding connections establishing)\n\n(With only 4 high-performance cores, it's probably not\ninteresting to go further; involving the slower cores\nwill just confuse matters.) 
And this with the patch:\n\n$ pgbench -f bench.sql -n -T 60 -c 1 bench\ntransaction type: bench.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 130455\nlatency average = 0.460 ms\ntps = 2174.098284 (including connections establishing)\ntps = 2174.217097 (excluding connections establishing)\n\n$ pgbench -f bench.sql -n -T 60 -c 2 bench\ntransaction type: bench.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 2\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 51533\nlatency average = 2.329 ms\ntps = 858.765176 (including connections establishing)\ntps = 858.811132 (excluding connections establishing)\n\n$ pgbench -f bench.sql -n -T 60 -c 4 bench\ntransaction type: bench.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 31154\nlatency average = 7.705 ms\ntps = 519.116788 (including connections establishing)\ntps = 519.144375 (excluding connections establishing)\n\nSo at least on Apple's hardware, it seems like the CAS\nimplementation might be a shade faster when uncontended,\nbut it's very clearly worse when there is contention for\nthe spinlock. That's interesting, because the argument\nthat CAS should involve strictly less work seems valid ...\nbut that's what I'm getting.\n\nIt might be useful to try this on other ARM platforms,\nbut I lack the energy right now (plus the only other\nthing I've got is a Raspberry Pi, which might not be\nsomething we particularly care about performance-wise).\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 27 Nov 2020 21:35:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
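The drive_spinlocks() C function Tom used above was sent as an attachment and is not reproduced in the archive; a minimal standalone sketch of the loop it times (hypothetical names, plain C11 atomics standing in for PostgreSQL's slock_t/SpinLockAcquire, so this is an analogue rather than the actual attachment) could look like:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Stand-in for the spinlock-protected value that
 * XLogGetLastRemovedSegno() returns: take the lock, copy the value,
 * release the lock. */
static atomic_flag mutex = ATOMIC_FLAG_INIT;
static uint64_t lastRemovedSegNo = 0;

static uint64_t
get_last_removed_segno(void)
{
    uint64_t segno;

    while (atomic_flag_test_and_set_explicit(&mutex, memory_order_acquire))
        ;                       /* spin until the flag was clear */
    segno = lastRemovedSegNo;
    atomic_flag_clear_explicit(&mutex, memory_order_release);
    return segno;
}

/* The "transaction" body: pound the spinlock count times,
 * analogous to select drive_spinlocks(50000). */
static uint64_t
drive_spinlocks(long count)
{
    uint64_t last = 0;

    while (count-- > 0)
        last = get_last_removed_segno();
    return last;
}
```

Running several such backends concurrently is what turns this into a pure contended-spinlock workload, since the loop does almost nothing but acquire and release.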
{
"msg_contents": "On Sat, Nov 28, 2020 at 5:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So at least on Apple's hardware, it seems like the CAS\n> implementation might be a shade faster when uncontended,\n> but it's very clearly worse when there is contention for\n> the spinlock. That's interesting, because the argument\n> that CAS should involve strictly less work seems valid ...\n> but that's what I'm getting.\n>\n> It might be useful to try this on other ARM platforms,\n> but I lack the energy right now (plus the only other\n> thing I've got is a Raspberry Pi, which might not be\n> something we particularly care about performance-wise).\n\nI guess that might depend on the implementation of CAS and TAS. I bet\nusage of CAS in spinlock gives advantage when ldxr/stxr are used, but\nnot when swpal/casa are used. I found out that I can force clang to\nuse swpal/casa by setting \"-march=armv8-a+lse\". I'm going to make\nsome experiments on a multicore AWS graviton2 instance with different\natomic implementation.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 28 Nov 2020 13:31:28 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
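The TAS-versus-CAS distinction Alexander describes can be sketched with C11 atomics (a simplified illustration, not PostgreSQL's actual s_lock.h code). On plain ARMv8.0 both builtins lower to exclusive load/store (ldaxr/stxr) retry loops; with "-march=armv8-a+lse" the compiler can instead emit the single-instruction swp and cas forms:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Test-and-set spinlock: unconditionally stores 1 and inspects the
 * previous value.  With LSE this can compile to a single swpab. */
typedef atomic_flag tas_lock_t;

static inline bool
tas_try_lock(tas_lock_t *lock)
{
    /* the builtin returns the previous state: false means we got it */
    return !atomic_flag_test_and_set_explicit(lock, memory_order_acquire);
}

static inline void
tas_unlock(tas_lock_t *lock)
{
    atomic_flag_clear_explicit(lock, memory_order_release);
}

/* Compare-and-swap spinlock: only writes when the lock is observed
 * free.  With LSE this can compile to a single casa. */
typedef _Atomic int cas_lock_t;

static inline bool
cas_try_lock(cas_lock_t *lock)
{
    int expected = 0;

    return atomic_compare_exchange_strong_explicit(
        lock, &expected, 1,
        memory_order_acquire, memory_order_relaxed);
}

static inline void
cas_unlock(cas_lock_t *lock)
{
    atomic_store_explicit(lock, 0, memory_order_release);
}
```

Compiling this file with and without "-march=armv8-a+lse" and inspecting the disassembly is a quick way to confirm which instruction forms a given toolchain actually emits.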
{
"msg_contents": "On Sat, Nov 28, 2020 at 1:31 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I guess that might depend on the implementation of CAS and TAS. I bet\n> usage of CAS in spinlock gives advantage when ldxr/stxr are used, but\n> not when swpal/casa are used. I found out that I can force clang to\n> use swpal/casa by setting \"-march=armv8-a+lse\". I'm going to make\n> some experiments on a multicore AWS graviton2 instance with different\n> atomic implementation.\n\nI've made some benchmarks on c6gd.16xlarge ec2 instance with graviton2\nprocessor of 64 virtual CPUs (graphs and raw results are attached).\nI've analyzed two patches: spinlock using cas by Krunal Bauskar, and\nmy implementation of lwlock using lwrex/strex. My arm lwlock patch\nhas the same idea as my previous patch for power: we can put lwlock\nattempt logic between lwrex and strex. In spite of my previous power\npatch, the arm patch doesn't contain assembly: instead I've used\nC-wrappers over lwrex/strex.\n\nThe first series of experiments I've made using standard compiling\noptions. So, LSE instructions from ARM v8.1 weren't used. Atomics\nwere implemented using lwrex/strex pair.\n\nIn the read-only benchmark, both spinlock (cas-spinlock graph) and\nlwlock (ldrew-strex-lwlock graph) patches give observable performance\ngain of similar value. However, performance of combination of these\npatches (ldrew-strex-lwlock-cas-spinlock graph) is close to\nperformance of unpatched version. That could be counterintuitive, but\nI've rechecked that multiple times.\n\nIn the read-write benchmark, both spinlock and lwlock patches give\nmore significant performance gain, and lwlock patch gives more effect\nthan spinlock patch. Noticeable, that combination of patches now\ngives some cumulative effect instead of counterintuitive slowdown.\n\nThen I've tried to compile postgres with LSE instruction using\n\"-march=armv8-a+lse\" flag with clang (graphs with -lse suffix). The\neffect of LSE is HUGE!!! 
Unpatched version with LSE is times faster\nthan any version without LSE on high concurrency. In the both\nread-only and read-write benchmarks spinlock patch doesn't show any\nsignificant difference. The lwlock patch shows a great slowdown with\nLSE. Noticeable, in read-write benchmark, lwlock patch shows worse\nresults than unpatched version without LSE. Probably, combining\ndifferent atomics implementations isn't a good idea.\n\nIt seems that ARM Kunpeng 920 should support ARM v8.1. I wonder if\nthe published benchmarks results were made with LSE. I suspect that\nit was not. It would be nice to repeat the same benchmarks with LSE.\nI'd like to ask Krunal Bauskar and Amit Khandekar to repeat these\nbenchmarks with LSE.\n\nMy preliminary conclusions are so:\n1) Since the effect of LSE is so huge, we should advise users of\nmulticore ARM servers to compile PostgreSQL with LSE support. We\nprobably should provide separate packaging for ARM v8.1 and higher\n(packages for ARM v8 are still needed for raspberry etc).\n2) It seems that atomics in ARM v8.1 becomes very similar to x86\natomics, and it doesn't need special optimizations. And I think ARM\nv8 processors don't have so many cores and aren't so heavily used in\nhigh-concurrent environments. So, special optimizations for ARM v8\nprobably aren't worth it.\n\nLinks\n1. https://www.postgresql.org/message-id/CAB10pyamDkTFWU_BVGeEVmkc8%3DEhgCjr6QBk02SCdJtKpHkdFw%40mail.gmail.com\n2. https://www.postgresql.org/message-id/CAPpHfdsKrh7c7P8-5eG-qW3VQobybbwqH%3DgL5Ck%2BdOES-gBbFg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 29 Nov 2020 19:53:44 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 7:35 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> * x86 uses optimized xchg operation.\n> ARM too started supporting it (using Large System Extension) with\n> ARM-v8.1 but since it not supported with ARM-v8, GCC default tends\n> to roll more generic load-store assembly code.\n>\n> * gcc-9.4+ onwards there is support for outline-atomics that could emit\n> both the variants of the code (load-store and cas/swp) and based on\n> underlying supported architecture proper variant it used but still a lot\n> of distros don't support GCC-9.4 as the default compiler.\n\nI've checked that gcc 9.3 with \"-march=armv8-a+lse\" option uses LSE atomics...\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 29 Nov 2020 20:07:58 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Sun, 29 Nov 2020 at 22:23, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Sat, Nov 28, 2020 at 1:31 PM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > I guess that might depend on the implementation of CAS and TAS. I bet\n> > usage of CAS in spinlock gives advantage when ldxr/stxr are used, but\n> > not when swpal/casa are used. I found out that I can force clang to\n> > use swpal/casa by setting \"-march=armv8-a+lse\". I'm going to make\n> > some experiments on a multicore AWS graviton2 instance with different\n> > atomic implementation.\n>\n> I've made some benchmarks on c6gd.16xlarge ec2 instance with graviton2\n> processor of 64 virtual CPUs (graphs and raw results are attached).\n> I've analyzed two patches: spinlock using cas by Krunal Bauskar, and\n> my implementation of lwlock using lwrex/strex. My arm lwlock patch\n> has the same idea as my previous patch for power: we can put lwlock\n> attempt logic between lwrex and strex. In spite of my previous power\n> patch, the arm patch doesn't contain assembly: instead I've used\n> C-wrappers over lwrex/strex.\n>\n> The first series of experiments I've made using standard compiling\n> options. So, LSE instructions from ARM v8.1 weren't used. Atomics\n> were implemented using lwrex/strex pair.\n>\n> In the read-only benchmark, both spinlock (cas-spinlock graph) and\n> lwlock (ldrew-strex-lwlock graph) patches give observable performance\n> gain of similar value. However, performance of combination of these\n> patches (ldrew-strex-lwlock-cas-spinlock graph) is close to\n> performance of unpatched version. That could be counterintuitive, but\n> I've rechecked that multiple times.\n>\n> In the read-write benchmark, both spinlock and lwlock patches give\n> more significant performance gain, and lwlock patch gives more effect\n> than spinlock patch. 
Noticeable, that combination of patches now\n> gives some cumulative effect instead of counterintuitive slowdown.\n>\n> Then I've tried to compile postgres with LSE instruction using\n> \"-march=armv8-a+lse\" flag with clang (graphs with -lse suffix). The\n> effect of LSE is HUGE!!! Unpatched version with LSE is times faster\n> than any version without LSE on high concurrency. In the both\n> read-only and read-write benchmarks spinlock patch doesn't show any\n> significant difference. The lwlock patch shows a great slowdown with\n> LSE. Noticeable, in read-write benchmark, lwlock patch shows worse\n> results than unpatched version without LSE. Probably, combining\n> different atomics implementations isn't a good idea.\n>\n> It seems that ARM Kunpeng 920 should support ARM v8.1. I wonder if\n> the published benchmarks results were made with LSE. I suspect that\n> it was not. It would be nice to repeat the same benchmarks with LSE.\n> I'd like to ask Krunal Bauskar and Amit Khandekar to repeat these\n> benchmarks with LSE.\n>\n> My preliminary conclusions are so:\n> 1) Since the effect of LSE is so huge, we should advise users of\n> multicore ARM servers to compile PostgreSQL with LSE support. We\n> probably should provide separate packaging for ARM v8.1 and higher\n> (packages for ARM v8 are still needed for raspberry etc).\n> 2) It seems that atomics in ARM v8.1 becomes very similar to x86\n> atomics, and it doesn't need special optimizations. And I think ARM\n> v8 processors don't have so many cores and aren't so heavily used in\n> high-concurrent environments. So, special optimizations for ARM v8\n> probably aren't worth it.\n>\n\nThanks for the detailed results.\n\n1. Results we shared are w/o lse enabled so using traditional store/load\napproach.\n\n2. 
As you pointed out LSE is enabled starting only with arm-v8.1 but not\nall aarch64 tag machines are arm-v8.1 compatible.\n This means we would need a separate package or a more optimal way would\nbe to compile pgsql with gcc-9.4 (or gcc-10.x (default)) with\n -moutline-atomics that would emit both traditional and lse code and\nflow would dynamically select depending on the target machine\n (I have blogged about it in MySQL context\nhttps://mysqlonarm.github.io/ARM-LSE-and-MySQL/)\n\n3. Problem with GCC approach is still a lot of distro don't support gcc 9.4\nas default.\n To use this approach:\n * PGSQL will have to roll out its packages using gcc-9.4+ only so that\nthey are compatible with all aarch64 machines\n * but this continues to affect all other users who tend to build pgsql\nusing standard distro based compiler. (unless they upgrade compiler).\n\n--------------------\n\nSo given all the permutations and combinations, I think we could approach\nthe problem as follows:\n\n* Enable use of CAS as it is known to have optimal performance (vs TAS)\n\n* Even with LSE enabled, CAS to continue to perform (on par or marginally\nbetter than TAS)\n\n* Add a patch to compile pgsql with outline-atomics if set GCC supports it\nso the dynamic 2-way compatible code is emitted.\n\n--------------------\n\nAlexander,\n\nWe will surely benchmark using LSE on Kunpeng 920 and share the result.\n\nI am a bit surprised to see things scale by 4-5x times just by switching to\nLSE.\n(my working experience with lse (in mysql context and micro-benchmarking)\ndidn't show that great improvement by switching to lse).\nMaybe some more hotspots (beyond s_lock) are getting addressed with the use\nof lse.\n\n\n>\n> Links\n> 1.\n> https://www.postgresql.org/message-id/CAB10pyamDkTFWU_BVGeEVmkc8%3DEhgCjr6QBk02SCdJtKpHkdFw%40mail.gmail.com\n> 2.\n> https://www.postgresql.org/message-id/CAPpHfdsKrh7c7P8-5eG-qW3VQobybbwqH%3DgL5Ck%2BdOES-gBbFg%40mail.gmail.com\n>\n> ------\n> Regards,\n> Alexander 
Korotkov\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Mon, 30 Nov 2020 09:29:37 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> So given all the permutations and combinations, I think we could approach\n> the problem as follows:\n\n> * Enable use of CAS as it is known to have optimal performance (vs TAS)\n\nThe results I posted at [1] seem to contradict this for Apple's new\nmachines.\n\nIn general, I'm pretty skeptical of *all* the results posted so far on\nthis thread, because everybody seems to be testing exactly one machine.\nIf there's one thing that it's safe to assume about ARM, it's that\nthere are a lot of different implementations; and this area seems very\nvery likely to differ across implementations.\n\nI don't have a big problem with catering for a few different spinlock\nimplementations on ARM ... but it's sure unclear how we could decide\nwhich one to use.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/741389.1606530957@sss.pgh.pa.us\n\n\n",
"msg_date": "Sun, 29 Nov 2020 23:44:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Mon, 30 Nov 2020 at 10:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> > So given all the permutations and combinations, I think we could approach\n> > the problem as follows:\n>\n> > * Enable use of CAS as it is known to have optimal performance (vs TAS)\n>\n> The results I posted at [1] seem to contradict this for Apple's new\n> machines.\n>\n> In general, I'm pretty skeptical of *all* the results posted so far on\n> this thread, because everybody seems to be testing exactly one machine.\n> If there's one thing that it's safe to assume about ARM, it's that\n> there are a lot of different implementations; and this area seems very\n> very likely to differ across implementations.\n>\n\nFor the results you saw on Mac-Mini was LSE enabled by default.\nMaybe clang does it on mac by default to harvest performance.\nIf that is the case then your results are still inline with what Alexander\nposted.\n\nOn other hand, Peter saw some % improvement on M1 MacBook.\n\n------\n\n* Can you please check/confirm the LSE enable status on MAC (default).\n* Also, if possible run some quick micro-benchmark to help understand how\nTAS and CAS perform\non MAC. It would be a big surprise to learn that M1 can execute TAS better\nthan CAS (without lse).\n* I would also suggest if possible try with higher scalability (more than 4\nto check if with increase scalability CAS out-perform).\n\nResults I have seen to date (in different contexts too), CAS has\noutperformed on Kunpeng, Graviton2, (Other ARM arch), ... (matches with\ntheory too unless LSE is enabled).\nIf M1 goes the other way around that would be a big surprise for the arm\ncommunity too.\n\n\n\n>\n> I don't have a big problem with catering for a few different spinlock\n> implementations on ARM ... 
but it's sure unclear how we could decide\n> which one to use.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/741389.1606530957@sss.pgh.pa.us\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Mon, 30 Nov 2020 10:32:04 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> On Mon, 30 Nov 2020 at 10:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The results I posted at [1] seem to contradict this for Apple's new\n>> machines.\n\n> For the results you saw on Mac-Mini was LSE enabled by default.\n\nHmm, I don't know how to get Apple's clang to admit what its default\nsettings are ... anybody?\n\nHowever, it does accept \"-march=armv8-a+lse\", and that seems to\nnot be the default, because I get different results from my spinlock-\npounding test than I did yesterday. Abbreviating into a table:\n\n --- CFLAGS=-O2 --- --- CFLAGS=\"-O2 -march=armv8-a+lse\" ---\n\nTPS HEAD CAS patch HEAD CAS patch\n\nclients=1 2127 2174 2612 2722\nclients=2 1816 859 892 950\nclients=4 714 519 610 468\nclients=8 - - 108 185\n\nUnfortunately, that still doesn't lead me to think that either LSE\nor CAS are net wins on this hardware. It's quite clear that LSE\nmakes the uncontended case a good bit faster, but the contended case\nis a lot worse, so is that really a tradeoff we want?\n\n> * I would also suggest if possible try with higher scalability (more than 4\n> to check if with increase scalability CAS out-perform).\n\nAs I said yesterday, running more than 4 processes is just going\nto bring the low-performance cores into the equation, which is likely\nto swamp any interesting comparison. I did run the test with \"-c 8\"\ntoday, as shown in the right-hand columns, and the results seem\nto bear that out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 01:08:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
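A self-contained way to exercise the contended case discussed above, without pgbench, is a small pthread counter test (an illustrative sketch using C11 atomics, not the methodology used in the thread); timing worker() with different lock variants and -march flags approximates the contended acquire/release path:

```c
#include <pthread.h>
#include <stdatomic.h>

#define NTHREADS 4
#define ITERS    200000

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void *
worker(void *arg)
{
    (void) arg;
    for (int i = 0; i < ITERS; i++)
    {
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;                   /* contended path: spin */
        counter++;              /* protected critical section */
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }
    return NULL;
}

long
run_contention_test(void)
{
    pthread_t threads[NTHREADS];

    counter = 0;
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    return counter;             /* NTHREADS * ITERS if the lock is correct */
}
```

The final count doubles as a correctness check: any lost update means the lock implementation under test is broken, independently of how fast it runs.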
{
"msg_contents": "On Mon, 30 Nov 2020 at 11:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> > On Mon, 30 Nov 2020 at 10:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The results I posted at [1] seem to contradict this for Apple's new\n> >> machines.\n>\n> > For the results you saw on Mac-Mini was LSE enabled by default.\n>\n> Hmm, I don't know how to get Apple's clang to admit what its default\n> settings are ... anybody?\n>\n> However, it does accept \"-march=armv8-a+lse\", and that seems to\n> not be the default, because I get different results from my spinlock-\n> pounding test than I did yesterday. Abbreviating into a table:\n>\n> --- CFLAGS=-O2 --- --- CFLAGS=\"-O2\n> -march=armv8-a+lse\" ---\n>\n> TPS HEAD CAS patch HEAD CAS patch\n>\n> clients=1 2127 2174 2612 2722\n> clients=2 1816 859 892 950\n> clients=4 714 519 610 468\n> clients=8 - - 108 185\n>\n\nThanks for trying this Tom.\n\n---------\n\nSome of us may be surprised by the fact that enabling lse is causing\nregression (1816 -> 892 or 714 -> 610) with HEAD itself.\nWhile lse is meant to improve the performance. This, unfortunately, is not\nalways the case at-least based on my previous experience with LSE.too.\n\nI am still wondering why CAS is slower than TAS on M1. What is special on\nM1 that other ARM archs has not picked up.\n\nTom, Sorry to bother you again but this is arising a lot of curiosity about\nM1.\nWhenever you get time can do some micro-benchmarking on M1 (to understand\nTAS vs CAS).\nAlso, if you can share assembly code is emitted for the TAS vs CAS.\n\n\n>\n> Unfortunately, that still doesn't lead me to think that either LSE\n> or CAS are net wins on this hardware. 
It's quite clear that LSE\n> makes the uncontended case a good bit faster, but the contended case\n> is a lot worse, so is that really a tradeoff we want?\n>\n> > * I would also suggest if possible try with higher scalability (more\n> than 4\n> > to check if with increase scalability CAS out-perform).\n>\n> As I said yesterday, running more than 4 processes is just going\n> to bring the low-performance cores into the equation, which is likely\n> to swamp any interesting comparison. I did run the test with \"-c 8\"\n> today, as shown in the right-hand columns, and the results seem\n> to bear that out.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Mon, 30 Nov 2020 11:49:25 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 9:20 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> Some of us may be surprised by the fact that enabling lse is causing regression (1816 -> 892 or 714 -> 610) with HEAD itself.\n> While lse is meant to improve the performance. This, unfortunately, is not always the case at-least based on my previous experience with LSE.too.\n\nI doubt this is correct. As Tom shown upthread, Apple clang have LSE\nenabled by default [1]. It might happen that --CFLAGS=\"-O2\n-march=armv8-a+lse\" disables some other optimizations, which are\nenabled by default in Apple clang...\n\nLinks\n1. https://www.postgresql.org/message-id/663376.1606432828%40sss.pgh.pa.us\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 30 Nov 2020 10:46:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 9:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> > On Mon, 30 Nov 2020 at 10:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The results I posted at [1] seem to contradict this for Apple's new\n> >> machines.\n>\n> > For the results you saw on Mac-Mini was LSE enabled by default.\n>\n> Hmm, I don't know how to get Apple's clang to admit what its default\n> settings are ... anybody?\n>\n> However, it does accept \"-march=armv8-a+lse\", and that seems to\n> not be the default, because I get different results from my spinlock-\n> pounding test than I did yesterday. Abbreviating into a table:\n>\n> --- CFLAGS=-O2 --- --- CFLAGS=\"-O2 -march=armv8-a+lse\" ---\n>\n> TPS HEAD CAS patch HEAD CAS patch\n>\n> clients=1 2127 2174 2612 2722\n> clients=2 1816 859 892 950\n> clients=4 714 519 610 468\n> clients=8 - - 108 185\n>\n> Unfortunately, that still doesn't lead me to think that either LSE\n> or CAS are net wins on this hardware. It's quite clear that LSE\n> makes the uncontended case a good bit faster, but the contended case\n> is a lot worse, so is that really a tradeoff we want?\n\nI tend to think that LSE is enabled by default in Apple's clang based\non your previous message[1]. In order to dispel the doubts could you\nplease provide assembly of SpinLockAcquire for following clang\noptions.\n\"-O2\"\n\"-O2 -march=armv8-a+lse\"\n\"-O2 -march=armv8-a\"\n\nThank you!\n\nLinks\n1. https://www.postgresql.org/message-id/663376.1606432828%40sss.pgh.pa.us\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 30 Nov 2020 16:16:28 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I tend to think that LSE is enabled by default in Apple's clang based\n> on your previous message[1]. In order to dispel the doubts could you\n> please provide assembly of SpinLockAcquire for following clang\n> options.\n> \"-O2\"\n> \"-O2 -march=armv8-a+lse\"\n> \"-O2 -march=armv8-a\"\n\nHuh. Those options make exactly zero difference to the code generated\nfor SpinLockAcquire/SpinLockRelease; it's the same as I showed upthread,\nfor either the HEAD definition of TAS() or the CAS patch's version.\n\nSo now I'm at a loss as to the reason for the performance difference\nI got. -march=armv8-a+lse does make a difference to code generation\nsomeplace, because the overall size of the postgres executable changes\nby 16kB or so. One might argue that the performance difference is due\nto better code elsewhere than the spinlocks ... but the test I'm running\nis basically just\n\n while (count-- > 0)\n {\n XLogGetLastRemovedSegno();\n\n CHECK_FOR_INTERRUPTS();\n }\n\nso it's hard to see where a non-spinlock-related code change would come\nin. That loop itself definitely generates the same code either way.\n\nI did find this interesting output from \"clang -v\":\n\n-target-cpu vortex -target-feature +v8.3a -target-feature +fp-armv8 -target-feature +neon -target-feature +crc -target-feature +crypto -target-feature +fullfp16 -target-feature +ras -target-feature +lse -target-feature +rdm -target-feature +rcpc -target-feature +zcm -target-feature +zcz -target-feature +sha2 -target-feature +aes\n\nwhereas adding -march=armv8-a+lse changes that to just\n\n-target-cpu vortex -target-feature +neon -target-feature +lse -target-feature +zcm -target-feature +zcz\n\nOn the whole, that would make one think that -march=armv8-a+lse\nshould produce worse code than the default settings.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 13:21:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 9:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I tend to think that LSE is enabled by default in Apple's clang based\n> > on your previous message[1]. In order to dispel the doubts could you\n> > please provide assembly of SpinLockAcquire for following clang\n> > options.\n> > \"-O2\"\n> > \"-O2 -march=armv8-a+lse\"\n> > \"-O2 -march=armv8-a\"\n>\n> Huh. Those options make exactly zero difference to the code generated\n> for SpinLockAcquire/SpinLockRelease; it's the same as I showed upthread,\n> for either the HEAD definition of TAS() or the CAS patch's version.\n>\n> So now I'm at a loss as to the reason for the performance difference\n> I got. -march=armv8-a+lse does make a difference to code generation\n> someplace, because the overall size of the postgres executable changes\n> by 16kB or so. One might argue that the performance difference is due\n> to better code elsewhere than the spinlocks ... but the test I'm running\n> is basically just\n>\n> while (count-- > 0)\n> {\n> XLogGetLastRemovedSegno();\n>\n> CHECK_FOR_INTERRUPTS();\n> }\n>\n> so it's hard to see where a non-spinlock-related code change would come\n> in. 
That loop itself definitely generates the same code either way.\n>\n> I did find this interesting output from \"clang -v\":\n>\n> -target-cpu vortex -target-feature +v8.3a -target-feature +fp-armv8 -target-feature +neon -target-feature +crc -target-feature +crypto -target-feature +fullfp16 -target-feature +ras -target-feature +lse -target-feature +rdm -target-feature +rcpc -target-feature +zcm -target-feature +zcz -target-feature +sha2 -target-feature +aes\n>\n> whereas adding -march=armv8-a+lse changes that to just\n>\n> -target-cpu vortex -target-feature +neon -target-feature +lse -target-feature +zcm -target-feature +zcz\n>\n> On the whole, that would make one think that -march=armv8-a+lse\n> should produce worse code than the default settings.\n\nGreat, thanks.\n\nSo, I think the following hypothesis isn't disproved yet.\n1) On ARM with LSE support, PostgreSQL built with LSE is faster than\nPostgreSQL built without LSE. Even if the latter is patched with\nanything considered in this thread.\n2) None of the patches considered in this thread give a clear\nadvantage for PostgreSQL built with LSE.\n\nTo further confirm this let's wait for Kunpeng 920 tests by Krunal\nBauskar and Amit Khandekar. Also it would be nice if someone will run\nbenchmarks similar to [1] on Apple M1.\n\nLinks\n1. https://www.postgresql.org/message-id/CAPpHfdsGqVd6EJ4mr_RZVE5xSiCNBy4MuSvdTrKmTpM0eyWGpg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 30 Nov 2020 23:46:44 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 7:00 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> 3. Problem with GCC approach is still a lot of distro don't support gcc 9.4 as default.\n> To use this approach:\n> * PGSQL will have to roll out its packages using gcc-9.4+ only so that they are compatible with all aarch64 machines\n> * but this continues to affect all other users who tend to build pgsql using standard distro based compiler. (unless they upgrade compiler).\n\nI think if a user, who runs PostgreSQL on a multicore machine with\nhigh-concurrent load, can take care about installing the appropriate\npackage/using the appropriate compiler (especially if we publish\nexplicit instructions for that). In the same way such advanced users\ntune Linux kernel etc.\n\nBTW, how do you get that required gcc version is 9.4? I've managed to\nuse LSE with gcc 9.3.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 00:01:41 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 02:31, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Mon, Nov 30, 2020 at 7:00 AM Krunal Bauskar <krunalbauskar@gmail.com>\n> wrote:\n> > 3. Problem with GCC approach is still a lot of distro don't support gcc\n> 9.4 as default.\n> > To use this approach:\n> > * PGSQL will have to roll out its packages using gcc-9.4+ only so\n> that they are compatible with all aarch64 machines\n> > * but this continues to affect all other users who tend to build\n> pgsql using standard distro based compiler. (unless they upgrade compiler).\n>\n> I think if a user, who runs PostgreSQL on a multicore machine with\n> high-concurrent load, can take care about installing the appropriate\n> package/using the appropriate compiler (especially if we publish\n> explicit instructions for that). In the same way such advanced users\n> tune Linux kernel etc.\n>\n> BTW, how do you get that required gcc version is 9.4? I've managed to\n> use LSE with gcc 9.3.\n>\n\nDid they backported it to 9.3?\nI am just looking at the gcc guide.\nhttps://gcc.gnu.org/gcc-9/changes.html\n\nGCC 9.4Target Specific ChangesAArch64\n\n - The option -moutline-atomics has been added to aid deployment of the\n Large System Extensions (LSE)\n\n\n\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\n\n-- \nRegards,\nKrunal Bauskar\n\nOn Tue, 1 Dec 2020 at 02:31, Alexander Korotkov <aekorotkov@gmail.com> wrote:On Mon, Nov 30, 2020 at 7:00 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> 3. Problem with GCC approach is still a lot of distro don't support gcc 9.4 as default.\n> To use this approach:\n> * PGSQL will have to roll out its packages using gcc-9.4+ only so that they are compatible with all aarch64 machines\n> * but this continues to affect all other users who tend to build pgsql using standard distro based compiler. 
(unless they upgrade compiler).\n\nI think if a user, who runs PostgreSQL on a multicore machine with\nhigh-concurrent load, can take care about installing the appropriate\npackage/using the appropriate compiler (especially if we publish\nexplicit instructions for that). In the same way such advanced users\ntune Linux kernel etc.\n\nBTW, how do you get that required gcc version is 9.4? I've managed to\nuse LSE with gcc 9.3.Did they backported it to 9.3?I am just looking at the gcc guide.https://gcc.gnu.org/gcc-9/changes.htmlGCC 9.4Target Specific ChangesAArch64The option -moutline-atomics has been added to aid deployment of the Large System Extensions (LSE) \n\n------\nRegards,\nAlexander Korotkov\n-- Regards,Krunal Bauskar",
"msg_date": "Tue, 1 Dec 2020 08:55:27 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> 2) None of the patches considered in this thread give a clear\n> advantage for PostgreSQL built with LSE.\n\nYeah, I think so.\n\n> To further confirm this let's wait for Kunpeng 920 tests by Krunal\n> Bauskar and Amit Khandekar. Also it would be nice if someone will run\n> benchmarks similar to [1] on Apple M1.\n\nI did what I could in this department. It's late and I'm not going to\nhave time to run read/write benchmarks before bed, but here are some\nresults for the \"pgbench -S\" cases. I tried to match your testing\nchoices, but could not entirely:\n\n* Configure options are --enable-debug, --disable-cassert, no other\nspecial configure options or CFLAG choices.\n\n* I have not been able to find a way to make Apple's compiler not\nuse the LSE spinlock instructions, so all of these correspond to\nyour LSE cases.\n\n* I used shared_buffers = 1GB, because this machine only has 16GB\nRAM so 32GB is clearly out of reach. Also I used pgbench scale\nfactor 100 not 1000. Since we're trying to measure contention\neffects not I/O speed, I don't think a huge test case is appropriate.\n\n* I still haven't gotten pgbench to work with -j settings above 128,\nso these runs use -j equal to half -c. Shouldn't really affect\nconclusions about scaling. (BTW, I see a similar limitation on\nmacOS Catalina x86_64, so whatever that is, it's not new.)\n\n* Otherwise, the first plot shows median of three results from\n\"pgbench -S -M prepared -T 120 -c $n -j $j\", as you had it.\nThe right-hand plot shows all three of the values in yerrorbars\nformat, just to give a sense of the noise level.\n\nClearly, there is something weird going on at -c 4. There's a cluster\nof results around 180K TPS, and another cluster around 210-220K TPS,\nand nothing in between. 
I suspect that the scheduler is doing\nsomething bogus with sometimes putting pgbench onto the slow cores.\nAnyway, I believe that the apparent gap between HEAD and the other\ncurves at -c 4 is probably an artifact: HEAD had two 180K-ish results\nand one 220K-ish result, while the other curves had the reverse, so\nthe medians are different but there's probably not any non-chance\neffect there.\n\nBottom line is that these patches don't appear to do much of\nanything on M1, as you surmised.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 01 Dec 2020 01:01:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 6:26 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> On Tue, 1 Dec 2020 at 02:31, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> BTW, how do you get that required gcc version is 9.4? I've managed to\n>> use LSE with gcc 9.3.\n>\n> Did they backported it to 9.3?\n> I am just looking at the gcc guide.\n> https://gcc.gnu.org/gcc-9/changes.html\n>\n> GCC 9.4\n>\n> Target Specific Changes\n>\n> AArch64\n>\n> The option -moutline-atomics has been added to aid deployment of the Large System Extensions (LSE)\n\nNo, you've misread this release notes item. This item relates to a\nnew option for LSE support. See the rest of the description.\n\n\"When the option is specified code is emitted to detect the presence\nof LSE instructions at runtime and use them for standard atomic\noperations. For more information please refer to the documentation.\"\n\nLSE support itself was introduced in gcc 6.\nhttps://gcc.gnu.org/gcc-6/changes.html\n * The ARMv8.1-A architecture and the Large System Extensions are now\nsupported. They can be used by specifying the -march=armv8.1-a option.\nAdditionally, the +lse option extension can be used in a similar\nfashion to other option extensions. The Large System Extensions\nintroduce new instructions that are used in the implementation of\natomic operations.\n\nBut, -moutline-atomics looks very interesting itself. I think we\nshould add this option to our arm64 packages if we haven't already.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 12:46:37 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
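The runtime dispatch that -moutline-atomics performs boils down to probing the hardware capability bits once at startup. On aarch64 Linux that probe can be sketched roughly as follows (an illustration of the mechanism, not libgcc's actual code; the preprocessor guard makes it compile as a stub elsewhere):

```c
#include <assert.h>

#if defined(__aarch64__) && defined(__linux__)
#include <sys/auxv.h>
#include <asm/hwcap.h>
#endif

/*
 * Return 1 if the running CPU advertises the LSE atomics (FEAT_LSE),
 * 0 otherwise -- including on non-aarch64 builds, where the question
 * does not arise.
 */
static int
have_lse_atomics(void)
{
#if defined(__aarch64__) && defined(__linux__)
	return (getauxval(AT_HWCAP) & HWCAP_ATOMICS) != 0;
#else
	return 0;
#endif
}
```

gcc's outlined helpers cache the answer of such a check and branch to either the LSE or the ll/sc code path, which is why a single binary built with -moutline-atomics can run correctly on both ARMv8.0 and ARMv8.1+ machines.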
{
"msg_contents": "On Tue, Dec 1, 2020 at 9:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I did what I could in this department. It's late and I'm not going to\n> have time to run read/write benchmarks before bed, but here are some\n> results for the \"pgbench -S\" cases. I tried to match your testing\n> choices, but could not entirely:\n>\n> * Configure options are --enable-debug, --disable-cassert, no other\n> special configure options or CFLAG choices.\n>\n> * I have not been able to find a way to make Apple's compiler not\n> use the LSE spinlock instructions, so all of these correspond to\n> your LSE cases.\n>\n> * I used shared_buffers = 1GB, because this machine only has 16GB\n> RAM so 32GB is clearly out of reach. Also I used pgbench scale\n> factor 100 not 1000. Since we're trying to measure contention\n> effects not I/O speed, I don't think a huge test case is appropriate.\n>\n> * I still haven't gotten pgbench to work with -j settings above 128,\n> so these runs use -j equal to half -c. Shouldn't really affect\n> conclusions about scaling. (BTW, I see a similar limitation on\n> macOS Catalina x86_64, so whatever that is, it's not new.)\n>\n> * Otherwise, the first plot shows median of three results from\n> \"pgbench -S -M prepared -T 120 -c $n -j $j\", as you had it.\n> The right-hand plot shows all three of the values in yerrorbars\n> format, just to give a sense of the noise level.\n>\n> Clearly, there is something weird going on at -c 4. There's a cluster\n> of results around 180K TPS, and another cluster around 210-220K TPS,\n> and nothing in between. 
I suspect that the scheduler is doing\n> something bogus with sometimes putting pgbench onto the slow cores.\n> Anyway, I believe that the apparent gap between HEAD and the other\n> curves at -c 4 is probably an artifact: HEAD had two 180K-ish results\n> and one 220K-ish result, while the other curves had the reverse, so\n> the medians are different but there's probably not any non-chance\n> effect there.\n>\n> Bottom line is that these patches don't appear to do much of\n> anything on M1, as you surmised.\n\nGreat, thank you for your research!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 12:48:12 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 15:16, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Tue, Dec 1, 2020 at 6:26 AM Krunal Bauskar <krunalbauskar@gmail.com>\n> wrote:\n> > On Tue, 1 Dec 2020 at 02:31, Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> >> BTW, how do you get that required gcc version is 9.4? I've managed to\n> >> use LSE with gcc 9.3.\n> >\n> > Did they backported it to 9.3?\n> > I am just looking at the gcc guide.\n> > https://gcc.gnu.org/gcc-9/changes.html\n> >\n> > GCC 9.4\n> >\n> > Target Specific Changes\n> >\n> > AArch64\n> >\n> > The option -moutline-atomics has been added to aid deployment of the\n> Large System Extensions (LSE)\n>\n> No, you've misread this release notes item. This item relates to a\n> new option for LSE support. See the rest of the description.\n>\n\nSome confusion here.\n\nWhat I meant was outline-atomics support was added in GCC-9.4 and was made\ndefault in gcc-10.\nLSE support is present for quite some time.\n\nIn one of my up-thread reply I mentioned about the outline atomic that\nwould help us enable lse if underline architecture support it.\n\n<snippet from reply>\n2. 
As you pointed out LSE is enabled starting only with arm-v8.1 but not\nall aarch64 tag machines are arm-v8.1 compatible.\n This means we would need a separate package or a more optimal way would\nbe to compile pgsql with gcc-9.4 (or gcc-10.x (default)) with\n -moutline-atomics that would emit both traditional and lse code and\nflow would dynamically select depending on the target machine\n (I have blogged about it in MySQL context\nhttps://mysqlonarm.github.io/ARM-LSE-and-MySQL/)\n</snippet from reply>\n\nProblem with this approach is it would need gcc-9.4+ (I also ready\ngcc-9.3.1 would do) or gcc10.x but most of the distro doesn't support it so\nuser using old compiler especially\ndefault one shipped with OS would not be able to take advantage of this.\n\n\n>\n> \"When the option is specified code is emitted to detect the presence\n> of LSE instructions at runtime and use them for standard atomic\n> operations. For more information please refer to the documentation.\"\n>\n> LSE support itself was introduced in gcc 6.\n> https://gcc.gnu.org/gcc-6/changes.html\n> * The ARMv8.1-A architecture and the Large System Extensions are now\n> supported. They can be used by specifying the -march=armv8.1-a option.\n> Additionally, the +lse option extension can be used in a similar\n> fashion to other option extensions. The Large System Extensions\n> introduce new instructions that are used in the implementation of\n> atomic operations.\n>\n> But, -moutline-atomics looks very interesting itself. I think we\n> should add this option to our arm64 packages if we haven't already.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\n\n-- \nRegards,\nKrunal Bauskar\n\nOn Tue, 1 Dec 2020 at 15:16, Alexander Korotkov <aekorotkov@gmail.com> wrote:On Tue, Dec 1, 2020 at 6:26 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> On Tue, 1 Dec 2020 at 02:31, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> BTW, how do you get that required gcc version is 9.4? 
I've managed to\n>> use LSE with gcc 9.3.\n>\n> Did they backported it to 9.3?\n> I am just looking at the gcc guide.\n> https://gcc.gnu.org/gcc-9/changes.html\n>\n> GCC 9.4\n>\n> Target Specific Changes\n>\n> AArch64\n>\n> The option -moutline-atomics has been added to aid deployment of the Large System Extensions (LSE)\n\nNo, you've misread this release notes item. This item relates to a\nnew option for LSE support. See the rest of the description.Some confusion here.What I meant was outline-atomics support was added in GCC-9.4 and was made default in gcc-10.LSE support is present for quite some time.In one of my up-thread reply I mentioned about the outline atomic that would help us enable lse if underline architecture support it.<snippet from reply>2. As you pointed out LSE is enabled starting only with arm-v8.1 but not all aarch64 tag machines are arm-v8.1 compatible. This means we would need a separate package or a more optimal way would be to compile pgsql with gcc-9.4 (or gcc-10.x (default)) with -moutline-atomics that would emit both traditional and lse code and flow would dynamically select depending on the target machine (I have blogged about it in MySQL context https://mysqlonarm.github.io/ARM-LSE-and-MySQL/)</snippet from reply>Problem with this approach is it would need gcc-9.4+ (I also ready gcc-9.3.1 would do) or gcc10.x but most of the distro doesn't support it so user using old compiler especiallydefault one shipped with OS would not be able to take advantage of this. \n\n\"When the option is specified code is emitted to detect the presence\nof LSE instructions at runtime and use them for standard atomic\noperations. For more information please refer to the documentation.\"\n\nLSE support itself was introduced in gcc 6.\nhttps://gcc.gnu.org/gcc-6/changes.html\n * The ARMv8.1-A architecture and the Large System Extensions are now\nsupported. 
They can be used by specifying the -march=armv8.1-a option.\nAdditionally, the +lse option extension can be used in a similar\nfashion to other option extensions. The Large System Extensions\nintroduce new instructions that are used in the implementation of\natomic operations.\n\nBut, -moutline-atomics looks very interesting itself. I think we\nshould add this option to our arm64 packages if we haven't already.\n\n------\nRegards,\nAlexander Korotkov\n-- Regards,Krunal Bauskar",
"msg_date": "Tue, 1 Dec 2020 15:32:38 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 15:33, Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> What I meant was outline-atomics support was added in GCC-9.4 and was made default in gcc-10.\n> LSE support is present for quite some time.\n\nFWIW, here is an earlier discussion on the same (also added the\nproposal author here) :\n\nhttps://www.postgresql.org/message-id/flat/099F69EE-51D3-4214-934A-1F28C0A1A7A7%40amazon.com\n\n\n",
"msg_date": "Tue, 1 Dec 2020 15:39:59 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 02:16, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Mon, Nov 30, 2020 at 9:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > I tend to think that LSE is enabled by default in Apple's clang based\n> > > on your previous message[1]. In order to dispel the doubts could you\n> > > please provide assembly of SpinLockAcquire for following clang\n> > > options.\n> > > \"-O2\"\n> > > \"-O2 -march=armv8-a+lse\"\n> > > \"-O2 -march=armv8-a\"\n> >\n> > Huh. Those options make exactly zero difference to the code generated\n> > for SpinLockAcquire/SpinLockRelease; it's the same as I showed upthread,\n> > for either the HEAD definition of TAS() or the CAS patch's version.\n> >\n> > So now I'm at a loss as to the reason for the performance difference\n> > I got. -march=armv8-a+lse does make a difference to code generation\n> > someplace, because the overall size of the postgres executable changes\n> > by 16kB or so. One might argue that the performance difference is due\n> > to better code elsewhere than the spinlocks ... but the test I'm running\n> > is basically just\n> >\n> > while (count-- > 0)\n> > {\n> > XLogGetLastRemovedSegno();\n> >\n> > CHECK_FOR_INTERRUPTS();\n> > }\n> >\n> > so it's hard to see where a non-spinlock-related code change would come\n> > in. 
That loop itself definitely generates the same code either way.\n> >\n> > I did find this interesting output from \"clang -v\":\n> >\n> > -target-cpu vortex -target-feature +v8.3a -target-feature +fp-armv8\n> -target-feature +neon -target-feature +crc -target-feature +crypto\n> -target-feature +fullfp16 -target-feature +ras -target-feature +lse\n> -target-feature +rdm -target-feature +rcpc -target-feature +zcm\n> -target-feature +zcz -target-feature +sha2 -target-feature +aes\n> >\n> > whereas adding -march=armv8-a+lse changes that to just\n> >\n> > -target-cpu vortex -target-feature +neon -target-feature +lse\n> -target-feature +zcm -target-feature +zcz\n> >\n> > On the whole, that would make one think that -march=armv8-a+lse\n> > should produce worse code than the default settings.\n>\n> Great, thanks.\n>\n> So, I think the following hypothesis isn't disproved yet.\n> 1) On ARM with LSE support, PostgreSQL built with LSE is faster than\n> PostgreSQL built without LSE. Even if the latter is patched with\n> anything considered in this thread.\n> 2) None of the patches considered in this thread give a clear\n> advantage for PostgreSQL built with LSE.\n>\n> To further confirm this let's wait for Kunpeng 920 tests by Krunal\n> Bauskar and Amit Khandekar. Also it would be nice if someone will run\n> benchmarks similar to [1] on Apple M1.\n>\n\n------------------------------------------------------------------------------------------------------------------------------------------\n\nI have completed benchmarking with lse.\n\nGraph attached.\n\n* For me, lse failed to perform. With lse enabled performance with higher\nscalability was infact less than baseline (with TAS).\n [This is true with and without patch (cas)]. 
So lse is unable to push\nthings.\n\n* Graph traces all 4 lines (baseline (tas), baseline-patched (cas),\nbaseline-lse (tas+lse), baseline-patched-lse (cas+lse))\n\n- for select, there is no clear winner but baseline-patched-lse (cas+lse)\nperform better than others (2-4%).\n\n- for update/tpcb like baseline-patched (cas) continue to outperform all\nthe 3 options (10-40%).\n In fact, with lse enabled a regression (compared to baseline/head) is\nobserved.\n\n[I am not surprised since MySQL showed the same behavior with lse of-course\nwith a lesser percentage].\n\nI have repeated the test 2 times to confirm the behavior.\nAlso, to reduce noise I normally run 4 rounds discarding 1st round and\ntaking the median of the last 3 rounds.\n\nIn all, I see consistent numbers.\nconf:\nhttps://github.com/mysqlonarm/benchmark-suites/blob/master/pgsql-pbench/conf/pgsql.cnf/postgresql.conf\n\n-----------------------------------------------------------------------------------------------------------------------------------------------\n\n From all the observations till now following points are clear:\n\n* Patch doesn't seem to make a significant difference with lse enabled.\n Ir-respective of the machine (Kunpeng, Graviton2, M1). [infact introduces\nregression with Kunpeng].\n\n* Patch helps generate significant +ve difference in performace with lse\ndisabled.\n Observed and Confirmed with both Kunpeng, Graviton2.\n (not possible to disable lse on m1).\n\n* Does the patch makes things worse with lse enabled.\n * Graviton2: No (based on Alexander graph)\n * M1: No (based on Tom's graph. Wondering why was debug build used\n(--enable-debug?))\n * Kunpeng: regression observed.\n\n--------------\n\nShould we enable lse?\n\nBased on the comment from [1] I see pgsql aligns with decision made by\ncompiler/distribution vendors. 
If the compiler/distribution vendors\nhas not made something default then there could be reason for it.\n\n-------------\n\nWithout lse as we have seen patch continue to score.\n\nEffect of LSE needs to be wide studied. Scaling on some arch and regressing\non some needs to be understood may be at CPU vendor level.\n\n------------------------------------------------------------------------------------------------------------------------------------------\nLinks:\n[1]\nhttps://www.postgresql.org/message-id/flat/099F69EE-51D3-4214-934A-1F28C0A1A7A7%40amazon.com\n\n\n>\n> Links\n> 1.\n> https://www.postgresql.org/message-id/CAPpHfdsGqVd6EJ4mr_RZVE5xSiCNBy4MuSvdTrKmTpM0eyWGpg%40mail.gmail.com\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Tue, 1 Dec 2020 18:13:38 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
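The reporting convention described above (discard the warm-up round, then take the median of the remaining three) is easy to pin down; a small helper for the median step, illustrative only:

```c
#include <assert.h>

/*
 * Median of three benchmark results: the value that is neither the
 * minimum nor the maximum of the three.
 */
static double
median3(double a, double b, double c)
{
	double	lo = a < b ? (a < c ? a : c) : (b < c ? b : c);
	double	hi = a > b ? (a > c ? a : c) : (b > c ? b : c);

	return a + b + c - lo - hi;
}
```

Reporting a median rather than a mean is also why the two result clusters Tom observed at -c 4 show up as a discrete jump between curves instead of averaging out.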
{
"msg_contents": "On Tue, Dec 1, 2020 at 3:44 PM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> I have completed benchmarking with lse.\n>\n> Graph attached.\n\nThank you for benchmarking.\n\nNow I agree with this comment by Tom Lane\n\n> In general, I'm pretty skeptical of *all* the results posted so far on\n> this thread, because everybody seems to be testing exactly one machine.\n> If there's one thing that it's safe to assume about ARM, it's that\n> there are a lot of different implementations; and this area seems very\n> very likely to differ across implementations.\n\nDifferent ARM implementations look too different. As you pointed out,\nLSE is enabled in gcc-10 by default. I doubt we can accept a patch,\nwhich gives benefits for specific platform and only when the compiler\nisn't very modern. Also, we didn't cover all ARM planforms. Given\nthey are so different, we can't guarantee that patch doesn't cause\nregression of some ARM. Additionally, the effect of the CAS patch\neven for Kunpeng seems modest. It makes the drop off of TPS more\nsmooth, but it doesn't change the trend.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 17:54:58 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 1:10 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> On Tue, 1 Dec 2020 at 15:33, Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> > What I meant was outline-atomics support was added in GCC-9.4 and was made default in gcc-10.\n> > LSE support is present for quite some time.\n>\n> FWIW, here is an earlier discussion on the same (also added the\n> proposal author here) :\n>\n> https://www.postgresql.org/message-id/flat/099F69EE-51D3-4214-934A-1F28C0A1A7A7%40amazon.com\n\nThank you for pointing! I wonder why the effect of LSE on Graviton2\nobserved by Tsahi Zidenberg is so modest. It's probably because he\nruns the tests with a low number of clients.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 17:58:28 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 20:25, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Tue, Dec 1, 2020 at 3:44 PM Krunal Bauskar <krunalbauskar@gmail.com>\n> wrote:\n> > I have completed benchmarking with lse.\n> >\n> > Graph attached.\n>\n> Thank you for benchmarking.\n>\n> Now I agree with this comment by Tom Lane\n>\n> > In general, I'm pretty skeptical of *all* the results posted so far on\n> > this thread, because everybody seems to be testing exactly one machine.\n> > If there's one thing that it's safe to assume about ARM, it's that\n> > there are a lot of different implementations; and this area seems very\n> > very likely to differ across implementations.\n>\n> Different ARM implementations look too different. As you pointed out,\n> LSE is enabled in gcc-10 by default. I doubt we can accept a patch,\n> which gives benefits for specific platform and only when the compiler\n> isn't very modern. Also, we didn't cover all ARM planforms. Given\n> they are so different, we can't guarantee that patch doesn't cause\n> regression of some ARM. Additionally, the effect of the CAS patch\n> even for Kunpeng seems modest. It makes the drop off of TPS more\n> smooth, but it doesn't change the trend.\n>\n\nThere are 2 parts:\n\n** Does CAS patch help scale PGSQL. Yes.*\n** Is LSE beneficial for all architectures. Probably No.*\n\nThe patch addresses only the former one which is true for all cases.\n(Enabling LSE should be an independent process).\n\ngcc-10 made it default but when I read [1] it quotes that canonical decided\nto remove it as default\nas part of* Ubuntu-20.04 which means LSE has not proven the test of\ncanonical (probably).*\nAlso, most of the distro has not yet started shipping GCC-10 which is way\nfar before it makes it to all distro.\n\nSo if we keep the LSE effect aside and just look at the patch from\nperformance improvement it surely helps\nachieve a good gain. 
I see an improvement in the range of 10-40%.\nAmit during his independent testing also observed the gain in the same\nrange and your testing with G-2 has re-attested the same point.\nPardon me if this is modest as per pgsql standards.\n\nWith 1024 scalability PGSQL on other arches (beyond ARM) struggle to scale\nso there is something more\ninherent that needs to be addressed from a generic perspective.\n\nAlso, the said patch non-only helps pgbench kind of workload but other\nworkloads too.\n\n--------------\n\nI would request you guys to re-think it from this perspective to help\nensure that PGSQL can scale well on ARM.\ns_lock becomes a top-most function and LSE is not a universal solution but\nCAS surely helps ease the main bottleneck.\n\nAnd surely let me know if more data is needed.\n\nLink:\n[1]:\nhttps://www.postgresql.org/message-id/flat/099F69EE-51D3-4214-934A-1F28C0A1A7A7%40amazon.com\n\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\n\n-- \nRegards,\nKrunal Bauskar\n\n",
"msg_date": "Tue, 1 Dec 2020 20:49:06 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 6:19 PM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> I would request you guys to re-think it from this perspective to help ensure that PGSQL can scale well on ARM.\n> s_lock becomes a top-most function and LSE is not a universal solution but CAS surely helps ease the main bottleneck.\n\nCAS patch isn't proven to be a universal solution as well. We have\ntested the patch on just a few processors, and Tom has seen the\nregression [1]. The benchmark used by Tom was artificial, but the\nresults may be relevant for some real-life workload.\n\nI'm expressing just my personal opinion, other committers can have\ndifferent opinions. I don't particularly think this topic is\nnecessarily a non-starter. But I do think that given ambiguity we've\nobserved in the benchmark, much more research is needed to push this\ntopic forward.\n\nLinks.\n1. https://www.postgresql.org/message-id/741389.1606530957%40sss.pgh.pa.us\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 19:07:02 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "> On 01/12/2020, 16:59, \"Alexander Korotkov\" <aekorotkov@gmail.com> wrote:\r\n> On Tue, Dec 1, 2020 at 1:10 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\r\n> > FWIW, here is an earlier discussion on the same (also added the\r\n> > proposal author here) :\r\n\r\nThanks for looping me in!\r\n\r\n> >\r\n> > https://www.postgresql.org/message-id/flat/099F69EE-51D3-4214-934A-1F28C0A1A7A7%40amazon.com\r\n>\r\n>\r\n> Thank you for pointing! I wonder why the effect of LSE on Graviton2\r\n> observed by Tsahi Zidenberg is so modest. It's probably because he\r\n> runs the tests with a low number of clients.\r\n\r\nThere are multiple possible reasons why I saw a smaller effect of LSE, but I think an important one was that I\r\nused a 32-core instance rather than a 64-core one. The reason I did so, was that 32-cores gave me better\r\nabsolute results than 64 cores, and I didn't want to feel like I could misguide anyone.\r\n\r\nThe 64-core instance results is a good example for the benefit of LSE. LSE becomes most important in edges,\r\nand with adversarial workloads. If multiple CPUs try to acquire a lock simultaneously - LSE ensures one CPU\r\nwill indeed get the lock (with just one transaction), while LDRX/STRX could have multiple CPUS looping and\r\nno-one acquiring a lock. This is why I believe just looking at \"reasonable\" benchmarks misses out on effects\r\nreal customers will run into.\r\n\r\nHappy to see another arm-optimization thread so quickly :)\r\n\r\nThank you!\r\nTsahi.\r\n\r\n\r\n",
"msg_date": "Tue, 1 Dec 2020 16:41:24 +0000",
"msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Tue, Dec 1, 2020 at 6:19 PM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n>> I would request you guys to re-think it from this perspective to help ensure that PGSQL can scale well on ARM.\n>> s_lock becomes a top-most function and LSE is not a universal solution but CAS surely helps ease the main bottleneck.\n\n> CAS patch isn't proven to be a universal solution as well. We have\n> tested the patch on just a few processors, and Tom has seen the\n> regression [1]. The benchmark used by Tom was artificial, but the\n> results may be relevant for some real-life workload.\n\nYeah. I think that the main conclusion from what we've seen here is\nthat on smaller machines like M1, a standard pgbench benchmark just\nisn't capable of driving PG into serious spinlock contention. (That\nreflects very well on the work various people have done over the years\nto get rid of spinlock contention, because ten or so years ago it was\na huge problem on this size of machine. But evidently, not any more.)\nPer the results others have posted, nowadays you need dozens of cores\nand hundreds of client threads to measure any such issue with pgbench.\n\nSo that is why I experimented with a special test that does nothing\nexcept pound on one spinlock. Sure it's artificial, but if you want\nto see the effects of different spinlock implementations then it's\njust too hard to get any results with pgbench's regular scripts.\n\nAnd that's why it disturbs me that the CAS-spinlock patch showed up\nworse in that environment. The fact that it's not visible in the\nregular pgbench test just means that the effect is too small to\nmeasure in that test. But in a test where we *can* measure an effect,\nit's not looking good.\n\nIt would be interesting to see some results from the same test I did\non other processors. I suspect the results would look a lot different\nfrom mine ... but we won't know unless someone does it. 
Or, if someone\nwants to propose some other test case, let's have a look.\n\n> I'm expressing just my personal opinion, other committers can have\n> different opinions. I don't particularly think this topic is\n> necessarily a non-starter. But I do think that given ambiguity we've\n> observed in the benchmark, much more research is needed to push this\n> topic forward.\n\nYeah. I'm not here to say \"do nothing\". But I think we need results\nfrom more machines and more test cases to convince ourselves whether\nthere's a consistent, worthwhile win from any specific patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Dec 2020 11:49:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 7:57 PM Zidenberg, Tsahi <tsahee@amazon.com> wrote:\n> > On 01/12/2020, 16:59, \"Alexander Korotkov\" <aekorotkov@gmail.com> wrote:\n> > On Tue, Dec 1, 2020 at 1:10 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > > FWIW, here is an earlier discussion on the same (also added the\n> > > proposal author here) :\n>\n> Thanks for looping me in!\n>\n> > >\n> > > https://www.postgresql.org/message-id/flat/099F69EE-51D3-4214-934A-1F28C0A1A7A7%40amazon.com\n> >\n> >\n> > Thank you for pointing! I wonder why the effect of LSE on Graviton2\n> > observed by Tsahi Zidenberg is so modest. It's probably because he\n> > runs the tests with a low number of clients.\n>\n> There are multiple possible reasons why I saw a smaller effect of LSE, but I think an important one was that I\n> used a 32-core instance rather than a 64-core one. The reason I did so, was that 32-cores gave me better\n> absolute results than 64 cores, and I didn't want to feel like I could misguide anyone.\n\nBTW, what number of clients did you use? I can't find it in your message.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 20:05:16 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "> On 01/12/2020, 19:08, \"Alexander Korotkov\" <aekorotkov@gmail.com> wrote:\r\n> BTW, what number of clients did you use? I can't find it in your message.\r\n\r\nSure. Important params seem to be:\r\n\r\nPgbench:\r\nClients: 256 \r\npgbench_jobs : 32 \r\nScale: 1000\r\nfill_factor: 90\r\n\r\nPostgresql:\r\nshared buffers: 31GB\r\nmax_connections: 1024\r\n\r\nThanks!\r\nTsahi.\r\n\r\n",
"msg_date": "Tue, 1 Dec 2020 21:43:53 +0000",
"msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 22:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > On Tue, Dec 1, 2020 at 6:19 PM Krunal Bauskar <krunalbauskar@gmail.com>\n> wrote:\n> >> I would request you guys to re-think it from this perspective to help\n> ensure that PGSQL can scale well on ARM.\n> >> s_lock becomes a top-most function and LSE is not a universal solution\n> but CAS surely helps ease the main bottleneck.\n>\n> > CAS patch isn't proven to be a universal solution as well. We have\n> > tested the patch on just a few processors, and Tom has seen the\n> > regression [1]. The benchmark used by Tom was artificial, but the\n> > results may be relevant for some real-life workload.\n>\n> Yeah. I think that the main conclusion from what we've seen here is\n> that on smaller machines like M1, a standard pgbench benchmark just\n> isn't capable of driving PG into serious spinlock contention. (That\n> reflects very well on the work various people have done over the years\n> to get rid of spinlock contention, because ten or so years ago it was\n> a huge problem on this size of machine. But evidently, not any more.)\n> Per the results others have posted, nowadays you need dozens of cores\n> and hundreds of client threads to measure any such issue with pgbench.\n>\n> So that is why I experimented with a special test that does nothing\n> except pound on one spinlock. Sure it's artificial, but if you want\n> to see the effects of different spinlock implementations then it's\n> just too hard to get any results with pgbench's regular scripts.\n>\n> And that's why it disturbs me that the CAS-spinlock patch showed up\n> worse in that environment. The fact that it's not visible in the\n> regular pgbench test just means that the effect is too small to\n> measure in that test. 
But in a test where we *can* measure an effect,\n> it's not looking good.\n>\n> It would be interesting to see some results from the same test I did\n> on other processors. I suspect the results would look a lot different\n> from mine ... but we won't know unless someone does it. Or, if someone\n> wants to propose some other test case, let's have a look.\n>\n> > I'm expressing just my personal opinion, other committers can have\n> > different opinions. I don't particularly think this topic is\n> > necessarily a non-starter. But I do think that given ambiguity we've\n> > observed in the benchmark, much more research is needed to push this\n> > topic forward.\n>\n> Yeah. I'm not here to say \"do nothing\". But I think we need results\n> from more machines and more test cases to convince ourselves whether\n> there's a consistent, worthwhile win from any specific patch.\n>\n\nI think there is\n*an ambiguity with lse and that has been the*\n*source of some confusion* so let's make another attempt to\nunderstand all the observations and then define the next steps.\n\n-----------------------------------------------------------------\n\n\n*1. CAS patch (applied on the baseline)* - Kunpeng: 10-45% improvement\nobserved [1]\n - Graviton2: 30-50% improvement observed [2]\n - M1: Only select results are available cas continue to maintain a\nmarginal gain but not significant. [3]\n [inline with what we observed with Kunpeng and Graviton2 for select\nresults too].\n\n\n*2. 
Let's ignore CAS for a sec and just think of LSE independently* -\nKunpeng: regression observed\n - Graviton2: gain observed\n - M1: regression observed\n [while lse probably is default explicitly enabling it with +lse causes\nregression on the head itself [4].\n client=2/4: 1816/714 ---- vs ---- 892/610]\n\n There is enough reason not to immediately consider enabling LSE given\nits unable to perform consistently on all hardware.\n-----------------------------------------------------------------\n\nWith those 2 aspects clear let's evaluate what options we have in hand\n\n\n*1. Enable CAS approach* *- What we gain:* pgsql scale on\nKunpeng/Graviton2\n (m1 awaiting read-write result but may marginally scale [[5]: \"but\nthe patched numbers are only about a few percent better\"])\n *- What we lose:* Nothing for now.\n\n\n*2. LSE:* *- What we gain: *Scaled workload with Graviton2\n * - What we lose:* regression on M1 and Kunpeng.\n\nLet's think of both approaches independently.\n\n- Enabling CAS would help us scale on all hardware (Kunpeng/Graviton2/M1)\n- Enabling LSE would help us scale only on some but regress on others.\n [LSE could be considered in the future once it stabilizes and all\nhardware adapts to it]\n\n-------------------------------------------------------------------\n\n*Let me know what do you think about this analysis and any specific\ndirection that we should consider to help move forward.*\n\n-------------------------------------------------------------------\n\nLinks:\n[1]:\nhttps://www.postgresql.org/message-id/attachment/116612/Screenshot%20from%202020-12-01%2017-55-21.png\n[2]: https://www.postgresql.org/message-id/attachment/116521/arm-rw.png\n[3]:\nhttps://www.postgresql.org/message-id/1367116.1606802480%40sss.pgh.pa.us\n[4]:\nhttps://www.postgresql.org/message-id/1158478.1606716507%40sss.pgh.pa.us\n[5]:\nhttps://www.postgresql.org/message-id/51e2f75b-3742-7f28-4438-0425b11cf410%40enterprisedb.com\n\n\n> regards, tom lane\n>\n\n\n-- 
\nRegards,\nKrunal Bauskar\n\n",
"msg_date": "Wed, 2 Dec 2020 09:27:37 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Any updates or further inputs on this.\n\nOn Wed, 2 Dec 2020 at 09:27, Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n\n>\n>\n> On Tue, 1 Dec 2020 at 22:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Alexander Korotkov <aekorotkov@gmail.com> writes:\n>> > On Tue, Dec 1, 2020 at 6:19 PM Krunal Bauskar <krunalbauskar@gmail.com>\n>> wrote:\n>> >> I would request you guys to re-think it from this perspective to help\n>> ensure that PGSQL can scale well on ARM.\n>> >> s_lock becomes a top-most function and LSE is not a universal solution\n>> but CAS surely helps ease the main bottleneck.\n>>\n>> > CAS patch isn't proven to be a universal solution as well. We have\n>> > tested the patch on just a few processors, and Tom has seen the\n>> > regression [1]. The benchmark used by Tom was artificial, but the\n>> > results may be relevant for some real-life workload.\n>>\n>> Yeah. I think that the main conclusion from what we've seen here is\n>> that on smaller machines like M1, a standard pgbench benchmark just\n>> isn't capable of driving PG into serious spinlock contention. (That\n>> reflects very well on the work various people have done over the years\n>> to get rid of spinlock contention, because ten or so years ago it was\n>> a huge problem on this size of machine. But evidently, not any more.)\n>> Per the results others have posted, nowadays you need dozens of cores\n>> and hundreds of client threads to measure any such issue with pgbench.\n>>\n>> So that is why I experimented with a special test that does nothing\n>> except pound on one spinlock. Sure it's artificial, but if you want\n>> to see the effects of different spinlock implementations then it's\n>> just too hard to get any results with pgbench's regular scripts.\n>>\n>> And that's why it disturbs me that the CAS-spinlock patch showed up\n>> worse in that environment. 
The fact that it's not visible in the\n>> regular pgbench test just means that the effect is too small to\n>> measure in that test. But in a test where we *can* measure an effect,\n>> it's not looking good.\n>>\n>> It would be interesting to see some results from the same test I did\n>> on other processors. I suspect the results would look a lot different\n>> from mine ... but we won't know unless someone does it. Or, if someone\n>> wants to propose some other test case, let's have a look.\n>>\n>> > I'm expressing just my personal opinion, other committers can have\n>> > different opinions. I don't particularly think this topic is\n>> > necessarily a non-starter. But I do think that given ambiguity we've\n>> > observed in the benchmark, much more research is needed to push this\n>> > topic forward.\n>>\n>> Yeah. I'm not here to say \"do nothing\". But I think we need results\n>> from more machines and more test cases to convince ourselves whether\n>> there's a consistent, worthwhile win from any specific patch.\n>>\n>\n> I think there is\n> *an ambiguity with lse and that has been the*\n> *source of some confusion* so let's make another attempt to\n> understand all the observations and then define the next steps.\n>\n> -----------------------------------------------------------------\n>\n>\n> *1. CAS patch (applied on the baseline)* - Kunpeng: 10-45% improvement\n> observed [1]\n> - Graviton2: 30-50% improvement observed [2]\n> - M1: Only select results are available cas continue to maintain a\n> marginal gain but not significant. [3]\n> [inline with what we observed with Kunpeng and Graviton2 for select\n> results too].\n>\n>\n> *2. 
Let's ignore CAS for a sec and just think of LSE independently* -\n> Kunpeng: regression observed\n> - Graviton2: gain observed\n> - M1: regression observed\n> [while lse probably is default explicitly enabling it with +lse\n> causes regression on the head itself [4].\n> client=2/4: 1816/714 ---- vs ---- 892/610]\n>\n> There is enough reason not to immediately consider enabling LSE given\n> its unable to perform consistently on all hardware.\n> -----------------------------------------------------------------\n>\n> With those 2 aspects clear let's evaluate what options we have in hand\n>\n>\n> *1. Enable CAS approach* *- What we gain:* pgsql scale on\n> Kunpeng/Graviton2\n> (m1 awaiting read-write result but may marginally scale [[5]: \"but\n> the patched numbers are only about a few percent better\"])\n> *- What we lose:* Nothing for now.\n>\n>\n> *2. LSE:* *- What we gain: *Scaled workload with Graviton2\n> * - What we lose:* regression on M1 and Kunpeng.\n>\n> Let's think of both approaches independently.\n>\n> - Enabling CAS would help us scale on all hardware (Kunpeng/Graviton2/M1)\n> - Enabling LSE would help us scale only on some but regress on others.\n> [LSE could be considered in the future once it stabilizes and all\n> hardware adapts to it]\n>\n> -------------------------------------------------------------------\n>\n> *Let me know what do you think about this analysis and any specific\n> direction that we should consider to help move forward.*\n>\n> -------------------------------------------------------------------\n>\n> Links:\n> [1]:\n> https://www.postgresql.org/message-id/attachment/116612/Screenshot%20from%202020-12-01%2017-55-21.png\n> [2]: https://www.postgresql.org/message-id/attachment/116521/arm-rw.png\n> [3]:\n> https://www.postgresql.org/message-id/1367116.1606802480%40sss.pgh.pa.us\n> [4]:\n> https://www.postgresql.org/message-id/1158478.1606716507%40sss.pgh.pa.us\n> [5]:\n> 
https://www.postgresql.org/message-id/51e2f75b-3742-7f28-4438-0425b11cf410%40enterprisedb.com\n>\n>\n>> regards, tom lane\n>>\n>\n>\n> --\n> Regards,\n> Krunal Bauskar\n>\n\n\n-- \nRegards,\nKrunal Bauskar\n\n",
"msg_date": "Thu, 3 Dec 2020 15:19:41 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> Any updates or further inputs on this.\n\nAs far as LSE goes: my take is that tampering with the\ncompiler/platform's default optimization options requires *very*\nstrong evidence, which we have not got and likely won't get. Users\nwho are building for specific hardware can choose to supply custom\nCFLAGS, of course. But we shouldn't presume to do that for them,\nbecause we don't know what they are building for, or with what.\n\nI'm very willing to consider the CAS spinlock patch, but it still\nfeels like there's not enough evidence to show that it's a universal\nwin. The way to move forward on that is to collect more measurements\non additional ARM-based platforms. And I continue to think that\npgbench is only a very crude tool for testing spinlock performance;\nwe should look at other tests.\n\n From a system structural standpoint, I seriously dislike that lwlock.c\npatch: putting machine-specific variant implementations into that file\nseems like a disaster for maintainability. So it would need to show a\nvery significant gain across a range of hardware before I'd want to\nconsider adopting it ... and it has not shown that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Dec 2020 11:02:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 7:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> From a system structural standpoint, I seriously dislike that lwlock.c\n> patch: putting machine-specific variant implementations into that file\n> seems like a disaster for maintainability. So it would need to show a\n> very significant gain across a range of hardware before I'd want to\n> consider adopting it ... and it has not shown that.\n\nThe current shape of the lwlock patch is experimental. I had quite a\nbeautiful (in my opinion) idea to wrap platform-dependent parts of\nCAS-loops into macros. Then we could provide different low-level\nimplementations of CAS-loops for Power, ARM and rest platforms with\nsingle code for LWLockAttempLock() and others. However, I see that\nmodern ARM tends to efficiently implement LSE. Power doesn't seem to\nbe very popular. So, I'm going to give up with this for now.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 3 Dec 2020 22:03:03 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 6:58 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> 1. CAS patch (applied on the baseline)\n> - Kunpeng: 10-45% improvement observed [1]\n> - Graviton2: 30-50% improvement observed [2]\n\nWhat does lower boundary of improvement mean? Does it mean minimal\nimprovement observed? Obviously not, because there is no improvement\nwith a low number of clients or read-only benchmark.\n\n> - M1: Only select results are available cas continue to maintain a marginal gain but not significant. [3]\n\nRead-only benchmark doesn't involve spinlocks (I've just rechecked\nthis). So, this difference is purely speculative.\n\nAlso, Tom observed the regression [1]. The benchmark is artificial,\nbut it may correspond to some real workload with heavily-loaded\nspinlocks. And that might have an explanation. ldrex/strex\nthemselves don't work as memory barrier (this is why compiler adds\nexplicit memory barrier afterwards). And I bet ldrex unpaired with\nstrex could see an outdated value. On high-contended spinlocks that\nmay cause too pessimistic waits.\n\n> 2. Let's ignore CAS for a sec and just think of LSE independently\n> - Kunpeng: regression observed\n\nYeah, it's sad that there are ARM chips, where making the same action\nin a single instruction is slower than the same actions in multiple\ninstructions.\n\n> - Graviton2: gain observed\n\nHave to say it's 5.5x improvement, which is dramatically more than\nCAS-loop patch can give.\n\n> - M1: regression observed\n> [while lse probably is default explicitly enabling it with +lse causes regression on the head itself [4].\n> client=2/4: 1816/714 ---- vs ---- 892/610]\n\nThis is plain false. LSE was enabled in all the versions tested in M1\n[2]. 
So not LSE instructions or +lse option caused the regression,\nbut lack of other options enabled by default in Apple's clang.\n\n> Let me know what do you think about this analysis and any specific direction that we should consider to help move forward.\n\nIn order to move forward with ARM-optimized spinlocks I would\ninvestigate other options (at least [3]).\n\nRegarding CAS spinlock patch I can propose the following steps to\nclarify the things.\n1. Run Tom's spinlock test on ARM machines other than M1.\n2. Try to construct a pgbench script, which produces heavily-loaded\nspinlock without custom C-function, and also run it across various ARM\nmachines.\n\nLinks\n1. https://www.postgresql.org/message-id/741389.1606530957%40sss.pgh.pa.us\n2. https://www.postgresql.org/message-id/1274781.1606760475%40sss.pgh.pa.us\n3. https://linux-concepts.blogspot.com/2018/05/spinlock-implementation-in-arm.html\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 3 Dec 2020 23:00:39 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 6:58 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> Let me know what do you think about this analysis and any specific direction that we should consider to help move forward.\n\nBTW, it would be also nice to benchmark my lwlock patch on the\nKunpeng. I'm very optimistic about this patch, but it wouldn't be\nfair to completely throw it away. It still might be useful for\nLSE-disabled builds.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 5 Dec 2020 00:25:05 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Sat, 5 Dec 2020 at 02:55, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Wed, Dec 2, 2020 at 6:58 AM Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n> > Let me know what do you think about this analysis and any specific direction that we should consider to help move forward.\n>\n> BTW, it would be also nice to benchmark my lwlock patch on the\n> Kunpeng. I'm very optimistic about this patch, but it wouldn't be\n> fair to completely throw it away. It still might be useful for\n> LSE-disabled builds.\n\nCoincidentally I was also looking at some hotspot locations around\nLWLockAcquire() and LWLockAttempt() for read-only work-loads on both\narm64 and x86, and had narrowed down to the place where\npg_atomic_compare_exchange_u32() is called. So it's likely we are\nworking on the same hotspot area. When I get a chance, I do plan to\nlook at your patch while I myself am trying to see if we can do some\noptimizations. Although, this is unrelated to the optimization of this\nmail thread, so this will need a different mail thread.\n\n\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n",
"msg_date": "Mon, 7 Dec 2020 10:26:39 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Thu, 3 Dec 2020 at 21:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Krunal Bauskar <krunalbauskar@gmail.com> writes:\n> > Any updates or further inputs on this.\n>\n> As far as LSE goes: my take is that tampering with the\n> compiler/platform's default optimization options requires *very*\n> strong evidence, which we have not got and likely won't get. Users\n> who are building for specific hardware can choose to supply custom\n> CFLAGS, of course. But we shouldn't presume to do that for them,\n> because we don't know what they are building for, or with what.\n>\n> I'm very willing to consider the CAS spinlock patch, but it still\n> feels like there's not enough evidence to show that it's a universal\n> win. The way to move forward on that is to collect more measurements\n> on additional ARM-based platforms. And I continue to think that\n> pgbench is only a very crude tool for testing spinlock performance;\n> we should look at other tests.\n>\n\nThanks Tom.\n\nGiven pg-bench's limited options, I decided to try things with sysbench to\nexpose the real contention using the zipfian type (a zipfian pattern causes\npart of the database to get updated, thereby exposing the main contention\npoint).\n\n----------------------------------------------------------------------------\n*Baseline for 256 threads update-index use-case:*\n- 44.24% 174935 postgres postgres [.] s_lock\ntransactions:\n transactions: 5587105 (92988.40 per sec.)\n\n*Patched for 256 threads update-index use-case:*\n 0.02% 80 postgres postgres [.] s_lock\ntransactions:\n transactions: 10288781 (171305.24 per sec.)\n\n*perf diff*\n\n* 0.02% +44.22% postgres [.] s_lock*\n----------------------------------------------------------------------------\n\nAs we see from the above result, s_lock is exposing major contention that\ncould be relaxed using the\nsaid cas patch. 
Performance improvement in range of 80% is observed.\n\nTaking this guideline we decided to run it for all scalability for update\nand non-update use-case.\nCheck the attached graph. Consistent improvement is observed.\n\nI presume this should help re-establish that for major contention cases\nexisting tas approach will always give up.\n\n-------------------------------------------------------------------------------------------\n\nUnfortunately, I don't have access to different ARM arch except for Kunpeng\nor Graviton2 where\nwe have already proved the value of the patch.\n[ref: Apple M1 as per your evaluation patch doesn't show regression for\nselect. Maybe if possible can you try update scenarios too].\n\nDo you know anyone from the community who has access to other ARM arches we\ncan request them to evaluate?\nBut since it is has proven on 2 independent ARM arch I am pretty confident\nit will scale with other ARM arches too.\n\n\n>\n> From a system structural standpoint, I seriously dislike that lwlock.c\n> patch: putting machine-specific variant implementations into that file\n> seems like a disaster for maintainability. So it would need to show a\n> very significant gain across a range of hardware before I'd want to\n> consider adopting it ... and it has not shown that.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Tue, 8 Dec 2020 14:33:59 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "On Tue, 8 Dec 2020 at 14:33, Krunal Bauskar <krunalbauskar@gmail.com> wrote:\n\n>\n>\n> On Thu, 3 Dec 2020 at 21:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Krunal Bauskar <krunalbauskar@gmail.com> writes:\n>> > Any updates or further inputs on this.\n>>\n>> As far as LSE goes: my take is that tampering with the\n>> compiler/platform's default optimization options requires *very*\n>> strong evidence, which we have not got and likely won't get. Users\n>> who are building for specific hardware can choose to supply custom\n>> CFLAGS, of course. But we shouldn't presume to do that for them,\n>> because we don't know what they are building for, or with what.\n>>\n>> I'm very willing to consider the CAS spinlock patch, but it still\n>> feels like there's not enough evidence to show that it's a universal\n>> win. The way to move forward on that is to collect more measurements\n>> on additional ARM-based platforms. And I continue to think that\n>> pgbench is only a very crude tool for testing spinlock performance;\n>> we should look at other tests.\n>>\n>\n> Thanks Tom.\n>\n> Given pg-bench limited option I decided to try things with sysbench to\n> expose\n> the real contention using zipfian type (zipfian pattern causes part of the\n> database\n> to get updated there-by exposing main contention point).\n>\n>\n> ----------------------------------------------------------------------------\n> *Baseline for 256 threads update-index use-case:*\n> - 44.24% 174935 postgres postgres [.] s_lock\n> transactions:\n> transactions: 5587105 (92988.40 per sec.)\n>\n> *Patched for 256 threads update-index use-case:*\n> 0.02% 80 postgres postgres [.] s_lock\n> transactions:\n> transactions: 10288781 (171305.24 per sec.)\n>\n> *perf diff*\n>\n> * 0.02% +44.22% postgres [.] 
s_lock*\n> ----------------------------------------------------------------------------\n>\n> As we see from the above result s_lock is exposing major contention that\n> could be relaxed using the\n> said cas patch. Performance improvement in range of 80% is observed.\n>\n> Taking this guideline we decided to run it for all scalability for update\n> and non-update use-case.\n> Check the attached graph. Consistent improvement is observed.\n>\n> I presume this should help re-establish that for major contention cases\n> existing tas approach will always give up.\n>\n>\n> -------------------------------------------------------------------------------------------\n>\n> Unfortunately, I don't have access to different ARM arch except for\n> Kunpeng or Graviton2 where\n> we have already proved the value of the patch.\n> [ref: Apple M1 as per your evaluation patch doesn't show regression for\n> select. Maybe if possible can you try update scenarios too].\n>\n> Do you know anyone from the community who has access to other ARM arches\n> we can request them to evaluate?\n> But since it is has proven on 2 independent ARM arch I am pretty\n> confident it will scale with other ARM arches too.\n>\n>\n\nAny direction on how we can proceed on this?\n\n* We have tested it with both cloud vendors that provide ARM instances.\n* We have tested it with Apple M1 (partially, at least).\n* Ampere used to provide instances on packet.com, but that is now an\nevaluation-only program.\n\nNo other active cloud provider offers ARM instances.\n\nGiven that our evaluation so far has proven to be positive, can we consider\nit on the basis of the available data, which is quite encouraging, with an\n80% improvement seen for heavy-contention use-cases?\n\n\n\n>\n>> From a system structural standpoint, I seriously dislike that lwlock.c\n>> patch: putting machine-specific variant implementations into that file\n>> seems like a disaster for maintainability. 
So it would need to show a\n>> very significant gain across a range of hardware before I'd want to\n>> consider adopting it ... and it has not shown that.\n>>\n>> regards, tom lane\n>>\n>\n>\n> --\n> Regards,\n> Krunal Bauskar\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Thu, 10 Dec 2020 14:48:12 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
},
{
"msg_contents": "Wondering if we can take this to completion (any idea what more we could\ndo?).\n\nOn Thu, 10 Dec 2020 at 14:48, Krunal Bauskar <krunalbauskar@gmail.com>\nwrote:\n\n>\n> On Tue, 8 Dec 2020 at 14:33, Krunal Bauskar <krunalbauskar@gmail.com>\n> wrote:\n>\n>>\n>>\n>> On Thu, 3 Dec 2020 at 21:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>> Krunal Bauskar <krunalbauskar@gmail.com> writes:\n>>> > Any updates or further inputs on this.\n>>>\n>>> As far as LSE goes: my take is that tampering with the\n>>> compiler/platform's default optimization options requires *very*\n>>> strong evidence, which we have not got and likely won't get. Users\n>>> who are building for specific hardware can choose to supply custom\n>>> CFLAGS, of course. But we shouldn't presume to do that for them,\n>>> because we don't know what they are building for, or with what.\n>>>\n>>> I'm very willing to consider the CAS spinlock patch, but it still\n>>> feels like there's not enough evidence to show that it's a universal\n>>> win. The way to move forward on that is to collect more measurements\n>>> on additional ARM-based platforms. And I continue to think that\n>>> pgbench is only a very crude tool for testing spinlock performance;\n>>> we should look at other tests.\n>>>\n>>\n>> Thanks Tom.\n>>\n>> Given pg-bench limited option I decided to try things with sysbench to\n>> expose\n>> the real contention using zipfian type (zipfian pattern causes part of\n>> the database\n>> to get updated there-by exposing main contention point).\n>>\n>>\n>> ----------------------------------------------------------------------------\n>> *Baseline for 256 threads update-index use-case:*\n>> - 44.24% 174935 postgres postgres [.]\n>> s_lock\n>> transactions:\n>> transactions: 5587105 (92988.40 per sec.)\n>>\n>> *Patched for 256 threads update-index use-case:*\n>> 0.02% 80 postgres postgres [.] 
s_lock\n>> transactions:\n>> transactions: 10288781 (171305.24 per sec.)\n>>\n>> *perf diff*\n>>\n>> * 0.02% +44.22% postgres [.] s_lock*\n>> ----------------------------------------------------------------------------\n>>\n>> As we see from the above result s_lock is exposing major contention that\n>> could be relaxed using the\n>> said cas patch. Performance improvement in range of 80% is observed.\n>>\n>> Taking this guideline we decided to run it for all scalability for update\n>> and non-update use-case.\n>> Check the attached graph. Consistent improvement is observed.\n>>\n>> I presume this should help re-establish that for major contention cases\n>> existing tas approach will always give up.\n>>\n>>\n>> -------------------------------------------------------------------------------------------\n>>\n>> Unfortunately, I don't have access to different ARM arch except for\n>> Kunpeng or Graviton2 where\n>> we have already proved the value of the patch.\n>> [ref: Apple M1 as per your evaluation patch doesn't show regression for\n>> select. 
Maybe if possible can you try update scenarios too].\n>>\n>> Do you know anyone from the community who has access to other ARM arches\n>> we can request them to evaluate?\n>> But since it is has proven on 2 independent ARM arch I am pretty\n>> confident it will scale with other ARM arches too.\n>>\n>>\n>\n> Any direction on how we can proceed on this?\n>\n> * We have tested it with both cloud vendors that provide ARM instances.\n> * We have tested it with Apple M1 (partially at-least)\n> * Ampere use to provide instance on packet.com but now it is an\n> evaluation program only.\n>\n> No other active arm instance offering a cloud provider.\n>\n> Given our evaluation so far has proven to be +ve can we think of\n> considering it on basis of the available\n> data which is quite encouraging with 80% improvement seen for heavy\n> contention use-cases.\n>\n>\n>\n>>\n>>> From a system structural standpoint, I seriously dislike that lwlock.c\n>>> patch: putting machine-specific variant implementations into that file\n>>> seems like a disaster for maintainability. So it would need to show a\n>>> very significant gain across a range of hardware before I'd want to\n>>> consider adopting it ... and it has not shown that.\n>>>\n>>> regards, tom lane\n>>>\n>>\n>>\n>> --\n>> Regards,\n>> Krunal Bauskar\n>>\n>\n>\n> --\n> Regards,\n> Krunal Bauskar\n>\n\n\n-- \nRegards,\nKrunal Bauskar",
"msg_date": "Mon, 14 Dec 2020 18:06:46 +0530",
"msg_from": "Krunal Bauskar <krunalbauskar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving spin-lock implementation on ARM."
}
] |
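The CAS-versus-TAS distinction debated throughout the thread above can be illustrated with a toy model (a sketch only — the class, the write counter, and the iteration count are illustrative, not the actual PostgreSQL patch): a plain exchange-based TAS performs an exclusive write on every contended attempt, while a CAS-based TAS stays read-only until the lock word looks free.

```python
class ModelSpinlockWord:
    """Toy model of a spinlock word plus a counter of exclusive (write)
    accesses to its cache line. Illustrates why a CAS-based TAS can
    behave better under contention than a plain atomic-exchange TAS:
    failed CAS attempts stay read-only."""

    def __init__(self, held=False):
        self.word = 1 if held else 0
        self.writes = 0  # hypothetical count of exclusive cache-line accesses

    def tas_exchange(self):
        """Plain TAS: unconditionally store 1 and return the old value."""
        old, self.word = self.word, 1
        self.writes += 1           # every attempt dirties the line
        return old                 # 0 means the lock was acquired

    def tas_cas(self):
        """CAS-based TAS: store 1 only if the word currently reads 0."""
        if self.word == 0:
            self.word = 1
            self.writes += 1       # only a successful swap writes
            return 0
        return 1                   # failed attempt: no exclusive access

def contended_attempts(tas, n=1000):
    """Spin n times against a lock that another CPU already holds."""
    for _ in range(n):
        tas()

xchg = ModelSpinlockWord(held=True)
contended_attempts(xchg.tas_exchange)

cas = ModelSpinlockWord(held=True)
contended_attempts(cas.tas_cas)

print(xchg.writes, cas.writes)  # 1000 0
```

In the real patch the swap is a single atomic compare-and-swap instruction rather than Python statements; the model only counts hypothetical exclusive accesses to show why failed CAS attempts are cheaper on a contended cache line.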
[
{
"msg_contents": "Hello,\n\nCollation versioning has been committed, but there are still at least 2 cases\nthat aren't perfectly handled:\n\n- AMs which don't rely on stable collation ordering (hash, bloom...)\n- expressions which don't rely on a stable collation ordering (e.g. md5())\n\nHandling expressions will probably require a lot of work, so I'd like to start\nwith AM handling.\n\nMy initial idea was to add a new field in IndexAmRoutine for that (see patch\n[1] that was present in v30 [2], and the rest of the patches that used it).\nDoes anyone have any suggestion on a better way to handle that?\n\nNote also that in the patch I named the field \"amnostablecollorder\", which is a\nbit weird but required so that a custom access method won't be assumed to not\nrely on a stable collation ordering if for some reason the author forgot to\ncorrectly set it up. If we end up with a new field in IndexAmRoutine, maybe we\ncould also add an API version field that could be bumped in case of changes so\nthat authors get an immediate error? This way we wouldn't have to be worried\nof a default value anymore.\n\n[1] https://www.postgresql.org/message-id/attachment/114354/v30-0001-Add-a-new-amnostablecollorder-flag-in-IndexAmRou.patch\n[2] https://www.postgresql.org/message-id/20200924094854.abjmpfqixq6xd4o5%40nol\n\n\n",
"msg_date": "Thu, 26 Nov 2020 15:09:27 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Have collation versioning to ignore hash and similar AM"
}
] |
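The "API version field" idea floated in the message above can be sketched in miniature (all names here, including the dict stand-in for IndexAmRoutine and the version constant, are hypothetical — the real structure is a C struct returned by an AM handler): registration validates a version stamp, so an access method compiled against an older definition fails loudly instead of silently getting a defaulted amnostablecollorder flag.

```python
AM_ROUTINE_API_VERSION = 2  # hypothetical: bumped whenever the struct gains fields

def make_am_routine(api_version, amnostablecollorder=False):
    # Hypothetical stand-in for filling in an IndexAmRoutine.
    return {
        "api_version": api_version,
        "amnostablecollorder": amnostablecollorder,
    }

def register_am(routine):
    """Reject routines built against an older struct definition, so a
    forgotten new flag is an immediate error rather than a silent
    default of 'relies on a stable collation ordering'."""
    return routine["api_version"] == AM_ROUTINE_API_VERSION

# A hash-like AM whose ordering does not depend on collation:
hash_like = make_am_routine(AM_ROUTINE_API_VERSION, amnostablecollorder=True)
stale = make_am_routine(1)  # built against the old definition

print(register_am(hash_like), register_am(stale))  # True False
```

This is only a model of the versioning pattern under discussion; whether PostgreSQL adopts such a field for IndexAmRoutine is exactly the open question of the thread.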
[
{
"msg_contents": "Hello\n\n\nThe attached patch is intended to prevent a scenario in which\narchive recovery hits WALs that come from wal_level=minimal\nand the server continues to work, which was discussed in the thread at [1].\nThe motivation is to prevent the user from ending up either with a replica\nthat could be missing data or with a server that misses data in targeted recovery mode.\n\nAbout how to modify this, we reached a consensus in the thread:\nchange the ereport level from WARNING to FATAL in CheckRequiredParameterValues().\n\nIn order to test this fix, what I did is\n1 - get a base backup while wal_level is replica\n2 - stop the server and change the wal_level from replica to minimal\n3 - restart the server (to generate XLOG_PARAMETER_CHANGE)\n4 - stop the server and set the wal_level back to replica\n5 - start the server again\n6 - execute archive recoveries in both cases\n\t(1) by editing the postgresql.conf and\n\ttouching recovery.signal in the base backup from the 1st step\n\t(2) by making a replica with standby.signal\n* While wal_level was replica, I enabled archive_mode in this test.\n\nFirst of all, I confirmed the server started up without this patch.\nAfter applying this safeguard patch, I checked that\nthe server can no longer start up in the scenario case.\nI checked the log and got the result below with this patch.\n\n2020-11-26 06:49:46.003 UTC [19715] FATAL: WAL was generated with wal_level=minimal, data may be missing\n2020-11-26 06:49:46.003 UTC [19715] HINT: This happens if you temporarily set wal_level=minimal without taking a new base backup.\n\nLastly, this should be backpatched.\nAny comments?\n\n[1]\nhttps://www.postgresql.org/message-id/TYAPR01MB29901EBE5A3ACCE55BA99186FE320%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n\n\nBest,\n\tTakamichi Osumi",
"msg_date": "Thu, 26 Nov 2020 07:18:39 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Stronger safeguard for archive recovery not to miss data"
},
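The behavior change described above — escalating the wal_level=minimal check in CheckRequiredParameterValues() from WARNING to FATAL — can be modeled with a small sketch (the function, enum, and `patched` switch are illustrative stand-ins, not the actual C code):

```python
from enum import IntEnum

class WalLevel(IntEnum):
    # Mirrors the ordering of wal_level settings; the model only
    # needs "minimal sorts below replica".
    MINIMAL = 0
    REPLICA = 1
    LOGICAL = 2

def check_required_parameters(source_wal_level, archive_recovery_requested,
                              patched=True):
    """Toy model of the decision: recovery sees the wal_level recorded
    in XLOG_PARAMETER_CHANGE. Unpatched, WAL from wal_level=minimal
    only produces a WARNING and recovery proceeds; the patch escalates
    it to FATAL so the server refuses to start."""
    if archive_recovery_requested and source_wal_level < WalLevel.REPLICA:
        if patched:
            raise RuntimeError(
                "FATAL: WAL was generated with wal_level=minimal, "
                "data may be missing")
        return "WARNING"  # pre-patch behavior: warn but continue
    return "OK"
```

Crash recovery without a recovery/standby signal corresponds to `archive_recovery_requested=False`, which the patch leaves untouched.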
{
"msg_contents": "At Thu, 26 Nov 2020 07:18:39 +0000, \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> wrote in \n> Hello\n> \n> \n> The attached patch is intended to prevent a scenario that\n> archive recovery hits WALs which come from wal_level=minimal\n> and the server continues to work, which was discussed in the thread of [1].\n> The motivation is to protect that user ends up with both getting replica\n> that could miss data and getting the server to miss data in targeted recovery mode.\n> \n> About how to modify this, we reached the consensus in the thread.\n> It is by changing the ereport's level from WARNING to FATAL in CheckRequiredParameterValues().\n> \n> In order to test this fix, what I did is\n> 1 - get a base backup during wal_level is replica\n> 2 - stop the server and change the wal_level from replica to minimal\n> 3 - restart the server(to generate XLOG_PARAMETER_CHANGE)\n> 4 - stop the server and make the wal_level back to replica\n> 5 - start the server again\n> 6 - execute archive recoveries in both cases\n> \t(1) by editing the postgresql.conf and\n> \ttouching recovery.signal in the base backup from 1th step\n> \t(2) by making a replica with standby.signal\n> * During wal_level is replica, I enabled archive_mode in this test.\n> \n> First of all, I confirmed the server started up without this patch.\n> After applying this safeguard patch, I checked that\n> the server cannot start up any more in the scenario case.\n> I checked the log and got the result below with this patch.\n> \n> 2020-11-26 06:49:46.003 UTC [19715] FATAL: WAL was generated with wal_level=minimal, data may be missing\n> 2020-11-26 06:49:46.003 UTC [19715] HINT: This happens if you temporarily set wal_level=minimal without taking a new base backup.\n> \n> Lastly, this should be backpatched.\n> Any comments ?\n\nPerhaps we need the TAP test that conducts the above steps.\n\n> [1]\n> 
https://www.postgresql.org/message-id/TYAPR01MB29901EBE5A3ACCE55BA99186FE320%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 26 Nov 2020 16:28:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hello\n\n> First of all, I confirmed the server started up without this patch.\n\nIt is possible only with manually configured hot_standby = off, right?\nWe have ERROR in hot standby mode just below in CheckRequiredParameterValues.\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 26 Nov 2020 10:43:50 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hello, Horiguchi-San\n\n\nOn Thursday, Nov 26, 2020 4:29 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> At Thu, 26 Nov 2020 07:18:39 +0000, \"osumi.takamichi@fujitsu.com\"\n> <osumi.takamichi@fujitsu.com> wrote in\n> > In order to test this fix, what I did is\n> > 1 - get a base backup during wal_level is replica\n> > 2 - stop the server and change the wal_level from replica to minimal\n> > 3 - restart the server(to generate XLOG_PARAMETER_CHANGE)\n> > 4 - stop the server and make the wal_level back to replica\n> > 5 - start the server again\n> > 6 - execute archive recoveries in both cases\n> > \t(1) by editing the postgresql.conf and\n> > \ttouching recovery.signal in the base backup from 1th step\n> > \t(2) by making a replica with standby.signal\n> > * During wal_level is replica, I enabled archive_mode in this test.\n> >\n> Perhaps we need the TAP test that conducts the above steps.\nIt really makes sense that it's better to show the procedures\nabout how to reproduce the flow.\nThanks for your advice.\n\nBest,\n\tTakamichi Osumi\n\n\n",
"msg_date": "Thu, 26 Nov 2020 07:51:32 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hello, Sergei\n\n> It is possible only with manually configured hot_standby = off, right?\n> We have ERROR in hot standby mode just below in\n> CheckRequiredParameterValues.\nYes, you are right. I turned off the hot_standby in the test in the previous e-mail.\nI'm sorry that I missed writing this tip of information.\n\nBest,\n\tTakamichi Osumi\n\n\n",
"msg_date": "Thu, 26 Nov 2020 07:59:21 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hi\n\n\nOn Thursday, November 26, 2020 4:29 PM\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Thu, 26 Nov 2020 07:18:39 +0000, \"osumi.takamichi@fujitsu.com\"\n> <osumi.takamichi@fujitsu.com> wrote in\n> > The attached patch is intended to prevent a scenario that archive\n> > recovery hits WALs which come from wal_level=minimal and the server\n> > continues to work, which was discussed in the thread of [1].\n> > The motivation is to protect that user ends up with both getting\n> > replica that could miss data and getting the server to miss data in targeted\n> recovery mode.\n> >\n> > About how to modify this, we reached the consensus in the thread.\n> > It is by changing the ereport's level from WARNING to FATAL in\n> CheckRequiredParameterValues().\n> >\n> > In order to test this fix, what I did is\n> > 1 - get a base backup during wal_level is replica\n> > 2 - stop the server and change the wal_level from replica to minimal\n> > 3 - restart the server(to generate XLOG_PARAMETER_CHANGE)\n> > 4 - stop the server and make the wal_level back to replica\n> > 5 - start the server again\n> > 6 - execute archive recoveries in both cases\n> > \t(1) by editing the postgresql.conf and\n> > \ttouching recovery.signal in the base backup from 1th step\n> > \t(2) by making a replica with standby.signal\n> > * During wal_level is replica, I enabled archive_mode in this test.\n> >\n> > First of all, I confirmed the server started up without this patch.\n> > After applying this safeguard patch, I checked that the server cannot\n> > start up any more in the scenario case.\n> > I checked the log and got the result below with this patch.\n> >\n> > 2020-11-26 06:49:46.003 UTC [19715] FATAL: WAL was generated with\n> > wal_level=minimal, data may be missing\n> > 2020-11-26 06:49:46.003 UTC [19715] HINT: This happens if you\n> temporarily set wal_level=minimal without taking a new base backup.\n> >\n> > Lastly, this should be backpatched.\n> > Any comments 
?\n> \n> Perhaps we need the TAP test that conducts the above steps.\nI added the TAP tests to reproduce and share the result,\nusing the case of 6-(1) described above.\nHere, I created a new file for it because the purposes of other files in\nsrc/recovery didn't match the purpose of my TAP tests perfectly.\nIf you are dubious about this idea, please have a look at the comments\nin each file.\n\nWhen the attached patch is applied,\nmy TAP tests are executed like other ones like below.\n\nt/018_wal_optimize.pl ................ ok\nt/019_replslot_limit.pl .............. ok\nt/020_archive_status.pl .............. ok\nt/021_row_visibility.pl .............. ok\nt/022_archive_recovery.pl ............ ok\nAll tests successful.\n\nAlso, I confirmed that there's no regression by make check-world.\nAny comments ?\n\nBest,\n\tTakamichi Osumi",
"msg_date": "Tue, 8 Dec 2020 03:08:20 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Tue, 2020-12-08 at 03:08 +0000, osumi.takamichi@fujitsu.com wrote:\n> On Thursday, November 26, 2020 4:29 PM\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 26 Nov 2020 07:18:39 +0000, \"osumi.takamichi@fujitsu.com\"\n> > <osumi.takamichi@fujitsu.com> wrote in\n> > > The attached patch is intended to prevent a scenario that archive\n> > > recovery hits WALs which come from wal_level=minimal and the server\n> > > continues to work, which was discussed in the thread of [1].\n> > \n> > Perhaps we need the TAP test that conducts the above steps.\n>\n> I added the TAP tests to reproduce and share the result,\n> using the case of 6-(1) described above.\n> Here, I created a new file for it because the purposes of other files in\n> src/recovery didn't match the purpose of my TAP tests perfectly.\n> If you are dubious about this idea, please have a look at the comments\n> in each file.\n> \n> When the attached patch is applied,\n> my TAP tests are executed like other ones like below.\n> \n> t/018_wal_optimize.pl ................ ok\n> t/019_replslot_limit.pl .............. ok\n> t/020_archive_status.pl .............. ok\n> t/021_row_visibility.pl .............. ok\n> t/022_archive_recovery.pl ............ 
ok\n> All tests successful.\n> \n> Also, I confirmed that there's no regression by make check-world.\n> Any comments ?\n\nThe patch applies and passes regression tests, as well as the new TAP test.\n\nI think this should be backpatched, since it fixes a bug.\n\nI am not quite happy with the message:\n\nFATAL: WAL was generated with wal_level=minimal, data may be missing\nHINT: This happens if you temporarily set wal_level=minimal without taking a new base backup.\n\nThis sounds too harmless to me and doesn't give the user a clue\nwhat would be the best way to proceed.\n\nSuggestion:\n\nFATAL: WAL was generated with wal_level=minimal, cannot continue recovering\nDETAIL: This happens if you temporarily set wal_level=minimal on the primary server.\nHINT: Create a new standby from a new base backup after setting wal_level=replica.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 14 Jan 2021 16:55:56 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hi, Laurenz\r\n\r\n\r\nOn Friday, January 15, 2021 12:56 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\r\n> On Tue, 2020-12-08 at 03:08 +0000, osumi.takamichi@fujitsu.com wrote:\r\n> > On Thursday, November 26, 2020 4:29 PM Kyotaro Horiguchi\r\n> > <horikyota.ntt@gmail.com> wrote:\r\n> > > At Thu, 26 Nov 2020 07:18:39 +0000, \"osumi.takamichi@fujitsu.com\"\r\n> > > <osumi.takamichi@fujitsu.com> wrote in\r\n> > > > The attached patch is intended to prevent a scenario that archive\r\n> > > > recovery hits WALs which come from wal_level=minimal and the\r\n> > > > server continues to work, which was discussed in the thread of [1].\r\n> > >\r\n> > > Perhaps we need the TAP test that conducts the above steps.\r\n> >\r\n> > I added the TAP tests to reproduce and share the result, using the\r\n> > case of 6-(1) described above.\r\n> > Here, I created a new file for it because the purposes of other files\r\n> > in src/recovery didn't match the purpose of my TAP tests perfectly.\r\n> > If you are dubious about this idea, please have a look at the comments\r\n> > in each file.\r\n> >\r\n> > When the attached patch is applied,\r\n> > my TAP tests are executed like other ones like below.\r\n> >\r\n> > t/018_wal_optimize.pl ................ ok t/019_replslot_limit.pl\r\n> > .............. ok t/020_archive_status.pl .............. ok\r\n> > t/021_row_visibility.pl .............. ok t/022_archive_recovery.pl\r\n> > ............ 
ok All tests successful.\r\n> >\r\n> > Also, I confirmed that there's no regression by make check-world.\r\n> > Any comments ?\r\n> \r\n> The patch applies and passes regression tests, as well as the new TAP test.\r\nThank you for checking.\r\n\r\n> I think this should be backpatched, since it fixes a bug.\r\nAgreed.\r\n\r\n> I am not quite happy with the message:\r\n> \r\n> FATAL: WAL was generated with wal_level=minimal, data may be missing\r\n> HINT: This happens if you temporarily set wal_level=minimal without taking a\r\n> new base backup.\r\n> \r\n> This sounds too harmless to me and doesn't give the user a clue what would be\r\n> the best way to proceed.\r\n> \r\n> Suggestion:\r\n> \r\n> FATAL: WAL was generated with wal_level=minimal, cannot continue\r\n> recovering\r\nAdopted.\r\n\r\n> DETAIL: This happens if you temporarily set wal_level=minimal on the primary\r\n> server.\r\n> HINT: Create a new standby from a new base backup after setting\r\n> wal_level=replica.\r\nThanks for your suggestion.\r\nI noticed that this message should cover both archive recovery modes,\r\nwhich means in recovery mode and standby mode. Then, I combined your\r\nsuggestion above with this point of view. Have a look at the updated patch.\r\nI also enriched the new tap tests to show this perspective.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 18 Jan 2021 07:34:44 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Mon, 2021-01-18 at 07:34 +0000, osumi.takamichi@fujitsu.com wrote:\n> I noticed that this message should cover both archive recovery modes,\n> which means in recovery mode and standby mode. Then, I combined your\n> suggestion above with this point of view. Have a look at the updated patch.\n> I also enriched the new tap tests to show this perspective.\n\nLooks good, thanks.\n\nI'll mark this patch as \"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 19 Jan 2021 17:05:35 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/01/20 1:05, Laurenz Albe wrote:\n> On Mon, 2021-01-18 at 07:34 +0000, osumi.takamichi@fujitsu.com wrote:\n>> I noticed that this message should cover both archive recovery modes,\n>> which means in recovery mode and standby mode. Then, I combined your\n>> suggestion above with this point of view. Have a look at the updated patch.\n>> I also enriched the new tap tests to show this perspective.\n> \n> Looks good, thanks.\n> \n> I'll mark this patch as \"ready for committer\".\n\n+\t\t\t\t errhint(\"Run recovery again from a new base backup taken after setting wal_level higher than minimal\")));\n\nIsn't it impossible to do this in normal archive recovery case? In that case,\nsince the server crashed and the database got corrupted, probably\nwe cannot take a new base backup.\n\nOriginally even when users accidentally set wal_level to minimal, they could\nstart the server from the backup by disabling hot_standby and salvage the data.\nBut with the patch, we lost the way to do that. Right? I was wondering that\nWARNING was used intentionally there for that case.\n\n\n\t\tif (ControlFile->wal_level < WAL_LEVEL_REPLICA)\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errmsg(\"hot standby is not possible because wal_level was not set to \\\"replica\\\" or higher on the primary server\"),\n\t\t\t\t\t errhint(\"Either set wal_level to \\\"replica\\\" on the primary, or turn off hot_standby here.\")));\n\nWith the patch, we never reach the above code?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 20 Jan 2021 13:10:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Wed, 2021-01-20 at 13:10 +0900, Fujii Masao wrote:\n> + errhint(\"Run recovery again from a new base backup taken after setting wal_level higher than minimal\")));\n> \n> Isn't it impossible to do this in normal archive recovery case? In that case,\n> since the server crashed and the database got corrupted, probably\n> we cannot take a new base backup.\n> \n> Originally even when users accidentally set wal_level to minimal, they could\n> start the server from the backup by disabling hot_standby and salvage the data.\n> But with the patch, we lost the way to do that. Right? I was wondering that\n> WARNING was used intentionally there for that case.\n\nI would argue that if you set your \"wal_level\" to minimal, do some work,\nreset it to \"replica\" and recover past that, two things could happen:\n\n1. Since \"archive_mode\" was off, you are missing some WAL segments and\n cannot recover past that point anyway.\n\n2. You are missing some relevant WAL records, and your recovered\n database is corrupted. This is very likely, because you probably\n switched to \"minimal\" with the intention to generate less WAL.\n\nNow I see your point that a corrupted database may be better than no\ndatabase at all, but wouldn't you agree that a warning in the log\n(that nobody reads) is too little information?\n\nNormally we don't take such a relaxed attitude towards data corruption.\n\nPerhaps there could be a GUC \"recovery_allow_data_corruption\" to\noverride this check, but I'd say that a warning is too little.\n\n> if (ControlFile->wal_level < WAL_LEVEL_REPLICA)\n> ereport(ERROR,\n> (errmsg(\"hot standby is not possible because wal_level was not set to \\\"replica\\\" or higher on the primary server\"),\n> errhint(\"Either set wal_level to \\\"replica\\\" on the primary, or turn off hot_standby here.\")));\n> \n> With the patch, we never reach the above code?\n\nRight, that would have to go. I didn't notice that.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 20 Jan 2021 13:02:51 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hi\r\n\r\n\r\nOn Wed, Jan 20, 2021 9:03 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\r\n> \r\n> On Wed, 2021-01-20 at 13:10 +0900, Fujii Masao wrote:\r\n> > + errhint(\"Run recovery again from a\r\n> > + new base backup taken after setting wal_level higher than\r\n> > + minimal\")));\r\n> >\r\n> > Isn't it impossible to do this in normal archive recovery case? In\r\n> > that case, since the server crashed and the database got corrupted,\r\n> > probably we cannot take a new base backup.\r\n> >\r\n> > Originally even when users accidentally set wal_level to minimal, they\r\n> > could start the server from the backup by disabling hot_standby and salvage\r\n> the data.\r\n> > But with the patch, we lost the way to do that. Right? I was wondering\r\n> > that WARNING was used intentionally there for that case.\r\nOK. I understand that this WARNING is necessary to give user a chance\r\nto start up the server again and salvage his/her data,\r\nwhen hot_standby=off and the server goes through a period of wal_level=minimal\r\nduring an archive recovery.\r\n\r\n\r\n> I would argue that if you set your \"wal_level\" to minimal, do some work, reset it\r\n> to \"replica\" and recover past that, two things could happen:\r\n> \r\n> 1. Since \"archive_mode\" was off, you are missing some WAL segments and\r\n> cannot recover past that point anyway.\r\n> \r\n> 2. You are missing some relevant WAL records, and your recovered\r\n> database is corrupted. 
This is very likely, because you probably\r\n> switched to \"minimal\" with the intention to generate less WAL.\r\n> \r\n> Now I see your point that a corrupted database may be better than no database\r\n> at all, but wouldn't you agree that a warning in the log (that nobody reads) is\r\n> too little information?\r\n> \r\n> Normally we don't take such a relaxed attitude towards data corruption.\r\nYeah, we thought we needed stronger protection for that reason.\r\n\r\n\r\n> Perhaps there could be a GUC \"recovery_allow_data_corruption\" to override this\r\n> check, but I'd say that a warning is too little.\r\n> \r\n> > if (ControlFile->wal_level < WAL_LEVEL_REPLICA)\r\n> > ereport(ERROR,\r\n> > (errmsg(\"hot standby is not\r\n> possible because wal_level was not set to \\\"replica\\\" or higher on the primary\r\n> server\"),\r\n> > errhint(\"Either set wal_level\r\n> > to \\\"replica\\\" on the primary, or turn off hot_standby here.\")));\r\n> >\r\n> > With the patch, we never reach the above code?\r\n> \r\n> Right, that would have to go. I didn't notice that.\r\nAdding a condition to check if \"recovery_allow_data_corruption\" is 'on' around the end of\r\nCheckRequiredParameterValues() sounds safer for me too, although\r\nimplementing a new GUC parameter sounds bigger than what I expected at first.\r\nThe default of the value should be 'off' to protect users from getting the corrupted server.\r\nDoes everyone agree with this direction ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n",
"msg_date": "Thu, 21 Jan 2021 11:49:07 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Thu, 2021-01-21 at 11:49 +0000, osumi.takamichi@fujitsu.com wrote:\n> Adding a condition to check if \"recovery_allow_data_corruption\" is 'on' around the end of\n> CheckRequiredParameterValues() sounds safer for me too, although\n> implementing a new GUC parameter sounds bigger than what I expected at first.\n> The default of the value should be 'off' to protect users from getting the corrupted server.\n> Does everyone agree with this direction ?\n\nI'd say that adding such a GUC is material for another patch, if we want it at all.\n\nI think it is very unlikely that people will switch from \"wal_level=replica\" to\n\"minimal\" and back very soon afterwards and also try to recover past such\na switch, which probably explains why nobody has complained about data corruption\ngenerated that way. To get the server to start with \"wal_level=minimal\", you must\nset \"archive_mode=off\" and \"max_wal_senders=0\", and few people will do that and\nstill expect recovery to work.\n\nMy vote is that we should not have a GUC for such an unlikely event, and that\nstopping recovery is good enough.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 21 Jan 2021 13:51:27 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hi, Laurenz\r\n\r\n\r\nOn Thursday, January 21, 2021 9:51 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\r\n> On Thu, 2021-01-21 at 11:49 +0000, osumi.takamichi@fujitsu.com wrote:\r\n> > Adding a condition to check if \"recovery_allow_data_corruption\" is\r\n> > 'on' around the end of\r\n> > CheckRequiredParameterValues() sounds safer for me too, although\r\n> > implementing a new GUC parameter sounds bigger than what I expected at\r\n> first.\r\n> > The default of the value should be 'off' to protect users from getting the\r\n> corrupted server.\r\n> > Does everyone agree with this direction ?\r\n> \r\n> I'd say that adding such a GUC is material for another patch, if we want it at all.\r\nOK. You meant another different patch.\r\n\r\n> I think it is very unlikely that people will switch from \"wal_level=replica\" to\r\n> \"minimal\" and back very soon afterwards and also try to recover past such a\r\n> switch, which probably explains why nobody has complained about data\r\n> corruption generated that way. To get the server to start with\r\n> \"wal_level=minimal\", you must set \"archive_mode=off\" and\r\n> \"max_wal_senders=0\", and few people will do that and still expect recovery to\r\n> work.\r\nYeah, the possibility is low of course.\r\n\r\n> My vote is that we should not have a GUC for such an unlikely event, and that\r\n> stopping recovery is good enough.\r\nOK. IIUC, my current patch for this fix doesn't need to be changed or withdrawn.\r\nThank you for your explanation.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n",
"msg_date": "Thu, 21 Jan 2021 13:09:26 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Thu, 2021-01-21 at 13:09 +0000, osumi.takamichi@fujitsu.com wrote:\n> > My vote is that we should not have a GUC for such an unlikely event, and that\n> > stopping recovery is good enough.\n> \n> OK. IIUC, my current patch for this fix doesn't need to be changed or withdrawn.\n> Thank you for your explanation.\n\nWell, that's just my opinion.\nFujii Masao seemed to disagree with the patch, and his voice carries weight.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 21 Jan 2021 15:30:12 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Thu, 2021-01-21 at 15:30 +0100, I wrote:\n> On Thu, 2021-01-21 at 13:09 +0000, osumi.takamichi@fujitsu.com wrote:\n> \n> > > My vote is that we should not have a GUC for such an unlikely event, and that\n> > > stopping recovery is good enough.\n> > OK. IIUC, my current patch for this fix doesn't need to be changed or withdrawn.\n> > Thank you for your explanation.\n> \n> Well, that's just my opinion.\n> \n> Fujii Masao seemed to disagree with the patch, and his voice carries weight.\n\nI think you should post another patch where the second, now superfluous,\nerror message is removed.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Sun, 24 Jan 2021 21:12:47 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hi\r\n\r\n\r\nOn Monday, January 25, 2021 5:13 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\r\n> On Thu, 2021-01-21 at 15:30 +0100, I wrote:\r\n> > On Thu, 2021-01-21 at 13:09 +0000, osumi.takamichi@fujitsu.com wrote:\r\n> >\r\n> > > > My vote is that we should not have a GUC for such an unlikely\r\n> > > > event, and that stopping recovery is good enough.\r\n> > > OK. IIUC, my current patch for this fix doesn't need to be changed or\r\n> withdrawn.\r\n> > > Thank you for your explanation.\r\n> >\r\n> > Well, that's just my opinion.\r\n> >\r\n> > Fujii Masao seemed to disagree with the patch, and his voice carries weight.\r\n> \r\n> I think you should post another patch where the second, now superfluous, error\r\n> message is removed.\r\nUpdated. This patch showed no failure during regression tests\r\nand has been aligned by pgindent.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 25 Jan 2021 08:19:54 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Mon, 2021-01-25 at 08:19 +0000, osumi.takamichi@fujitsu.com wrote:\n> > I think you should post another patch where the second, now superfluous, error\n> > message is removed.\n>\n> Updated. This patch showed no failure during regression tests\n> and has been aligned by pgindent.\n\nLooks good to me.\nI'll set it to \"ready for committer\" again.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 25 Jan 2021 09:55:24 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On 1/25/21 3:55 AM, Laurenz Albe wrote:\n> On Mon, 2021-01-25 at 08:19 +0000, osumi.takamichi@fujitsu.com wrote:\n>>> I think you should pst another patch where the second, now superfluous, error\n>>> message is removed.\n>>\n>> Updated. This patch showed no failure during regression tests\n>> and has been aligned by pgindent.\n> \n> Looks good to me.\n> I'll set it to \"ready for committer\" again.\n\nFujii, does the new patch in [1] address your concerns?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/OSBPR01MB48887EFFCA39FA9B1DBAFB0FEDBD0%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n\n\n",
"msg_date": "Thu, 25 Mar 2021 10:21:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/03/25 23:21, David Steele wrote:\n> On 1/25/21 3:55 AM, Laurenz Albe wrote:\n>> On Mon, 2021-01-25 at 08:19 +0000, osumi.takamichi@fujitsu.com wrote:\n>>>> I think you should post another patch where the second, now superfluous, error\n>>>> message is removed.\n>>>\n>>> Updated. This patch showed no failure during regression tests\n>>> and has been aligned by pgindent.\n>>\n>> Looks good to me.\n>> I'll set it to \"ready for committer\" again.\n> \n> Fujii, does the new patch in [1] address your concerns?\n\nNo. I'm still not sure if this patch is a good idea... I understand\nwhy this safeguard is necessary. OTOH I'm afraid it increases\na bit the risk that users get an unstartable database, i.e., lose the whole database.\nBut maybe I'm concerned about a rare case and my opinion is a minority one.\nSo I'd like to hear more opinions about this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 26 Mar 2021 10:23:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On 3/25/21 9:23 PM, Fujii Masao wrote:\n> \n> On 2021/03/25 23:21, David Steele wrote:\n>> On 1/25/21 3:55 AM, Laurenz Albe wrote:\n>>> On Mon, 2021-01-25 at 08:19 +0000, osumi.takamichi@fujitsu.com wrote:\n>>>>> I think you should pst another patch where the second, now \n>>>>> superfluous, error\n>>>>> message is removed.\n>>>>\n>>>> Updated. This patch showed no failure during regression tests\n>>>> and has been aligned by pgindent.\n>>>\n>>> Looks good to me.\n>>> I'll set it to \"ready for committer\" again.\n>>\n>> Fujii, does the new patch in [1] address your concerns?\n> \n> No. I'm still not sure if this patch is good idea... I understand\n> why this safeguard is necessary. OTOH I'm afraid it increases\n> a bit the risk that users get unstartable database, i.e., lose whole \n> database.\n> But maybe I'm concerned about rare case and my opinion is minority one.\n> So I'd like to hear more opinions about this patch.\n\nAfter reviewing the patch I am +1. I think allowing corruption in \nrecovery by default is not a good idea. There is currently a warning but \nI don't think that is nearly strong enough and is easily missed.\n\nAlso, \"data may be missing\" makes this sound like an acceptable \nsituation. What is really happening is corruption is being introduced \nthat may make the cluster unusable or at the very least lead to errors \nduring normal operation.\n\nIf we want to allow recovery to play past this point I think it would \nmake more sense to have a GUC (like ignore_invalid_pages [1]) that \nallows recovery to proceed and emits a warning instead of fatal.\n\nLooking the patch, I see a few things:\n\n1) Typo in the tests:\n\nThis protection shold apply to recovery mode\n\nshould be:\n\nThis protection should apply to recovery mode\n\n2) I don't think it should be the job of this patch to restructure the \nif conditions, even if it is more efficient. It just obscures the \npurpose of the patch. 
So, I would revert all the changes in xlog.c \nexcept changing the warning to an error:\n\n-\t\tereport(WARNING,\n-\t\t\t\t(errmsg(\"WAL was generated with wal_level=minimal, data may be \nmissing\"),\n-\t\t\t\t errhint(\"This happens if you temporarily set wal_level=minimal \nwithout taking a new base backup.\")));\n+\t\t\tereport(FATAL,\n+\t\t\t\t\t(errmsg(\"WAL was generated with wal_level=minimal, cannot continue \nrecovering\"),\n+\t\t\t\t\t errdetail(\"This happens if you temporarily set wal_level=minimal \non the server.\"),\n+\t\t\t\t\t errhint(\"Run recovery again from a new base backup taken after \nsetting wal_level higher than minimal\")));\n\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/E1iu6D8-0000tK-Cm%40gemulon.postgresql.org\n\n\n",
"msg_date": "Fri, 26 Mar 2021 09:14:41 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/03/26 22:14, David Steele wrote:\n> On 3/25/21 9:23 PM, Fujii Masao wrote:\n>>\n>> On 2021/03/25 23:21, David Steele wrote:\n>>> On 1/25/21 3:55 AM, Laurenz Albe wrote:\n>>>> On Mon, 2021-01-25 at 08:19 +0000, osumi.takamichi@fujitsu.com wrote:\n>>>>>> I think you should pst another patch where the second, now superfluous, error\n>>>>>> message is removed.\n>>>>>\n>>>>> Updated. This patch showed no failure during regression tests\n>>>>> and has been aligned by pgindent.\n>>>>\n>>>> Looks good to me.\n>>>> I'll set it to \"ready for committer\" again.\n>>>\n>>> Fujii, does the new patch in [1] address your concerns?\n>>\n>> No. I'm still not sure if this patch is good idea... I understand\n>> why this safeguard is necessary. OTOH I'm afraid it increases\n>> a bit the risk that users get unstartable database, i.e., lose whole database.\n>> But maybe I'm concerned about rare case and my opinion is minority one.\n>> So I'd like to hear more opinions about this patch.\n> \n> After reviewing the patch I am +1. I think allowing corruption in recovery by default is not a good idea. There is currently a warning but I don't think that is nearly strong enough and is easily missed.\n> \n> Also, \"data may be missing\" makes this sound like an acceptable situation. What is really happening is corruption is being introduced that may make the cluster unusable or at the very least lead to errors during normal operation.\n\nOk, now you, Osumi-san and Laurenz agree to this change\nwhile I'm only the person not to like this. 
So unless we can hear\nany other objections to this change, probably we should commit the patch.\n\n> If we want to allow recovery to play past this point I think it would make more sense to have a GUC (like ignore_invalid_pages [1]) that allows recovery to proceed and emits a warning instead of fatal.\n> \n> Looking the patch, I see a few things:\n> \n> 1) Typo in the tests:\n> \n> This protection shold apply to recovery mode\n> \n> should be:\n> \n> This protection should apply to recovery mode\n> \n> 2) I don't think it should be the job of this patch to restructure the if conditions, even if it is more efficient. It just obscures the purpose of the patch.\n\n+1\n\n> So, I would revert all the changes in xlog.c except changing the warning to an error:\n> \n> - ereport(WARNING,\n> - (errmsg(\"WAL was generated with wal_level=minimal, data may be missing\"),\n> - errhint(\"This happens if you temporarily set wal_level=minimal without taking a new base backup.\")));\n> + ereport(FATAL,\n> + (errmsg(\"WAL was generated with wal_level=minimal, cannot continue recovering\"),\n> + errdetail(\"This happens if you temporarily set wal_level=minimal on the server.\"),\n> + errhint(\"Run recovery again from a new base backup taken after setting wal_level higher than minimal\")));\nI guess that users usually encounter this error because they have not\ntaken base backups yet after setting wal_level to higher than minimal\nand have to use the old base backup for archive recovery. So I'm not sure\nhow much only this HINT is helpful for them. 
Isn't it better to append\nsomething like \"If there is no such backup, recover to the point in time\nbefore wal_level is set to minimal even though which cause data loss,\nto start the server.\" into HINT?\n\nThe following error will never be thrown because of the patch.\nSo IMO the following code should be removed.\n\n\t\tif (ControlFile->wal_level < WAL_LEVEL_REPLICA)\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errmsg(\"hot standby is not possible because wal_level was not set to \\\"replica\\\" or higher on the primary server\"),\n\t\t\t\t\t errhint(\"Either set wal_level to \\\"replica\\\" on the primary, or turn off hot_standby here.\")));\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 31 Mar 2021 02:11:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "At Wed, 31 Mar 2021 02:11:48 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/03/26 22:14, David Steele wrote:\n> > On 3/25/21 9:23 PM, Fujii Masao wrote:\n> >>\n> >> On 2021/03/25 23:21, David Steele wrote:\n> >>> On 1/25/21 3:55 AM, Laurenz Albe wrote:\n> >>>> On Mon, 2021-01-25 at 08:19 +0000, osumi.takamichi@fujitsu.com wrote:\n> >>>>>> I think you should pst another patch where the second, now\n> >>>>>> superfluous, error\n> >>>>>> message is removed.\n> >>>>>\n> >>>>> Updated. This patch showed no failure during regression tests\n> >>>>> and has been aligned by pgindent.\n> >>>>\n> >>>> Looks good to me.\n> >>>> I'll set it to \"ready for committer\" again.\n> >>>\n> >>> Fujii, does the new patch in [1] address your concerns?\n> >>\n> >> No. I'm still not sure if this patch is good idea... I understand\n> >> why this safeguard is necessary. OTOH I'm afraid it increases\n> >> a bit the risk that users get unstartable database, i.e., lose whole\n> >> database.\n> >> But maybe I'm concerned about rare case and my opinion is minority\n> >> one.\n> >> So I'd like to hear more opinions about this patch.\n> > After reviewing the patch I am +1. I think allowing corruption in\n> > recovery by default is not a good idea. There is currently a warning\n> > but I don't think that is nearly strong enough and is easily missed.\n> > Also, \"data may be missing\" makes this sound like an acceptable\n> > situation. What is really happening is corruption is being introduced\n> > that may make the cluster unusable or at the very least lead to errors\n> > during normal operation.\n> \n> Ok, now you, Osumi-san and Laurenz agree to this change\n> while I'm only the person not to like this. 
So unless we can hear\n> any other objections to this change, probably we should commit the\n> patch.\n\nSorry for being late, but FWIW, I'm confused.\n\nThis patch checks if archive recovery is requested after shutting down\nwhile in the minimal mode. Since we officially reject to take a\nbackup while wal_level=minimal, the error is not supposed to be raised\nwhile recoverying from a past backup. If the cluster was running in\nminimal mode, the archive recovery does nothing substantial (would end\nwith running a crash recvoery at worst). I'm not sure how the\nconcrete curruption mode caused by the operation looks like. I agree\nthat we should reject archive recovery on a wal_level=minimal cluster,\nbut the reason is not that the operation is breaking the past backup,\nbut that it's just nonsentical.\n\nOn the other hand, I don't think this patch effectively protect\narchive from getting a stopping gap. Just starting the server without\narchive recovery succeeds and the archive is successfully broken for\nthe backups in the past. In other words, if we are intending to let\nusers know that their backups have gotten a stopping gap, this patch\ndoesn't seem to work in that sense. I would object to reject starting\nwith wal_level=replica and without archive recovery after shutting\ndown in minimal mode. That's obviously an overkill.\n\nSo, what I think we should do for the objective is to warn if\narchiving is enabled but backup is not yet taken. 
Yeah, (as I\nremember that such discussion has been happened) I don't think we have\na reliable way to know that.\n\nI apologize in advance if I'm missing something.\n\n> > If we want to allow recovery to play past this point I think it would\n> > make more sense to have a GUC (like ignore_invalid_pages [1]) that\n> > allows recovery to proceed and emits a warning instead of fatal.\n> > Looking the patch, I see a few things:\n> > 1) Typo in the tests:\n> > This protection shold apply to recovery mode\n> > should be:\n> > This protection should apply to recovery mode\n> > 2) I don't think it should be the job of this patch to restructure the\n> > if conditions, even if it is more efficient. It just obscures the\n> > purpose of the patch.\n> \n> +1\n\nSo, I don't think even this is needed.\n\n> > So, I would revert all the changes in xlog.c except changing the\n> > warning to an error:\n> > - ereport(WARNING,\n> > - (errmsg(\"WAL was generated with wal_level=minimal,\n> > -data may be missing\"),\n> > - errhint(\"This happens if you temporarily set\n> > -wal_level=minimal without taking a new base backup.\")));\n> > + ereport(FATAL,\n> > + (errmsg(\"WAL was generated with\n> > wal_level=minimal, cannot continue recovering\"),\n> > + errdetail(\"This happens if you temporarily set\n> > wal_level=minimal on the server.\"),\n> > + errhint(\"Run recovery again from a new base\n> > backup taken after setting wal_level higher than minimal\")));\n> I guess that users usually encounter this error because they have not\n> taken base backups yet after setting wal_level to higher than minimal\n> and have to use the old base backup for archive recovery. So I'm not\n> sure\n> how much only this HINT is helpful for them. 
Isn't it better to append\n> something like \"If there is no such backup, recover to the point in\n> time\n> before wal_level is set to minimal even though which cause data loss,\n> to start the server.\" into HINT?\n> The following error will never be thrown because of the patch.\n> So IMO the following code should be removed.\n> \n> \t\tif (ControlFile->wal_level < WAL_LEVEL_REPLICA)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errmsg(\"hot standby is not possible because wal_level was\n> \t\t\t\t\tnot set to \\\"replica\\\" or higher on the primary server\"),\n> \t\t\t\t\t errhint(\"Either set wal_level to \\\"replica\\\" on the\n> \t\t\t\t\t primary, or turn off hot_standby here.\")));\n\nIt seems to be alredy removed in the latest patch?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Mar 2021 14:23:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "At Wed, 31 Mar 2021 14:23:26 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I apologize in advance if I'm missing something.\n\nOops! I missed the case of replica side. It's obviously useless that\nreplica continue to run allowing a stopping gap made by\nwal_level=minimal.\n\nSo, I don't object to this patch.\n\nSorry for the noise.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Mar 2021 14:36:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "At Wed, 31 Mar 2021 02:11:48 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/03/26 22:14, David Steele wrote:\n> > On 3/25/21 9:23 PM, Fujii Masao wrote:\n> >>\n> >> On 2021/03/25 23:21, David Steele wrote:\n> >>> On 1/25/21 3:55 AM, Laurenz Albe wrote:\n> >>>> On Mon, 2021-01-25 at 08:19 +0000, osumi.takamichi@fujitsu.com wrote:\n> >>>>>> I think you should pst another patch where the second, now\n> >>>>>> superfluous, error\n> >>>>>> message is removed.\n> >>>>>\n> >>>>> Updated. This patch showed no failure during regression tests\n> >>>>> and has been aligned by pgindent.\n> >>>>\n> >>>> Looks good to me.\n> >>>> I'll set it to \"ready for committer\" again.\n> >>>\n> >>> Fujii, does the new patch in [1] address your concerns?\n> >>\n> >> No. I'm still not sure if this patch is good idea... I understand\n> >> why this safeguard is necessary. OTOH I'm afraid it increases\n> >> a bit the risk that users get unstartable database, i.e., lose whole\n> >> database.\n> >> But maybe I'm concerned about rare case and my opinion is minority\n> >> one.\n> >> So I'd like to hear more opinions about this patch.\n> > After reviewing the patch I am +1. I think allowing corruption in\n> > recovery by default is not a good idea. There is currently a warning\n> > but I don't think that is nearly strong enough and is easily missed.\n> > Also, \"data may be missing\" makes this sound like an acceptable\n> > situation. What is really happening is corruption is being introduced\n> > that may make the cluster unusable or at the very least lead to errors\n> > during normal operation.\n> \n> Ok, now you, Osumi-san and Laurenz agree to this change\n> while I'm only the person not to like this. 
So unless we can hear\n> any other objections to this change, probably we should commit the\n> patch.\n\nSo +1 from me in its direction.\n\n> > If we want to allow recovery to play past this point I think it would\n> > make more sense to have a GUC (like ignore_invalid_pages [1]) that\n> > allows recovery to proceed and emits a warning instead of fatal.\n> > Looking the patch, I see a few things:\n> > 1) Typo in the tests:\n> > This protection shold apply to recovery mode\n> > should be:\n> > This protection should apply to recovery mode\n> > 2) I don't think it should be the job of this patch to restructure the\n> > if conditions, even if it is more efficient. It just obscures the\n> > purpose of the patch.\n> \n> +1\n> \n> > So, I would revert all the changes in xlog.c except changing the\n> > warning to an error:\n> > - ereport(WARNING,\n> > - (errmsg(\"WAL was generated with wal_level=minimal,\n> > -data may be missing\"),\n> > - errhint(\"This happens if you temporarily set\n> > -wal_level=minimal without taking a new base backup.\")));\n> > + ereport(FATAL,\n> > + (errmsg(\"WAL was generated with\n> > wal_level=minimal, cannot continue recovering\"),\n> > + errdetail(\"This happens if you temporarily set\n> > wal_level=minimal on the server.\"),\n> > + errhint(\"Run recovery again from a new base\n> > backup taken after setting wal_level higher than minimal\")));\n> I guess that users usually encounter this error because they have not\n> taken base backups yet after setting wal_level to higher than minimal\n> and have to use the old base backup for archive recovery. So I'm not\n> sure\n> how much only this HINT is helpful for them. Isn't it better to append\n> something like \"If there is no such backup, recover to the point in\n> time\n> before wal_level is set to minimal even though which cause data loss,\n> to start the server.\" into HINT?\n\nI agree that the hint doesn't make sense.\n\nHINT: Restart with archive recovery turned off. 
The past backups are no longer usable. You need to take a new one after restart.\n\nIf it's the replica case, it would be..\n\nHINT: Start from a fresh standby created from the curent primary server.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Mar 2021 15:03:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "At Wed, 31 Mar 2021 15:03:28 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 31 Mar 2021 02:11:48 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > > So, I would revert all the changes in xlog.c except changing the\n> > > warning to an error:\n> > > - ereport(WARNING,\n> > > - (errmsg(\"WAL was generated with wal_level=minimal,\n> > > -data may be missing\"),\n> > > - errhint(\"This happens if you temporarily set\n> > > -wal_level=minimal without taking a new base backup.\")));\n> > > + ereport(FATAL,\n> > > + (errmsg(\"WAL was generated with\n> > > wal_level=minimal, cannot continue recovering\"),\n> > > + errdetail(\"This happens if you temporarily set\n> > > wal_level=minimal on the server.\"),\n> > > + errhint(\"Run recovery again from a new base\n> > > backup taken after setting wal_level higher than minimal\")));\n> > I guess that users usually encounter this error because they have not\n> > taken base backups yet after setting wal_level to higher than minimal\n> > and have to use the old base backup for archive recovery. So I'm not\n> > sure\n> > how much only this HINT is helpful for them. Isn't it better to append\n> > something like \"If there is no such backup, recover to the point in\n> > time\n> > before wal_level is set to minimal even though which cause data loss,\n> > to start the server.\" into HINT?\n> \n> I agree that the hint doesn't make sense.\n\nFor the primary case,\n\n> HINT: Restart with archive recovery turned off. The past backups are no longer usable. You need to take a new one after restart.\n> \n> If it's the replica case, it would be..\n> \n> HINT: Start from a fresh standby created from the curent primary server.\n\nStart from a fresh backup...\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Mar 2021 15:05:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Wednesday, March 31, 2021 3:06 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> At Wed, 31 Mar 2021 15:03:28 +0900 (JST), Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote in\r\n> > At Wed, 31 Mar 2021 02:11:48 +0900, Fujii Masao\r\n> > <masao.fujii@oss.nttdata.com> wrote in\r\n> > > > So, I would revert all the changes in xlog.c except changing the\r\n> > > > warning to an error:\r\n> > > > - ereport(WARNING,\r\n> > > > - (errmsg(\"WAL was generated with\r\n> > > > wal_level=minimal, -data may be missing\"),\r\n> > > > - errhint(\"This happens if you temporarily set\r\n> > > > -wal_level=minimal without taking a new base backup.\")));\r\n> > > > + ereport(FATAL,\r\n> > > > + (errmsg(\"WAL was generated with\r\n> > > > wal_level=minimal, cannot continue recovering\"),\r\n> > > > + errdetail(\"This happens if you temporarily\r\n> > > > +set\r\n> > > > wal_level=minimal on the server.\"),\r\n> > > > + errhint(\"Run recovery again from a new\r\n> base\r\n> > > > backup taken after setting wal_level higher than minimal\")));\r\n> > > I guess that users usually encounter this error because they have\r\n> > > not taken base backups yet after setting wal_level to higher than\r\n> > > minimal and have to use the old base backup for archive recovery. So\r\n> > > I'm not sure how much only this HINT is helpful for them. Isn't it\r\n> > > better to append something like \"If there is no such backup, recover\r\n> > > to the point in time before wal_level is set to minimal even though\r\n> > > which cause data loss, to start the server.\" into HINT?\r\n> >\r\n> > I agree that the hint doesn't make sense.\r\n> \r\n> For the primary case,\r\n> \r\n> > HINT: Restart with archive recovery turned off. The past backups are no\r\n> longer usable. 
You need to take a new one after restart.\r\n> >\r\n> > If it's the replica case, it would be..\r\n> >\r\n> > HINT: Start from a fresh standby created from the curent primary server.\r\n> \r\n> Start from a fresh backup...\r\nThank you for sharing your ideas about the hint. Absolutely need to change the message.\r\nIn my opinion, combining the basic idea of yours and Fujii-san's would be the best.\r\n\r\nUpdated the patch and made v05. The changes I made are\r\n\r\n* rewording of errhint although this has become long !\r\n* fix of the typo in the TAP test\r\n* modification of my past changes not to change conditions in CheckRequiredParameterValues\r\n* rename of the test file to 024_archive_recovery.pl because two files are made\r\n\tsince the last update of this patch\r\n* pgindent is conducted to check my alignment again.\r\n\r\nBy the way, when I build postgres with this patch and enable-coverage option,\r\nthe results of RT becomes unstable. Does someone know the reason ?\r\nWhen it fails, I get stderr like below\r\n\r\nt/001_start_stop.pl .. 10/24\r\n# Failed test 'pg_ctl start: no stderr'\r\n# at t/001_start_stop.pl line 48.\r\n# got: 'profiling:/home/k5user/new_disk/recheck/PostgreSQL-Source-Dev/src/backend/executor/execMain.gcda:Merge mismatch for function 15\r\n# '\r\n# expected: ''\r\nt/001_start_stop.pl .. 24/24 # Looks like you failed 1 test of 24.\r\nt/001_start_stop.pl .. Dubious, test returned 1 (wstat 256, 0x100)\r\nFailed 1/24 subtests\r\n\r\nSimilar phenomena was observed in [1] and its solution\r\nseems to upgrade my gcc higher than 7. And, I did so but still get this unstable error with\r\nenable-coverage. This didn't happen when I remove enable-option and\r\nthe make check-world passes.\r\n\r\n\r\n[1] - https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg323147.html\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 1 Apr 2021 03:45:57 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/04/01 12:45, osumi.takamichi@fujitsu.com wrote:\n> Thank you for sharing your ideas about the hint. Absolutely need to change the message.\n> In my opinion, combining the basic idea of yours and Fujii-san's would be the best.\n> \n> Updated the patch and made v05. The changes I made are\n> \n> * rewording of errhint although this has become long !\n> * fix of the typo in the TAP test\n> * modification of my past changes not to change conditions in CheckRequiredParameterValues\n> * rename of the test file to 024_archive_recovery.pl because two files are made\n> \tsince the last update of this patch\n> * pgindent is conducted to check my alignment again.\n\nThanks for updating the patch!\n\n+\t\t\t\t errhint(\"Use a backup taken after setting wal_level to higher than minimal \"\n+\t\t\t\t\t\t \"or recover to the point in time before wal_level becomes minimal even though it causes data loss\")));\n\nISTM that \"or recover to the point in time before wal_level was changed\n to minimal even though it may cause data loss\" sounds better. Thought?\n\n+# Check if standby.signal exists\n+my $pgdata = $new_node->data_dir;\n+ok (-f \"${pgdata}/standby.signal\", 'standby.signal was created');\n\n+# Check if recovery.signal exists\n+my $path = $another_node->data_dir;\n+ok (-f \"${path}/recovery.signal\", 'recovery.signal was created');\n\nWhy are these tests necessary?\nThese seem to test that init_from_backup() works expectedly based on\nthe parameter \"standby\". 
But if we are sure that init_from_backup() works fine,\nthese tests don't seem to be necessary.\n\n+use Config;\n\nThis is not necessary?\n\n+# Make the wal_level back to replica\n+$node->append_conf('postgresql.conf', $replica_config);\n+$node->restart;\n+check_wal_level('replica', 'wal_level went back to replica again');\n\nIMO it's better to comment why this server restart is necessary.\nAs far as I understand correctly, this is necessary to ensure\nthe WAL file containing the record about the change of wal_level\n(to minimal) is archived, so that the subsequent archive recovery\nwill be able to replay it.\n\n \n> By the way, when I build postgres with this patch and enable-coverage option,\n> the results of RT becomes unstable. Does someone know the reason ?\n> When it fails, I get stderr like below\n\nI have no idea about this. Does this happen even without the patch?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 1 Apr 2021 17:25:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Thu, 2021-04-01 at 17:25 +0900, Fujii Masao wrote:\n> Thanks for updating the patch!\n> \n> + errhint(\"Use a backup taken after setting wal_level to higher than minimal \"\n> + \"or recover to the point in time before wal_level becomes minimal even though it causes data loss\")));\n> \n> ISTM that \"or recover to the point in time before wal_level was changed\n> to minimal even though it may cause data loss\" sounds better. Thought?\n\nI would reduce it to\n\n\"Either use a later backup, or recover to a point in time before \\\"wal_level\\\" was set to \\\"minimal\\\".\"\n\nI'd say that we can leave it to the intelligence of the reader to\ndeduce that recovering to an earlier time means more data loss.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 02 Apr 2021 16:48:55 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Thursday, April 1, 2021 5:25 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> On 2021/04/01 12:45, osumi.takamichi@fujitsu.com wrote:\r\n> > Thank you for sharing your ideas about the hint. Absolutely need to change\r\n> the message.\r\n> > In my opinion, combining the basic idea of yours and Fujii-san's would be\r\n> the best.\r\n> >\r\n> > Updated the patch and made v05. The changes I made are\r\n> >\r\n> > * rewording of errhint although this has become long !\r\n> > * fix of the typo in the TAP test\r\n> > * modification of my past changes not to change conditions in\r\n> > CheckRequiredParameterValues\r\n> > * rename of the test file to 024_archive_recovery.pl because two files are\r\n> made\r\n> > \tsince the last update of this patch\r\n> > * pgindent is conducted to check my alignment again.\r\n> \r\n> Thanks for updating the patch!\r\n> \r\n> +\t\t\t\t errhint(\"Use a backup taken after setting\r\n> wal_level to higher than minimal \"\r\n> +\t\t\t\t\t\t \"or recover to the point in\r\n> time before wal_level becomes\r\n> +minimal even though it causes data loss\")));\r\n> \r\n> ISTM that \"or recover to the point in time before wal_level was changed\r\n> to minimal even though it may cause data loss\" sounds better. Thought?\r\nAdopted.\r\n\r\n\r\n> +# Check if standby.signal exists\r\n> +my $pgdata = $new_node->data_dir;\r\n> +ok (-f \"${pgdata}/standby.signal\", 'standby.signal was created');\r\n> \r\n> +# Check if recovery.signal exists\r\n> +my $path = $another_node->data_dir;\r\n> +ok (-f \"${path}/recovery.signal\", 'recovery.signal was created');\r\n> \r\n> Why are these tests necessary?\r\n> These seem to test that init_from_backup() works expectedly based on the\r\n> parameter \"standby\". But if we are sure that init_from_backup() works fine,\r\n> these tests don't seem to be necessary.\r\nAbsolutely, you are right. 
Fixed.\r\n\r\n\r\n> +use Config;\r\n> \r\n> This is not necessary?\r\nRemoved.\r\n\r\n\r\n> +# Make the wal_level back to replica\r\n> +$node->append_conf('postgresql.conf', $replica_config); $node->restart;\r\n> +check_wal_level('replica', 'wal_level went back to replica again');\r\n> \r\n> IMO it's better to comment why this server restart is necessary.\r\n> As far as I understand correctly, this is necessary to ensure the WAL file\r\n> containing the record about the change of wal_level (to minimal) is archived,\r\n> so that the subsequent archive recovery will be able to replay it.\r\nOK, added some comments. Further, I felt the way I wrote this part was not good at all and self-evident\r\nand developers who read this test would feel uneasy about that point.\r\nSo, a little bit fixed that test so that we can get clearer conviction for wal archive.\r\n\r\n\r\n> > By the way, when I build postgres with this patch and enable-coverage\r\n> > option, the results of RT becomes unstable. Does someone know the\r\n> reason ?\r\n> > When it fails, I get stderr like below\r\n> \r\n> I have no idea about this. Does this happen even without the patch?\r\nUnfortunately, no. I get this only with --enable-coverage and with my patch,\r\nalthought regression tests have passed with this patch.\r\nOSS HEAD doesn't produce the stderr even with --enable-coverage.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Sun, 4 Apr 2021 02:58:43 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Friday, April 2, 2021 11:49 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\r\n> On Thu, 2021-04-01 at 17:25 +0900, Fujii Masao wrote:\r\n> > Thanks for updating the patch!\r\n> >\r\n> > + errhint(\"Use a backup taken after\r\n> setting wal_level to higher than minimal \"\r\n> > + \"or recover to the\r\n> > + point in time before wal_level becomes minimal even though it causes\r\n> > + data loss\")));\r\n> >\r\n> > ISTM that \"or recover to the point in time before wal_level was changed\r\n> > to minimal even though it may cause data loss\" sounds better. Thought?\r\n> \r\n> I would reduce it to\r\n> \r\n> \"Either use a later backup, or recover to a point in time before \\\"wal_level\\\"\r\n> was set to \\\"minimal\\\".\"\r\n> \r\n> I'd say that we can leave it to the intelligence of the reader to deduce that\r\n> recovering to an earlier time means more data loss.\r\nThank you. Yet, I prefer the longer version.\r\nFor example, the later backup can be another backup that fails during archive recovery\r\nif the user have several backups during wal_level=replica\r\nand it is taken before setting wal_level=minimal, right ?\r\n\r\nLike this, giving much information is helpful for better decision taken by user, I thought.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Sun, 4 Apr 2021 03:07:16 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On 2021/04/04 11:58, osumi.takamichi@fujitsu.com wrote:\n>> IMO it's better to comment why this server restart is necessary.\n>> As far as I understand correctly, this is necessary to ensure the WAL file\n>> containing the record about the change of wal_level (to minimal) is archived,\n>> so that the subsequent archive recovery will be able to replay it.\n> OK, added some comments. Further, I felt the way I wrote this part was not good at all and self-evident\n> and developers who read this test would feel uneasy about that point.\n> So, a little bit fixed that test so that we can get clearer conviction for wal archive.\n\nLGTM. Thanks for updating the patch!\n\nAttached is the updated version of the patch. I applied the following changes.\nCould you review this version? Barring any objection, I'm thinking to\ncommit this.\n\n+sub check_wal_level\n+{\n+\tmy ($target_wal_level, $explanation) = @_;\n+\n+\tis( $node->safe_psql(\n+\t\t\t'postgres', q{show wal_level}),\n+ $target_wal_level,\n+ $explanation);\n\nDo we really need this test? This test doesn't seem to be directly related\nto the test purpose. It seems to be testing the behavior of the PostgresNode\nfunctions. So I removed this from the patch.\n\n+# This protection should apply to recovery mode\n+my $another_node = get_new_node('another_node');\n\nThe same test is performed twice with different settings. So isn't it better\nto merge the code for both tests into one common function, to simplify\nthe code? I did that.\n\nI also applied some cosmetic changes.\n\n\n>>> By the way, when I build postgres with this patch and enable-coverage\n>>> option, the results of RT becomes unstable. Does someone know the\n>> reason ?\n>>> When it fails, I get stderr like below\n>>\n>> I have no idea about this. Does this happen even without the patch?\n> Unfortunately, no. 
I get this only with --enable-coverage and with my patch,\n> althought regression tests have passed with this patch.\n> OSS HEAD doesn't produce the stderr even with --enable-coverage.\n\nCould you check whether the latest patch still causes this issue or not?\nIf it still causes, could you check which part (the change of xlog.c or\nthe addition of regression test) caused the issue?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 5 Apr 2021 12:34:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "From: osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>\r\n> By the way, when I build postgres with this patch and enable-coverage option,\r\n> the results of RT becomes unstable. Does someone know the reason ?\r\n> When it fails, I get stderr like below\r\n> \r\n> t/001_start_stop.pl .. 10/24\r\n> # Failed test 'pg_ctl start: no stderr'\r\n> # at t/001_start_stop.pl line 48.\r\n> # got:\r\n> 'profiling:/home/k5user/new_disk/recheck/PostgreSQL-Source-Dev/src/bac\r\n> kend/executor/execMain.gcda:Merge mismatch for function 15\r\n> # '\r\n> # expected: ''\r\n> t/001_start_stop.pl .. 24/24 # Looks like you failed 1 test of 24.\r\n> t/001_start_stop.pl .. Dubious, test returned 1 (wstat 256, 0x100) Failed 1/24\r\n> subtests\r\n> \r\n> Similar phenomena was observed in [1] and its solution seems to upgrade my\r\n> gcc higher than 7. And, I did so but still get this unstable error with\r\n> enable-coverage. This didn't happen when I remove enable-option and the\r\n> make check-world passes.\r\n\r\nCan you share the steps you took? e.g.,\r\n\r\n$ configure --enable-coverage ...\r\n$ make world\r\n$ make check-world\r\n$ patch -p1 < your_patch\r\n$ make world\r\n$ make check-world\r\n\r\nA bit of Googling shows that the same error message has shown up in the tests of other software than Postgres. It doesn't seem like this failure is due to your patch.\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Mon, 5 Apr 2021 07:04:09 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "At Mon, 5 Apr 2021 12:34:53 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/04/04 11:58, osumi.takamichi@fujitsu.com wrote:\n> >> IMO it's better to comment why this server restart is necessary.\n> >> As far as I understand correctly, this is necessary to ensure the WAL\n> >> file\n> >> containing the record about the change of wal_level (to minimal) is\n> >> archived,\n> >> so that the subsequent archive recovery will be able to replay it.\n> > OK, added some comments. Further, I felt the way I wrote this part was\n> > not good at all and self-evident\n> > and developers who read this test would feel uneasy about that point.\n> > So, a little bit fixed that test so that we can get clearer conviction\n> > for wal archive.\n> \n> LGTM. Thanks for updating the patch!\n> \n> Attached is the updated version of the patch. I applied the following\n> changes.\n\n+\t\t\t\t errhint(\"Use a backup taken after setting wal_level to higher than minimal \"\n+\t\t\t\t\t\t \"or recover to the point in time before wal_level was changed to minimal even though it may cause data loss.\")));\n\nLooking the HINT message, I thought that it's hard to find where up to\nI should recover. The caller knows the LSN of last PARAMETER_CHANGE\nrecord so we can show that as the recovery-end LSN. On the other\nhand, I'm not sure it's worth considering tough, we can reach here\nwhen starting archive recovery on a server with wal_level=minimal. We\ncan pass 0 as LSN to notify that case.\n\nIf we do that, we can emit more clear message like the following.\n\nWAL-while-minimal case\n FATAL: WAL generated with wal_level=minimal at 0/40000A0, data may be missing\n HINT: Use a backup later than, or recover up to before that LSN.\n\nMis-setting case\n FATAL: archive recovery is not available on server with wal_level=minimal\n HINT: Start this server with setting wal_level=replica or higher\n\n\nThe value \"replica\" is not double-quoted there (since before this\npatch), but double-quoted in another error message about hot standby\njust below. Maybe we should let them in the same style.\n\n> Could you review this version? Barring any objection, I'm thinking to\n> commit this.\n> \n> +sub check_wal_level\n> +{\n> +\tmy ($target_wal_level, $explanation) = @_;\n> +\n> +\tis( $node->safe_psql(\n> +\t\t\t'postgres', q{show wal_level}),\n> + $target_wal_level,\n> + $explanation);\n> \n> Do we really need this test? This test doesn't seem to be directly\n> related\n> to the test purpose. It seems to be testing the behavior of the\n> PostgresNode\n> functions. So I removed this from the patch.\n\n+1.\n\n> +# This protection should apply to recovery mode\n> +my $another_node = get_new_node('another_node');\n> \n> The same test is performed twice with different settings. So isn't it\n> better\n> to merge the code for both tests into one common function, to simplify\n> the code? I did that.\n\nSounds reasonable for that size.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Apr 2021 16:13:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "At Mon, 5 Apr 2021 07:04:09 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>\n> > By the way, when I build postgres with this patch and enable-coverage option,\n> > the results of RT becomes unstable. Does someone know the reason ?\n> > When it fails, I get stderr like below\n> > \n> > t/001_start_stop.pl .. 10/24\n> > # Failed test 'pg_ctl start: no stderr'\n> > # at t/001_start_stop.pl line 48.\n> > # got:\n> > 'profiling:/home/k5user/new_disk/recheck/PostgreSQL-Source-Dev/src/bac\n> > kend/executor/execMain.gcda:Merge mismatch for function 15\n> > # '\n> > # expected: ''\n> > t/001_start_stop.pl .. 24/24 # Looks like you failed 1 test of 24.\n> > t/001_start_stop.pl .. Dubious, test returned 1 (wstat 256, 0x100) Failed 1/24\n> > subtests\n> > \n> > Similar phenomena was observed in [1] and its solution seems to upgrade my\n> > gcc higher than 7. And, I did so but still get this unstable error with\n> > enable-coverage. This didn't happen when I remove enable-option and the\n> > make check-world passes.\n> \n> Can you share the steps you took? e.g.,\n> \n> $ configure --enable-coverage ...\n> $ make world\n> $ make check-world\n> $ patch -p1 < your_patch\n> $ make world\n> $ make check-world\n> \n> A bit of Googling shows that the same error message has shown up in the tests of other software than Postgres. It doesn't seem like this failure is due to your patch.\n\nI didn't see that, but found the following article.\n\nhttps://stackoverflow.com/questions/2590794/gcov-warning-merge-mismatch-for-summaries\n\n> This happens when one of the objects you're linking into an executable\n> changes significantly. For example it gains or loses some lines of\n> profilable code.\n\nIt seems like your working directory needs some cleanup.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Apr 2021 16:36:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> I didn't see that, but found the following article.\n> \n> https://stackoverflow.com/questions/2590794/gcov-warning-merge-mismat\n> ch-for-summaries\n...\n> It seems like your working directory needs some cleanup.\n\nThat could very well be. It'd be safer to run \"make coverage-clean\" between builds.\n\nI thought otherwise that the multiple TAP tests that were simultaneously run conflicted on the .gcda file. If this is the case, we may not be able to eliminate the failure possibility. (make -J 1 circumvents this?)\n\nhttps://www.postgresql.org/docs/devel/install-procedure.html\n\n\"If using GCC, all programs and libraries are compiled with code coverage testing instrumentation.\"\n\n33.4. TAP Tests\nhttps://www.postgresql.org/docs/devel/regress-tap.html\n\n\"Generically speaking, the TAP tests will test the executables in a previously-installed installation tree if you say make installcheck, or will build a new local installation tree from current sources if you say make check. In either case they will initialize a local instance (data directory) and transiently run a server in it. Some of these tests run more than one server.\"\n\n\"It's important to realize that the TAP tests will start test server(s) even when you say make installcheck; this is unlike the traditional non-TAP testing infrastructure, which expects to use an already-running test server in that case. Some PostgreSQL subdirectories contain both traditional-style and TAP-style tests, meaning that make installcheck will produce a mix of results from temporary servers and the already-running test server.\"\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 08:13:23 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/04/05 16:13, Kyotaro Horiguchi wrote:\n> At Mon, 5 Apr 2021 12:34:53 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2021/04/04 11:58, osumi.takamichi@fujitsu.com wrote:\n>>>> IMO it's better to comment why this server restart is necessary.\n>>>> As far as I understand correctly, this is necessary to ensure the WAL\n>>>> file\n>>>> containing the record about the change of wal_level (to minimal) is\n>>>> archived,\n>>>> so that the subsequent archive recovery will be able to replay it.\n>>> OK, added some comments. Further, I felt the way I wrote this part was\n>>> not good at all and self-evident\n>>> and developers who read this test would feel uneasy about that point.\n>>> So, a little bit fixed that test so that we can get clearer conviction\n>>> for wal archive.\n>>\n>> LGTM. Thanks for updating the patch!\n>>\n>> Attached is the updated version of the patch. I applied the following\n>> changes.\n> \n> +\t\t\t\t errhint(\"Use a backup taken after setting wal_level to higher than minimal \"\n> +\t\t\t\t\t\t \"or recover to the point in time before wal_level was changed to minimal even though it may cause data loss.\")));\n> \n> Looking the HINT message, I thought that it's hard to find where up to\n> I should recover.\n\nYes. And, what's the worse, when archive recovery finds WAL generated with\nwal_level=minimal and fails, \"minimal\" is saved in pg_control's wal_level.\nThis means that subsequent archive recovery always fails at the beginning of\nrecovery (before entering WAL replay main loop), in that case.\nSo even if recovery_targrt_lsn is specified, archive recovery fails before\nchecking that. Any recovery target settings have no effect on that case.\n\nMaybe we can avoid this, for example, by changing xlog_redo() so that\nit calls CheckRequiredParameterValues() before UpdateControlFile().\nBut I'm not sure if this change is safe. Probably we need more time to\nconsider this, but right now there is no so much time left at this stage.\n\nAt least the HINT message \"or recover to the point in time before wal_level\nwas changed to minimal even though it may cause data loss.\" should be\nremoved because it's not helpful at all...\n\nOk, so if archive recovery finds WAL generated with wal_level=minimal and fails,\nand also there is no backup taken after wal_level is set to higher than minimal,\nbasically [1] we lose whole database. I think that those who set wal_level to\nminimal understand that this setting can cause data loss, for example,\nany data loaded with wal_level=minimal may be lost later. But I'm afraid\nthat they might not understand the risk of whole database loss.\n\nEven if they take new backup just after they set wal_level to higher than\nminimal, there is still the risk of whole database loss until the backup is\ncompleted.\n\nThis makes me think that we should document this risk.... Thought?\n\n[1]\nBTW, one very tricky way to recover from this situation seems to\ncopy all required WAL files from the archive to pg_wal and forcibly\nrun a crash recovery from the backup. Since crash recovery doesn't\ncheck wal_level, we can avoid the issue by doing that. But this is\nvery tricky.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 5 Apr 2021 21:16:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Mon Apr 5, 2021 12:35 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> On 2021/04/04 11:58, osumi.takamichi@fujitsu.com wrote:\r\n> >> IMO it's better to comment why this server restart is necessary.\r\n> >> As far as I understand correctly, this is necessary to ensure the WAL\r\n> >> file containing the record about the change of wal_level (to minimal)\r\n> >> is archived, so that the subsequent archive recovery will be able to replay\r\n> it.\r\n> > OK, added some comments. Further, I felt the way I wrote this part was\r\n> > not good at all and self-evident and developers who read this test would\r\n> feel uneasy about that point.\r\n> > So, a little bit fixed that test so that we can get clearer conviction for wal\r\n> archive.\r\n> \r\n> LGTM. Thanks for updating the patch!\r\n> \r\n> Attached is the updated version of the patch. I applied the following changes.\r\n> Could you review this version? Barring any objection, I'm thinking to commit\r\n> this.\r\n> \r\n> +sub check_wal_level\r\n> +{\r\n> +\tmy ($target_wal_level, $explanation) = @_;\r\n> +\r\n> +\tis( $node->safe_psql(\r\n> +\t\t\t'postgres', q{show wal_level}),\r\n> + $target_wal_level,\r\n> + $explanation);\r\n> \r\n> Do we really need this test? This test doesn't seem to be directly related to\r\n> the test purpose. It seems to be testing the behavior of the PostgresNode\r\n> functions. So I removed this from the patch.\r\nYeah, at the phase to commit, we don't need those.\r\nLet's make only essential parts left.\r\n\r\n\r\n> +# This protection should apply to recovery mode my $another_node =\r\n> +get_new_node('another_node');\r\n> \r\n> The same test is performed twice with different settings. So isn't it better to\r\n> merge the code for both tests into one common function, to simplify the\r\n> code? I did that.\r\n> \r\n> I also applied some cosmetic changes.\r\nThank you so much. Your comments are by far more precise.\r\nFurther, the refactoring of the two tests by the common function looks really good.\r\n\r\nSome minor comments.\r\n\r\n(1) \r\n\r\n+ # Confirm that the archive recovery fails with an error\r\n+ my $logfile = slurp_file($recovery_node->logfile());\r\n+ ok( $logfile =~\r\n+ qr/FATAL: WAL was generated with wal_level=minimal, cannot continue recovering/,\r\n+ \"$node_text ends with an error because it finds WAL generated with wal_level=minimal\");\r\n\r\nHow about a comment\r\n\"Confirm that the archive recovery fails with an expected error\" ?\r\n\r\n(2) \r\n\r\n+# Test for archive recovery\r\n+test_recovery_wal_level_minimal('archive_recovery', 'archive recovery', 0);\r\nWe have two spaces in one comment. Should be fixed.\r\n\r\n(3)\r\n\r\nI thought the function name 'test_recovery_wal_level_minimal'\r\nis a little bit weird and can be changed.\r\nHow about something like execute_recovery_scenario,\r\ntest_recovery_safeguard or test_archive_recovery_safeguard ?\r\n\r\n\r\n> >>> By the way, when I build postgres with this patch and\r\n> >>> enable-coverage option, the results of RT becomes unstable. Does\r\n> >>> someone know the\r\n> >> reason ?\r\n> >>> When it fails, I get stderr like below\r\n> >>\r\n> >> I have no idea about this. Does this happen even without the patch?\r\n> > Unfortunately, no. I get this only with --enable-coverage and with my\r\n> > patch, althought regression tests have passed with this patch.\r\n> > OSS HEAD doesn't produce the stderr even with --enable-coverage.\r\n> \r\n> Could you check whether the latest patch still causes this issue or not?\r\n> If it still causes, could you check which part (the change of xlog.c or the\r\n> addition of regression test) caused the issue?\r\nv07 reproduces the phenomena, even with make coverage-clean between tests.\r\nThe possibility is not high though.\r\n\r\nWe cannot do the regression test separately from xlog.c\r\nbecause it uses the new error message of xlog.c.\r\nApplying only the TAP test should fail because\r\nwe get an warning not error.\r\n\r\nTherefore, I took the changes of xlog.c only and\r\nI'm doing the RT in a loop now. If we can get the stderr again,\r\nthen we can guess xlog.c is the cause, right ?\r\n\r\nI think I can report the result tomorrow.\r\nJust in case, I'm running the RT for OSS HEAD in parallel...\r\nalthough I cannot reproduce it with it at all.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 5 Apr 2021 14:49:04 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Monday, April 5, 2021 9:16 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> On 2021/04/05 16:13, Kyotaro Horiguchi wrote:\r\n> > At Mon, 5 Apr 2021 12:34:53 +0900, Fujii Masao\r\n> > <masao.fujii@oss.nttdata.com> wrote in\r\n> >>\r\n> >>\r\n> >> On 2021/04/04 11:58, osumi.takamichi@fujitsu.com wrote:\r\n> >>>> IMO it's better to comment why this server restart is necessary.\r\n> >>>> As far as I understand correctly, this is necessary to ensure the\r\n> >>>> WAL file containing the record about the change of wal_level (to\r\n> >>>> minimal) is archived, so that the subsequent archive recovery will\r\n> >>>> be able to replay it.\r\n> >>> OK, added some comments. Further, I felt the way I wrote this part\r\n> >>> was not good at all and self-evident and developers who read this\r\n> >>> test would feel uneasy about that point.\r\n> >>> So, a little bit fixed that test so that we can get clearer\r\n> >>> conviction for wal archive.\r\n> >>\r\n> >> LGTM. Thanks for updating the patch!\r\n> >>\r\n> >> Attached is the updated version of the patch. I applied the following\r\n> >> changes.\r\n> >\r\n> > +\t\t\t\t errhint(\"Use a backup taken after setting\r\n> wal_level to higher than minimal \"\r\n> > +\t\t\t\t\t\t \"or recover to the point in\r\n> time before wal_level was changed\r\n> > +to minimal even though it may cause data loss.\")));\r\n> >\r\n> > Looking the HINT message, I thought that it's hard to find where up to\r\n> > I should recover.\r\n> \r\n> Yes. And, what's the worse, when archive recovery finds WAL generated with\r\n> wal_level=minimal and fails, \"minimal\" is saved in pg_control's wal_level.\r\n> This means that subsequent archive recovery always fails at the beginning of\r\n> recovery (before entering WAL replay main loop), in that case.\r\n> So even if recovery_targrt_lsn is specified, archive recovery fails before\r\n> checking that. Any recovery target settings have no effect on that case.\r\n> \r\n> Maybe we can avoid this, for example, by changing xlog_redo() so that it calls\r\n> CheckRequiredParameterValues() before UpdateControlFile().\r\n> But I'm not sure if this change is safe. Probably we need more time to\r\n> consider this, but right now there is no so much time left at this stage.\r\n> \r\n> At least the HINT message \"or recover to the point in time before wal_level\r\n> was changed to minimal even though it may cause data loss.\" should be\r\n> removed because it's not helpful at all...\r\n> \r\n> Ok, so if archive recovery finds WAL generated with wal_level=minimal and\r\n> fails, and also there is no backup taken after wal_level is set to higher than\r\n> minimal, basically [1] we lose whole database. I think that those who set\r\n> wal_level to minimal understand that this setting can cause data loss, for\r\n> example, any data loaded with wal_level=minimal may be lost later. But I'm\r\n> afraid that they might not understand the risk of whole database loss.\r\n> \r\n> Even if they take new backup just after they set wal_level to higher than\r\n> minimal, there is still the risk of whole database loss until the backup is\r\n> completed.\r\n> \r\n> This makes me think that we should document this risk.... Thought?\r\n+1. We should notify the risk when user changes\r\nthe wal_level higher than minimal to minimal\r\nto invoke a carefulness of user for such kind of operation.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 5 Apr 2021 14:54:13 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On 4/4/21 11:34 PM, Fujii Masao wrote:\n> \n> On 2021/04/04 11:58, osumi.takamichi@fujitsu.com wrote:\n>>> IMO it's better to comment why this server restart is necessary.\n>>> As far as I understand correctly, this is necessary to ensure the WAL \n>>> file\n>>> containing the record about the change of wal_level (to minimal) is \n>>> archived,\n>>> so that the subsequent archive recovery will be able to replay it.\n>> OK, added some comments. Further, I felt the way I wrote this part was \n>> not good at all and self-evident\n>> and developers who read this test would feel uneasy about that point.\n>> So, a little bit fixed that test so that we can get clearer conviction \n>> for wal archive.\n> \n> LGTM. Thanks for updating the patch!\n> \n> Attached is the updated version of the patch. I applied the following \n> changes.\n> Could you review this version? Barring any objection, I'm thinking to\n> commit this.\n\nI'm good with this patch as is. I would rather not bike shed the hint \ntoo much as time is short to get this patch in.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 5 Apr 2021 15:23:05 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Monday, April 5, 2021 11:49 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> \r\n> On Mon Apr 5, 2021 12:35 PM Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> wrote:\r\n> > >>> By the way, when I build postgres with this patch and\r\n> > >>> enable-coverage option, the results of RT becomes unstable. Does\r\n> > >>> someone know the\r\n> > >> reason ?\r\n> > >>> When it fails, I get stderr like below\r\n> > >>\r\n> > >> I have no idea about this. Does this happen even without the patch?\r\n> > > Unfortunately, no. I get this only with --enable-coverage and with\r\n> > > my patch, althought regression tests have passed with this patch.\r\n> > > OSS HEAD doesn't produce the stderr even with --enable-coverage.\r\n> >\r\n> > Could you check whether the latest patch still causes this issue or not?\r\n> > If it still causes, could you check which part (the change of xlog.c\r\n> > or the addition of regression test) caused the issue?\r\n> v07 reproduces the phenomena, even with make coverage-clean between\r\n> tests.\r\n> The possibility is not high though.\r\n> \r\n> We cannot do the regression test separately from xlog.c because it uses the\r\n> new error message of xlog.c.\r\n> Applying only the TAP test should fail because we get an warning not error.\r\n> \r\n> Therefore, I took the changes of xlog.c only and I'm doing the RT in a loop\r\n> now. If we can get the stderr again, then we can guess xlog.c is the cause,\r\n> right ?\r\n> \r\n> I think I can report the result tomorrow.\r\n> Just in case, I'm running the RT for OSS HEAD in parallel...\r\n> although I cannot reproduce it with it at all.\r\nI really apologie that this OSS HEAD reproduced that stderr with success of RT.\r\nI executed check-world in parallel with -j option so\r\nthe reason should be what Tsunakawa-san told us.\r\nIts probability is pretty low.\r\nI'm so sorry for making noises loudly.\r\nTherefore, I don't have any concern left.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 5 Apr 2021 23:31:51 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Tuesday, April 6, 2021 8:32 AM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com>\r\n> On Monday, April 5, 2021 11:49 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com>\r\n> > On Mon Apr 5, 2021 12:35 PM Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> > wrote:\r\n> > > >>> By the way, when I build postgres with this patch and\r\n> > > >>> enable-coverage option, the results of RT becomes unstable. Does\r\n> > > >>> someone know the\r\n> > > >> reason ?\r\n> > > >>> When it fails, I get stderr like below\r\n> > > >>\r\n> > > >> I have no idea about this. Does this happen even without the patch?\r\n> > > > Unfortunately, no. I get this only with --enable-coverage and with\r\n> > > > my patch, althought regression tests have passed with this patch.\r\n> > > > OSS HEAD doesn't produce the stderr even with --enable-coverage.\r\n> > >\r\n> > > Could you check whether the latest patch still causes this issue or not?\r\n> > > If it still causes, could you check which part (the change of xlog.c\r\n> > > or the addition of regression test) caused the issue?\r\n> > v07 reproduces the phenomena, even with make coverage-clean between\r\n> > tests.\r\n> > The possibility is not high though.\r\n> >\r\n> > We cannot do the regression test separately from xlog.c because it\r\n> > uses the new error message of xlog.c.\r\n> > Applying only the TAP test should fail because we get an warning not error.\r\n> >\r\n> > Therefore, I took the changes of xlog.c only and I'm doing the RT in a\r\n> > loop now. If we can get the stderr again, then we can guess xlog.c is\r\n> > the cause, right ?\r\n> >\r\n> > I think I can report the result tomorrow.\r\n> > Just in case, I'm running the RT for OSS HEAD in parallel...\r\n> > although I cannot reproduce it with it at all.\r\n> I really apologie that this OSS HEAD reproduced that stderr with success of\r\n> RT.\r\n> I executed check-world in parallel with -j option so the reason should be what\r\n> Tsunakawa-san told us.\r\n> Its probability is pretty low.\r\n> I'm so sorry for making noises loudly.\r\n> Therefore, I don't have any concern left.\r\nThis is *not* due to the patch but for future analysis.\r\nThe phenomena happens with a very little possibility, and in other case,\r\nwith --enable-coverage and make check-world causes an error like below.\r\nI used gcc 8.\r\n\r\n# Failed test 'pg_ctl start: no stderr'\r\n# at t/001_start_stop.pl line 48.\r\n# got: 'profiling:/home/(path/to/oss/head)/src/backend/utils/adt/regproc.gcda:Merge mismatch for function 24\r\n# '\r\n# expected: ''\r\n# Looks like you failed 1 test of 24.\r\nmake[2]: *** [Makefile:50: check] Error 1\r\nmake[1]: *** [Makefile:43: check-pg_ctl-recurse] Error 2\r\nmake[1]: *** Waiting for unfinished jobs....\r\nmake: *** [GNUmakefile:71: check-world-src/bin-recurse] Error 2\r\nmake: *** Waiting for unfinished jobs....\r\n\r\nThe steps I used are\r\n$ git clone and cd to OSS HEAD\r\n$ ./configure --enable-coverage --enable-cassert --enable-debug --enable-tap-tests --with-icu CFLAGS=-O0 --prefix=/where/to/put/binary\r\n$ make -j4 2> make.log\r\n$ make check-world -j4 2> make_check_world.log\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 6 Apr 2021 00:13:57 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On 2021/04/05 23:49, osumi.takamichi@fujitsu.com wrote:\n> Some minor comments.\n\nThanks for the review!\n\n> (1)\n> \n> + # Confirm that the archive recovery fails with an error\n> + my $logfile = slurp_file($recovery_node->logfile());\n> + ok( $logfile =~\n> + qr/FATAL: WAL was generated with wal_level=minimal, cannot continue recovering/,\n> + \"$node_text ends with an error because it finds WAL generated with wal_level=minimal\");\n> \n> How about a comment\n> \"Confirm that the archive recovery fails with an expected error\" ?\n\nSounds good to me. Fixed.\n\n\n> (2)\n> \n> +# Test for archive recovery\n> +test_recovery_wal_level_minimal('archive_recovery', 'archive recovery', 0);\n> We have two spaces in one comment. Should be fixed.\n\nYes, fixed.\n\n\n> (3)\n> \n> I thought the function name 'test_recovery_wal_level_minimal'\n> is a little bit weird and can be changed.\n> How about something like execute_recovery_scenario,\n> test_recovery_safeguard or test_archive_recovery_safeguard ?\n\nI prefer the original name, so I didn't change this.\nAnd we can rename it later if necessary.\n\nOn 2021/04/05 23:54, osumi.takamichi@fujitsu.com wrote:\n>> This makes me think that we should document this risk.... Thought?\n> +1. We should notify the risk when user changes\n> the wal_level higher than minimal to minimal\n> to invoke a carefulness of user for such kind of operation.\n\nI removed the HINT message \"or recover to the point in ...\" and\nadded the following note into the docs.\n\n Note that changing <varname>wal_level</varname> to\n <literal>minimal</literal> makes any base backups taken before\n unavailable for archive recovery and standby server, which may\n lead to database loss.\n\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 6 Apr 2021 09:41:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Tuesday, April 6, 2021 9:41 AM Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> On 2021/04/05 23:54, osumi.takamichi@fujitsu.com wrote:\r\n> >> This makes me think that we should document this risk.... Thought?\r\n> > +1. We should notify the risk when user changes\r\n> > the wal_level higher than minimal to minimal to invoke a carefulness\r\n> > of user for such kind of operation.\r\n> \r\n> I removed the HINT message \"or recover to the point in ...\" and added the\r\n> following note into the docs.\r\n> \r\n> Note that changing <varname>wal_level</varname> to\r\n> <literal>minimal</literal> makes any base backups taken before\r\n> unavailable for archive recovery and standby server, which may\r\n> lead to database loss.\r\nThank you for updating the patch. Let's make the sentence more strict.\r\n\r\nMy suggestion for this explanation is\r\n\"In order to prevent database corruption, changing\r\nwal_level to minimal from higher level in the middle of\r\nWAL archiving requires careful attention. It makes any base backups\r\ntaken before the operation unavailable for archive recovery\r\nand standby server. Also, it may lead to whole database loss when\r\narchive recovery fails with an error for that change.\r\nTake a new base backup immediately after making wal_level back to higher level.\"\r\n\r\nThen, we can be consistent with our new hint message,\r\n\"Use a backup taken after setting wal_level to higher than minimal.\".\r\n\r\nIs it better to add something similar to \"Take an offline backup when you stop the server\r\nand change the wal_level\" around the end of this part as another option for safeguard, also?\r\nFor the performance technique part, what we need to explain is same.\r\n\r\nAnother minor thing I felt we need to do might be to add double quotes to wrap minimal in errhint.\r\nOther errhints do so when we use it in a sentence.\r\n\r\nThere is no more additional comment from me !\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 6 Apr 2021 04:11:35 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "At Tue, 6 Apr 2021 04:11:35 +0000, \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> wrote in \n> On Tuesday, April 6, 2021 9:41 AM Fujii Masao <masao.fujii@oss.nttdata.com>\n> > On 2021/04/05 23:54, osumi.takamichi@fujitsu.com wrote:\n> > >> This makes me think that we should document this risk.... Thought?\n> > > +1. We should notify the risk when user changes\n> > > the wal_level higher than minimal to minimal to invoke a carefulness\n> > > of user for such kind of operation.\n> > \n> > I removed the HINT message \"or recover to the point in ...\" and added the\n> > following note into the docs.\n> > \n> > Note that changing <varname>wal_level</varname> to\n> > <literal>minimal</literal> makes any base backups taken before\n> > unavailable for archive recovery and standby server, which may\n> > lead to database loss.\n> Thank you for updating the patch. Let's make the sentence more strict.\n> \n> My suggestion for this explanation is\n> \"In order to prevent database corruption, changing\n> wal_level to minimal from higher level in the middle of\n> WAL archiving requires careful attention. It makes any base backups\n> taken before the operation unavailable for archive recovery\n> and standby server. Also, it may lead to whole database loss when\n> archive recovery fails with an error for that change.\n> Take a new base backup immediately after making wal_level back to higher level.\"\n\nThe first sentense looks like somewhat nanny-ish. The database is not\ncorrupt at the time of this error. We just lose updates after the last\nread segment at this point. As Fujii-san said, we can continue\nrecoverying using crash recovery and we will reach having a corrupt\ndatabase after that.\n\nAbout the last sentense, I prefer more flat wording, such as \"You need\nto take a new base backup...\"\n\n> Then, we can be consistent with our new hint message,\n> \"Use a backup taken after setting wal_level to higher than minimal.\".\n> \n> Is it better to add something similar to \"Take an offline backup when you stop the server\n> and change the wal_level\" around the end of this part as another option for safeguard, also?\n\nBackup policy is completely a matter of DBAs. If flipping wal_level\nalone highly causes unstartable corruption,,, I think it is a bug.\n\n> For the performance technique part, what we need to explain is same.\n\nMight be good, but in simpler wording.\n\n> Another minor thing I felt we need to do might be to add double quotes to wrap minimal in errhint.\n\nSince the error about hot_standby has gone, either will do for me.\n\n> Other errhints do so when we use it in a sentence.\n> \n> There is no more additional comment from me !\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 06 Apr 2021 15:24:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Tuesday, April 6, 2021 3:24 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Tue, 6 Apr 2021 04:11:35 +0000, \"osumi.takamichi@fujitsu.com\"\n> <osumi.takamichi@fujitsu.com> wrote in\n> > On Tuesday, April 6, 2021 9:41 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com>\n> > > On 2021/04/05 23:54, osumi.takamichi@fujitsu.com wrote:\n> > > >> This makes me think that we should document this risk.... Thought?\n> > > > +1. We should notify the risk when user changes\n> > > > the wal_level higher than minimal to minimal to invoke a\n> > > > carefulness of user for such kind of operation.\n> > >\n> > > I removed the HINT message \"or recover to the point in ...\" and\n> > > added the following note into the docs.\n> > >\n> > > Note that changing <varname>wal_level</varname> to\n> > > <literal>minimal</literal> makes any base backups taken before\n> > > unavailable for archive recovery and standby server, which may\n> > > lead to database loss.\n> > Thank you for updating the patch. Let's make the sentence more strict.\n> >\n> > My suggestion for this explanation is\n> > \"In order to prevent database corruption, changing wal_level to\n> > minimal from higher level in the middle of WAL archiving requires\n> > careful attention. It makes any base backups taken before the\n> > operation unavailable for archive recovery and standby server. Also,\n> > it may lead to whole database loss when archive recovery fails with an\n> > error for that change.\n> > Take a new base backup immediately after making wal_level back to higher\n> level.\"\n> \n> The first sentense looks like somewhat nanny-ish. The database is not\n> corrupt at the time of this error. \nYes. Excuse me for misleading sentence.\nI just wanted to write why the error was introduced,\nbut it was not necessary.\nWe should remove and fix the first part of the sentence.\n\n> We just lose updates after the last read\n> segment at this point. 
As Fujii-san said, we can continue recoverying using\n> crash recovery and we will reach having a corrupt database after that.\nOK. Thank you for explanation.\n\n\n> About the last sentence, I prefer more flat wording, such as \"You need to take\n> a new base backup...\"\nEither is fine to me.\n\n> > Then, we can be consistent with our new hint message, \"Use a backup\n> > taken after setting wal_level to higher than minimal.\".\n> \n> > Is it better to add something similar to \"Take an offline backup when\n> > you stop the server and change the wal_level\" around the end of this part as\n> another option for safeguard, also?\n> \n> Backup policy is completely a matter of DBAs. \nOK. No problem. No need to add it.\n\n> If flipping wal_level alone\n> highly causes unstartable corruption,,, I think it is a bug.\n> > For the performance technique part, what we need to explain is same.\n>\n> Might be good, but in simpler wording.\nYeah, I agree.\n \n> > Another minor thing I felt we need to do might be to add double quotes to\n> wrap minimal in errhint.\n> \n> Since the error about hot_standby has gone, either will do for me.\nThanks for sharing your thoughts.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 6 Apr 2021 06:59:08 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/04/06 15:59, osumi.takamichi@fujitsu.com wrote:\n> I just wanted to write why the error was introduced,\n> but it was not necessary.\n> We should remove and fix the first part of the sentence.\n\nSo the consensus is almost the same as the latest patch?\nIf they are not so different, I'm thinking to push the latest version at first.\nThen we can improve the docs if required.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 6 Apr 2021 20:13:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On 4/6/21 7:13 AM, Fujii Masao wrote:\n> \n> On 2021/04/06 15:59, osumi.takamichi@fujitsu.com wrote:\n>> I just wanted to write why the error was introduced,\n>> but it was not necessary.\n>> We should remove and fix the first part of the sentence.\n> \n> So the consensus is almost the same as the latest patch?\n> If they are not so different, I'm thinking to push the latest version at \n> first.\n> Then we can improve the docs if required.\n\n+1\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 6 Apr 2021 07:44:13 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Tuesday, April 6, 2021 8:44 PM David Steele <david@pgmasters.net> wrote:\n> On 4/6/21 7:13 AM, Fujii Masao wrote:\n> >\n> > On 2021/04/06 15:59, osumi.takamichi@fujitsu.com wrote:\n> >> I just wanted to write why the error was introduced, but it was not\n> >> necessary.\n> >> We should remove and fix the first part of the sentence.\n> >\n> > So the consensus is almost the same as the latest patch?\n> > If they are not so different, I'm thinking to push the latest version\n> > at first.\n> > Then we can improve the docs if required.\n> \n> +1\nYes, please. What I suggested is almost same as your idea.\nThank you for your confirmation.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 6 Apr 2021 11:48:00 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/04/06 20:44, David Steele wrote:\n> On 4/6/21 7:13 AM, Fujii Masao wrote:\n>>\n>> On 2021/04/06 15:59, osumi.takamichi@fujitsu.com wrote:\n>>> I just wanted to write why the error was introduced,\n>>> but it was not necessary.\n>>> We should remove and fix the first part of the sentence.\n>>\n>> So the consensus is almost the same as the latest patch?\n>> If they are not so different, I'm thinking to push the latest version atfirst.\n>> Then we can improve the docs if required.\n> \n> +1\n\nThanks! So I pushed the patch.\n\n\nOn 2021/04/06 20:48, osumi.takamichi@fujitsu.com wrote:\n> Yes, please. What I suggested is almost same as your idea.\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 6 Apr 2021 23:01:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On 2021/04/06 23:01, Fujii Masao wrote:\n> \n> \n> On 2021/04/06 20:44, David Steele wrote:\n>> On 4/6/21 7:13 AM, Fujii Masao wrote:\n>>>\n>>> On 2021/04/06 15:59, osumi.takamichi@fujitsu.com wrote:\n>>>> I just wanted to write why the error was introduced,\n>>>> but it was not necessary.\n>>>> We should remove and fix the first part of the sentence.\n>>>\n>>> So the consensus is almost the same as the latest patch?\n>>> If they are not so different, I'm thinking to push the latest version atfirst.\n>>> Then we can improve the docs if required.\n>>\n>> +1\n> \n> Thanks! So I pushed the patch.\n\nThe buildfarm members \"drongo\" and \"fairywren\" reported that the regression test (024_archive_recovery.pl) commit 9de9294b0c added failed with the following error. ISTM the cause of this failure is that the test calls $node->init() without \"allows_streaming => 1\" and which doesn't add pg_hba.conf entry for TCP/IP connection from pg_basebackup. So I think that the attached patch needs to be applied.\n\n pg_basebackup: error: connection to server at \"127.0.0.1\", port 52316 failed: FATAL: no pg_hba.conf entry for replication connection from host \"127.0.0.1\", user \"pgrunner\", no encryption\n Bail out! system pg_basebackup failed\n\nI guess that only \"drongo\" and \"fairywren\" reported this issue because they are running on Windows and pg_basebackup uses TCP/IP connection. OTOH I guess that other buildfarm members running on non-Windows use Unix-domain for pg_basebackup and pg_hba.conf entry for that exists by default. So ISTM they reported no such issue.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 7 Apr 2021 03:03:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Thanks! So I pushed the patch.\n\nSeems like the test case added by this commit is not\nworking on Windows.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Apr 2021 16:01:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/04/07 3:03, Fujii Masao wrote:\n> \n> \n> On 2021/04/06 23:01, Fujii Masao wrote:\n>>\n>>\n>> On 2021/04/06 20:44, David Steele wrote:\n>>> On 4/6/21 7:13 AM, Fujii Masao wrote:\n>>>>\n>>>> On 2021/04/06 15:59, osumi.takamichi@fujitsu.com wrote:\n>>>>> I just wanted to write why the error was introduced,\n>>>>> but it was not necessary.\n>>>>> We should remove and fix the first part of the sentence.\n>>>>\n>>>> So the consensus is almost the same as the latest patch?\n>>>> If they are not so different, I'm thinking to push the latest version atfirst.\n>>>> Then we can improve the docs if required.\n>>>\n>>> +1\n>>\n>> Thanks! So I pushed the patch.\n> \n> The buildfarm members \"drongo\" and \"fairywren\" reported that the regression test (024_archive_recovery.pl) commit 9de9294b0c added failed with the following error. ISTM the cause of this failure is that the test calls $node->init() without \"allows_streaming => 1\" and which doesn't add pg_hba.conf entry for TCP/IP connection from pg_basebackup. So I think that the attached patch needs to be applied.\n\nPushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 7 Apr 2021 07:46:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "\n\nOn 2021/04/07 5:01, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> Thanks! So I pushed the patch.\n> \n> Seems like the test case added by this commit is not\n> working on Windows.\n\nThanks for the report! My analysis is [1], and I pushed the proposed patch.\n\n[1]\nhttps://postgr.es/m/3cc3909d-f779-7a74-c201-f1f7f62c7497@oss.nttdata.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 7 Apr 2021 07:50:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Stronger safeguard for archive recovery not to miss data"
},
{
"msg_contents": "On Wednesday, April 7, 2021 7:50 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> On 2021/04/07 5:01, Tom Lane wrote:\r\n> > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\r\n> >> Thanks! So I pushed the patch.\r\n> >\r\n> > Seems like the test case added by this commit is not working on\r\n> > Windows.\r\n> \r\n> Thanks for the report! My analysis is [1], and I pushed the proposed patch.\r\n> \r\n> [1]\r\n> https://postgr.es/m/3cc3909d-f779-7a74-c201-f1f7f62c7497@oss.nttdata.co\r\n> m\r\nFujii-san, Tom,\r\nthank you so much for your quick analysis and modification.\r\nI appreciate those.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 7 Apr 2021 05:22:22 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Stronger safeguard for archive recovery not to miss data"
}
] |
[
{
"msg_contents": "Hi,\n\nI always compile PostgreSQL from source and never install a pre-compiled package. When I run configure, the last check, which is for DocBook XML always takes very long compared to all others. It's about 15 to 20 secs or so. I noticed that with many configure scripts, not only the one of PostgreSQL. Why is that?\n\nCheers,\nPaul\n\n",
"msg_date": "Thu, 26 Nov 2020 10:11:00 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "configure and DocBook XML"
},
{
"msg_contents": "On 2020-Nov-26, Paul F�rster wrote:\n\n> Hi,\n> \n> I always compile PostgreSQL from source and never install a pre-compiled package. When I run configure, the last check, which is for DocBook XML always takes very long compared to all others. It's about 15 to 20 secs or so. I noticed that with many configure scripts, not only the one of PostgreSQL. Why is that?\n\nMy guess is that it's related to trying to obtain stylesheets from\nremote Internet locations that are missing locally.\n\n\n",
"msg_date": "Thu, 26 Nov 2020 10:47:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Hi Alvaro,\n\n> On 26. Nov, 2020, at 14:47, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> My guess is that it's related to trying to obtain stylesheets from\n> remote Internet locations that are missing locally.\n\nI don't know DocBook at all, so I can't say. But it takes about the same time, whether I run configure on a machine that is connected to the internet or one that isn't.\n\nWhatever it does, it's more out of curiosity. PostgreSQL compiles fine and this is what counts. :-)\n\nCheers,\nPaul\n\n",
"msg_date": "Thu, 26 Nov 2020 14:52:32 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "On 2020-Nov-26, Paul F�rster wrote:\n\n> Hi Alvaro,\n> \n> > On 26. Nov, 2020, at 14:47, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > \n> > My guess is that it's related to trying to obtain stylesheets from\n> > remote Internet locations that are missing locally.\n> \n> I don't know DocBook at all, so I can't say. But it takes about the\n> same time, whether I run configure on a machine that is connected to\n> the internet or one that isn't.\n\nIt might be timing out, then. (The docbook test takes well under a\nsecond for me, but that's probably because I have all stylesheets\nlocally).\n\nOne way to know for sure would be to run it under strace, and see where\nit takes a large number of seconds. Maybe something like\n strace -f -etrace=%network -T -tt -o/tmp/configure.trace ./configure <opts>\n\n> Whatever it does, it's more out of curiosity. PostgreSQL compiles fine\n> and this is what counts. :-)\n\nWell, it is what it is partly because people have struggled to polish\nthese little annoyances :-)\n\n\n",
"msg_date": "Thu, 26 Nov 2020 11:19:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Hi Alvaro,\n\n> On 26. Nov, 2020, at 15:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> One way to know for sure would be to run it under strace, and see where\n> it takes a large number of seconds. Maybe something like\n> strace -f -etrace=%network -T -tt -o/tmp/configure.trace ./configure <opts>\n\nok, I'll try this. Thanks very much.\n\n> Well, it is what it is partly because people have struggled to polish\n> these little annoyances :-)\n\nit's not an annoyance for me with a thing I do 3 or 4 times a year. I'm a DBA, not a developer, so I just build new packages when new versions are released. I don't have to build a couple of times per day. :-)\n\nCheers,\nPaul\n\n",
"msg_date": "Thu, 26 Nov 2020 15:24:11 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Nov-26, Paul Förster wrote:\n>> On 26. Nov, 2020, at 14:47, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>> My guess is that it's related to trying to obtain stylesheets from\n>>> remote Internet locations that are missing locally.\n\n>> I don't know DocBook at all, so I can't say. But it takes about the\n>> same time, whether I run configure on a machine that is connected to\n>> the internet or one that isn't.\n\n> It might be timing out, then. (The docbook test takes well under a\n> second for me, but that's probably because I have all stylesheets\n> locally).\n\nOn machines where I don't have the stylesheets installed, it always\ntakes several seconds (2 or 3, I think, though I've not put a stopwatch\non it). 15 to 20 sec does seem like a lot, so it makes me wonder if\nPaul's network environment is well-configured.\n\nThere's a nearby thread in which I was suggesting that we should just\nnot bother with this configure test [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/E2EE6B76-2D96-408A-B961-CAE47D1A86F0%40yesql.se\n\n\n",
"msg_date": "Thu, 26 Nov 2020 11:21:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Hi Tom,\n\n> On 26. Nov, 2020, at 17:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> On machines where I don't have the stylesheets installed, it always\n> takes several seconds (2 or 3, I think, though I've not put a stopwatch\n> on it). 15 to 20 sec does seem like a lot, so it makes me wonder if\n> Paul's network environment is well-configured.\n\ncan't complain here. The net and all other stuff run smoothly. No timeouts, no things being slow or any other problem.\n\n> There's a nearby thread in which I was suggesting that we should just\n> not bother with this configure test [1].\n> [1] https://www.postgresql.org/message-id/flat/E2EE6B76-2D96-408A-B961-CAE47D1A86F0%40yesql.se\n\nI haven't installed DocBook at all. So the check for DocBook naturally always fails. Could that be the reason?\n\nCheers,\nPaul\n\n",
"msg_date": "Thu, 26 Nov 2020 17:32:23 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com> writes:\n> On 26. Nov, 2020, at 17:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There's a nearby thread in which I was suggesting that we should just\n>> not bother with this configure test [1].\n>> [1] https://www.postgresql.org/message-id/flat/E2EE6B76-2D96-408A-B961-CAE47D1A86F0%40yesql.se\n\n> I haven't installed DocBook at all. So the check for DocBook naturally always fails. Could that be the reason?\n\nIf you don't have the docbook stylesheets, but you do have xmllint,\nconfigure's probe will cause xmllint to try to download those\nstylesheets off the net. For me, that always succeeds, but it\ntakes two or three seconds. I find it curious that it seems to be\ntiming out for you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Nov 2020 11:48:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Hi Tom,\n\n> On 26. Nov, 2020, at 17:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> If you don't have the docbook stylesheets, but you do have xmllint,\n> configure's probe will cause xmllint to try to download those\n> stylesheets off the net. For me, that always succeeds, but it\n> takes two or three seconds. I find it curious that it seems to be\n> timing out for you.\n\nwell, openSUSE 15.2 Leap here:\n\npaul@weasel:~$ xmllint --version\nxmllint: using libxml version 20907\n compiled with: Threads Tree Output Push Reader Patterns Writer SAXv1 FTP HTTP DTDValid HTML Legacy C14N Catalog XPath XPointer XInclude Iconv ISO8859X Unicode Regexps Automata Expr Schemas Schematron Modules Debug Zlib Lzma \npaul@weasel:~$ type -a xmllint\nxmllint is /usr/bin/xmllint\npaul@weasel:~$ zypper se 'xmllint*'\nLoading repository data...\nReading installed packages...\nNo matching items found.\n\nI wonder why zypper tells me, it's not there. If I use yast2 (GUI) to search it, it's there.\n\nAnyway, DocBook XML is definitely not there, neither in zypper se, nor in yast2.\n\nSo why would xmllint try to download DocBook XML stylesheets if DocBook is not installed? I'm not a developer but such a thing doesn't make sense to me.\n\nCheers,\nPaul\n\n",
"msg_date": "Fri, 27 Nov 2020 09:46:01 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com> writes:\n> So why would xmllint try to download DocBook XML stylesheets if DocBook is not installed? I'm not a developer but such a thing doesn't make sense to me.\n\nYou'd have to ask the authors of those programs.\n\nTo me, it makes sense to have an option to do that, but I do find it\nsurprising that it's the default.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Nov 2020 10:29:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "On 2020-11-26 17:48, Tom Lane wrote:\n> =?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com> writes:\n>> On 26. Nov, 2020, at 17:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> There's a nearby thread in which I was suggesting that we should just\n>>> not bother with this configure test [1].\n>>> [1] https://www.postgresql.org/message-id/flat/E2EE6B76-2D96-408A-B961-CAE47D1A86F0%40yesql.se\n> \n>> I haven't installed DocBook at all. So the check for DocBook naturally always fails. Could that be the reason?\n> \n> If you don't have the docbook stylesheets, but you do have xmllint,\n> configure's probe will cause xmllint to try to download those\n> stylesheets off the net. For me, that always succeeds, but it\n> takes two or three seconds. I find it curious that it seems to be\n> timing out for you.\n\nCorrection: xmllint is interested in the DocBook XML DTD, which is \ndownloadable from \n<http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd>. This is what \nmight be in a package named \"docbook\" or \"docbook-xml\". xsltproc is \ninterested in the DocBook XSLT stylesheets, which are at \n<http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl>, or \nlocally in a package named something like \"docbook-xsl\".\n\nAFAICT, configure only runs an xmllint test, so your download issues (at \nthat point) are likely related to the DTD, not the stylesheets.\n\n\n",
"msg_date": "Fri, 27 Nov 2020 18:11:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Hi Tom,\n\n> On 27. Nov, 2020, at 16:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> To me, it makes sense to have an option to do that, but I do find it\n> surprising that it's the default.\n\nok, last question: is there an option for configure to not check for DocBook? It's not installed and PostgreSQL doesn't need it to compile and run just fine.\n\nAs I said, it's not an annoyance or something as I only need to build new binaries after the versions (major and minor) are released, which is not often enough to call it an annoyance. :-) I just would like to switch it off if possible, but it's ok if it's not.\n\nCheers,\nPaul\n\n",
"msg_date": "Fri, 27 Nov 2020 18:50:55 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com> writes:\n> ok, last question: is there an option for configure to not check for DocBook? It's not installed and PostgreSQL doesn't need it to compile and run just fine.\n\nSee the other thread I pointed to.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Nov 2020 12:54:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Hi Tom,\n\n> On 27. Nov, 2020, at 18:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> See the other thread I pointed to.\n\nok, thanks.\n\nCheers,\nPaul\n\n\n",
"msg_date": "Fri, 27 Nov 2020 18:56:05 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 10:29:24AM -0500, Tom Lane wrote:\n> To me, it makes sense to have an option to do that, but I do find it\n> surprising that it's the default.\n\nBut there is no need for an option, right? It is already possible to\noverride the location where xmllint looks for the catalogs by setting\nSGML_CATALOG_FILES and XML_CATALOG_FILES. Not sure for SUSE, but one\ncan use /etc/{sgml,xml}/catalog on Debian, allowing the configure\ncheck to rely on what's stored locally, making configure not rely on\nany external resource.\n--\nMichael",
"msg_date": "Sun, 29 Nov 2020 16:14:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Nov 27, 2020 at 10:29:24AM -0500, Tom Lane wrote:\n>> To me, it makes sense to have an option to do that, but I do find it\n>> surprising that it's the default.\n\n> But there is no need for an option, right? It is already possible to\n> override the location where xmllint looks for the catalogs by setting\n> SGML_CATALOG_FILES and XML_CATALOG_FILES.\n\nBut the issue here is about what happens when you don't have a local\ncopy of the DTD. I don't think adjusting those variables will\nchange that.\n\nAnyway, I think that the other thread is coming to the conclusion\nthat it's okay to just document how to inject --nonet via configure,\nso we don't need to touch the makefiles. As long as we get rid of\nthe unrequested network fetch during configure, it's okay by me if an\nactual attempt to build the docs will possibly cause a network fetch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Nov 2020 12:19:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure and DocBook XML"
}
] |
[
{
"msg_contents": "This is the latest patch for feature GTT(v38).\nEverybody, if you have any questions, please let me know.\n\n\nWenjing",
"msg_date": "Thu, 26 Nov 2020 19:46:24 +0800",
"msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "[Proposal] Global temporary tables"
},
{
"msg_contents": "Hi\n\nčt 26. 11. 2020 v 12:46 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n> This is the latest patch for feature GTT(v38).\n> Everybody, if you have any questions, please let me know.\n>\n\nplease, co you send a rebased patch. It is broken again\n\nRegards\n\nPavel\n\n\n>\n> Wenjing\n>\n>\n>\n\nHičt 26. 11. 2020 v 12:46 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:This is the latest patch for feature GTT(v38).\nEverybody, if you have any questions, please let me know.please, co you send a rebased patch. It is broken againRegardsPavel \n\n\nWenjing",
"msg_date": "Tue, 8 Dec 2020 06:28:35 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "> 2020年12月8日 13:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> čt 26. 11. 2020 v 12:46 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> This is the latest patch for feature GTT(v38).\n> Everybody, if you have any questions, please let me know.\n> \n> please, co you send a rebased patch. It is broken again\n\nSure\n\nThis patch is base on 16c302f51235eaec05a1f85a11c1df04ef3a6785\nSimplify code for getting a unicode codepoint's canonical class.\n\n\nWenjing\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> Wenjing\n> \n>",
"msg_date": "Wed, 9 Dec 2020 14:33:59 +0800",
"msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "st 9. 12. 2020 v 7:34 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n>\n>\n> 2020年12月8日 13:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n> Hi\n>\n> čt 26. 11. 2020 v 12:46 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>> This is the latest patch for feature GTT(v38).\n>> Everybody, if you have any questions, please let me know.\n>>\n>\n> please, co you send a rebased patch. It is broken again\n>\n>\n> Sure\n>\n> This patch is base on 16c302f51235eaec05a1f85a11c1df04ef3a6785\n> Simplify code for getting a unicode codepoint's canonical class.\n>\n>\nThank you\n\nPavel\n\n>\n> Wenjing\n>\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Wenjing\n>>\n>>\n>>\n>\n\nst 9. 12. 2020 v 7:34 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:2020年12月8日 13:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:Hičt 26. 11. 2020 v 12:46 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:This is the latest patch for feature GTT(v38).\nEverybody, if you have any questions, please let me know.please, co you send a rebased patch. It is broken againSureThis patch is base on 16c302f51235eaec05a1f85a11c1df04ef3a6785Simplify code for getting a unicode codepoint's canonical class.Thank youPavelWenjingRegardsPavel \n\n\nWenjing",
"msg_date": "Wed, 9 Dec 2020 10:07:12 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "HI all\n\nI added comments and refactored part of the implementation in Storage_gtt.c.\nI hope this will help Reviewer.\n\n\nWenjing\n\n\n> 2020年12月9日 17:07,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> st 9. 12. 2020 v 7:34 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年12月8日 13:28,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> čt 26. 11. 2020 v 12:46 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> This is the latest patch for feature GTT(v38).\n>> Everybody, if you have any questions, please let me know.\n>> \n>> please, co you send a rebased patch. It is broken again\n> \n> Sure\n> \n> This patch is base on 16c302f51235eaec05a1f85a11c1df04ef3a6785\n> Simplify code for getting a unicode codepoint's canonical class.\n> \n> \n> Thank you\n> \n> Pavel\n> \n> Wenjing\n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>",
"msg_date": "Thu, 17 Dec 2020 11:06:15 +0800",
"msg_from": "wenjing zeng <wjzeng2012@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "HI all\n\nI rebased patch, fix OID conflicts, fix some comments.\nCode updates to v41.\n\n\nWenjing",
"msg_date": "Tue, 22 Dec 2020 11:20:41 +0800",
"msg_from": "wenjing <wjzeng2012@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "Hi\n\nút 22. 12. 2020 v 4:20 odesílatel wenjing <wjzeng2012@gmail.com> napsal:\n\n> HI all\n>\n> I rebased patch, fix OID conflicts, fix some comments.\n> Code updates to v41.\n>\n\nPlease, can you do rebase?\n\nRegards\n\nPavel\n\n\n>\n> Wenjing\n>\n>",
"msg_date": "Mon, 25 Jan 2021 19:02:20 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> 于2021年1月26日周二 上午2:03写道:\n\n> Hi\n>\n> út 22. 12. 2020 v 4:20 odesílatel wenjing <wjzeng2012@gmail.com> napsal:\n>\n>> HI all\n>>\n>> I rebased patch, fix OID conflicts, fix some comments.\n>> Code updates to v41.\n>>\n>\n> Please, can you do rebase?\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Wenjing\n>>\n>>\nSure.\n\nThis patch(V42) is base on aa6e46daf5304e8d9e66fefc1a5bd77622ec6402\n\nWenjing",
"msg_date": "Mon, 1 Feb 2021 16:49:10 +0800",
"msg_from": "wenjing <wjzeng2012@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "Hi,\r\n\r\nTrying to apply the last patch (global_temporary_table_v42-pg14.patch) to the code cloned from github, ending with this error:\r\n\r\nusr/bin/ld: catalog/storage_gtt.o: in function `up_gtt_relstats':\r\nstorage_gtt.c:(.text+0x1276): undefined reference to `ReadNewTransactionId'\r\ncollect2: error: ld returned 1 exit status\r\nmake[2]: *** [Makefile:66: postgres] Error 1\r\nmake[2]: Leaving directory '/root/downloads/postgres/src/backend'\r\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\r\nmake[1]: Leaving directory '/root/downloads/postgres/src'\r\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\r\n\r\nAnyway, what's the next step to see this very important feature included in v14 ?\r\n\r\nBest regards,\r\nAlexandre",
"msg_date": "Wed, 24 Feb 2021 16:59:39 +0000",
"msg_from": "Alexandre Arruda <adaldeia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
},
{
"msg_contents": "Sorry, I don't have checkout the hash codebase for the patch.\r\nCompiled OK now.\r\n\r\nBest regards",
"msg_date": "Wed, 24 Feb 2021 18:59:53 +0000",
"msg_from": "Alexandre Arruda <adaldeia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] Global temporary tables"
}
]
[
{
"msg_contents": "Hello,\n\nI am currently working on Library with some specific operators to\nmanipulate RangeType in PostGreSQL. I would like to know if it is possible\nto return a setof rangetype elements in Postresql in C-Language function\nusing the suggestion like specified here:\nhttps://www.postgresql.org/docs/current/xfunc-c.html#XFUNC-C-RETURN-SET. I\nhave been trying this for days. If what I am trying to do is impossible, is\nthere any way I can use to have a RangeType set return?\n\nRegards,\n\n*Andjasubu Bungama, Patrick *",
"msg_date": "Thu, 26 Nov 2020 16:28:20 -0500",
"msg_from": "Patrick Handja <patrick.bungama@gmail.com>",
"msg_from_op": true,
"msg_subject": "Setof RangeType returns"
},
{
"msg_contents": "On 26/11/2020 23:28, Patrick Handja wrote:\n> Hello,\n> \n> I am currently working on Library with some specific operators to \n> manipulate RangeType in PostGreSQL. I would like to know if it is \n> possible to return a setof rangetype elements in Postresql in C-Language \n> function using the suggestion like specified here: \n> https://www.postgresql.org/docs/current/xfunc-c.html#XFUNC-C-RETURN-SET \n> <https://www.postgresql.org/docs/current/xfunc-c.html#XFUNC-C-RETURN-SET>. \n> I have been trying this for days. If what I am trying to do is \n> impossible, is there any way I can use to have a RangeType set return?\n\nYes, it is possible.\n\nI bet there's just a silly little bug or misunderstanding in your code. \nThis stuff can be fiddly. Feel free to post what you have here, and I'm \nsure someone will point out where the problem is very quickly.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 27 Nov 2020 11:01:40 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Setof RangeType returns"
},
{
"msg_contents": "Hello Heikki,\n\nThank you for responding to my email.\nThis is what I am doing:\n\n//========================= C file ====================================\nstatic int\nget_range_lower(FunctionCallInfo fcinfo, RangeType *r1)\n{\nTypeCacheEntry *typcache;\nRangeBound lower;\nRangeBound upper;\nbool empty;\n\ntypcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1));\nrange_deserialize(typcache, r1, &lower, &upper, &empty);\n\n/* Return NULL if there's no finite lower bound */\nif (empty || lower.infinite)\nPG_RETURN_NULL();\n\nreturn (lower.val);\n}\n\nstatic int\nget_range_upper_griis(FunctionCallInfo fcinfo, RangeType *r1)\n{\nTypeCacheEntry *typcache;\nRangeBound lower;\nRangeBound upper;\nbool empty;\n\ntypcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1));\nrange_deserialize(typcache, r1, &lower, &upper, &empty);\n\n/* Return NULL if there's no finite upper bound */\nif (empty || upper.infinite)\nPG_RETURN_NULL();\n\nreturn (upper.val);\n}\n\nstatic RangeType *\nmake_range(int start, int finish)\n{\nRangeBound lower;\nRangeBound upper;\n\nlower.val = (Datum) (start);\nlower.infinite = false;\nlower.inclusive = true;\nlower.lower = true;\n\nupper.val = (Datum) (finish);\nupper.infinite = false;\nupper.inclusive = false;\nupper.lower = false;\n\nif (!lower.infinite && !lower.inclusive)\n{\nlower.val = DirectFunctionCall2(int4pl, lower.val, Int32GetDatum(1));\nlower.inclusive = true;\n}\n\nif (!upper.infinite && upper.inclusive)\n{\nupper.val = DirectFunctionCall2(int4pl, upper.val, Int32GetDatum(1));\nupper.inclusive = false;\n}\n\nTypeCacheEntry *typcache;\nPG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n}\n\ntypedef struct\n{\nint32 current;\nint32 finish;\nint32 step;\n} generate_series_range_fctx;\n\nstatic inline bool\ncontrol_increment(int32 a, int32 b, int32 *result)\n{\nint64 res = (int64) a + (int64) b;\n\nif (res > PG_INT32_MAX || res < PG_INT32_MIN)\n{\n*result = 0x5EED;\nreturn true;\n}\n*result = (int32) 
res;\nreturn false;\n}\n\nPG_FUNCTION_INFO_V1(generate_ranges);\nDatum\ngenerate_ranges(PG_FUNCTION_ARGS)\n{\nFuncCallContext *funcctx;\ngenerate_series_range_fctx *fctx;\nMemoryContext oldcontext;\n\nRangeType *r1 = PG_GETARG_RANGE_P(0);\nRangeType *result;\nTypeCacheEntry *typcache;\ntypcache = range_get_typcache(fcinfo, RangeTypeGetOid(r1));\nint32 lower = get_range_lower(fcinfo, r1);\nint32 upper = get_range_upper(fcinfo, r1);\n\nif (SRF_IS_FIRSTCALL())\n{\nint32 start = lower;\nint32 finish = upper;\nint32 step = 1;\n\nfuncctx = SRF_FIRSTCALL_INIT();\noldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\nfctx = (generate_series_range_fctx *)\npalloc(sizeof(generate_series_range_fctx));\n\nfctx->current = start;\nfctx->finish = finish;\nfctx->step = step;\n\nfuncctx->user_fctx = fctx;\nMemoryContextSwitchTo(oldcontext);\n}\n\nfuncctx = SRF_PERCALL_SETUP();\nfctx = funcctx->user_fctx;\n\nresult = make_range(fctx->current, fctx->current+1);\n\nif ((fctx->step > 0 && fctx->current <= fctx->finish) ||\n(fctx->step < 0 && fctx->current >= fctx->finish))\n{\nif (control_increment(fctx->current, fctx->step, &fctx->current))\nfctx->step = 0;\n\nSRF_RETURN_NEXT(funcctx, PointerGetDatum(result));\n}\nelse\nSRF_RETURN_DONE(funcctx);\n}\n\n//============================= SQL file ================================\nCREATE FUNCTION generate_ranges(anyrange) RETURNS setof anyrange\nAS 'MODULE_PATHNAME'\nLANGUAGE C IMMUTABLE STRICT;\n//========================= Test File Expected ============================\nSELECT generate_ranges(int4range(4,10));\n generate_ranges\n-----------------------\n [4,5)\n [5,6)\n [6,7)\n [7,8)\n [8,9)\n [9,10)\n [10,11)\n(7 row)\n//=====================================================================\n\nRegards,\n\n*Andjasubu Bungama, Patrick *\n\n\n\nLe ven. 27 nov. 
2020 à 04:01, Heikki Linnakangas <hlinnaka@iki.fi> a écrit :\n\n> On 26/11/2020 23:28, Patrick Handja wrote:\n> > Hello,\n> >\n> > I am currently working on Library with some specific operators to\n> > manipulate RangeType in PostGreSQL. I would like to know if it is\n> > possible to return a setof rangetype elements in Postresql in C-Language\n> > function using the suggestion like specified here:\n> > https://www.postgresql.org/docs/current/xfunc-c.html#XFUNC-C-RETURN-SET\n> > <https://www.postgresql.org/docs/current/xfunc-c.html#XFUNC-C-RETURN-SET>.\n>\n> > I have been trying this for days. If what I am trying to do is\n> > impossible, is there any way I can use to have a RangeType set return?\n>\n> Yes, it is possible.\n>\n> I bet there's just a silly little bug or misunderstanding in your code.\n> This stuff can be fiddly. Feel free to post what you have here, and I'm\n> sure someone will point out where the problem is very quickly.\n>\n> - Heikki\n>",
"msg_date": "Fri, 27 Nov 2020 13:46:49 -0500",
"msg_from": "Patrick Handja <patrick.bungama@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Setof RangeType returns"
},
{
"msg_contents": "Patrick Handja <patrick.bungama@gmail.com> writes:\n> This is what I am doing:\n\n> static int\n> get_range_lower(FunctionCallInfo fcinfo, RangeType *r1)\n> {\n> /* Return NULL if there's no finite lower bound */\n> if (empty || lower.infinite)\n> PG_RETURN_NULL();\n\nYou can't really use PG_RETURN_NULL here, mainly because there is\nno good value for it to return from get_range_lower().\n\n> return (lower.val);\n\nHere and elsewhere, you're cavalierly casting between Datum and int.\nWhile you can get away with that as long as the SQL type you're\nworking with is int4, it's bad style; mainly because it's confusing,\nbut also because you'll have a hard time adapting the code if you\never want to work with some other type. Use DatumGetInt32 or\nInt32GetDatum as appropriate.\n\n> TypeCacheEntry *typcache;\n> PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n\nThis sure appears to be passing an uninitialized typcache pointer\nto range_serialize(). If your compiler isn't whining about that,\nyou don't have adequately paranoid warning options enabled.\nThat's an excellent way to waste a lot of time, as you just have.\nC is an unforgiving language, so you need all the help you can get.\n\nBTW, use of PG_RETURN_RANGE_P here isn't very appropriate either,\nsince the function is not declared as returning Datum.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Nov 2020 14:24:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Setof RangeType returns"
},
{
"msg_contents": "Thanks for the feedback Tom!\n\n> TypeCacheEntry *typcache;\n> PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n\nThe use of typcache really confuses me. range_get_typcache() is used in\norder to initialize typcache\n> typcache =range_get_typcache(fcinfo, RangeTypeGetOid(r1));\n\nIn my case, I do not have a range as an argument, I am receiving 2 int,\nwhich I am using to create a range. How can I initialize typcache in this\ncase?\nThat's the part where I am really stuck.\n\nDatum\nmake_range_griis(PG_FUNCTION_ARGS){\nRangeBound lower;\nRangeBound upper;\n\nint32 start = PG_GETARG_INT32(0);\nint32 finish = PG_GETARG_INT32(1);\n\nlower.val = (Datum) (start);\nlower.infinite = false;\nlower.lower = true;\n\nupper.val = (Datum) (finish);\nupper.infinite = false;\nupper.lower = false;\n\nif (!lower.infinite && !lower.inclusive){\nlower.val = DirectFunctionCall2(int4pl, lower.val, Int32GetDatum(1));\nlower.inclusive = true;\n}\n\nif (!upper.infinite && upper.inclusive){\nupper.val = DirectFunctionCall2(int4pl, upper.val, Int32GetDatum(1));\nupper.inclusive = false;\n}\n\nTypeCacheEntry *typcache;\n//> typcache = ??????;\nPG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n}\n\nregards,\n\n*Andjasubu Bungama, Patrick *\n\n\n\n\nLe ven. 27 nov. 
2020 à 14:24, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Patrick Handja <patrick.bungama@gmail.com> writes:\n> > This is what I am doing:\n>\n> > static int\n> > get_range_lower(FunctionCallInfo fcinfo, RangeType *r1)\n> > {\n> > /* Return NULL if there's no finite lower bound */\n> > if (empty || lower.infinite)\n> > PG_RETURN_NULL();\n>\n> You can't really use PG_RETURN_NULL here, mainly because there is\n> no good value for it to return from get_range_lower().\n>\n> > return (lower.val);\n>\n> Here and elsewhere, you're cavalierly casting between Datum and int.\n> While you can get away with that as long as the SQL type you're\n> working with is int4, it's bad style; mainly because it's confusing,\n> but also because you'll have a hard time adapting the code if you\n> ever want to work with some other type. Use DatumGetInt32 or\n> Int32GetDatum as appropriate.\n>\n> > TypeCacheEntry *typcache;\n> > PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n>\n> This sure appears to be passing an uninitialized typcache pointer\n> to range_serialize(). If your compiler isn't whining about that,\n> you don't have adequately paranoid warning options enabled.\n> That's an excellent way to waste a lot of time, as you just have.\n> C is an unforgiving language, so you need all the help you can get.\n>\n> BTW, use of PG_RETURN_RANGE_P here isn't very appropriate either,\n> since the function is not declared as returning Datum.\n>\n> regards, tom lane\n>",
"msg_date": "Tue, 1 Dec 2020 16:42:56 -0500",
"msg_from": "Patrick Handja <patrick.bungama@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Setof RangeType returns"
},
{
"msg_contents": "Just figured out. I think I can use RANGE_EMPTY and it will be like:\n\n> typcache =range_get_typcache(fcinfo, RangeTypeGetOid(RANGE_EMPTY));\n\nRegards,\n\n*Andjasubu Bungama, Patrick *\n\n\n\nLe mar. 1 déc. 2020 à 16:42, Patrick Handja <patrick.bungama@gmail.com> a\nécrit :\n\n> Thanks for the feedback Tom!\n>\n> > TypeCacheEntry *typcache;\n> > PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n>\n> The use of typcache really confuses me. range_get_typcache() is used in\n> order to initialize typcache\n> > typcache =range_get_typcache(fcinfo, RangeTypeGetOid(r1));\n>\n> In my case, I do not have a range as an argument, I am receiving 2 int,\n> which I am using to create a range. How can I initialize typcache in this\n> case?\n> That's the part where I am really stuck.\n>\n> Datum\n> make_range_griis(PG_FUNCTION_ARGS){\n> RangeBound lower;\n> RangeBound upper;\n>\n> int32 start = PG_GETARG_INT32(0);\n> int32 finish = PG_GETARG_INT32(1);\n>\n> lower.val = (Datum) (start);\n> lower.infinite = false;\n> lower.lower = true;\n>\n> upper.val = (Datum) (finish);\n> upper.infinite = false;\n> upper.lower = false;\n>\n> if (!lower.infinite && !lower.inclusive){\n> lower.val = DirectFunctionCall2(int4pl, lower.val, Int32GetDatum(1));\n> lower.inclusive = true;\n> }\n>\n> if (!upper.infinite && upper.inclusive){\n> upper.val = DirectFunctionCall2(int4pl, upper.val, Int32GetDatum(1));\n> upper.inclusive = false;\n> }\n>\n> TypeCacheEntry *typcache;\n> //> typcache = ??????;\n> PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n> }\n>\n> regards,\n>\n> *Andjasubu Bungama, Patrick *\n>\n>\n>\n>\n> Le ven. 27 nov. 
2020 à 14:24, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>\n>> Patrick Handja <patrick.bungama@gmail.com> writes:\n>> > This is what I am doing:\n>>\n>> > static int\n>> > get_range_lower(FunctionCallInfo fcinfo, RangeType *r1)\n>> > {\n>> > /* Return NULL if there's no finite lower bound */\n>> > if (empty || lower.infinite)\n>> > PG_RETURN_NULL();\n>>\n>> You can't really use PG_RETURN_NULL here, mainly because there is\n>> no good value for it to return from get_range_lower().\n>>\n>> > return (lower.val);\n>>\n>> Here and elsewhere, you're cavalierly casting between Datum and int.\n>> While you can get away with that as long as the SQL type you're\n>> working with is int4, it's bad style; mainly because it's confusing,\n>> but also because you'll have a hard time adapting the code if you\n>> ever want to work with some other type. Use DatumGetInt32 or\n>> Int32GetDatum as appropriate.\n>>\n>> > TypeCacheEntry *typcache;\n>> > PG_RETURN_RANGE_P(range_serialize(typcache, &lower, &upper, false));\n>>\n>> This sure appears to be passing an uninitialized typcache pointer\n>> to range_serialize(). If your compiler isn't whining about that,\n>> you don't have adequately paranoid warning options enabled.\n>> That's an excellent way to waste a lot of time, as you just have.\n>> C is an unforgiving language, so you need all the help you can get.\n>>\n>> BTW, use of PG_RETURN_RANGE_P here isn't very appropriate either,\n>> since the function is not declared as returning Datum.\n>>\n>> regards, tom lane\n>>\n>",
"msg_date": "Tue, 1 Dec 2020 17:19:02 -0500",
"msg_from": "Patrick Handja <patrick.bungama@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Setof RangeType returns"
},
{
"msg_contents": "On 12/01/20 17:19, Patrick Handja wrote:\n> Just figured out. I think I can use RANGE_EMPTY and it will be like:\n> \n>> typcache =range_get_typcache(fcinfo, RangeTypeGetOid(RANGE_EMPTY));\n\nAre you sure you are not simply looking for\nrange_get_typcache(fcinfo, INT4RANGEOID) ?\n\n\nTop-posting is not used for replies on these lists.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 1 Dec 2020 17:31:29 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Setof RangeType returns"
},
{
"msg_contents": "Patrick Handja <patrick.bungama@gmail.com> writes:\n> In my case, I do not have a range as an argument, I am receiving 2 int,\n> which I am using to create a range. How can I initialize typcache in this\n> case?\n\nYou should be passing down the pointer from the outer function, which\ndoes have it at hand, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Dec 2020 17:39:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Setof RangeType returns"
},
{
"msg_contents": "Hello,\n\nAfter some days, I finally found what I was looking for.\nActually this worked:\n\n> Oid rngtypid = get_fn_expr_rettype(fcinfo->flinfo);\n....\n> typcache = range_get_typcache(fcinfo, rngtypid);\n\nThanks for the help.\n\n*Andjasubu Bungama, Patrick *\n\n\n\nLe mar. 1 déc. 2020 à 17:39, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Patrick Handja <patrick.bungama@gmail.com> writes:\n> > In my case, I do not have a range as an argument, I am receiving 2 int,\n> > which I am using to create a range. How can I initialize typcache in this\n> > case?\n>\n> You should be passing down the pointer from the outer function, which\n> does have it at hand, no?\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 9 Dec 2020 20:54:13 -0500",
"msg_from": "Patrick Handja <patrick.bungama@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Setof RangeType returns"
}
]
[
{
"msg_contents": "Hello,\n\n\nI got too annoyed at building queries for gexec all the time. So wrote a patch to fix the issue that the rename of partitioned trigger doesn't affect children.\n\n\nIn case there isn't a dependent trigger with the same name on the child the function errors out. If the user touched the trigger of the child, it's not a sensitive idea to mess with it.\n\n\nI'm not happy with the interface change in trigger.h. Yet I don't like the idea of creating another layer of indirection either.\n\n\nEven though it's a small patch, there might be other things I'm missing, so I'd appreciate feedback.\n\n\nRegards\n\nArne",
"msg_date": "Thu, 26 Nov 2020 23:28:31 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Rename of triggers for partitioned tables"
},
{
"msg_contents": "I'm too unhappy with the interface change, so please view attached patch instead.\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Friday, November 27, 2020 12:28:31 AM\nTo: Pg Hackers\nSubject: Rename of triggers for partitioned tables\n\n\nHello,\n\n\nI got too annoyed at building queries for gexec all the time. So wrote a patch to fix the issue that the rename of partitioned trigger doesn't affect children.\n\n\nIn case there isn't a dependent trigger with the same name on the child the function errors out. If the user touched the trigger of the child, it's not a sensitive idea to mess with it.\n\n\nI'm not happy with the interface change in trigger.h. Yet I don't like the idea of creating another layer of indirection either.\n\n\nEven though it's a small patch, there might be other things I'm missing, so I'd appreciate feedback.\n\n\nRegards\n\nArne",
"msg_date": "Fri, 27 Nov 2020 00:05:02 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "I'm sorry for the spam. Here a patch with updated comments, I missed one before.\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Friday, November 27, 2020 1:05:02 AM\nTo: Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\n\nI'm too unhappy with the interface change, so please view attached patch instead.\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Friday, November 27, 2020 12:28:31 AM\nTo: Pg Hackers\nSubject: Rename of triggers for partitioned tables\n\n\nHello,\n\n\nI got too annoyed at building queries for gexec all the time. So wrote a patch to fix the issue that the rename of partitioned trigger doesn't affect children.\n\n\nIn case there isn't a dependent trigger with the same name on the child the function errors out. If the user touched the trigger of the child, it's not a sensitive idea to mess with it.\n\n\nI'm not happy with the interface change in trigger.h. Yet I don't like the idea of creating another layer of indirection either.\n\n\nEven though it's a small patch, there might be other things I'm missing, so I'd appreciate feedback.\n\n\nRegards\n\nArne",
"msg_date": "Fri, 27 Nov 2020 00:19:57 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Hi Arne,\n\nOn Fri, Nov 27, 2020 at 9:20 AM Arne Roland <A.Roland@index.de> wrote:\n>\n> I'm sorry for the spam. Here a patch with updated comments, I missed one before.\n>\n\nYou sent in your patch, recursive_tgrename.2.patch to pgsql-hackers on\nNov 27, but you did not post it to the next CommitFest[1]. If this\nwas intentional, then you need to take no action. However, if you\nwant your patch to be reviewed as part of the upcoming CommitFest,\nthen you need to add it yourself before 2021-01-01 AoE[2]. Thanks for\nyour contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 20:33:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2020-Nov-27, Arne Roland wrote:\n\n> I got too annoyed at building queries for gexec all the time. So wrote\n> a patch to fix the issue that the rename of partitioned trigger\n> doesn't affect children.\n\nAs you say, triggers on children don't necessarily have to have the same\nname as on parent; this already happens when the trigger is renamed in\nthe child table but not on parent. In that situation the search on the\nchild will fail, which will cause the whole thing to fail I think.\n\nWe now have the column pg_trigger.tgparentid, and I think it would be\nbetter (more reliable) to search for the trigger in the child by the\ntgparentid column instead, rather than by name.\n\nAlso, I think it would be good to have\n ALTER TRIGGER .. ON ONLY parent RENAME TO ..\nto avoid recursing to children. This seems mostly pointless, but since\nwe've allowed changing the name of the trigger in children thus far,\nthen we've implicitly made it supported to have triggers that are named\ndifferently. (And it's not entirely academic, since the trigger name\ndetermines firing order.)\n\nAlternatively to this last point, we could decide to disallow renaming\nof triggers on children (i.e. if trigger has tgparentid set, then\nrenaming is disallowed). I don't have a problem with that, but it would\nhave to be an explicit decision to take.\n\nI think you did not register the patch in commitfest, so I did that for\nyou: https://commitfest.postgresql.org/32/2943/\n\nThanks!\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n",
"msg_date": "Fri, 15 Jan 2021 19:26:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 1/15/21 5:26 PM, Alvaro Herrera wrote:\n> On 2020-Nov-27, Arne Roland wrote:\n> \n>> I got too annoyed at building queries for gexec all the time. So wrote\n>> a patch to fix the issue that the rename of partitioned trigger\n>> doesn't affect children.\n> \n> As you say, triggers on children don't necessarily have to have the same\n> name as on parent; this already happens when the trigger is renamed in\n> the child table but not on parent. In that situation the search on the\n> child will fail, which will cause the whole thing to fail I think.\n> \n> We now have the column pg_trigger.tgparentid, and I think it would be\n> better (more reliable) to search for the trigger in the child by the\n> tgparentid column instead, rather than by name.\n> \n> Also, I think it would be good to have\n> ALTER TRIGGER .. ON ONLY parent RENAME TO ..\n> to avoid recursing to children. This seems mostly pointless, but since\n> we've allowed changing the name of the trigger in children thus far,\n> then we've implicitly made it supported to have triggers that are named\n> differently. (And it's not entirely academic, since the trigger name\n> determines firing order.)\n> \n> Alternatively to this last point, we could decide to disallow renaming\n> of triggers on children (i.e. if trigger has tgparentid set, then\n> renaming is disallowed). I don't have a problem with that, but it would\n> have to be an explicit decision to take.\n\nArne, thoughts on Álvaro's comments?\n\nMarking this patch as Waiting for Author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 18 Mar 2021 11:50:37 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "David Steele wrote:\n> Arne, thoughts on Álvaro's comments?\n>\n> Marking this patch as Waiting for Author.\n\nThank you for the reminder. I was to absorbed by other tasks. I should be more present in the upcoming weeks.\n\n> I think you did not register the patch in commitfest, so I did that for\n> you: https://commitfest.postgresql.org/32/2943/\n\nThank you! I should have done that.\n\n> As you say, triggers on children don't necessarily have to have the same\n> name as on parent; this already happens when the trigger is renamed in\n> the child table but not on parent. In that situation the search on the\n> child will fail, which will cause the whole thing to fail I think.\n>\n> We now have the column pg_trigger.tgparentid, and I think it would be\n> better (more reliable) to search for the trigger in the child by the\n> tgparentid column instead, rather than by name.\n\nThe last patch already checks tgparentid and errors out if either of those do not match.\nThe reasoning behind that was, that I don't see a clear reasonable way to handle this scenario.\nIf the user choose to rename the child trigger, I am at a loss what to do.\nI thought to pass it back to the user to solve the situation instead of simply ignoring the users earlier rename.\nSilently renaming sounded a bit presumptuous to me, even though I don't care that much either way.\n\nDo you think silently renaming is better than yielding an error?\n\n> Also, I think it would be good to have\n> ALTER TRIGGER .. ON ONLY parent RENAME TO ..\n> to avoid recursing to children. This seems mostly pointless, but since\n> we've allowed changing the name of the trigger in children thus far,\n> then we've implicitly made it supported to have triggers that are named\n> differently. (And it's not entirely academic, since the trigger name\n> determines firing order.)\n\nIf it would be entirely academic, I wouldn't have noticed the whole thing in the first place.\n\nThe on only variant sounds like a good idea to me. Even though hardly worth any efford, it seems simple enough. I will work on that.\n\n> Alternatively to this last point, we could decide to disallow renaming\n> of triggers on children (i.e. if trigger has tgparentid set, then\n> renaming is disallowed). I don't have a problem with that, but it would\n> have to be an explicit decision to take.\n\nI rejected that idea, because I was unsure what to do with migrations. If we would be discussing this a few years back, this would have been likely my vote, but I don't see it happening now.\n\nRegards\nArne",
"msg_date": "Mon, 29 Mar 2021 07:37:53 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Attached a rebased patch with the suggested \"ONLY ON\" change.\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Monday, March 29, 2021 9:37:53 AM\nTo: David Steele; Alvaro Herrera\nCc: Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\nDavid Steele wrote:\n> Arne, thoughts on Álvaro's comments?\n>\n> Marking this patch as Waiting for Author.\n\nThank you for the reminder. I was to absorbed by other tasks. I should be more present in the upcoming weeks.\n\n> I think you did not register the patch in commitfest, so I did that for\n> you: https://commitfest.postgresql.org/32/2943/\n\nThank you! I should have done that.\n\n> As you say, triggers on children don't necessarily have to have the same\n> name as on parent; this already happens when the trigger is renamed in\n> the child table but not on parent. In that situation the search on the\n> child will fail, which will cause the whole thing to fail I think.\n>\n> We now have the column pg_trigger.tgparentid, and I think it would be\n> better (more reliable) to search for the trigger in the child by the\n> tgparentid column instead, rather than by name.\n\nThe last patch already checks tgparentid and errors out if either of those do not match.\nThe reasoning behind that was, that I don't see a clear reasonable way to handle this scenario.\nIf the user choose to rename the child trigger, I am at a loss what to do.\nI thought to pass it back to the user to solve the situation instead of simply ignoring the users earlier rename.\nSilently renaming sounded a bit presumptuous to me, even though I don't care that much either way.\n\nDo you think silently renaming is better than yielding an error?\n\n> Also, I think it would be good to have\n> ALTER TRIGGER .. ON ONLY parent RENAME TO ..\n> to avoid recursing to children. This seems mostly pointless, but since\n> we've allowed changing the name of the trigger in children thus far,\n> then we've implicitly made it supported to have triggers that are named\n> differently. (And it's not entirely academic, since the trigger name\n> determines firing order.)\n\nIf it would be entirely academic, I wouldn't have noticed the whole thing in the first place.\n\nThe on only variant sounds like a good idea to me. Even though hardly worth any efford, it seems simple enough. I will work on that.\n\n> Alternatively to this last point, we could decide to disallow renaming\n> of triggers on children (i.e. if trigger has tgparentid set, then\n> renaming is disallowed). I don't have a problem with that, but it would\n> have to be an explicit decision to take.\n\nI rejected that idea, because I was unsure what to do with migrations. If we would be discussing this a few years back, this would have been likely my vote, but I don't see it happening now.\n\nRegards\nArne",
"msg_date": "Mon, 29 Mar 2021 09:38:12 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Mar-29, Arne Roland wrote:\n\n> @@ -1475,7 +1467,12 @@ renametrig(RenameStmt *stmt)\n> \t\ttuple = heap_copytuple(tuple);\t/* need a modifiable copy */\n> \t\ttrigform = (Form_pg_trigger) GETSTRUCT(tuple);\n> \t\ttgoid = trigform->oid;\n> -\n> +\t\tif (tgparent != 0 && tgparent != trigform->tgparentid) {\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n> +\t\t\t\t\t errmsg(\"trigger \\\"%s\\\" for table \\\"%s\\\" is not dependent on the trigger on \\\"%s\\\"\",\n> +\t\t\t\t\t\t\tstmt->subname, RelationGetRelationName(targetrel), stmt->relation->relname)));\n> +\t\t}\n\nI think this is not what we want to do. What you're doing here as I\nunderstand is traversing the inheritance hierarchy down using the\ntrigger name, and then fail if the trigger with that name has a\ndifferent parent or if no trigger with that name exists in the child.\n\nWhat we really want to have happen, is to search the list of triggers in\nthe child, see which one has tgparentid=<the one we're renaming>, and\nrename that one -- regardless of what name it had originally.\n\nAlso, you added grammar support for the ONLY keyword in the command, but\nit doesn't do anything different when given that flag, does it? At\nleast, I see no change or addition to recursion behavior in ATExecCmd.\nI would expect that if I say \"ALTER TRIGGER .. ON ONLY tab\" then it\nrenames the trigger on table \"tab\" but not on its child tables; only if\nI remove the ONLY from the command it recurses.\n\nI think it would be good to have a new test for pg_dump that verifies\nthat this stuff is doing the right thing.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n",
"msg_date": "Mon, 29 Mar 2021 10:25:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Thank you for the quick reply!\n\nAlvaro Herrera wrote:\n> I think this is not what we want to do. What you're doing here as I\n> understand is traversing the inheritance hierarchy down using the\n> trigger name, and then fail if the trigger with that name has a\n> different parent or if no trigger with that name exists in the child.\n\nThe basic concept here was to fail in any circumstance where the trigger on the child isn't as expected.\nMay it be the name or the parent for that matter.\n\n> What we really want to have happen, is to search the list of triggers in\n> the child, see which one has tgparentid=<the one we're renaming>, and\n>rename that one -- regardless of what name it had originally.\n\nSo if we have a trigger named t42 can be renamed to my_trigger by\nALTER TRIGGER sometrigg ON my_table RENAME TO my_trigger;?\nEquivalently there is the question whether we just want to silently ignore\nif the corresponding child doesn't have a corresponding trigger.\nI am not convinced this is really what we want, but I could agree to raise a notice in these cases.\n\n> Also, you added grammar support for the ONLY keyword in the command, but\n> it doesn't do anything different when given that flag, does it? At\n> least, I see no change or addition to recursion behavior in ATExecCmd.\n> I would expect that if I say \"ALTER TRIGGER .. ON ONLY tab\" then it\n> renames the trigger on table \"tab\" but not on its child tables; only if\n> I remove the ONLY from the command it recurses.\n\nThe syntax does work via\n+ if (stmt->relation->inh && has_subclass(relid))\nin renametrigHelper (src/backend/commands/trigger.c line 1543).\nI am not sure which sort of addition in ATExecCmd you expected.\nMaybe I can make this more obvious one way or the other. You input would be appreciated.\n\n> I think it would be good to have a new test for pg_dump that verifies\n> that this stuff is doing the right thing.\n\nGood idea, I will add a case involving pg_dump.\n\nRegards\nArne",
"msg_date": "Mon, 29 Mar 2021 14:02:35 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Mar-29, Arne Roland wrote:\n\n> Alvaro Herrera wrote:\n> > I think this is not what we want to do. What you're doing here as I\n> > understand is traversing the inheritance hierarchy down using the\n> > trigger name, and then fail if the trigger with that name has a\n> > different parent or if no trigger with that name exists in the child.\n> \n> The basic concept here was to fail in any circumstance where the\n> trigger on the child isn't as expected.\n> May it be the name or the parent for that matter.\n\nConsider this. You have table p and partition p1. Add trigger t to p,\nand do\nALTER TRIGGER t ON p1 RENAME TO q;\n\nWhat I'm saying is that if you later do\nALTER TRIGGER t ON ONLY p RENAME TO r;\nthen the trigger on parent is changed, and the trigger on child stays q.\nIf you later do\nALTER TRIGGER r ON p RENAME TO s;\nthen triggers on both tables end up with name s.\n\n> > What we really want to have happen, is to search the list of triggers in\n> > the child, see which one has tgparentid=<the one we're renaming>, and\n> >rename that one -- regardless of what name it had originally.\n> \n> So if we have a trigger named t42 can be renamed to my_trigger by\n> ALTER TRIGGER sometrigg ON my_table RENAME TO my_trigger;?\n> Equivalently there is the question whether we just want to silently ignore\n> if the corresponding child doesn't have a corresponding trigger.\n\nI think the child cannot not have a corresponding trigger. If you try\nto drop the trigger on child, it should say that it cannot be dropped\nbecause the trigger on parent requires it. So I don't think there's any\ncase where altering name of trigger in parent would raise an user-facing\nerror. If you can't find the trigger in child, that's a case of catalog\ncorruption and should be reported with something like\n\nelog(ERROR, \"cannot find trigger cloned from trigger %s in partition %s\")\n\n> > Also, you added grammar support for the ONLY keyword in the command, but\n> > it doesn't do anything different when given that flag, does it? At\n> > least, I see no change or addition to recursion behavior in ATExecCmd.\n> > I would expect that if I say \"ALTER TRIGGER .. ON ONLY tab\" then it\n> > renames the trigger on table \"tab\" but not on its child tables; only if\n> > I remove the ONLY from the command it recurses.\n> \n> The syntax does work via\n> + if (stmt->relation->inh && has_subclass(relid))\n> in renametrigHelper (src/backend/commands/trigger.c line 1543).\n> I am not sure which sort of addition in ATExecCmd you expected.\n> Maybe I can make this more obvious one way or the other. You input would be appreciated.\n\nHmm, ok, maybe this is sufficient, I didn't actually test it. I think\nyou do need to add a few test cases to ensure everything is sane. Make\nsure to verify what happens if you have multi-level partitioning\n(grandparent, parent, child) and you ALTER the one in the middle. Also\nthings like if parent has two children and one child has multiple\nchildren.\n\nAlso, please make sure that it works correctly with INHERITS (legacy\ninheritance). We probably don't want to cause recursion in that case.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Mon, 29 Mar 2021 12:16:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> I think the child cannot not have a corresponding trigger. If you try\n> to drop the trigger on child, it should say that it cannot be dropped\n> because the trigger on parent requires it. So I don't think there's any\n> case where altering name of trigger in parent would raise an user-facing\n> error. If you can't find the trigger in child, that's a case of catalog\n> corruption [...]\n\nAh, I didn't know that. That changes my reasoning about that. I only knew, that the alter table ... detach partition ... doesn't detach the child trigger from the partitioned trigger, but the the attach partition seem to add one. Maybe we should be able to make sure that there is a one to one correspondence for child triggers and child partitions.\n\nCurrently upon detaching we keep the trigger within the partitioned trigger of the parent relation. (So the trigger remains in place and can only be dropped with the parent one.) Only we try to attach the partition again any partitioned triggers are simply removed.\n\nOne idea would be to detach partitioned triggers there in a similar manner we detach partitioned indexes. That could give above guarantee. (At least if one would be willing to write a migration for that.) But then we can't tell anymore where the trigger stems from. That hence would break our attach/detach workflow.\n\nI was annoyed by the inability to drop partitioned triggers from detached partitions, but it seems I just got a workaround. Ugh.\n\nThis is a bit of a sidetrack discussion, but it is related.\n\n> Consider this. You have table p and partition p1. Add trigger t to p,\n> and do\n> ALTER TRIGGER t ON p1 RENAME TO q;\n\n> What I'm saying is that if you later do\n> ALTER TRIGGER t ON ONLY p RENAME TO r;\n> then the trigger on parent is changed, and the trigger on child stays q.\n> If you later do\n> ALTER TRIGGER r ON p RENAME TO s;\n> then triggers on both tables end up with name s.\n\nIf we get into the mindset of expecting one trigger on the child, we have the following edge case:\n\n- The trigger is renamed. You seem to favor the silent rename in that case over raising a notice (or even an error). I am not convinced a notice wouldn't the better in that case.\n- The trigger is outside of partitioned table. That still means that the trigger might still be in the inheritance tree, but likely isn't. Should we rename it anyways, or skip that? Should be raise a notice about what we are doing here? I think that would be helpful to the end user.\n\n> Hmm, ok, maybe this is sufficient, I didn't actually test it. I think\n> you do need to add a few test cases to ensure everything is sane. Make\n> sure to verify what happens if you have multi-level partitioning\n> (grandparent, parent, child) and you ALTER the one in the middle. Also\n> things like if parent has two children and one child has multiple\n> children.\n\nThe test I had somewhere upwards this thread, already tested that.\n\nI am not sure how to add a test for pg_dump though. Can you point me to the location in pg_regress for pg_dump?\n\n> Also, please make sure that it works correctly with INHERITS (legacy\n> inheritance). We probably don't want to cause recursion in that case.\n\nThat works, but I will add a test to make sure it stays that way. Thank you for your input!\n\nRegards\nArne",
"msg_date": "Thu, 1 Apr 2021 14:38:59 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Hi,\n\n\nattached a patch with some cleanup and a couple of test cases. The lookup is now done with the parentid and the renaming affects detached partitions as well.\n\n\nThe test cases are not perfectly concise, but only add about 0.4 % to the total runtime of regression tests at my machine. So I didn't bother to much. They only consist of basic sql tests.\n\n\nFurther feedback appreciated! Thank you!\n\n\nRegards\n\nArne\n\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Thursday, April 1, 2021 4:38:59 PM\nTo: Alvaro Herrera\nCc: David Steele; Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\nAlvaro Herrera wrote:\n> I think the child cannot not have a corresponding trigger. If you try\n> to drop the trigger on child, it should say that it cannot be dropped\n> because the trigger on parent requires it. So I don't think there's any\n> case where altering name of trigger in parent would raise an user-facing\n> error. If you can't find the trigger in child, that's a case of catalog\n> corruption [...]\n\nAh, I didn't know that. That changes my reasoning about that. I only knew, that the alter table ... detach partition ... doesn't detach the child trigger from the partitioned trigger, but the the attach partition seem to add one. Maybe we should be able to make sure that there is a one to one correspondence for child triggers and child partitions.\n\nCurrently upon detaching we keep the trigger within the partitioned trigger of the parent relation. (So the trigger remains in place and can only be dropped with the parent one.) Only we try to attach the partition again any partitioned triggers are simply removed.\n\nOne idea would be to detach partitioned triggers there in a similar manner we detach partitioned indexes. That could give above guarantee. (At least if one would be willing to write a migration for that.) But then we can't tell anymore where the trigger stems from. That hence would break our attach/detach workflow.\n\nI was annoyed by the inability to drop partitioned triggers from detached partitions, but it seems I just got a workaround. Ugh.\n\nThis is a bit of a sidetrack discussion, but it is related.\n\n> Consider this. You have table p and partition p1. Add trigger t to p,\n> and do\n> ALTER TRIGGER t ON p1 RENAME TO q;\n\n> What I'm saying is that if you later do\n> ALTER TRIGGER t ON ONLY p RENAME TO r;\n> then the trigger on parent is changed, and the trigger on child stays q.\n> If you later do\n> ALTER TRIGGER r ON p RENAME TO s;\n> then triggers on both tables end up with name s.\n\nIf we get into the mindset of expecting one trigger on the child, we have the following edge case:\n\n- The trigger is renamed. You seem to favor the silent rename in that case over raising a notice (or even an error). I am not convinced a notice wouldn't the better in that case.\n- The trigger is outside of partitioned table. That still means that the trigger might still be in the inheritance tree, but likely isn't. Should we rename it anyways, or skip that? Should be raise a notice about what we are doing here? I think that would be helpful to the end user.\n\n> Hmm, ok, maybe this is sufficient, I didn't actually test it. I think\n> you do need to add a few test cases to ensure everything is sane. Make\n> sure to verify what happens if you have multi-level partitioning\n> (grandparent, parent, child) and you ALTER the one in the middle. Also\n> things like if parent has two children and one child has multiple\n> children.\n\nThe test I had somewhere upwards this thread, already tested that.\n\nI am not sure how to add a test for pg_dump though. Can you point me to the location in pg_regress for pg_dump?\n\n> Also, please make sure that it works correctly with INHERITS (legacy\n> inheritance). We probably don't want to cause recursion in that case.\n\nThat works, but I will add a test to make sure it stays that way. Thank you for your input!\n\nRegards\nArne",
"msg_date": "Fri, 2 Apr 2021 17:55:16 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Hello Álvaro Herrera,\n\nI am sorry to bother, but I am still annoyed by this. So I wonder if you had a look.\n\nRegards\nArne\n\n\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Friday, April 2, 2021 7:55:16 PM\nTo: Alvaro Herrera\nCc: David Steele; Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\n\nHi,\n\n\nattached a patch with some cleanup and a couple of test cases. The lookup is now done with the parentid and the renaming affects detached partitions as well.\n\n\nThe test cases are not perfectly concise, but only add about 0.4 % to the total runtime of regression tests at my machine. So I didn't bother to much. They only consist of basic sql tests.\n\n\nFurther feedback appreciated! Thank you!\n\n\nRegards\n\nArne\n\n\n________________________________\nFrom: Arne Roland <A.Roland@index.de>\nSent: Thursday, April 1, 2021 4:38:59 PM\nTo: Alvaro Herrera\nCc: David Steele; Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\nAlvaro Herrera wrote:\n> I think the child cannot not have a corresponding trigger. If you try\n> to drop the trigger on child, it should say that it cannot be dropped\n> because the trigger on parent requires it. So I don't think there's any\n> case where altering name of trigger in parent would raise an user-facing\n> error. If you can't find the trigger in child, that's a case of catalog\n> corruption [...]\n\nAh, I didn't know that. That changes my reasoning about that. I only knew, that the alter table ... detach partition ... doesn't detach the child trigger from the partitioned trigger, but the the attach partition seem to add one. Maybe we should be able to make sure that there is a one to one correspondence for child triggers and child partitions.\n\nCurrently upon detaching we keep the trigger within the partitioned trigger of the parent relation. (So the trigger remains in place and can only be dropped with the parent one.) 
Only we try to attach the partition again any partitioned triggers are simply removed.\n\nOne idea would be to detach partitioned triggers there in a similar manner we detach partitioned indexes. That could give above guarantee. (At least if one would be willing to write a migration for that.) But then we can't tell anymore where the trigger stems from. That hence would break our attach/detach workflow.\n\nI was annoyed by the inability to drop partitioned triggers from detached partitions, but it seems I just got a workaround. Ugh.\n\nThis is a bit of a sidetrack discussion, but it is related.\n\n> Consider this. You have table p and partition p1. Add trigger t to p,\n> and do\n> ALTER TRIGGER t ON p1 RENAME TO q;\n\n> What I'm saying is that if you later do\n> ALTER TRIGGER t ON ONLY p RENAME TO r;\n> then the trigger on parent is changed, and the trigger on child stays q.\n> If you later do\n> ALTER TRIGGER r ON p RENAME TO s;\n> then triggers on both tables end up with name s.\n\nIf we get into the mindset of expecting one trigger on the child, we have the following edge case:\n\n- The trigger is renamed. You seem to favor the silent rename in that case over raising a notice (or even an error). I am not convinced a notice wouldn't the better in that case.\n- The trigger is outside of partitioned table. That still means that the trigger might still be in the inheritance tree, but likely isn't. Should we rename it anyways, or skip that? Should be raise a notice about what we are doing here? I think that would be helpful to the end user.\n\n> Hmm, ok, maybe this is sufficient, I didn't actually test it. I think\n> you do need to add a few test cases to ensure everything is sane. Make\n> sure to verify what happens if you have multi-level partitioning\n> (grandparent, parent, child) and you ALTER the one in the middle. 
Also\n> things like if parent has two children and one child has multiple\n> children.\n\nThe test I had somewhere upwards this thread, already tested that.\n\nI am not sure how to add a test for pg_dump though. Can you point me to the location in pg_regress for pg_dump?\n\n> Also, please make sure that it works correctly with INHERITS (legacy\n> inheritance). We probably don't want to cause recursion in that case.\n\nThat works, but I will add a test to make sure it stays that way. Thank you for your input!\n\nRegards\nArne",
"msg_date": "Sat, 26 Jun 2021 16:24:40 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jun-26, Arne Roland wrote:\n\n> Hello Álvaro Herrera,\n> \n> I am sorry to bother, but I am still annoyed by this. So I wonder if you had a look.\n\nI'll have a look during the commitfest which starts late next week.\n\n(However, I'm fairly sure that it won't be in released versions ...\nit'll only be fixed in the version to be released next year, postgres\n15. Things that are clear bug fixes can be applied\nin released versions, but this patch probably changes behavior in ways\nthat are not acceptable in released versions. Sorry about that.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n",
"msg_date": "Sat, 26 Jun 2021 12:36:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "From: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Saturday, June 26, 2021 18:36\nSubject: Re: Rename of triggers for partitioned tables\n\n> I'll have a look during the commitfest which starts late next week.\n\nThank you!\n\n> (However, I'm fairly sure that it won't be in released versions ...\n> it'll only be fixed in the version to be released next year, postgres\n> 15. Things that are clear bug fixes can be applied\n> in released versions, but this patch probably changes behavior in ways\n> that are not acceptable in released versions. Sorry about that.)\n\nI never expected any backports. Albeit I don't know anyone, who would mind, I agree with you on that assessment. This is very annoying, but not really breaking the product.\n\nSure, I was hoping for pg14 initially. But I will survive another year of gexec.\nI am grateful you agreed to have a look!\n\nRegards\nArne",
"msg_date": "Sat, 26 Jun 2021 17:08:37 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On Sat, Jun 26, 2021 at 10:08 AM Arne Roland <A.Roland@index.de> wrote:\n\n> *From:* Alvaro Herrera <alvherre@alvh.no-ip.org>\n> *Sent:* Saturday, June 26, 2021 18:36\n> *Subject:* Re: Rename of triggers for partitioned tables\n>\n> > I'll have a look during the commitfest which starts late next week.\n>\n> Thank you!\n>\n> > (However, I'm fairly sure that it won't be in released versions ...\n> > it'll only be fixed in the version to be released next year, postgres\n> > 15. Things that are clear bug fixes can be applied\n> > in released versions, but this patch probably changes behavior in ways\n> > that are not acceptable in released versions. Sorry about that.)\n>\n> I never expected any backports. Albeit I don't know anyone, who would\n> mind, I agree with you on that assessment. This is very annoying, but not\n> really breaking the product.\n>\n> Sure, I was hoping for pg14 initially. But I will survive another year of\n> gexec.\n> I am grateful you agreed to have a look!\n>\n> Regards\n> Arne\n>\n> Hi, Arne:\nIt seems the patch no longer applies cleanly on master branch.\nDo you mind updating the patch ?\n\nThanks\n\n1 out of 7 hunks FAILED -- saving rejects to file\nsrc/backend/commands/trigger.c.rej\npatching file src/backend/parser/gram.y\nHunk #1 succeeded at 8886 (offset 35 lines).\npatching file src/backend/utils/adt/ri_triggers.c\nReversed (or previously applied) patch detected! Assume -R? [n]\nApply anyway? 
[n]\nSkipping patch.\n1 out of 1 hunk ignored -- saving rejects to file\nsrc/backend/utils/adt/ri_triggers.c.rej",
"msg_date": "Sat, 26 Jun 2021 11:32:38 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Hi!\n\nFrom: Zhihong Yu <zyu@yugabyte.com>\nSent: Saturday, June 26, 2021 20:32\nSubject: Re: Rename of triggers for partitioned tables\n> Hi, Arne:\n> It seems the patch no longer applies cleanly on master branch.\n> Do you mind updating the patch ?\n>\n> Thanks\n\nThank you for having a look! Please let me know if the attached rebased patch works for you.\n\nRegards\nArne",
"msg_date": "Mon, 28 Jun 2021 10:16:04 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 11:45 AM Arne Roland <A.Roland@index.de> wrote:\n\n> Hi,\n>\n> *From:* Zhihong Yu <zyu@yugabyte.com>\n> *Sent:* Monday, June 28, 2021 15:28\n> *Subject:* Re: Rename of triggers for partitioned tables\n>\n> > - void *arg)\n> > +callbackForRenameTrigger(char *relname, Oid relid)\n> >\n> > Since this method is only called by RangeVarCallbackForRenameTrigger(),\n> it seems the method can be folded into RangeVarCallbackForRenameTrigger.\n>\n> that seems obvious. I have no idea why I didn't do that.\n>\n> > + * renametrig - changes the name of a trigger on a possibly\n> partitioned relation recurisvely\n> > ...\n> > +renametrigHelper(RenameStmt *stmt, Oid tgoid, Oid relid)\n> >\n> > Comment doesn't match the name of method. I think it is better to use _\n> instead of camel case.\n>\n> The other functions in this file seem to be in mixed case (camel case with\n> lower first character). I don't have a strong point either way, but the\n> consistency seems to be worth it to me.\n>\n> > -renametrig(RenameStmt *stmt)\n> > +renameonlytrig(RenameStmt *stmt, Relation tgrel, Oid relid, Oid\n> tgparent)\n> >\n> > Would renametrig_unlocked be better name for the method ?\n>\n> I think the name renametrig_unlocked might be confusing. I am not sure my\n> name is good. But upon calling this function everything is locked already\n> and stays locked. So unlocked seems a confusing name to me.\n> renameOnlyOneTriggerWithoutAccountingForChildrenWhereAllLocksAreAquiredAlready\n> sadly isn't very concise anymore. Do you have another idea?\n>\n> > strcmp(stmt->subname, NameStr(trigform->tgname)) can be saved in a\n> variable since it is used after the if statement.\n>\n> It's always needed later. I did miss it due to the short circuiting\n> expression. Thanks!\n>\n> > + tgrel = table_open(TriggerRelationId, RowExclusiveLock);\n> > ...\n> > + ReleaseSysCache(tuple); /* We are still holding the\n> > + * AccessExclusiveLock. 
*/\n> >\n> > The lock mode in comment doesn't seem to match the lock taken.\n>\n> I made the comment slightly more verbose. I hope it's clear now.\n>\n> Thank you very much for the feedback! I attached a new version of the\n> patch based on your suggestions.\n>\n> Regards\n> Arne\n>\n> Adding mailing list.",
"msg_date": "Mon, 28 Jun 2021 12:22:17 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 3:46 PM Arne Roland <A.Roland@index.de> wrote:\n>\n> Hi!\n>\n>\n> From: Zhihong Yu <zyu@yugabyte.com>\n> Sent: Saturday, June 26, 2021 20:32\n> Subject: Re: Rename of triggers for partitioned tables\n>\n> > Hi, Arne:\n> > It seems the patch no longer applies cleanly on master branch.\n> > Do you mind updating the patch ?\n> >\n> > Thanks\n>\n> Thank you for having a look! Please let me know if the attached rebased patch works for you.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 15 Jul 2021 17:38:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Hello Vignesh,\n\nthank you for your interest!\n\nFrom: vignesh C <vignesh21@gmail.com>\nSent: Thursday, July 15, 2021 14:08\nTo: Arne Roland\nCc: Zhihong Yu; Alvaro Herrera; Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\n\nPlease let me know, whether the attached patch works for you.\n\nRegards\nArne",
"msg_date": "Thu, 15 Jul 2021 12:43:20 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "I found this coding too convoluted, so I rewrote it in a different way.\nYou tests pass with this, but I admit I haven't double-checked them yet;\nI'll do that next.\n\nI don't think we need to give a NOTICE when the trigger name does not\nmatch; it doesn't really matter that the trigger was named differently\nbefore the command, does it?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 19 Jul 2021 20:03:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Hi!\n\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Tuesday, July 20, 2021 02:03\nTo: Arne Roland\nCc: vignesh C; Zhihong Yu; Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\nI found this coding too convoluted, so I rewrote it in a different way.\nYou tests pass with this, but I admit I haven't double-checked them yet;\nI'll do that next.\n\nIs your patch based on master? It doesn't apply at my end.\n\nI don't think we need to give a NOTICE when the trigger name does not\nmatch; it doesn't really matter that the trigger was named differently\nbefore the command, does it?\n\nI'd expect the command\nALTER TRIGGER name ON table_name RENAME TO new_name;\nto rename a trigger named \"name\". We are referring the trigger via it's name after all. If a child is named differently we break with that assumption. I think informing the user about that, is very valuable.\n\nRegards\nArne",
"msg_date": "Tue, 20 Jul 2021 00:22:13 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "> We are referring the trigger via it's name after all. If a child is named differently we break with that assumption. I think informing the user about that, is very valuable.\n\nTo elaborate on that:\n\nWhile we do similar things for things like set schema, here it has a functional relevance.\n\nALTER TRIGGER a5 ON table_name RENAME TO a9;\n\nsuggests that the trigger is now fired later from now on. Which obviously might not be the case, if one of the child triggers have had a different name.\n\nI like your naming suggestions. I'm looking forward to test this patch when it applies at my end!\n\nRegards\nArne",
"msg_date": "Tue, 20 Jul 2021 00:51:32 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-20, Arne Roland wrote:\n\n> Is your patch based on master? It doesn't apply at my end.\n\nIt does ... master being dd498998a3 here,\n\n$ patch -p1 < /tmp/renametrig-8.patch \npatching file src/backend/commands/trigger.c\npatching file src/backend/parser/gram.y\npatching file src/test/regress/expected/triggers.out\npatching file src/test/regress/sql/triggers.sql\n\napplies fine.\n\n>> I don't think we need to give a NOTICE when the trigger name does not\n>> match; it doesn't really matter that the trigger was named differently\n>> before the command, does it?\n\n> I'd expect the command\n> ALTER TRIGGER name ON table_name RENAME TO new_name;\n> to rename a trigger named \"name\". We are referring the trigger via it's name after all. If a child is named differently we break with that assumption. I think informing the user about that, is very valuable.\n\nWell, it does rename a trigger named 'name' on the table 'table_name',\nas well as all its descendant triggers. I guess I am surprised that\nanybody would rename a descendant trigger in the first place. I'm not\nwedded to the decision of removing the NOTICE, though ... are there any\nother votes for that, anyone?\n\nThanks\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 19 Jul 2021 21:39:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Thank you! I get a compile time error\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -O2 -I../../../src/include -D_GNU_SOURCE -c -o trigger.o trigger.c\n\nIn file included from ../../../src/include/postgres.h:46:0,\n from trigger.c:14:\ntrigger.c: In function ‘renametrig_partition’:\n../../../src/include/c.h:118:31: error: expected ‘,’ or ‘;’ before ‘__attribute__’\n #define pg_attribute_unused() __attribute__((unused))\n ^\n../../../src/include/c.h:155:34: note: in expansion of macro ‘pg_attribute_unused’\n #define PG_USED_FOR_ASSERTS_ONLY pg_attribute_unused()\n ^~~~~~~~~~~~~~~~~~~\ntrigger.c:1588:17: note: in expansion of macro ‘PG_USED_FOR_ASSERTS_ONLY’\n int found = 0 PG_USED_FOR_ASSERTS_ONLY;\n ^~~~~~~~~~~~~~~~~~~~~~~~\ntrigger.c:1588:7: warning: unused variable ‘found’ [-Wunused-variable]\n int found = 0 PG_USED_FOR_ASSERTS_ONLY;\n ^~~~~\nmake[3]: *** [<builtin>: trigger.o] Error 1\n\n\nAny ideas on what I might be doing wrong?\n\n\nRegards\n\nArne",
"msg_date": "Tue, 20 Jul 2021 09:25:35 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-20, Arne Roland wrote:\n\n> Thank you! I get a compile time error\n\n> trigger.c:1588:17: note: in expansion of macro ‘PG_USED_FOR_ASSERTS_ONLY’\n> int found = 0 PG_USED_FOR_ASSERTS_ONLY;\n> ^~~~~~~~~~~~~~~~~~~~~~~~\n\nOh, I misplaced the annotation of course. It has to be\n\n int found PG_USED_FOR_ASSERTS_ONLY = 0;\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Jul 2021 10:42:23 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-19, Alvaro Herrera wrote:\n\n> Well, it does rename a trigger named 'name' on the table 'table_name',\n> as well as all its descendant triggers. I guess I am surprised that\n> anybody would rename a descendant trigger in the first place. I'm not\n> wedded to the decision of removing the NOTICE, though ... are there any\n> other votes for that, anyone?\n\nI put it back, mostly because I don't really care and it's easily\nremoved if people don't want it. (I used a different wording though,\nnot necessarily final.)\n\nI also realized that if you rename a trigger and the target name is\nalready occupied, we'd better throw a nicer error message than failure\nby violation of a unique constraint; so I moved the check for the name\nto within renametrig_internal(). Added a test for this.\n\nAlso added a test to ensure that nothing happens with statement\ntriggers. This doesn't need any new code, because those triggers don't\nhave tgparentid set.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 20 Jul 2021 16:13:21 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "... now, thinking about how does pg_dump deal with this, I think this\nthread is all wrong, or at least mostly wrong. We cannot have triggers\nin partitions with different names from the triggers in the parents.\npg_dump would only dump the trigger in the parent and totally ignore the\nones in children if they have different names.\n\nSo we still need the recursion bits; but\n\n1. if we detect that we're running in a trigger which has tgparentid\nset, we have to raise an error,\n2. if we are running on a trigger in a partitioned table, then ONLY must\nnot be specified.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n",
"msg_date": "Wed, 21 Jul 2021 18:16:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Hi,\n\nlooking at the patch, I realized the renametrig_partition could use an index leading with tgparentid, without the need to traverse the child tables. Since we still need to lock them, there is likely no practical performance gain. But I am surprised there is no unique index on (tgparentid, tgrelid), which sounds like a decent sanity check to have anyways.\n\n________________________________\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Thursday, July 22, 2021 00:16\nTo: Arne Roland\nCc: vignesh C; Zhihong Yu; Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n\n... now, thinking about how does pg_dump deal with this, I think this\nthread is all wrong, or at least mostly wrong. We cannot have triggers\nin partitions with different names from the triggers in the parents.\npg_dump would only dump the trigger in the parent and totally ignore the\nones in children if they have different names.\n\nThat makes things simpler. The reasoning behind the ONLY variant was not to break things that worked in the past. If it never worked in the past, there is no reason to offer backwards compatibility.\n\n1. if we detect that we're running in a trigger which has tgparentid\nset, we have to raise an error,\n2. if we are running on a trigger in a partitioned table, then ONLY must\nnot be specified.\n\nI think that would make the whole idea of allowing ONLY obsolete altogether anyways. I'd vote to simply scratch that syntax. I will work on updating your version 9 of this patch accordingly.\n\nI think we should do something about pg_upgrade. The inconsistency is obviously broken. I'd like it to make sure every partitioned trigger is named correctly, simply taking the name of the parent. 
Simplifying the state our database can have, might help us to avoid future headaches.\n\nRegards\nArne\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\nlooking at the patch, I realized the\nrenametrig_partition could use an index leading with tgparentid, without the need to traverse the child tables. Since we still need to lock them, there\nis likely no practical performance gain. But I am surprised there is no unique index on (tgparentid,\ntgrelid), which sounds like a decent sanity check to have anyways.\n\n\n\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Thursday, July 22, 2021 00:16\nTo: Arne Roland\nCc: vignesh C; Zhihong Yu; Pg Hackers\nSubject: Re: Rename of triggers for partitioned tables\n \n\n\n\n... now, thinking about how does pg_dump deal with this, I think this\nthread is all wrong, or at least mostly wrong. We cannot have triggers\nin partitions with different names from the triggers in the parents.\npg_dump would only dump the trigger in the parent and totally ignore the\nones in children if they have different names.\n\n\n\nThat makes things simpler. The reasoning behind the ONLY variant was not to break things that worked in the past. If it never worked in the past, there is no reason to offer backwards compatibility.\n\n\n1. if we detect that we're running in a trigger which has tgparentid\nset, we have to raise an error,\n2. if we are running on a trigger in a partitioned table, then ONLY must\nnot be specified.\n\n\n\nI think that would make the whole idea of allowing ONLY obsolete altogether anyways. I'd vote to simply scratch that syntax. I will work on updating your version 9 of this patch\n accordingly.\n\n\nI think we should do something about pg_upgrade. The inconsistency is obviously broken. I'd like it to make sure every partitioned\n trigger is named correctly, simply taking the name of the parent. Simplifying the state our database can have, might help us to\navoid future headaches.\n\n\nRegards\nArne",
"msg_date": "Thu, 22 Jul 2021 15:06:32 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-22, Arne Roland wrote:\n\n> \n> Hi,\n> \n> looking at the patch, I realized the renametrig_partition could use an index leading with tgparentid, without the need to traverse the child tables. Since we still need to lock them, there is likely no practical performance gain. But I am surprised there is no unique index on (tgparentid, tgrelid), which sounds like a decent sanity check to have anyways.\n\nIf we have good use for such an index, I don't see why we can't add it.\nBut I'm not sure that it is justified -- certainly if the only benefit\nis to make ALTER TRIGGER RENAME recurse faster on partitioned tables, it\nis not justified.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Thu, 22 Jul 2021 12:20:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "From: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Thursday, July 22, 2021 18:20\nTo: Arne Roland\nSubject: Re: Rename of triggers for partitioned tables\n\nIf we have good use for such an index, I don't see why we can't add it.\nBut I'm not sure that it is justified -- certainly if the only benefit\nis to make ALTER TRIGGER RENAME recurse faster on partitioned tables, it\nis not justified.\n\nSpeed is not really the issue here, most people will have under 20 triggers per table anyways. I think even the simpler code would be worth more than the speed gain. The main value of such an index would be the enforced uniqueness.\n\nI just noticed that apparently the\nALTER TABLE [ IF EXISTS ] [ ONLY ] name [ ENABLE | DISABLE ] TRIGGER ...\nsyntax albeit looking totally different and it already recurses, it has precisely the same issue with pg_dump. In that case the ONLY syntax isn't optional and the 2. from your above post does inevitably apply.\n\nSince it is sort of the same problem, I think it might be worthwhile to address it as well within this patch. Adding two to four ereports doesn't sound like scope creeping to me, even though it touches completely different code. I'll look into that as well.\n\nRegards\nArne\n\n\n\n\n\n\n\n\n\n\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\n\n\nSent: Thursday, July 22, 2021 18:20\nTo: Arne Roland\nSubject: Re: Rename of triggers for partitioned tables\n \n\n\nIf we have good use for such an index, I don't see why we can't add it.\n\nBut I'm not sure that it is justified -- certainly if the only benefit\nis to make ALTER TRIGGER RENAME recurse faster on partitioned tables, it\nis not justified.\n\n\nSpeed is not really the issue here, most people will have under 20 triggers per table anyways. I think even the simpler code would be worth more than the speed gain. 
The\n main value of such an index would be the enforced uniqueness.\n\n\n\nI just noticed that apparently the\n\nALTER TABLE [ IF EXISTS ] [ ONLY ] name [ ENABLE | DISABLE ] TRIGGER ...\nsyntax albeit looking\ntotally different and it already recurses, it has precisely the same issue with pg_dump. In that case the ONLY syntax isn't optional and the 2. from your above post does inevitably\n apply.\n\n\nSince it is sort of the same problem, I think it might be worthwhile to address it as well within this patch. Adding two to four ereports doesn't sound like scope creeping to me, even though it touches\n completely different code. I'll look into that as well.\n\n\n\nRegards\nArne",
"msg_date": "Thu, 22 Jul 2021 17:22:57 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-22, Arne Roland wrote:\n\n> Since it is sort of the same problem, I think it might be worthwhile\n> to address it as well within this patch. Adding two to four ereports\n> doesn't sound like scope creeping to me, even though it touches\n> completely different code. I'll look into that as well.\n\nI don't understand what you mean. But here's an updated patch, with the\nfollowing changes\n\n1. support for ONLY is removed, since evidently the only thing it is\ngood for is introduce inconsistencies\n\n2. recursion to partitioned tables always occurs; no more conditionally\non relation->inh. This is sensible because due to point 1 above, inh\ncan no longer be false.\n\n3. renaming a trigger that's not topmost is forbidden.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 22 Jul 2021 13:33:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-22, Arne Roland wrote:\n\n> I just noticed that apparently the\n> ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ ENABLE | DISABLE ] TRIGGER ...\n> syntax albeit looking totally different and it already recurses, it\n> has precisely the same issue with pg_dump. In that case the ONLY\n> syntax isn't optional and the 2. from your above post does inevitably\n> apply.\n> \n> Since it is sort of the same problem, I think it might be worthwhile\n> to address it as well within this patch. Adding two to four ereports\n> doesn't sound like scope creeping to me, even though it touches\n> completely different code. I'll look into that as well.\n\nOh! For some reason I failed to notice that you were talking about\nENABLE / DISABLE, not RENAME anymore. It turns out that I recently\nspent some time in this area; see commits below (these are from branch\nREL_11_STABLE). Are you talking about something that need to be handled\n*beyond* these fixes?\n\ncommit ccfc3cbb341abf515d930097c4851d9e2152504f\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org> [Álvaro Herrera <alvherre@alvh.no-ip.org>]\nAuthorDate: Fri Jul 16 17:29:22 2021 -0400\nCommitDate: Fri Jul 16 17:29:22 2021 -0400\n\n Fix pg_dump for disabled triggers on partitioned tables\n \n pg_dump failed to preserve the 'enabled' flag (which can be not only\n disabled, but also REPLICA or ALWAYS) for partitions which had it\n changed from their respective parents. Attempt to handle that by\n including a definition for such triggers in the dump, but replace the\n standard CREATE TRIGGER line with an ALTER TRIGGER line.\n \n Backpatch to 11, where these triggers can exist. 
In branches 11 and 12,\n pick up a few test lines from commit b9b408c48724 to verify that\n pg_upgrade is okay with these arrangements.\n \n Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>\n Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>\n Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com\n\ncommit fed35bd4a65032f714af6bc88c102d0e90422dee\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org> [Álvaro Herrera <alvherre@alvh.no-ip.org>]\nAuthorDate: Fri Jul 16 13:01:43 2021 -0400\nCommitDate: Fri Jul 16 13:01:43 2021 -0400\n\n Preserve firing-on state when cloning row triggers to partitions\n \n When triggers are cloned from partitioned tables to their partitions,\n the 'tgenabled' flag (origin/replica/always/disable) was not propagated.\n Make it so that the flag on the trigger on partition is initially set to\n the same value as on the partitioned table.\n \n Add a test case to verify the behavior.\n \n Backpatch to 11, where this appeared in commit 86f575948c77.\n \n Author: Álvaro Herrera <alvherre@alvh.no-ip.org>\n Reported-by: Justin Pryzby <pryzby@telsasoft.com>\n Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com\n\ncommit bbb927b4db9b3b449ccd0f76c1296de382a2f0c1\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org> [Álvaro Herrera <alvherre@alvh.no-ip.org>]\nAuthorDate: Tue Oct 20 19:22:09 2020 -0300\nCommitDate: Tue Oct 20 19:22:09 2020 -0300\n\n Fix ALTER TABLE .. ENABLE/DISABLE TRIGGER recursion\n \n More precisely, correctly handle the ONLY flag indicating not to\n recurse. This was implemented in 86f575948c77 by recursing in\n trigger.c, but that's the wrong place; use ATSimpleRecursion instead,\n which behaves properly. 
However, because legacy inheritance has never\n recursed in that situation, make sure to do that only for new-style\n partitioning.\n \n I noticed this problem while testing a fix for another bug in the\n vicinity.\n \n This has been wrong all along, so backpatch to 11.\n \n Discussion: https://postgr.es/m/20201016235925.GA29829@alvherre.pgsql\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 22 Jul 2021 13:41:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "From: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Thursday, July 22, 2021 19:33\nTo: Arne Roland\nSubject: Re: Rename of triggers for partitioned tables\n\nOn 2021-Jul-22, Arne Roland wrote:\n\n> Since it is sort of the same problem, I think it might be worthwhile\n> to address it as well within this patch. Adding two to four ereports\n> doesn't sound like scope creeping to me, even though it touches\n> completely different code. I'll look into that as well.\n\nI don't understand what you mean. But here's an updated patch, with the\nfollowing changes\n\nalter table middle disable trigger b;\ncreates the same kind of inconsistency\nalter trigger b on middle rename to something;\ndoes.\nWith other words: enableing/disabling non-topmost triggers should be forbidden as well.\n\nRegards\nArne\n\n\n\n\n\n\n\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\n\n\nSent: Thursday, July 22, 2021 19:33\nTo: Arne Roland\nSubject: Re: Rename of triggers for partitioned tables\n \n\n\n\nOn 2021-Jul-22, Arne Roland wrote:\n\n> Since it is sort of the same problem, I think it might be worthwhile\n> to address it as well within this patch. Adding two to four ereports\n> doesn't sound like scope creeping to me, even though it touches\n> completely different code. I'll look into that as well.\n\nI don't understand what you mean. But here's an updated patch, with the\nfollowing changes\n\n\nalter table middle disable trigger b;\ncreates the same kind of inconsistency\nalter trigger b on\nmiddle\n rename to something;\ndoes.\n\n\nWith other words:\nenableing/disabling non-topmost triggers should be forbidden as well.\n\n\nRegards\nArne",
"msg_date": "Thu, 22 Jul 2021 17:45:49 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-22, Arne Roland wrote:\n\n> alter table middle disable trigger b;\n> creates the same kind of inconsistency\n> alter trigger b on middle rename to something;\n> does.\n> With other words: enableing/disabling non-topmost triggers should be forbidden as well.\n\nI'm not so sure about that ... I think enabling/disabling triggers on\nindividual partitions is a valid use case. Renaming them, not so much.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)\n\n\n",
"msg_date": "Thu, 22 Jul 2021 13:51:50 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "I pushed this patch with some minor corrections (mostly improved the\nmessage wording.)\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n",
"msg_date": "Thu, 22 Jul 2021 18:35:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-22, Alvaro Herrera wrote:\n\n> I pushed this patch with some minor corrections (mostly improved the\n> message wording.)\n\nHmm, the sorting fails for Czech or something like that. Will fix ...\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 22 Jul 2021 19:43:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I pushed this patch with some minor corrections (mostly improved the\n> message wording.)\n\nCoverity does not like this bit, and I fully agree:\n\n/srv/coverity/git/pgsql-git/postgresql/src/backend/commands/trigger.c: 1639 in renametrig_partition()\n>>> CID 1489387: Incorrect expression (ASSERT_SIDE_EFFECT)\n>>> Argument \"found++\" of Assert() has a side effect. The containing function might work differently in a non-debug build.\n1639 \t\tAssert(found++ <= 0);\n\nPerhaps there's no actual bug there, but it's still horrible coding.\nFor one thing, the implication that found could be negative is extremely\nconfusing to readers. A boolean might be better. However, I wonder why\nyou bothered with a flag in the first place. The usual convention if\nwe know there can be only one match is to just not write a loop at all,\nwith a suitable comment, like this pre-existing example elsewhere in\ntrigger.c:\n\n\t\t/* There should be at most one matching tuple */\n\t\tif (HeapTupleIsValid(tuple = systable_getnext(tgscan)))\n\nIf you're not quite convinced there can be only one match, then it\nstill shouldn't be an Assert --- a real test-and-elog would be better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jul 2021 10:49:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "On 2021-Jul-25, Tom Lane wrote:\n\n> Perhaps there's no actual bug there, but it's still horrible coding.\n> For one thing, the implication that found could be negative is extremely\n> confusing to readers. A boolean might be better. However, I wonder why\n> you bothered with a flag in the first place. The usual convention if\n> we know there can be only one match is to just not write a loop at all,\n> with a suitable comment, like this pre-existing example elsewhere in\n> trigger.c:\n> \n> \t\t/* There should be at most one matching tuple */\n> \t\tif (HeapTupleIsValid(tuple = systable_getnext(tgscan)))\n> \n> If you're not quite convinced there can be only one match, then it\n> still shouldn't be an Assert --- a real test-and-elog would be better.\n\nI agree that coding there was dubious. I've removed the flag and assert.\n\nArne complained that there should be a unique constraint on (tgrelid,\ntgparentid) which would sidestep the need for this to be a loop. I\ndon't think it's really necessary, and I'm not sure how to create a\nsystem index WHERE tgparentid <> 0.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n",
"msg_date": "Mon, 26 Jul 2021 13:14:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Arne complained that there should be a unique constraint on (tgrelid,\n> tgparentid) which would sidestep the need for this to be a loop. I\n> don't think it's really necessary, and I'm not sure how to create a\n> system index WHERE tgparentid <> 0.\n\nYeah, we don't support partial indexes on catalogs, and this example\ndoesn't make me feel like we ought to open that can of worms. But\nthen maybe this function shouldn't assume there's only one match?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Jul 2021 13:38:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rename of triggers for partitioned tables"
},
{
"msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Monday, July 26, 2021 19:38\nTo: Alvaro Herrera\nSubject: Re: Rename of triggers for partitioned tables\n\n> Yeah, we don't support partial indexes on catalogs, and this example\n> doesn't make me feel like we ought to open that can of worms.\n\nI asked why such an index doesn't exists already. I guess that answers it. I definitely don't want to push for supporting that, especially not knowing what it entails.\n\n> But then maybe this function shouldn't assume there's only one match?\n\n I'd consider a duplication here a serious bug. That is pretty much catalog corruption. Having two children within a single child table sounds like it would break a lot of things.\n\nThat being said this function is hardly performance critical. I don't know whether throwing an elog indicating what's going wrong here would be worth it.\n\nRegards\nArne\n\n\n\n\n\n\n\n\n\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\n\n\nSent: Monday, July 26, 2021 19:38\nTo: Alvaro Herrera\nSubject: Re: Rename of triggers for partitioned tables\n \n\n\n> Yeah, we don't support partial indexes on catalogs, and this example\n\n> doesn't make me feel\nlike we ought to open that can of worms. \n\n\nI asked why such an index doesn't exists already. I guess that answers it. I definitely don't want to push for supporting that, especially not knowing what it entails.\n\n\n> But then maybe this function shouldn't assume there's only one match?\n\n\n\n I'd consider a duplication here a serious bug. That is pretty much catalog corruption. Having two children within a single child table sounds like it would break a lot of things.\n\n\nThat being said this function is hardly performance critical. I don't know whether throwing an elog indicating what's\ngoing wrong here would be worth it.\n\n\nRegards\nArne",
"msg_date": "Mon, 26 Jul 2021 18:11:28 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: Rename of triggers for partitioned tables"
}
] |
[
{
"msg_contents": "Hi All,\nAn LSN is printed using format \"%X/%X\" and passing (uint32) (lsn >>\n32), (uint32) lsn as arguments to that format specifier. This pattern\nis repeated at 180 places according to Cscope. I find it hard to\nremember this pattern every time I have to print LSN. Usually I end up\ncopying from some other place. Finding such a place takes a few\nminutes and might be error prone esp when the lsn to be printed is an\nexpression. If we ever have to change this pattern in future, we will\nneed to change all those 180 places.\n\nThe solution seems to be simple though. In the attached patch, I have\nadded two macros\n#define LSN_FORMAT \"%X/%X\"\n#define LSN_FORMAT_ARG(lsn) (uint32) ((lsn) >> 32), (uint32) (lsn)\n\nwhich can be used instead.\n\nE.g. the following change in the patch\n@@ -261,8 +261,7 @@ page_header(PG_FUNCTION_ARGS)\n {\n char lsnchar[64];\n\n- snprintf(lsnchar, sizeof(lsnchar), \"%X/%X\",\n- (uint32) (lsn >> 32), (uint32) lsn);\n+ snprintf(lsnchar, sizeof(lsnchar), LSN_FORMAT,\nLSN_FORMAT_ARG(lsn));\n values[0] = CStringGetTextDatum(lsnchar);\n\nLSN_FORMAT_ARG expands to two comma separated arguments and is kinda\nopen at both ends but it's handy that way.\n\nOff list Peter Eisentraut pointed out that we can not use these macros\nin elog/ereport since it creates problems for translations. He\nsuggested adding functions which return strings and use %s when doing\nso.\n\nThe patch has two functions pg_lsn_out_internal() which takes an LSN\nas input and returns a palloc'ed string containing the string\nrepresentation of LSN. This may not be suitable in performance\ncritical paths and also may leak memory if not freed. So there's\nanother function pg_lsn_out_buffer() which takes LSN and a char array\nas input, fills the char array with the string representation and\nreturns the pointer to the char array. This allows the function to be\nused as an argument in printf/elog etc. 
Macro MAXPG_LSNLEN has been\nextern'elized for this purpose.\n\nOff list Craig Ringer suggested introducing a new format specifier\nsimilar to %m for LSN but I did not get time to take a look at the\nrelevant code. AFAIU it's available only to elog/ereport, so may not\nbe useful generally. But teaching printf variants about the new format\nwould be the best solution. However, I didn't find any way to do that.\n\nIn that patch I have changed some random code to use one of the above\nmethods appropriately, demonstrating their usage. Given that we have\ngrown 180 places already, I think that something like would have been\nwelcome earlier. But given that replication code is being actively\nworked upon, it may not be too late. As a precedence lfirst_node() and\nits variants arrived when the corresponding pattern had been repeated\nat so many places.\n\nI think we should move pg_lsn_out_internal() and pg_lsn_out_buffer()\nsomewhere else. Ideally xlogdefs.c would have been the best place but\nthere's no such file. May be we add those functions in pg_lsn.c and\nadd their declarations i xlogdefs.h.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 27 Nov 2020 16:10:27 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Printing LSN made easy"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-27 13:40, Ashutosh Bapat wrote:\n> \n> Off list Peter Eisentraut pointed out that we can not use these macros\n> in elog/ereport since it creates problems for translations. He\n> suggested adding functions which return strings and use %s when doing\n> so.\n> \n> The patch has two functions pg_lsn_out_internal() which takes an LSN\n> as input and returns a palloc'ed string containing the string\n> representation of LSN. This may not be suitable in performance\n> critical paths and also may leak memory if not freed. So there's\n> another function pg_lsn_out_buffer() which takes LSN and a char array\n> as input, fills the char array with the string representation and\n> returns the pointer to the char array. This allows the function to be\n> used as an argument in printf/elog etc. Macro MAXPG_LSNLEN has been\n> extern'elized for this purpose.\n> \n\nIf usage of macros in elog/ereport can cause problems for translation, \nthen even with this patch life is not get simpler significantly. For \nexample, instead of just doing like:\n\n elog(WARNING,\n- \"xlog min recovery request %X/%X is past current point \n%X/%X\",\n- (uint32) (lsn >> 32), (uint32) lsn,\n- (uint32) (newMinRecoveryPoint >> 32),\n- (uint32) newMinRecoveryPoint);\n+ \"xlog min recovery request \" LSN_FORMAT \" is past \ncurrent point \" LSN_FORMAT,\n+ LSN_FORMAT_ARG(lsn),\n+ LSN_FORMAT_ARG(newMinRecoveryPoint));\n\nwe have to either declare two additional local buffers, which is \nverbose; or use pg_lsn_out_internal() and rely on memory contexts (or do \npfree() manually, which is verbose again) to prevent memory leaks.\n\n> \n> Off list Craig Ringer suggested introducing a new format specifier\n> similar to %m for LSN but I did not get time to take a look at the\n> relevant code. AFAIU it's available only to elog/ereport, so may not\n> be useful generally. But teaching printf variants about the new format\n> would be the best solution. 
However, I didn't find any way to do that.\n> \n\nIt seems that this topic has been extensively discussed off-list, but \nstill strong +1 for the patch. I always wanted LSN printing to be more \nconcise.\n\nI have just tried new printing utilities in a couple of new places and \nit looks good to me.\n\n+char *\n+pg_lsn_out_internal(XLogRecPtr lsn)\n+{\n+\tchar\t\tbuf[MAXPG_LSNLEN + 1];\n+\n+\tsnprintf(buf, sizeof(buf), LSN_FORMAT, LSN_FORMAT_ARG(lsn));\n+\n+\treturn pstrdup(buf);\n+}\n\nWould it be a bit more straightforward if we palloc buf initially and \njust return a pointer instead of doing pstrdup()?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Fri, 27 Nov 2020 17:24:20 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "Hi,\n\nHere, we cannot use sizeof(but) to get the buf size, because it is a pointer, so it always\n8 bytes on 64-bit or 4 bytes on 32-bit machine.\n\n+char *\n+pg_lsn_out_buffer(XLogRecPtr lsn, char *buf)\n+{\n+ snprintf(buf, sizeof(buf), LSN_FORMAT, LSN_FORMAT_ARG(lsn));\n+\n+ return buf;\n+}\n\n--\nBest regards\nJapin Li\nChengDu WenWu Information Technolog Co.,Ltd.\n\n\n> On Nov 27, 2020, at 10:24 PM, Alexey Kondratov <a.kondratov@postgrespro.ru> wrote:\n> \n> Hi,\n> \n> On 2020-11-27 13:40, Ashutosh Bapat wrote:\n>> Off list Peter Eisentraut pointed out that we can not use these macros\n>> in elog/ereport since it creates problems for translations. He\n>> suggested adding functions which return strings and use %s when doing\n>> so.\n>> The patch has two functions pg_lsn_out_internal() which takes an LSN\n>> as input and returns a palloc'ed string containing the string\n>> representation of LSN. This may not be suitable in performance\n>> critical paths and also may leak memory if not freed. So there's\n>> another function pg_lsn_out_buffer() which takes LSN and a char array\n>> as input, fills the char array with the string representation and\n>> returns the pointer to the char array. This allows the function to be\n>> used as an argument in printf/elog etc. Macro MAXPG_LSNLEN has been\n>> extern'elized for this purpose.\n> \n> If usage of macros in elog/ereport can cause problems for translation, then even with this patch life is not get simpler significantly. 
For example, instead of just doing like:\n> \n> elog(WARNING,\n> - \"xlog min recovery request %X/%X is past current point %X/%X\",\n> - (uint32) (lsn >> 32), (uint32) lsn,\n> - (uint32) (newMinRecoveryPoint >> 32),\n> - (uint32) newMinRecoveryPoint);\n> + \"xlog min recovery request \" LSN_FORMAT \" is past current point \" LSN_FORMAT,\n> + LSN_FORMAT_ARG(lsn),\n> + LSN_FORMAT_ARG(newMinRecoveryPoint));\n> \n> we have to either declare two additional local buffers, which is verbose; or use pg_lsn_out_internal() and rely on memory contexts (or do pfree() manually, which is verbose again) to prevent memory leaks.\n> \n>> Off list Craig Ringer suggested introducing a new format specifier\n>> similar to %m for LSN but I did not get time to take a look at the\n>> relevant code. AFAIU it's available only to elog/ereport, so may not\n>> be useful generally. But teaching printf variants about the new format\n>> would be the best solution. However, I didn't find any way to do that.\n> \n> It seems that this topic has been extensively discussed off-list, but still strong +1 for the patch. I always wanted LSN printing to be more concise.\n> \n> I have just tried new printing utilities in a couple of new places and it looks good to me.\n> \n> +char *\n> +pg_lsn_out_internal(XLogRecPtr lsn)\n> +{\n> +\tchar\t\tbuf[MAXPG_LSNLEN + 1];\n> +\n> +\tsnprintf(buf, sizeof(buf), LSN_FORMAT, LSN_FORMAT_ARG(lsn));\n> +\n> +\treturn pstrdup(buf);\n> +}\n> \n> Would it be a bit more straightforward if we palloc buf initially and just return a pointer instead of doing pstrdup()?\n> \n> \n> Regards\n> -- \n> Alexey Kondratov\n> \n> Postgres Professional https://www.postgrespro.com\n> Russian Postgres Company<0001-Make-it-easy-to-print-LSN.patch>\n\n\n\n",
"msg_date": "Fri, 27 Nov 2020 16:21:27 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 04:10:27PM +0530, Ashutosh Bapat wrote:\n> LSN_FORMAT_ARG expands to two comma separated arguments and is kinda\n> open at both ends but it's handy that way.\n\nAgreed that's useful.\n\n> Off list Peter Eisentraut pointed out that we can not use these macros\n> in elog/ereport since it creates problems for translations. He\n> suggested adding functions which return strings and use %s when doing\n> so.\n\nWhat's the problem with translations? We already have equivalent\nstuff with INT64_FORMAT that may map to %ld or %lld. But here we have\na format that's set in stone.\n\n> The patch has two functions pg_lsn_out_internal() which takes an LSN\n> as input and returns a palloc'ed string containing the string\n> representation of LSN. This may not be suitable in performance\n> critical paths and also may leak memory if not freed.\n\nI'd rather never use this flavor. An OOM could mask the actual error\nbehind.\n\n> Off list Craig Ringer suggested introducing a new format specifier\n> similar to %m for LSN but I did not get time to take a look at the\n> relevant code. AFAIU it's available only to elog/ereport, so may not\n> be useful generally. But teaching printf variants about the new format\n> would be the best solution. However, I didn't find any way to do that.\n\n-1. %m maps to errno, that is much more generic. A set of macros\nthat maps to our internal format would be fine enough IMO.\n--\nMichael",
"msg_date": "Sun, 29 Nov 2020 16:52:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Nov 27, 2020 at 04:10:27PM +0530, Ashutosh Bapat wrote:\n>> Off list Craig Ringer suggested introducing a new format specifier\n>> similar to %m for LSN but I did not get time to take a look at the\n>> relevant code. AFAIU it's available only to elog/ereport, so may not\n>> be useful generally. But teaching printf variants about the new format\n>> would be the best solution. However, I didn't find any way to do that.\n\n> -1. %m maps to errno, that is much more generic. A set of macros\n> that maps to our internal format would be fine enough IMO.\n\nAgreed. snprintf.c is meant to implement a recognized standard\n(ok, %m is a GNU extension, but it's still pretty standard).\nI'm not on board with putting PG-only extensions in there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Nov 2020 12:10:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 1:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Fri, Nov 27, 2020 at 04:10:27PM +0530, Ashutosh Bapat wrote:\n> >> Off list Craig Ringer suggested introducing a new format specifier\n> >> similar to %m for LSN but I did not get time to take a look at the\n> >> relevant code. AFAIU it's available only to elog/ereport, so may not\n> >> be useful generally. But teaching printf variants about the new format\n> >> would be the best solution. However, I didn't find any way to do that.\n>\n> > -1. %m maps to errno, that is much more generic. A set of macros\n> > that maps to our internal format would be fine enough IMO.\n>\n> Agreed. snprintf.c is meant to implement a recognized standard\n> (ok, %m is a GNU extension, but it's still pretty standard).\n> I'm not on board with putting PG-only extensions in there.\n>\n\nFair enough. I did not realise that %m was a GNU extension (never looked\nclosely) so I thought we had precedent for Pg specific extensions there.\n\nAgreed in that case, no argument from me.",
"msg_date": "Mon, 30 Nov 2020 10:12:35 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 7:54 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> Hi,\n>\n> On 2020-11-27 13:40, Ashutosh Bapat wrote:\n> >\n> > Off list Peter Eisentraut pointed out that we can not use these macros\n> > in elog/ereport since it creates problems for translations. He\n> > suggested adding functions which return strings and use %s when doing\n> > so.\n> >\n> > The patch has two functions pg_lsn_out_internal() which takes an LSN\n> > as input and returns a palloc'ed string containing the string\n> > representation of LSN. This may not be suitable in performance\n> > critical paths and also may leak memory if not freed. So there's\n> > another function pg_lsn_out_buffer() which takes LSN and a char array\n> > as input, fills the char array with the string representation and\n> > returns the pointer to the char array. This allows the function to be\n> > used as an argument in printf/elog etc. Macro MAXPG_LSNLEN has been\n> > extern'elized for this purpose.\n> >\n>\n> If usage of macros in elog/ereport can cause problems for translation,\n> then even with this patch life is not get simpler significantly. 
For\n> example, instead of just doing like:\n>\n> elog(WARNING,\n> - \"xlog min recovery request %X/%X is past current point\n> %X/%X\",\n> - (uint32) (lsn >> 32), (uint32) lsn,\n> - (uint32) (newMinRecoveryPoint >> 32),\n> - (uint32) newMinRecoveryPoint);\n> + \"xlog min recovery request \" LSN_FORMAT \" is past\n> current point \" LSN_FORMAT,\n> + LSN_FORMAT_ARG(lsn),\n> + LSN_FORMAT_ARG(newMinRecoveryPoint));\n>\n> we have to either declare two additional local buffers, which is\n> verbose; or use pg_lsn_out_internal() and rely on memory contexts (or do\n> pfree() manually, which is verbose again) to prevent memory leaks.\n\nI agree, that using LSN_FORMAT is best, but if that's not allowed, at\nleast the pg_lsn_out_internal() and variants encapsulate the\nLSN_FORMAT so that the callers don't need to remember those and we\nhave to change only a couple of places when the LSN_FORMAT itself\nchanges.\n\n>\n> >\n> > Off list Craig Ringer suggested introducing a new format specifier\n> > similar to %m for LSN but I did not get time to take a look at the\n> > relevant code. AFAIU it's available only to elog/ereport, so may not\n> > be useful generally. But teaching printf variants about the new format\n> > would be the best solution. However, I didn't find any way to do that.\n> >\n>\n> It seems that this topic has been extensively discussed off-list, but\n> still strong +1 for the patch. I always wanted LSN printing to be more\n> concise.\n>\n> I have just tried new printing utilities in a couple of new places and\n> it looks good to me.\n\nThanks.\n\n>\n> +char *\n> +pg_lsn_out_internal(XLogRecPtr lsn)\n> +{\n> + char buf[MAXPG_LSNLEN + 1];\n> +\n> + snprintf(buf, sizeof(buf), LSN_FORMAT, LSN_FORMAT_ARG(lsn));\n> +\n> + return pstrdup(buf);\n> +}\n>\n> Would it be a bit more straightforward if we palloc buf initially and\n> just return a pointer instead of doing pstrdup()?\n\nPossibly. 
I just followed the code in pg_lsn_out(), which snprintf()\nin a char array and then does pstrdup(). I don't quite understand the\npurpose of that.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 30 Nov 2020 18:32:46 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 9:51 PM Li Japin <japinli@hotmail.com> wrote:\n>\n> Hi,\n>\n> Here, we cannot use sizeof(but) to get the buf size, because it is a pointer, so it always\n> 8 bytes on 64-bit or 4 bytes on 32-bit machine.\n\nFor an array, the sizeof() returns the size of memory consumed by the\narray. See section \"Application to arrays\" at\nhttps://en.wikipedia.org/wiki/Sizeof.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 30 Nov 2020 18:36:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 1:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Nov 27, 2020 at 04:10:27PM +0530, Ashutosh Bapat wrote:\n> > LSN_FORMAT_ARG expands to two comma separated arguments and is kinda\n> > open at both ends but it's handy that way.\n>\n> Agreed that's useful.\n>\n> > Off list Peter Eisentraut pointed out that we can not use these macros\n> > in elog/ereport since it creates problems for translations. He\n> > suggested adding functions which return strings and use %s when doing\n> > so.\n>\n> What's the problem with translations? We already have equivalent\n> stuff with INT64_FORMAT that may map to %ld or %lld. But here we have\n> a format that's set in stone.\n>\n\nPeter Eisentraut explained that the translation system can not handle\nstrings broken by macros, e.g. errmsg(\"foo\" MACRO \"bar\"), since it doesn't\nknow statically what the macro resolves to. But I am not familiar with our\ntranslation system myself. But if we allow INT64_FORMAT, LSN_FORMAT should\nbe allowed. That makes life much easier. We do not need other functions at\nall.\n\nPeter E or somebody familiar with translations can provide more\ninformation.\n\n>\n> > The patch has two functions pg_lsn_out_internal() which takes an LSN\n> > as input and returns a palloc'ed string containing the string\n> > representation of LSN. This may not be suitable in performance\n> > critical paths and also may leak memory if not freed.\n>\n> I'd rather never use this flavor. An OOM could mask the actual error\n> behind.\n>\n\nHmm that's a good point. It might be useful in non-ERROR cases, merely\nbecause it avoids declaring a char array :).\n\nNo one seems to reject this idea so I will add it to the next commitfest to\nget more reviews esp. clarification around whether macros are enough or not.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Mon, 30 Nov 2020 18:48:32 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 10:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Fri, Nov 27, 2020 at 04:10:27PM +0530, Ashutosh Bapat wrote:\n> >> Off list Craig Ringer suggested introducing a new format specifier\n> >> similar to %m for LSN but I did not get time to take a look at the\n> >> relevant code. AFAIU it's available only to elog/ereport, so may not\n> >> be useful generally. But teaching printf variants about the new format\n> >> would be the best solution. However, I didn't find any way to do that.\n>\n> > -1. %m maps to errno, that is much more generic. A set of macros\n> > that maps to our internal format would be fine enough IMO.\n>\n> Agreed. snprintf.c is meant to implement a recognized standard\n> (ok, %m is a GNU extension, but it's still pretty standard).\n> I'm not on board with putting PG-only extensions in there.\n>\n\nThanks for the clarification.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Mon, 30 Nov 2020 18:49:03 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "Hi,\r\n\r\nOn Nov 30, 2020, at 9:06 PM, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com<mailto:ashutosh.bapat.oss@gmail.com>> wrote:\r\n\r\nOn Fri, Nov 27, 2020 at 9:51 PM Li Japin <japinli@hotmail.com<mailto:japinli@hotmail.com>> wrote:\r\n\r\nHi,\r\n\r\nHere, we cannot use sizeof(but) to get the buf size, because it is a pointer, so it always\r\n8 bytes on 64-bit or 4 bytes on 32-bit machine.\r\n\r\nFor an array, the sizeof() returns the size of memory consumed by the\r\narray. See section \"Application to arrays\" at\r\nhttps://en.wikipedia.org/wiki/Sizeof.\r\n\r\nThat’s true! However, in pg_lsn_out_buffer(), it converts to a pointer, not an array. See the following test:\r\n\r\n```c\r\n#include <stdio.h>\r\n#include <stdint.h>\r\n\r\ntypedef uint64_t XLogRecPtr;\r\ntypedef uint32_t uint32;\r\n\r\n#define MAXPG_LSNLEN 17\r\n#define LSN_FORMAT \"%X/%X\"\r\n#define LSN_FORMAT_ARG(lsn) (uint32) ((lsn) >> 32), (uint32) (lsn)\r\n\r\nchar *\r\npg_lsn_out_buffer(XLogRecPtr lsn, char *buf)\r\n{\r\n    printf(\"pg_lsn_out_buffer: sizeof(buf) = %lu\\n\", sizeof(buf));\r\n    snprintf(buf, sizeof(buf), LSN_FORMAT, LSN_FORMAT_ARG(lsn));\r\n    return buf;\r\n}\r\n\r\nchar *\r\npg_lsn_out_buffer1(XLogRecPtr lsn, char *buf, size_t len)\r\n{\r\n    printf(\"pg_lsn_out_buffer1: sizeof(buf) = %lu, len = %lu\\n\", sizeof(buf), len);\r\n    snprintf(buf, len, LSN_FORMAT, LSN_FORMAT_ARG(lsn));\r\n    return buf;\r\n}\r\n\r\nint\r\nmain(void)\r\n{\r\n    char buf[MAXPG_LSNLEN + 1];\r\n    XLogRecPtr lsn = 1234567UL;\r\n\r\n    printf(\"main: sizeof(buf) = %lu\\n\", sizeof(buf));\r\n    pg_lsn_out_buffer(lsn, buf);\r\n    printf(\"buffer's content from pg_lsn_out_buffer: %s\\n\", buf);\r\n\r\n    pg_lsn_out_buffer1(lsn, buf, sizeof(buf));\r\n    printf(\"buffer's content from pg_lsn_out_buffer1: %s\\n\", buf);\r\n    return 0;\r\n}\r\n```\r\n\r\nThe above output is:\r\n\r\n```\r\nmain: sizeof(buf) = 18\r\npg_lsn_out_buffer: sizeof(buf) = 8\r\nbuffer's content from pg_lsn_out_buffer: 0/12D68\r\npg_lsn_out_buffer1: sizeof(buf) = 8, len = 18\r\nbuffer's content from pg_lsn_out_buffer1: 0/12D687\r\n```\r\n\r\n--\r\nBest regards\r\nJapin Li\r\nChengDu WenWu Information Technolog Co.,Ltd.",
"msg_date": "Mon, 30 Nov 2020 14:08:44 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On 2020-Nov-30, Ashutosh Bapat wrote:\n\n> Peter Eisentraut explained that the translation system can not handle\n> strings broken by macros, e.g. errmsg(\"foo\" MACRO \"bar\"), since it doesn't\n> know statically what the macro resolves to. But I am not familiar with our\n> translation system myself. But if we allow INT64_FORMAT, LSN_FORMAT should\n> be allowed. That makes life much easier. We do not need other functions at\n> all.\n> \n> Peter E or somebody familiar with translations can provide more\n> information.\n\nWe don't allow INT64_FORMAT in translatable strings, period. (See\ncommit 6a1cd8b9236d which undid some hacks we had to use to work around\nthe lack of it, by allowing %llu to take its place.) For the same\nreason, we cannot allow LSN_FORMAT or similar constructions in\ntranslatable strings either.\n\nIf the solution to ugliness of LSN printing is going to require that we\nadd a \"%s\" which prints a previously formatted string with LSN_FORMAT,\nit won't win us anything.\n\nAs annoyed as I am about our LSN printing, I don't think this patch\nhas the solutions we need.\n\n\n",
"msg_date": "Mon, 30 Nov 2020 11:37:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-29 12:10:21 -0500, Tom Lane wrote:\n> Agreed. snprintf.c is meant to implement a recognized standard\n> (ok, %m is a GNU extension, but it's still pretty standard).\n> I'm not on board with putting PG-only extensions in there.\n\nI wonder if there's something we could layer on the elog.c level\ninstead. But I also don't like the idea of increasing the use of wrapper\nfunctions that need to allocate string buffers than then need to get\ncopied in turn.\n\nWe could do something like\n errmsg(\"plain string arg: %s, conv string arg: %s\", somestr, estr_lsn(lsn))\n\nwhere estr_lsn() wouldn't do any conversion directly, instead setting up\na callback in ErrorData that knows how to do the type specific\nconversion. Then during EVALUATE_MESSAGE() we'd evaluate the message\npiecemeal and call the output callbacks for each arg, using the\nStringInfo.\n\nThere's two main issues with something roughly like this:\n1) We'd need to do format string parsing somewhere above snprintf.c,\n which isn't free.\n2) Without relying on C11 / _Generic() some ugly macro hackery would be\n needed to have a argument-number indexed state. If we did rely on\n _Generic() we probably could do better, even getting rid of the need\n for something like estr_lsn().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Nov 2020 13:03:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 7:38 PM Li Japin <japinli@hotmail.com> wrote:\n\n> Hi,\n>\n> On Nov 30, 2020, at 9:06 PM, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n> wrote:\n>\n> On Fri, Nov 27, 2020 at 9:51 PM Li Japin <japinli@hotmail.com> wrote:\n>\n>\n> Hi,\n>\n> Here, we cannot use sizeof(but) to get the buf size, because it is a\n> pointer, so it always\n> 8 bytes on 64-bit or 4 bytes on 32-bit machine.\n>\n>\n> For an array, the sizeof() returns the size of memory consumed by the\n> array. See section \"Application to arrays\" at\n> https://en.wikipedia.org/wiki/Sizeof.\n>\n>\n> That’s true! However, in pg_lsn_out_buffer(), it converts to a pointer,\n> not an array. See the following test:\n>\n>\nAh! Thanks for pointing that out. I have fixed this in my repository.\nHowever, from Alvaro's reply it looks like the approach is not acceptable,\nso I am not posting the fixed version here.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Thu, 3 Dec 2020 11:43:41 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 8:07 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2020-Nov-30, Ashutosh Bapat wrote:\n>\n> > Peter Eisentraut explained that the translation system can not handle\n> > strings broken by macros, e.g. errmsg(\"foo\" MACRO \"bar\"), since it\n> doesn't\n> > know statically what the macro resolves to. But I am not familiar with\n> our\n> > translation system myself. But if we allow INT64_FORMAT, LSN_FORMAT\n> should\n> > be allowed. That makes life much easier. We do not need other functions\n> at\n> > all.\n> >\n> > Peter E or somebody familiar with translations can provide more\n> > information.\n>\n> We don't allow INT64_FORMAT in translatable strings, period. (See\n> commit 6a1cd8b9236d which undid some hacks we had to use to work around\n> the lack of it, by allowing %llu to take its place.) For the same\n> reason, we cannot allow LSN_FORMAT or similar constructions in\n> translatable strings either.\n\n\n> If the solution to ugliness of LSN printing is going to require that we\n> add a \"%s\" which prints a previously formatted string with LSN_FORMAT,\n> it won't win us anything.\n>\n\nThanks for the commit. The commit reverted code which uses a char array to\nprint int64. On the same lines, using a character array to print an LSN\nwon't be acceptable. I agree.\n\nMaybe we should update the section \"Message-Writing Guidelines\" in\ndoc/src/sgml/nls.sgml. I think it's implied by \"Do not construct sentences\nat run-time, like ... \" but using a macro is not really constructing\nsentences run time. Hopefully, some day, we will fix the translation system\nto handle strings with macros and make it easier to print PG specific\nobjects in strings easily.\n\n\n>\n> As annoyed as I am about our LSN printing, I don't think this patch\n> has the solutions we need.\n>\n\nI think we could at least accept the macros so that they can be used in\nnon-translatable strings i.e. use it with sprints, appendStringInfo\nvariants etc. The macros will also serve to document the string format of\nLSN. Developers can then copy from the macros rather than copying from\n\"some other\" (erroneous) usage in the code.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Thu, 3 Dec 2020 12:08:39 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On 2020-11-27 11:40, Ashutosh Bapat wrote:\n> The solution seems to be simple though. In the attached patch, I have\n> added two macros\n> #define LSN_FORMAT \"%X/%X\"\n> #define LSN_FORMAT_ARG(lsn) (uint32) ((lsn) >> 32), (uint32) (lsn)\n> \n> which can be used instead.\n\nIt looks like we are not getting any consensus on this approach. One \nreduced version I would consider is just the second part, so you'd write \nsomething like\n\n snprintf(lsnchar, sizeof(lsnchar), \"%X/%X\",\n LSN_FORMAT_ARGS(lsn));\n\nThis would still reduce notational complexity quite a bit but avoid any \nfunny business with the format strings.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n",
"msg_date": "Wed, 20 Jan 2021 07:25:37 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 07:25:37AM +0100, Peter Eisentraut wrote:\n> It looks like we are not getting any consensus on this approach. One\n> reduced version I would consider is just the second part, so you'd write\n> something like\n> \n> snprintf(lsnchar, sizeof(lsnchar), \"%X/%X\",\n> LSN_FORMAT_ARGS(lsn));\n> \n> This would still reduce notational complexity quite a bit but avoid any\n> funny business with the format strings.\n\nThat seems reasonable to me. So +1.\n--\nMichael",
"msg_date": "Wed, 20 Jan 2021 16:40:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 11:55 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-11-27 11:40, Ashutosh Bapat wrote:\n> > The solution seems to be simple though. In the attached patch, I have\n> > added two macros\n> > #define LSN_FORMAT \"%X/%X\"\n> > #define LSN_FORMAT_ARG(lsn) (uint32) ((lsn) >> 32), (uint32) (lsn)\n> >\n> > which can be used instead.\n>\n> It looks like we are not getting any consensus on this approach. One\n> reduced version I would consider is just the second part, so you'd write\n> something like\n>\n> snprintf(lsnchar, sizeof(lsnchar), \"%X/%X\",\n> LSN_FORMAT_ARGS(lsn));\n>\n> This would still reduce notational complexity quite a bit but avoid any\n> funny business with the format strings.\n>\n\nThanks for looking into this. I would like to keep both the LSN_FORMAT and\nLSN_FORMAT_ARGS but with a note that the first can not be used in elog() or\nin messages which require localization. We have many other places doing\nsnprintf() and such stuff, which can use LSN_FORMAT. If we do so, the\nfunctions to output string representation will not be needed so they can be\nremoved.\n\n--\nBest Wishes,\nAshutosh\n\nOn Wed, Jan 20, 2021 at 11:55 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2020-11-27 11:40, Ashutosh Bapat wrote:\n> The solution seems to be simple though. In the attached patch, I have\n> added two macros\n> #define LSN_FORMAT \"%X/%X\"\n> #define LSN_FORMAT_ARG(lsn) (uint32) ((lsn) >> 32), (uint32) (lsn)\n> \n> which can be used instead.\n\nIt looks like we are not getting any consensus on this approach. One \nreduced version I would consider is just the second part, so you'd write \nsomething like\n\n snprintf(lsnchar, sizeof(lsnchar), \"%X/%X\",\n LSN_FORMAT_ARGS(lsn));\n\nThis would still reduce notational complexity quite a bit but avoid any \nfunny business with the format strings.Thanks for looking into this. 
I would like to keep both the LSN_FORMAT and LSN_FORMAT_ARGS but with a note that the first can not be used in elog() or in messages which require localization. We have many other places doing snprintf() and such stuff, which can use LSN_FORMAT. If we do so, the functions to output string representation will not be needed so they can be removed. --Best Wishes,Ashutosh",
"msg_date": "Wed, 20 Jan 2021 13:20:11 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "At Wed, 20 Jan 2021 16:40:59 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Jan 20, 2021 at 07:25:37AM +0100, Peter Eisentraut wrote:\n> > It looks like we are not getting any consensus on this approach. One\n> > reduced version I would consider is just the second part, so you'd write\n> > something like\n> > \n> > snprintf(lsnchar, sizeof(lsnchar), \"%X/%X\",\n> > LSN_FORMAT_ARGS(lsn));\n> > \n> > This would still reduce notational complexity quite a bit but avoid any\n> > funny business with the format strings.\n> \n> That seems reasonable to me. So +1.\n\nThat seems in the good balance. +1, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Jan 2021 09:29:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On 2021-01-20 08:50, Ashutosh Bapat wrote:\n> Thanks for looking into this. I would like to keep both the LSN_FORMAT \n> and LSN_FORMAT_ARGS but with a note that the first can not be used in \n> elog() or in messages which require localization. We have many other \n> places doing snprintf() and such stuff, which can use LSN_FORMAT. If we \n> do so, the functions to output string representation will not be needed \n> so they can be removed.\n\nThen you'd end up with half the code doing this and half the code doing \nthat. That doesn't sound very attractive. It's not like \"%X/%X\" is \nhard to type.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n",
"msg_date": "Thu, 21 Jan 2021 11:23:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 3:53 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2021-01-20 08:50, Ashutosh Bapat wrote:\n> > Thanks for looking into this. I would like to keep both the LSN_FORMAT\n> > and LSN_FORMAT_ARGS but with a note that the first can not be used in\n> > elog() or in messages which require localization. We have many other\n> > places doing snprintf() and such stuff, which can use LSN_FORMAT. If we\n> > do so, the functions to output string representation will not be needed\n> > so they can be removed.\n>\n> Then you'd end up with half the code doing this and half the code doing\n> that. That doesn't sound very attractive. It's not like \"%X/%X\" is\n> hard to type.\n>\n\n:). That's true. I thought LSN_FORMAT would document the string\nrepresentation of an LSN. But anyway I am fine with this as well.\n\n\n>\n> --\n> Peter Eisentraut\n> 2ndQuadrant, an EDB company\n> https://www.2ndquadrant.com/\n>\n\n\n-- \n--\nBest Wishes,\nAshutosh\n\nOn Thu, Jan 21, 2021 at 3:53 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2021-01-20 08:50, Ashutosh Bapat wrote:\n> Thanks for looking into this. I would like to keep both the LSN_FORMAT \n> and LSN_FORMAT_ARGS but with a note that the first can not be used in \n> elog() or in messages which require localization. We have many other \n> places doing snprintf() and such stuff, which can use LSN_FORMAT. If we \n> do so, the functions to output string representation will not be needed \n> so they can be removed.\n\nThen you'd end up with half the code doing this and half the code doing \nthat. That doesn't sound very attractive. It's not like \"%X/%X\" is \nhard to type.:). That's true. I thought LSN_FORMAT would document the string representation of an LSN. But anyway I am fine with this as well. \n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n-- --Best Wishes,Ashutosh",
"msg_date": "Thu, 21 Jan 2021 16:00:14 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "Here is an updated patch that just introduces LSN_FORMAT_ARGS(). I \nthink the result is quite pleasant.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/",
"msg_date": "Thu, 18 Feb 2021 13:49:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Thu, Feb 18, 2021 at 6:19 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Here is an updated patch that just introduces LSN_FORMAT_ARGS(). I\n> think the result is quite pleasant.\n>\n\nThanks a lot Peter for producing this patch. I am fine with it. The way\nthis is defined someone could write xyz = LSN_FORMAT_ARGS(lsn). But then\nthey are misusing it so I won't care. Even my proposal had that problem.\n\n--\nBest Wishes,\nAshutosh\n\nOn Thu, Feb 18, 2021 at 6:19 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:Here is an updated patch that just introduces LSN_FORMAT_ARGS(). I \nthink the result is quite pleasant.Thanks a lot Peter for producing this patch. I am fine with it. The way this is defined someone could write xyz = LSN_FORMAT_ARGS(lsn). But then they are misusing it so I won't care. Even my proposal had that problem.--Best Wishes,Ashutosh",
"msg_date": "Thu, 18 Feb 2021 18:51:37 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "At Thu, 18 Feb 2021 18:51:37 +0530, Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote in \n> On Thu, Feb 18, 2021 at 6:19 PM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> \n> > Here is an updated patch that just introduces LSN_FORMAT_ARGS(). I\n> > think the result is quite pleasant.\n> >\n> \n> Thanks a lot Peter for producing this patch. I am fine with it. The way\n> this is defined someone could write xyz = LSN_FORMAT_ARGS(lsn). But then\n> they are misusing it so I won't care. Even my proposal had that problem.\n\nAs for the side effect by expressions as the parameter, unary\noperators are seldom (or never) applied to LSN. I think there's no\nneed to fear about other (modifying) expressions, too.\n\nAs a double-checking, I checked that the patch covers all output by\n'%X/%X' and \" ?>> ?32)\" that are handling LSNs, and there's no\nmis-replacing of the source variables.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Feb 2021 10:54:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
},
{
"msg_contents": "On Fri, Feb 19, 2021 at 10:54:05AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 18 Feb 2021 18:51:37 +0530, Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote in \n>> On Thu, Feb 18, 2021 at 6:19 PM Peter Eisentraut <\n>> peter.eisentraut@2ndquadrant.com> wrote:\n>>> Here is an updated patch that just introduces LSN_FORMAT_ARGS(). I\n>>> think the result is quite pleasant.\n\nThis result is nice for the eye. Thanks Peter.\n--\nMichael",
"msg_date": "Fri, 19 Feb 2021 15:10:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Printing LSN made easy"
}
] |
[
{
"msg_contents": "Hi,\n\nhttps://www.postgresql.org/docs/current/default-roles.html mentions the\n\"Administrator\" several times, but does not specify it further. I\nunderstand that it is mentioned elsewhere [1], but I think it would be\nbeneficial to remind the reader on that page at least once that\n\"Administrators\" includes \"roles with the CREATEROLE privilege\" in the\ncontext of GRANTing and REVOKEing default privileges, e.g. with the\nattached patch.\n\n\nMichael\n\n[1] e.g. at https://www.postgresql.org/docs/current/sql-createrole.html\nit is mentioned that CREATEROLE privs can be regarded as \"almost-superuser-roles\"\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz",
"msg_date": "Fri, 27 Nov 2020 23:49:59 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "[Doc Patch] Clarify that CREATEROLE roles can GRANT default roles"
},
{
"msg_contents": "Hi Michael,\n\nOn Sat, Nov 28, 2020 at 7:50 AM Michael Banck <michael.banck@credativ.de> wrote:\n>\n> Hi,\n>\n> https://www.postgresql.org/docs/current/default-roles.html mentions the\n> \"Administrator\" several times, but does not specify it further. I\n> understand that it is mentioned elsewhere [1], but I think it would be\n> beneficial to remind the reader on that page at least once that\n> \"Administrators\" includes \"roles with the CREATEROLE privilege\" in the\n> context of GRANTing and REVOKEing default privileges, e.g. with the\n> attached patch.\n>\n\nYou sent in your patch, default-roles-createrole.patch to\npgsql-hackers on Nov 28, but you did not post it to the next\nCommitFest[1]. If this was intentional, then you need to take no\naction. However, if you want your patch to be reviewed as part of the\nupcoming CommitFest, then you need to add it yourself before\n2021-01-01 AoE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 20:41:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Doc Patch] Clarify that CREATEROLE roles can GRANT default roles"
},
{
"msg_contents": "Hi,\n\nAm Montag, den 28.12.2020, 20:41 +0900 schrieb Masahiko Sawada:\n> On Sat, Nov 28, 2020 at 7:50 AM Michael Banck <michael.banck@credativ.de> wrote:\n> > https://www.postgresql.org/docs/current/default-roles.html mentions the\n> > \"Administrator\" several times, but does not specify it further. I\n> > understand that it is mentioned elsewhere [1], but I think it would be\n> > beneficial to remind the reader on that page at least once that\n> > \"Administrators\" includes \"roles with the CREATEROLE privilege\" in the\n> > context of GRANTing and REVOKEing default privileges, e.g. with the\n> > attached patch.\n> > \n> \n> You sent in your patch, default-roles-createrole.patch to\n> pgsql-hackers on Nov 28, but you did not post it to the next\n> CommitFest[1]. If this was intentional, then you need to take no\n> action. However, if you want your patch to be reviewed as part of the\n> upcoming CommitFest, then you need to add it yourself before\n> 2021-01-01 AoE[2]. Thanks for your contributions.\n\nThanks for reminding me, I've done so now[1].\n\n\nMichael\n\n[1] https://commitfest.postgresql.org/31/2921/\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n",
"msg_date": "Thu, 31 Dec 2020 16:05:20 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "Re: [Doc Patch] Clarify that CREATEROLE roles can GRANT default\n roles"
},
{
"msg_contents": "On Thu, Dec 31, 2020 at 10:05 AM Michael Banck\n<michael.banck@credativ.de> wrote:\n>\n> Hi,\n>\n> Am Montag, den 28.12.2020, 20:41 +0900 schrieb Masahiko Sawada:\n> > On Sat, Nov 28, 2020 at 7:50 AM Michael Banck <michael.banck@credativ.de> wrote:\n> > > https://www.postgresql.org/docs/current/default-roles.html mentions the\n> > > \"Administrator\" several times, but does not specify it further. I\n> > > understand that it is mentioned elsewhere [1], but I think it would be\n> > > beneficial to remind the reader on that page at least once that\n> > > \"Administrators\" includes \"roles with the CREATEROLE privilege\" in the\n> > > context of GRANTing and REVOKEing default privileges, e.g. with the\n> > > attached patch.\n> > >\n\nTook look at the wording and +1 from me on the proposed change. FWIW,\nI believe the preceding sentence would be more grammatically correct\nif the word \"which\" was replaced with \"that\", ie. PostgreSQL provides\na set of default roles /that/ provide access to certain, commonly\nneeded, privileged capabilities and information.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Tue, 23 Feb 2021 01:18:50 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: [Doc Patch] Clarify that CREATEROLE roles can GRANT default roles"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 7:19 AM Robert Treat <rob@xzilla.net> wrote:\n>\n> On Thu, Dec 31, 2020 at 10:05 AM Michael Banck\n> <michael.banck@credativ.de> wrote:\n> >\n> > Hi,\n> >\n> > Am Montag, den 28.12.2020, 20:41 +0900 schrieb Masahiko Sawada:\n> > > On Sat, Nov 28, 2020 at 7:50 AM Michael Banck <michael.banck@credativ.de> wrote:\n> > > > https://www.postgresql.org/docs/current/default-roles.html mentions the\n> > > > \"Administrator\" several times, but does not specify it further. I\n> > > > understand that it is mentioned elsewhere [1], but I think it would be\n> > > > beneficial to remind the reader on that page at least once that\n> > > > \"Administrators\" includes \"roles with the CREATEROLE privilege\" in the\n> > > > context of GRANTing and REVOKEing default privileges, e.g. with the\n> > > > attached patch.\n> > > >\n>\n> Took look at the wording and +1 from me on the proposed change. FWIW,\n> I believe the preceding sentence would be more grammatically correct\n> if the word \"which\" was replaced with \"that\", ie. PostgreSQL provides\n> a set of default roles /that/ provide access to certain, commonly\n> needed, privileged capabilities and information.\n\nApplied, including the suggested change from Robert.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 6 Mar 2021 18:12:50 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: [Doc Patch] Clarify that CREATEROLE roles can GRANT default roles"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 06:12:50PM +0100, Magnus Hagander wrote:\n> On Tue, Feb 23, 2021 at 7:19 AM Robert Treat <rob@xzilla.net> wrote:\n> > On Thu, Dec 31, 2020 at 10:05 AM Michael Banck\n> > <michael.banck@credativ.de> wrote:\n> > > Am Montag, den 28.12.2020, 20:41 +0900 schrieb Masahiko Sawada:\n> > > > On Sat, Nov 28, 2020 at 7:50 AM Michael Banck <michael.banck@credativ.de> wrote:\n> > > > > https://www.postgresql.org/docs/current/default-roles.html mentions the\n> > > > > \"Administrator\" several times, but does not specify it further. I\n> > > > > understand that it is mentioned elsewhere [1], but I think it would be\n> > > > > beneficial to remind the reader on that page at least once that\n> > > > > \"Administrators\" includes \"roles with the CREATEROLE privilege\" in the\n> > > > > context of GRANTing and REVOKEing default privileges, e.g. with the\n> > > > > attached patch.\n> >\n> > Took look at the wording and +1 from me on the proposed change. FWIW,\n> > I believe the preceding sentence would be more grammatically correct\n> > if the word \"which\" was replaced with \"that\", ie. PostgreSQL provides\n> > a set of default roles /that/ provide access to certain, commonly\n> > needed, privileged capabilities and information.\n> \n> Applied, including the suggested change from Robert.\n\nThanks!\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB M�nchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 M�nchengladbach\nGesch�ftsf�hrung: Dr. Michael Meskes, J�rg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n",
"msg_date": "Sat, 6 Mar 2021 23:08:44 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "Re: [Doc Patch] Clarify that CREATEROLE roles can GRANT default roles"
}
] |
[
{
"msg_contents": "Hi,\n\nAs discussed in this thread \n(https://www.postgresql.org/message-id/201010112055.o9BKtZf7011251@wwwmaster.postgresql.org) \nbtree_gist is broken for the inet and cidr types. The reason is that it \nuses convert_network_to_scalar() to make a lossy GiST index, but \nconvert_network_to_scalar() throws away the netmask which is necessary \nto preserve the correct sort order of the keys.\n\nThis has been broken for as long as we have had btree_gist indexes over \ninet. And personally I am not a fan of PostgreSQL shipping with known \nbroken features. But it is also not obvious to me what the best way to \nfix it is.\n\nTo refresh my memory I quickly hacked together a proof of concept patch \nwhich adds a couple of test cases to reproduce the bug plus uses \nbasically the same implementation of the btree_gist as text and numeric, \nwhich changes to index keys from 64 bit floats to a variable sized \nbytea. This means that the indexes are no longer lossy.\n\nSome questions/thoughts:\n\nIs it even worth fixing btree_gist for inet when we already have \ninet_ops which can do more things and is not broken. We could just \ndeprecate and then rip out gist_inet_ops?\n\nIf we are going to fix it I cannot see any reasonably sized fix which \ndoes not also corrupt all existing indexes on inet, even those which do \nnot contain any net masks (those which contain netmasks are already \nbroken anyway). Do we have some way to handle this kind of breakage nicely?\n\nI see two ways to fix the inet indexes, either do as in my PoC patch and \nmake the indexes no longer lossy. This will make it more similar to how \nbtree_gist works for other data types but requires us to change the \nstorage type of the index keys, but since all indexes will break anyway \nthat might not be an issue.\n\nThe other way is to implement correct lossy indexes, using something \nsimilar to what network_abbrev_convert() does.\n\nDoes anyone have any thoughts on this?\n\nAndreas",
"msg_date": "Sat, 28 Nov 2020 01:04:09 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": true,
"msg_subject": "What to do about the broken btree_gist for inet/cidr?"
},
{
"msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> This has been broken for as long as we have had btree_gist indexes over \n> inet. And personally I am not a fan of PostgreSQL shipping with known \n> broken features. But it is also not obvious to me what the best way to \n> fix it is.\n\nYeah, this has been lingering unfixed precisely for lack of a clearly-\ngood fix.\n\n> Is it even worth fixing btree_gist for inet when we already have \n> inet_ops which can do more things and is not broken. We could just \n> deprecate and then rip out gist_inet_ops?\n\nThat might be the thing to do; it doesn't seem very nice to carry\nessentially duplicate functionality. However\n(1) is performance as good?\n(2) gist inet_ops is not marked default, so we'd have to change that.\nI don't recall what happens if there are two default opclasses\nfor the same AM and input type, but it's likely not great.\n\n> If we are going to fix it I cannot see any reasonably sized fix which \n> does not also corrupt all existing indexes on inet, even those which do \n> not contain any net masks (those which contain netmasks are already \n> broken anyway). Do we have some way to handle this kind of breakage nicely?\n\nNot really. If the fix involves changing the declared storage type for\nan inet column, we might be able to have the support routines check that,\nand either error out or continue to do what they used to. But I'm not\nsure whether that info is available to all the support routines.\npg_upgrade scenarios might be messy, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Nov 2020 14:43:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What to do about the broken btree_gist for inet/cidr?"
}
] |
[
{
"msg_contents": "I noticed in CI builds of PL/Java with PG 13 that -Wimplicit-fallthrough=3\nin pg_config's CFLAGS was causing some switch case fallthrough warnings\nafter an elog(ERROR. [1]\n\nI added my own pg_unreachable() after the elog(ERROR, ...) calls [2]\nand that did away with the warnings.\n\nBut it looks odd, and shouldn't it be unnecessary if elog is supplying\nits own pg_unreachable() in the elevel >= ERROR case? [3]\n\nHas anyone else seen this behavior? Am I doing something that stops gcc\nfrom recognizing ERROR as a compile-time constant? Is it just a gcc bug?\n\n(Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 seems to be the gcc version on the\nTravis runner.\n\nRegards,\n-Chap\n\n\n\n\n[1] https://travis-ci.com/github/tada/pljava/jobs/447208803#L694\n\n AFAICT, the #L694 doesn't really take you to line 694 in a Travis log\n because they collapse lines so the browser will not see the anchor,\n so one must click to expand the containing line range first. Irksome.\n Copied below.\n\n[2]\nhttps://github.com/tada/pljava/blame/2c8d992/pljava-so/src/main/c/type/Type.c#L246\n\n[3]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/utils/elog.h;h=a16f925#l104\n\n\n\n\n\nIn file included from /usr/include/postgresql/13/server/postgres.h:47:0,\n from\n/home/travis/build/tada/pljava/pljava-so/src/main/c/type/Type.c:13:\n/usr/include/postgresql/13/server/utils/elog.h:125:5: warning: this\nstatement may fall through [-Wimplicit-fallthrough=]\n do { \\\n ^\n/usr/include/postgresql/13/server/utils/elog.h:145:2: note: in expansion of\nmacro ‘ereport_domain’\n ereport_domain(elevel, TEXTDOMAIN, __VA_ARGS__)\n ^~~~~~~~~~~~~~\n/usr/include/postgresql/13/server/utils/elog.h:215:2: note: in expansion of\nmacro ‘ereport’\n ereport(elevel, errmsg_internal(__VA_ARGS__))\n ^~~~~~~\n/home/travis/build/tada/pljava/pljava-so/src/main/c/type/Type.c:256:3: note:\nin expansion of macro ‘elog’\n elog(ERROR, \"COERCEVIAIO not implemented from (regtype) %d to %d\",\n 
^~~~\n/home/travis/build/tada/pljava/pljava-so/src/main/c/type/Type.c:258:2: note:\nhere\n case COERCION_PATH_ARRAYCOERCE:\n ^~~~\n\n\n",
"msg_date": "Sat, 28 Nov 2020 11:10:59 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "gcc -Wimplicit-fallthrough and pg_unreachable"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> I noticed in CI builds of PL/Java with PG 13 that -Wimplicit-fallthrough=3\n> in pg_config's CFLAGS was causing some switch case fallthrough warnings\n> after an elog(ERROR. [1]\n\nYeah, I can replicate this here (gcc 8.3.1 on RHEL8). My recollection\nis that we saw this when trialling -Wimplicit-fallthrough, and determined\nthat the hack used to teach the compiler that elog(ERROR) doesn't return\nfails to prevent -Wimplicit-fallthrough warnings even though it does work\nfor other purposes. Using this test case:\n\nint\nfoo(int p)\n{\n int x;\n\n switch (p)\n {\n case 0:\n x = 1;\n break;\n case 1:\n elog(ERROR, \"bogus\");\n break;\n case 2:\n x = 2;\n break;\n default:\n x = 3;\n }\n\n return x;\n}\n\nI do not get a warning about x being possibly uninitialized (so it\nknows elog(ERROR) doesn't return?), but without the \"break\" after\nelog() I do get a fallthrough warning (so it doesn't know that?).\n\nSeems like a minor gcc bug; no idea if anyone's complained to them.\nI think we've found other weak spots in -Wimplicit-fallthrough's\ncoverage though, so it's not one of gcc's best areas.\n\nAs illustrated here, I'd just add a \"break\" rather than\n\"pg_unreachable()\", but that's a matter of taste.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Nov 2020 11:48:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: gcc -Wimplicit-fallthrough and pg_unreachable"
}
] |
[
{
"msg_contents": "Hi,\n\ntesting master at 3df51ca8 with sqlsmith triggers the following\nassertion:\n\n TRAP: FailedAssertion(\"!bms_is_empty(present_parts)\", File: \"partprune.c\", Line: 588, PID: 8540)\n\nI looked at a dozen backtraces and they all sport a window aggregate but\nthat may still be random chance since sqlsmith really likes generating\nthese a lot... Below is the shortest recipe I found to reproduce it on a\nfresh regression database:\n\n--8<---------------cut here---------------start------------->8---\nregression=# insert into trigger_parted values (1);\nERROR: control reached end of trigger procedure without RETURN\nCONTEXT: PL/pgSQL function trigger_parted_trigfunc()\nregression=# select myaggp05a(a) over (partition by a order by a) from trigger_parted where pg_trigger_depth() <> a limit 40;\nserver closed the connection unexpectedly\n--8<---------------cut here---------------end--------------->8---\n\nBacktrace of this one below.\n\nregards,\nAndreas\n\n#2 0x000055bafaa95b81 in ExceptionalCondition (conditionName=conditionName@entry=0x55bafabe7812 \"!bms_is_empty(present_parts)\", errorType=errorType@entry=0x55bafaae901d \"FailedAssertion\", fileName=0x7ffee4fbbe40 \"b[\\251\\372\\272U\",\n fileName@entry=0x55bafabe765b \"partprune.c\", lineNumber=lineNumber@entry=588) at assert.c:69\n#3 0x000055bafa8f02c0 in make_partitionedrel_pruneinfo (matchedsubplans=<synthetic pointer>, prunequal=<optimized out>, partrelids=<optimized out>, relid_subplan_map=0x55bafba17fd0, parentrel=0x55bafb928050, root=0x55bafb9f4f78) at partprune.c:588\n#4 make_partition_pruneinfo (root=root@entry=0x55bafb9f4f78, parentrel=parentrel@entry=0x55bafb928050, subpaths=0x55bafba16c10, partitioned_rels=0x55bafba16d58, prunequal=prunequal@entry=0x55bafba17f78) at partprune.c:274\n#5 0x000055bafa8b05cd in create_append_plan (root=0x55bafb9f4f78, best_path=0x55bafba16cc0, flags=6) at createplan.c:1249\n#6 0x000055bafa8acd5e in create_windowagg_plan (best_path=0x55bafba174c0, 
root=0x55bafb9f4f78) at createplan.c:2452\n#7 create_plan_recurse (root=0x55bafb9f4f78, best_path=0x55bafba174c0, flags=1) at createplan.c:492\n#8 0x000055bafa8ad341 in create_limit_plan (flags=1, best_path=0x55bafba17910, root=0x55bafb9f4f78) at createplan.c:2699\n#9 create_plan_recurse (root=0x55bafb9f4f78, best_path=0x55bafba17910, flags=1) at createplan.c:514\n#10 0x000055bafa8b00d1 in create_plan (root=root@entry=0x55bafb9f4f78, best_path=<optimized out>) at createplan.c:333\n#11 0x000055bafa8bf013 in standard_planner (parse=0x55bafb9287a8, query_string=<optimized out>, cursorOptions=256, boundParams=<optimized out>) at planner.c:409\n#12 0x000055bafa989da8 in pg_plan_query (querytree=0x55bafb9287a8, querytree@entry=0x7ffee4fbc620,\n query_string=query_string@entry=0x55bafb9082f0 \"explain select myaggp05a(a) over (partition by a order by a) from trigger_parted where pg_trigger_depth() <> a limit 40;\", cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:875\n#13 0x000055bafa7b5dcf in ExplainOneQuery (query=0x7ffee4fbc620, cursorOptions=256, into=0x0, es=0x55bafb927f80, queryString=0x55bafb9082f0 \"explain select myaggp05a(a) over (partition by a order by a) from trigger_parted where pg_trigger_depth() <> a limit 40;\",\n params=0x0, queryEnv=0x0) at explain.c:391\n#14 0x000055bafa7b6507 in ExplainQuery (pstate=0x55bafb92a1c0, stmt=0x55bafb909930, params=0x0, dest=0x55bafb92a128) at ../../../src/include/nodes/nodes.h:592\n#15 0x000055bafa98f87d in standard_ProcessUtility (pstmt=0x55bafb9c6090, queryString=0x55bafb9082f0 \"explain select myaggp05a(a) over (partition by a order by a) from trigger_parted where pg_trigger_depth() <> a limit 40;\", context=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, queryEnv=0x0, dest=0x55bafb92a128, qc=0x7ffee4fbc8c0) at utility.c:829\n#16 0x000055bafa98cb36 in PortalRunUtility (portal=0x55bafb96b590, pstmt=0x55bafb9c6090, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, 
dest=0x55bafb92a128, qc=0x7ffee4fbc8c0) at pquery.c:1159\n#17 0x000055bafa98d910 in FillPortalStore (portal=0x55bafb96b590, isTopLevel=<optimized out>) at ../../../src/include/nodes/nodes.h:592\n#18 0x000055bafa98e54d in PortalRun (portal=portal@entry=0x55bafb96b590, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x55bafb9c6180, altdest=altdest@entry=0x55bafb9c6180, qc=0x7ffee4fbcac0)\n at pquery.c:751\n#19 0x000055bafa98a29c in exec_simple_query (query_string=0x55bafb9082f0 \"explain select myaggp05a(a) over (partition by a order by a) from trigger_parted where pg_trigger_depth() <> a limit 40;\") at postgres.c:1239\n#20 0x000055bafa98beaa in PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffee4fbcff0, dbname=<optimized out>, username=<optimized out>) at postgres.c:4308\n#21 0x000055bafa9054fa in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4488\n#22 BackendStartup (port=<optimized out>) at postmaster.c:4210\n#23 ServerLoop () at postmaster.c:1727\n#24 0x000055bafa906452 in PostmasterMain (argc=<optimized out>, argv=0x55bafb902c70) at postmaster.c:1400\n#25 0x000055bafa65e980 in main (argc=3, argv=0x55bafb902c70) at main.c:209\n\n\n",
"msg_date": "Sat, 28 Nov 2020 22:43:06 +0100",
"msg_from": "Andreas Seltenreich <seltenreich@gmx.de>",
"msg_from_op": true,
"msg_subject": "[sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "Andreas Seltenreich <seltenreich@gmx.de> writes:\n> testing master at 3df51ca8 with sqlsmith triggers the following\n> assertion:\n> TRAP: FailedAssertion(\"!bms_is_empty(present_parts)\", File: \"partprune.c\", Line: 588, PID: 8540)\n\n> I looked at a dozen backtraces and they all sport a window aggregate but\n> that may still be random chance since sqlsmith really likes generating\n> these a lot...\n\nYeah, it doesn't seem to need a window aggregate:\n\nregression=# select a from trigger_parted where pg_trigger_depth() <> a order by a limit 40;\nserver closed the connection unexpectedly\n\nWhat it looks like to me is that the code for setting up run-time\npartition pruning has failed to consider the possibility of nested\npartitioning: it's expecting that every partitioned table will have\nat least one direct child that is a leaf. I'm not sure though\nwhether just the Assert is wrong, or there's more fundamental\nissues here.\n\nIt's also somewhat interesting that you need the \"order by a limit 40\"\nto get a crash. Poking around in the failing backend, I can see that\nthat causes the leaf-partition subplan to be an indexscan not a seqscan,\nbut it's far from clear why that'd make any difference to the partition\npruning logic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Nov 2020 17:52:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "I wrote:\n> What it looks like to me is that the code for setting up run-time\n> partition pruning has failed to consider the possibility of nested\n> partitioning: it's expecting that every partitioned table will have\n> at least one direct child that is a leaf. I'm not sure though\n> whether just the Assert is wrong, or there's more fundamental\n> issues here.\n\nAfter looking into the git history I realized that this assertion is\nquite new, stemming from David's a929e17e5a8 of 2020-11-02. So there's\nsomething not right about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Dec 2020 15:31:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "I wrote:\n>> What it looks like to me is that the code for setting up run-time\n>> partition pruning has failed to consider the possibility of nested\n>> partitioning: it's expecting that every partitioned table will have\n>> at least one direct child that is a leaf. I'm not sure though\n>> whether just the Assert is wrong, or there's more fundamental\n>> issues here.\n\n> After looking into the git history I realized that this assertion is\n> quite new, stemming from David's a929e17e5a8 of 2020-11-02. So there's\n> something not right about that.\n\nI took some more time to poke at this today, and I now think that\nthe assertion in make_partitionedrel_pruneinfo is probably OK,\nand what it's pointing out is a bug upstream in path creation.\nSpecifically, I noted that in\n\nselect a from trigger_parted where pg_trigger_depth() <> a order by a;\n\nwe arrive at make_partitionedrel_pruneinfo with partrelids equal\nto (b 1 2), which seems to be correct. The RTE list is\n\nRTE 1: trigger_parted\nRTE 2: trigger_parted_p1\nRTE 3: trigger_parted_p1_1\n\nLike so much else of the partitioning code, AppendPath.partitioned_rels\nis abysmally underdocumented, but what I think it means is the set of\nnon-leaf partitioned tables that are notionally scanned by the\nAppendPath. The only table directly mentioned by the AppendPath's\nsubpath is RTE 3, so that all seems fine.\n\nHowever, upon adding a LIMIT:\n\nselect a from trigger_parted where pg_trigger_depth() <> a order by a limit 40;\nserver closed the connection unexpectedly\n\nwe arrive at make_partitionedrel_pruneinfo with partrelids equal\nto just (b 1); trigger_parted_p1 has been left out. 
The Path\nin this case has been made by generate_orderedappend_paths, which\nis what's responsible for computing AppendPath.partitioned_rels that\neventually winds up as the argument to make_partitionedrel_pruneinfo.\nSo I think that that code is somehow failing to account for nested\npartitioning, while the non-ordered-append code is doing it right.\nBut I didn't spot exactly where the discrepancy is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jan 2021 15:28:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "Hi,\nI wonder if the (failed) assertion should be converted to an if statement:\n\ndiff --git a/src/backend/partitioning/partprune.c\nb/src/backend/partitioning/partprune.c\nindex fac921eea5..d646f08a07 100644\n--- a/src/backend/partitioning/partprune.c\n+++ b/src/backend/partitioning/partprune.c\n@@ -585,7 +585,7 @@ make_partitionedrel_pruneinfo(PlannerInfo *root,\nRelOptInfo *parentrel,\n * partitioned tables that we have no sub-paths or\n * sub-PartitionedRelPruneInfo for.\n */\n- Assert(!bms_is_empty(present_parts));\n+ if (bms_is_empty(present_parts)) return NIL;\n\n /* Record the maps and other information. */\n pinfo->present_parts = present_parts;\n\nCheers\n\nOn Fri, Jan 29, 2021 at 12:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> >> What it looks like to me is that the code for setting up run-time\n> >> partition pruning has failed to consider the possibility of nested\n> >> partitioning: it's expecting that every partitioned table will have\n> >> at least one direct child that is a leaf. I'm not sure though\n> >> whether just the Assert is wrong, or there's more fundamental\n> >> issues here.\n>\n> > After looking into the git history I realized that this assertion is\n> > quite new, stemming from David's a929e17e5a8 of 2020-11-02. So there's\n> > something not right about that.\n>\n> I took some more time to poke at this today, and I now think that\n> the assertion in make_partitionedrel_pruneinfo is probably OK,\n> and what it's pointing out is a bug upstream in path creation.\n> Specifically, I noted that in\n>\n> select a from trigger_parted where pg_trigger_depth() <> a order by a;\n>\n> we arrive at make_partitionedrel_pruneinfo with partrelids equal\n> to (b 1 2), which seems to be correct. 
The RTE list is\n>\n> RTE 1: trigger_parted\n> RTE 2: trigger_parted_p1\n> RTE 3: trigger_parted_p1_1\n>\n> Like so much else of the partitioning code, AppendPath.partitioned_rels\n> is abysmally underdocumented, but what I think it means is the set of\n> non-leaf partitioned tables that are notionally scanned by the\n> AppendPath. The only table directly mentioned by the AppendPath's\n> subpath is RTE 3, so that all seems fine.\n>\n> However, upon adding a LIMIT:\n>\n> select a from trigger_parted where pg_trigger_depth() <> a order by a\n> limit 40;\n> server closed the connection unexpectedly\n>\n> we arrive at make_partitionedrel_pruneinfo with partrelids equal\n> to just (b 1); trigger_parted_p1 has been left out. The Path\n> in this case has been made by generate_orderedappend_paths, which\n> is what's responsible for computing AppendPath.partitioned_rels that\n> eventually winds up as the argument to make_partitionedrel_pruneinfo.\n> So I think that that code is somehow failing to account for nested\n> partitioning, while the non-ordered-append code is doing it right.\n> But I didn't spot exactly where the discrepancy is.\n>\n> regards, tom lane",
"msg_date": "Fri, 29 Jan 2021 14:42:49 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I wonder if the (failed) assertion should be converted to an if statement:\n\nAs I said, I'm now thinking it's not the Assert that's faulty.\nIf I'm right about that, it's likely that the mistaken labeling\nof these paths has other consequences beyond triggering this\nassertion. (If it has none, I think we'd be better off to remove\nthese Path fields altogether, and re-identify the parent rels\nhere from the RelOptInfo data.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jan 2021 17:52:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "I wrote:\n> As I said, I'm now thinking it's not the Assert that's faulty.\n> If I'm right about that, it's likely that the mistaken labeling\n> of these paths has other consequences beyond triggering this\n> assertion. (If it has none, I think we'd be better off to remove\n> these Path fields altogether, and re-identify the parent rels\n> here from the RelOptInfo data.)\n\nAfter more time poking at this, I've concluded that my parenthetical\nremark is the right solution: [Merge]AppendPath.partitioned_rels\nought to be nuked from orbit. It requires a whole lot of squirrely,\nnow-known-to-be-buggy logic in allpaths.c, and practically the only\nplace where we use the result is in make_partition_pruneinfo().\nThat means we're expending a lot of work *per AppendPath*, which\nis quite foolish unless there's some reason we can't get the\ninformation at create_plan() time instead. Which we can.\n\nFor simplicity of review I divided the patch into two parts.\n0001 revises make_partition_pruneinfo() and children to identify\nthe relevant parent partitions for themselves, which is not too\nhard to do by chasing up the child-to-parent AppendRelInfo links.\nNot formerly documented, AFAICT, is that we want to collect just\nthe parent partitions that are between the Append path's own rel\n(possibly equal to it) and the subpaths' rels. I'd first tried\nto code this by using the top_parent_relids and part_rels[] links\nin the RelOptInfos, but that turns out not to work. We might\nascend to a top parent that's above the Append's rel (if considering\nan appendpath for a sub-partition, which happens in partitionwise\njoin). We could also descend to a child at or below some subpath\nlevel, since there are also cases where subpaths correspond to\nunflattened non-leaf partitions. Either of those things result\nin failures. 
But once you wrap your head around handling those\nrestrictions, it's quite simple.\n\nThen 0002 just nukes all the no-longer-necessary logic to compute\nthe partitioned_rels fields. There are a couple of changes that\nare not quite trivial. create_[merge_]append_plan were testing\nbest_path->partitioned_rels != NIL to shortcut calling\nmake_partition_pruneinfo at all. I just dropped that, reasoning\nthat it wasn't saving enough to worry about. Also,\ncreate_append_path was similarly testing partitioned_rels != NIL\nto decide if it needed to go to the trouble of using\nget_baserel_parampathinfo. I changed that to an IS_PARTITIONED_REL()\ntest, which isn't *quite* the same thing but seems close enough.\n(It changes no regression results, anyway.)\n\nThis fixes the cases reported by Andreas and Jaime, leaving me\nmore confident that there's nothing wrong with David's Assert.\n\nI did not try to add a regression test case, mainly because it's\nnot quite clear to me where the original bug is. (I'm a little\nsuspicious that the blame might lie with the \"match_partition_order\"\ncases in generate_orderedappend_paths, which try to bypass\naccumulate_append_subpath without updating the partitioned_rels\ndata. But I'm not excited about trying to prove that.)\n\nI wonder whether we should consider back-patching this. Another\nthing that seems unclear is whether there is any serious consequence\nto omitting some intermediate partitions from the set considered\nby make_partition_pruneinfo. Presumably it could lead to missing\nsome potential run-time-pruning opportunities, but is there any\nworse issue? If there isn't, this is a bigger change than I want\nto put in the back branches.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 30 Jan 2021 17:42:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "On Sun, 31 Jan 2021 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This fixes the cases reported by Andreas and Jaime, leaving me\n> more confident that there's nothing wrong with David's Assert.\n\nI agree that there is nothing wrong with the Assert.\n\nThe commit message of a929e17e5 mentions:\n\n> Here we tighten that up so that partitioned_rels only ever contains the RT\n> index for partitioned tables which actually have subpaths in the given\n> Append/MergeAppend. We can now Assert that every PartitionedRelPruneInfo\n> has a non-empty present_parts. That should allow us to catch any weird\n> corner cases that have been missed.\n\nIt seems in this case the Assert *did* find a case that I missed.\nUnfortunately, I just missed the email that reported the problem,\nuntil yesterday.\n\n> I did not try to add a regression test case, mainly because it's\n> not quite clear to me where the original bug is. (I'm a little\n> suspicious that the blame might lie with the \"match_partition_order\"\n> cases in generate_orderedappend_paths, which try to bypass\n> accumulate_append_subpath without updating the partitioned_rels\n> data. But I'm not excited about trying to prove that.)\n\nYeah, that suspicion is correct. More specifically when\nget_singleton_append_subpath() finds a single subpath Append /\nMergeAppend, the single subpath is returned.\n\nSo what's happening is that we first build the Append paths for the\nsub-partitioned table which, in the example case, will have a single\nsubpath Append path with partitioned_rels containing just the parent\nof that partition (i.e trigger_parted_p1). When it comes to doing\ngenerate_orderedappend_paths() for the base relation (trigger_parted),\nwe find match_partition_order to be true and allow\nget_singleton_append_subpath() to pullup the single-subplan Append\npath. Unfortunately we just completely ignore that Append path's\npartitioned_rels. 
Back in generate_orderedappend_paths() the\nstartup_partitioned_rels and total_partitioned_rels variables are\nused, both of which only mention the trigger_parted table. The mention\nof trigger_parted_p1 is lost.\n\nIt could be fixed by modifying get_singleton_append_subpath() to\nmodify the partitioned_rels when it collapses these single-subpath\nAppend/MergeAppend paths, but I'm very open to looking at just getting\nrid of the partitioned_rels field. Prior to a929e17e5 that field was\nvery loosely set and in several cases could contain RT indexes of\npartitioned tables that didn't even have any subpaths in the given\nAppend/MergeAppend. a929e17e5 attempted to tighten all that up but\nlooks like I missed the case above.\n\n> I wonder whether we should consider back-patching this. Another\n> thing that seems unclear is whether there is any serious consequence\n> to omitting some intermediate partitions from the set considered\n> by make_partition_pruneinfo. Presumably it could lead to missing\n> some potential run-time-pruning opportunities, but is there any\n> worse issue? If there isn't, this is a bigger change than I want\n> to put in the back braches.\n\nIt shouldn't be backpatched. a929e17e5 only exists in master. Prior to\nthat, AppendPath/MergeAppendPath's partitioned_rels field could only\ncontain additional partitioned table RT indexes. There are no known\ncases of missing ones prior to a929e17e5, so this bug shouldn't exist\nin PG13.\n\na929e17e5 introduced run-time partition pruning for\nsub-Append/MergeAppend paths. The commit message of that explains\nthat there are plenty of legitimate cases where we can't flatten these\nsub-Append/MergeAppend paths down into the top-level\nAppend/MergeAppend. Originally I had thought we should only bother\ndoing run-time pruning on the top-level Append/MergeAppend because I\nthought these cases were very uncommon. I've now changed my mind.\n\nFor a929e17e5, it was not just a matter of removing the lines in [1]\nto allow run-time pruning on nested Append/MergeAppends. I also needed\nto clean up the sloppy setting of partitioned_rels. The remainder of\nthe patch attempted that.\n\nFWIW, I hacked together a patch which fixes the bug by passing a\nBitmapset ** pointer to get_singleton_append_subpath(), which set the\nbits for the Append / MergeAppend path's partitioned_rels that we get\nrid of when it only has a single subpath. This stops the Assert\nfailure mentioned here. However, I'd much rather explore getting rid\nof partitioned_rels completely. I'll now have a look at the patch\nyou're proposing for that.\n\nThanks for investigating this and writing the patch. Apologies for\nthis email missing my attention closer to the time when it was\ninitially reported.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blobdiff;f=src/backend/optimizer/plan/createplan.c;h=40abe6f9f623ed2922ccc4e1991e97e90322f47d;hp=94280a730c4d9abb2143416fca8e74e76dada042;hb=a929e17e5;hpb=dfc797730fc7a07c0e6bd636ad1a564aecab3161\n\n\n",
"msg_date": "Mon, 1 Feb 2021 12:49:22 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 8:50 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Sun, 31 Jan 2021 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This fixes the cases reported by Andreas and Jaime, leaving me\n> > more confident that there's nothing wrong with David's Assert.\n>\n> It could be fixed by modifying get_singleton_append_subpath() to\n> modify the partitioned_rels when it collapses these single-subpath\n> Append/MergeAppend path, but I'm very open to looking at just getting\n> rid of the partitioned_rels field. Prior to a929e17e5 that field was\n> very loosely set and in serval cases could contain RT indexes of\n> partitioned tables that didn't even have any subpaths in the given\n> Append/MergeAppend. a929e17e5 attempted to tighten all that up but\n> looks like I missed the case above.\n>\n> > I wonder whether we should consider back-patching this. Another\n> > thing that seems unclear is whether there is any serious consequence\n> > to omitting some intermediate partitions from the set considered\n> > by make_partition_pruneinfo. Presumably it could lead to missing\n> > some potential run-time-pruning opportunities, but is there any\n> > worse issue? If there isn't, this is a bigger change than I want\n> > to put in the back braches.\n>\n> It shouldn't be backpatched. a929e17e5 only exists in master. Prior to\n> that AppendPath/MergeAppendPath's partitioned_rels field could only\n> contain additional partitioned table RT index. There are no known\n> cases of missing ones prior to a929e17e5, so this bug shouldn't exist\n> in PG13.\n>\n> a929e17e5 introduced run-time partition pruning for\n> sub-Append/MergeAppend paths. The commit message of that explains\n> that there are plenty of legitimate cases where we can't flatten these\n> sub-Append/MergeAppend paths down into the top-level\n> Append/MergeAppend. 
Originally I had thought we should only bother\n> doing run-time pruning on the top-level Append/MergeAppend because I\n> thought these cases were very uncommon. I've now changed my mind.\n>\n> For a929e17e5, it was not just a matter of removing the lines in [1]\n> to allow run-time pruning on nested Append/MergeAppends. I also needed\n> to clean up the sloppy setting of partitioned_rels. The remainder of\n> the patch attempted that.\n>\n> FWIW, I hacked together a patch which fixes the bug by passing a\n> Bitmapset ** pointer to get_singleton_append_subpath(), which set the\n> bits for the Append / MergeAppend path's partitioned_rels that we get\n> rid of when it only has a single subpath. This stops the Assert\n> failure mentioned here. However, I'd much rather explore getting rid\n> of partitioned_rels completely. I'll now have a look at the patch\n> you're proposing for that.\n\nI've read Tom's patch (0001) and would definitely vote for that over\nhaving partitioned_rels in Paths anymore.\n\n> Thanks for investigating this and writing the patch.\n\n+1. My apologies as well for failing to notice this thread sooner.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Feb 2021 11:46:15 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "On Sun, 31 Jan 2021 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> For simplicity of review I divided the patch into two parts.\n> 0001 revises make_partition_pruneinfo() and children to identify\n> the relevant parent partitions for themselves, which is not too\n> hard to do by chasing up the child-to-parent AppendRelInfo links.\n> Not formerly documented, AFAICT, is that we want to collect just\n> the parent partitions that are between the Append path's own rel\n> (possibly equal to it) and the subpaths' rels. I'd first tried\n> to code this by using the top_parent_relids and part_rels[] links\n> in the RelOptInfos, but that turns out not to work. We might\n> ascend to a top parent that's above the Append's rel (if considering\n> an appendpath for a sub-partition, which happens in partitionwise\n> join). We could also descend to a child at or below some subpath\n> level, since there are also cases where subpaths correspond to\n> unflattened non-leaf partitions. Either of those things result\n> in failures. But once you wrap your head around handling those\n> restrictions, it's quite simple.\n\nI had a look at this one and it all makes sense and the logic for\nobtaining the lineage of parent partitioned tables seems fine.\n\nWhat I can't understand is why you changed to a List-of-Lists rather\nthan a List-of-Relids. This makes the patch both bigger than it needs\nto be and the processing quite a bit less efficient.\n\nFor example, in make_partition_pruneinfo() when you're walking up to\nthe top-level target partitioned table you must lcons() each new list\nmember to ensure the children always come after the parents. These\nchains are likely to be short so the horrible overheads of the\nmemmove() in lcons() won't cost that much, but there's more to it.\nThe other seemingly needlessly slow part is in the\nlist_concat_unique_ptr() call. This needs to be done for every subpath\nin the Append/Merge append. 
It would be good to get rid of that.\nGranted, the lists are most likely going to be small, but that's still a\nquadratic function, so it seems like a good idea to try to avoid using\nit if there is another way to do it.\n\nThe memory allocations could also be more efficient for Relids rather\nthan Lists. Since we're working up from the child to the parent in\nthe lineage calculation code in make_partition_pruneinfo(), we'll\nalways allocate the Bitmapset to the correct size right away, rather\nthan possibly having to increase the size if the next RT index were\nnot to fit in the current number of words. Of course, with a\nList-of-Lists, not every lcons() would require a new allocation, but\nthere's an above zero chance that it might.\n\nI've attached a version of your 0001 patch which just maintains using\na List-of-Relids. This shrinks the diff down about 3kb.\n\nParent RT indexes are guaranteed to be lower than their children's RT\nindexes, so it's pretty simple to figure out the target RT index by\njust looking at the lowest set bit. Doing it this way also simplifies\nthings as add_part_rel_list() no longer must insist that the sublists\nare in parent-to-child order.\n\nDavid",
"msg_date": "Mon, 1 Feb 2021 18:36:02 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> What I can't understand is why you changed to a List-of-Lists rather\n> than a List-of-Relids.\n\nYeah, I spent no effort on micro-optimizing the data structure. I figured\nthat since we were not including leaf partitions, there would never be\nenough rels involved to worry about. Perhaps that's wrong though.\n\n> Parent RT indexes are guaranteed to be lower than their children RT\n> indexes,\n\nI was intentionally avoiding that assumption ;-). Maybe it buys enough\nto be worth the loss of generality, but ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Feb 2021 00:48:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "I wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> Parent RT indexes are guaranteed to be lower than their children RT\n>> indexes,\n\n> I was intentionally avoiding that assumption ;-). Maybe it buys enough\n> to be worth the loss of generality, but ...\n\nOh, it's too late at night. I now remember that the real problem\nI had with that representation was that it cannot work for joinrels.\nCurrently we only apply this logic to partitioned baserels, but\ndon't you think it might someday be called on to optimize\npartitionwise joins?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Feb 2021 00:57:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "On Mon, 1 Feb 2021 at 18:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> >> Parent RT indexes are guaranteed to be lower than their children RT\n> >> indexes,\n>\n> > I was intentionally avoiding that assumption ;-). Maybe it buys enough\n> > to be worth the loss of generality, but ...\n>\n> Oh, it's too late at night. I now remember that the real problem\n> I had with that representation was that it cannot work for joinrels.\n> Currently we only apply this logic to partitioned baserels, but\n> don't you think it might someday be called on to optimize\n> partitionwise joins?\n\nI've not looked in detail, but I think the code would need a pretty\nbig overhaul before that could happen. For example, ever since we\nallowed ATTACH PARTITION to work without taking an AEL we now have a\nPartitionedRelPruneInfo.relid_map field that stores Oids for the\nexecutor to look at to see if it can figure out if a partition has\nbeen added since the plan was generated. Not sure how that can work\nwith non-base rels as we have no Oid for join rels. Perhaps I'm just\nnot thinking hard enough, but either way, it does seem like it would\ntake a pretty big hit with a hammer to make it work. My current\nthinking is that being unable to represent join rels in a set of\nRelids is fairly insignificant compared to what would be required to\nget the feature to work correctly.\n\nDavid\n\n\n",
"msg_date": "Mon, 1 Feb 2021 19:20:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Mon, 1 Feb 2021 at 18:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oh, it's too late at night. I now remember that the real problem\n>> I had with that representation was that it cannot work for joinrels.\n>> Currently we only apply this logic to partitioned baserels, but\n>> don't you think it might someday be called on to optimize\n>> partitionwise joins?\n\n> I've not looked in detail, but I think the code would need a pretty\n> big overhaul before that could happen.\n\nYeah, there's no doubt that a lot of other work would be needed too;\nI'd just figured that maybe this code could be ready for it. But on\nlooking at it again, I agree that the bitmapset approach is simpler\nand cheaper than what I did. I realized that a good bit of what\nI did not like about the pre-existing logic is that it was calling\nthe values \"Relids\" rather than Bitmapsets. To my mind, the Relids\ntypedef is meant to denote values that identify individual relations\n(which might be joins) to the planner. The values we're dealing with\nin this code do *not* correspond to anything that is or could be a\nRelOptInfo; rather they are sets of distinct relations. It's fine\nto use a Bitmapset if it's a convenient representation, but we should\ncall it that and not a Relids.\n\nI renamed things that way, did some more work on the comments,\nand pushed it. Thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Feb 2021 14:58:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
},
{
"msg_contents": "On Tue, 2 Feb 2021 at 08:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I renamed things that way, did some more work on the comments,\n> and pushed it. Thanks for reviewing!\n\nThanks for working on this and coming up with the idea to nuke partitioned_rels.\n\nDavid\n\n\n",
"msg_date": "Tue, 2 Feb 2021 10:26:23 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Failed assertion during partition pruning"
}
] |
[
{
"msg_contents": "Hi all.\n\nI often fall into error like this:\n\nDBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: timestamp out of range\nCONTEXT: SQL function \"accounting_ready\" statement 1 [for Statement \"SELECT COUNT( * ) FROM (WITH\ntarget_date AS ( SELECT ?::timestamptz ),\ntarget_order as (\n SELECT\n invoice_range as bill_range,\n o.*\n FROM ( SELECT * FROM \"order_bt\" WHERE sys_period @> sys_time() ) o\n LEFT JOIN period prd on prd.id = o.period_id\n LEFT JOIN accounting_ready(\n.....\nother 200 lines of query\n\nWould be nice if here also will be reported error value.\nIt will shed more light on what is comming wrong\n\nAlso would be useful if PG point at query where this bad value was calculated or occur.\n\nIt this possible?\n\nThank you.\n\n\n-- \nBest regards,\nEugen Konkov\nFeature Request: Report additionally error value\n\n\nHi all.\n\nI often fall into error like this:\n\nDBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: timestamp out of range\nCONTEXT: SQL function \"accounting_ready\" statement 1 [for Statement \"SELECT COUNT( * ) FROM (WITH\ntarget_date AS ( SELECT ?::timestamptz ),\ntarget_order as (\n SELECT\n invoice_range as bill_range,\n o.*\n FROM ( SELECT * FROM \"order_bt\" WHERE sys_period @> sys_time() ) o\n LEFT JOIN period prd on prd.id = o.period_id\n LEFT JOIN accounting_ready(\n.....\nother 200 lines of query\n\nWould be nice if here also will be reported error value.\nIt will shed more light on what is comming wrong\n\nAlso would be useful if PG point at query where this bad value was calculated or occur.\n\nIt this possible?\n\nThank you.\n\n\n--\nBest regards,\nEugen Konkov",
"msg_date": "Sun, 29 Nov 2020 00:57:58 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Feature Request: Report additionally error value"
},
{
"msg_contents": "On Sat, Nov 28, 2020 at 4:18 PM Eugen Konkov <kes-kes@yandex.ru> wrote:\n\n> Hi all.\n>\n> I often fall into error like this:\n>\n> DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st\n> execute failed: ERROR: timestamp out of range\n> CONTEXT: SQL function \"accounting_ready\" statement 1 [for Statement\n> \"SELECT COUNT( * ) FROM (WITH\n> target_date AS ( SELECT ?::timestamptz ),\n> target_order as (\n> SELECT\n> invoice_range as bill_range,\n> o.*\n> FROM ( SELECT * FROM \"order_bt\" WHERE sys_period @> sys_time() ) o\n> LEFT JOIN period prd on prd.id = o.period_id\n> LEFT JOIN accounting_ready(\n> .....\n> other 200 lines of query\n>\n> Would be nice if here also will be reported error value.\n>\n\nIn this specific situation timestamp input does report the problematic text\nwhen text is provided (timestamp_in) but both the output and\n\"receive/binary\" routines simply provide the reported error.\n\nUnless you are doing some kind of ETL with that query I'd bet money\nwhatever perl is sending along for that input parameter is being sent in\nbinary and is incompatible with PostgreSQL's allowed timestamp range.\n\nOtherwise, yes, sometimes you just need to debug your data (though usually\ndata is in text format and the error includes the problematic string).\n\n\n> Also would be useful if PG point at query where this bad value was\n> calculated or occur.\n>\n\nThis is not the first time we've seen this request and it usually ends up\ngetting stalled because its non-trivial to implement and thus isn't\nfeasible for the benefit it brings. 
In short, the text input parsing\nroutines are decoupled from the queries where they are used.\n\nDavid J.\n\nOn Sat, Nov 28, 2020 at 4:18 PM Eugen Konkov <kes-kes@yandex.ru> wrote:\n\nHi all.\n\nI often fall into error like this:\n\nDBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: timestamp out of range\nCONTEXT: SQL function \"accounting_ready\" statement 1 [for Statement \"SELECT COUNT( * ) FROM (WITH\ntarget_date AS ( SELECT ?::timestamptz ),\ntarget_order as (\n SELECT\n invoice_range as bill_range,\n o.*\n FROM ( SELECT * FROM \"order_bt\" WHERE sys_period @> sys_time() ) o\n LEFT JOIN period prd on prd.id = o.period_id\n LEFT JOIN accounting_ready(\n.....\nother 200 lines of query\n\nWould be nice if here also will be reported error value.In this specific situation timestamp input does report the problematic text when text is provided (timestamp_in) but both the output and \"receive/binary\" routines simply provide the reported error.Unless you are doing some kind of ETL with that query I'd bet money whatever perl is sending along for that input parameter is being sent in binary and is incompatible with PostgreSQL's allowed timestamp range.Otherwise, yes, sometimes you just need to debug your data (though usually data is in text format and the error includes the problematic string).\n\nAlso would be useful if PG point at query where this bad value was calculated or occur.This is not the first time we've seen this request and it usually ends up getting stalled because its non-trivial to implement and thus isn't feasible for the benefit it brings. In short, the text input parsing routines are decoupled from the queries where they are used.David J.",
"msg_date": "Sat, 28 Nov 2020 17:08:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request: Report additionally error value"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sat, Nov 28, 2020 at 4:18 PM Eugen Konkov <kes-kes@yandex.ru> wrote:\n>> I often fall into error like this:\n>> DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st\n>> execute failed: ERROR: timestamp out of range\n>> Would be nice if here also will be reported error value.\n\n> In this specific situation timestamp input does report the problematic text\n> when text is provided (timestamp_in) but both the output and\n> \"receive/binary\" routines simply provide the reported error.\n\nA quick look through the source code says that the places where we report\nthat message without providing a value are mostly places where\ntimestamp2tm or the like has failed. That means we *can't* produce a\nvalue that will mean anything to a human; the range checks on timestamps\nare closely associated with the limitations of the calendar conversion\ncode.\n\n> Unless you are doing some kind of ETL with that query I'd bet money\n> whatever perl is sending along for that input parameter is being sent in\n> binary and is incompatible with PostgreSQL's allowed timestamp range.\n\nYeah. It's not that easy to get an out-of-range timestamp into Postgres,\nso a garbage value being sent in binary seems like a pretty likely\nexplanation. It's not the only explanation, but you'd have to be doing a\nfairly out-of-the-ordinary calculation, like\n\n# select now() + interval '1000000 years';\nERROR: timestamp out of range\n\n>> Also would be useful if PG point at query where this bad value was\n>> calculated or occur.\n\n> This is not the first time we've seen this request and it usually ends up\n> getting stalled because its non-trivial to implement and thus isn't\n> feasible for the benefit it brings.\n\nI do still have ambitions to make that happen, but you're right that it's\na major undertaking. Don't hold your breath.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Nov 2020 19:47:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request: Report additionally error value"
},
{
"msg_contents": "Hello Tom,\n\n>>> Also would be useful if PG point at query where this bad value was\n>>> calculated or occur.\n\n>> This is not the first time we've seen this request and it usually ends up\n>> getting stalled because its non-trivial to implement and thus isn't\n>> feasible for the benefit it brings.\n\n> I do still have ambitions to make that happen, but you're right that it's\n> a major undertaking. Don't hold your breath.\n\nAnother case posted below.\n\nHere in database are values which could not construct correct daterange.\nThere are thousands of rows. It is hard to find even a way or make an assumption \nhow to get record which cause this error.\n\nIt will be very impressive if database could report the line at database where error is occur.\nVia HINT for example. \nIf this is not possible, then current cursor position or raw data dump in debug mode.\nProbably some memory buffers or so.\n\nAny additional info would be helpful.\n\nThank you.\n\n\nDBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: range lower bound must be less than or equal to range upper bound\rCONTEXT: SQL function \"accounting_ready\" statement 1 [for Statement \"SELECT COUNT( * ) FROM (( WITH\rtarget_date AS ( SELECT ?::timestamptz ),\rtarget_order as (\r SELECT\r usage_range as bill_range,\r o.*\r FROM ( SELECT * FROM \"order_bt\" WHERE sys_period @> sys_time() ) o\r LEFT JOIN period prd on prd.id = o.period_id\r LEFT JOIN accounting_ready(\r CASE WHEN prd.INTERVAL = '1 mon' THEN date_trunc( 'month', o.docdate ) ELSE o.docdate END,\r prd.INTERVAL, (SELECT * FROM target_date)\r ) acc ON true\r WHERE FALSE\r OR ? = 0\r OR ? = 1 AND o.id = ?\r AND acc.usage AND EXISTS (\r SELECT * FROM order_bt prev_order\r WHERE sys_period @> sys_time() AND\r prev_order.id = o.id AND prev_order.app_period && acc.usage_range\r )\r AND o.app_period && acc.usage_range\r OR ? = 2 AND o.agreement_id = ? 
and o.period_id = ?\r AND acc.usage AND EXISTS (\r SELECT * FROM order_bt prev_order\r WHERE sys_period @> sys_time() AND\r prev_order.id = o.id AND prev_order.app_period && acc.usage_range\r )\r AND o.app_period && acc.usage_range\r OR ? = 3\r),\rUSAGE AS ( SELECT\r (ocd.o).id as order_id,\r (dense_rank() over (PARTITION BY o.agreement_id, o.id ORDER BY (ocd.ic).consumed_period )) as group_id,\r (ocd.c).id as detail_id,\r (ocd.c).service_type_id as service_type,\r (ocd.c).resource_type_id as detail_type,\r (ocd.c).amount as detail_amount,\r (ocd.c).allocated_resource_id as resource_id,\r null as resource_uuid,\r null as resource_desc,\r rt.unit as resource_unit,\r -- count changes. Logic is next: How many times configration for this order is met at this period\r count(*) OVER (PARTITION BY (ocd.o).id, (ocd.c).id ) as consumed_count,\r (ocd.ic).consumed as consumed_days,\r null as consumed_amount,\r null as consumed_last,\r\r a.id as agreement_id,\r coalesce( substring( a.docn from '^A?(.*?)(?:\\s*-[^-]*)?$' ), a.docn ) as agreement,\r a.docdate as docdate,\r\r pkg.id as package_id,\r pkg.link_1c_id as package_1c_id,\r coalesce( pkg.display, pkg.name ) as package,\r\r coalesce( st.display, st.name, rt.display, rt.name ) as sr_name,\r COALESCE( (ocd.p).label, rt.LABEL ) as unit_label,\r\r coalesce( (ocd.c).sort_order, pd.sort_order ) as sort_order,\r\r ocd.item_price AS price,\r -- We want to display QTY for resources too\r coalesce( ocd.item_qty, (ocd.c).amount/rt.unit ) AS qty,\r 0 AS month_length,\r 0 AS days_count,\r o.bill_range,\r lower( (ocd.ic).consumed_period ) as consumed_from,\r upper( (ocd.ic).consumed_period ) -interval '1sec' as consumed_till,\r\r\r ocd.item_suma,\r\r 0 as discount,\r (sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2) AS group_suma,\r (sum( ocd.item_cost ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2) AS order_cost\rFROM target_order o\rLEFT JOIN 
order_cost_details( o.bill_range ) ocd\r ON (ocd.o).id = o.id AND (ocd.ic).consumed_period && o.app_period\r LEFT JOIN agreement a ON a.id = o.agreement_id\r LEFT JOIN package pkg ON pkg.id = o.package_id\r LEFT JOIN package_detail pd ON pd.package_id = (ocd.o).package_id\r AND pd.resource_type_id IS NOT DISTINCT FROM (ocd.c).resource_type_id\r AND pd.service_type_id IS NOT DISTINCT FROM (ocd.c).service_type_id\r LEFT JOIN resource_type rt ON rt.id = (ocd.c).resource_type_id\r LEFT JOIN service_type st on st.id = (ocd.c).service_type_id\r)\r\rSELECT *,\r (group_suma/6) ::numeric( 10, 2 ) as group_nds,\r (SELECT sum(x) from (SELECT sum( DISTINCT group_suma ) AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_suma,\r (SELECT sum(x) from (SELECT sum( DISTINCT (group_suma/6)::numeric( 10, 2 ) ) AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_nds\rFROM usage\rwhere ? <> 3 OR consumed_count > 1\rORDER BY\r /* put order first then allocated resource without Order */\r agreement,\r order_id,\r bill_range,\r group_id,\r sort_order nulls last,\r detail_type nulls last,\r price desc nulls last,\r detail_amount desc,\r service_type nulls last,\r detail_id\r)\r) \"me\"\" with ParamValues: 1='2020-08-01', 2='2', 3='2', 4=undef, 5='2', 6='3493', 7='10', 8='2', 9='2'] at /home/kes/work/projects/tucha/monkeyman/lib/MaitreD/Controller/Cart.pm line 828\n\n\n-- \nBest regards,\nEugen Konkov\nRe: Feature Request: Report additionally error value\n\n\nHello Tom,\n\n>>> Also would be useful if PG point at query where this bad value was\n>>> calculated or occur.\n\n>> This is not the first time we've seen this request and it usually ends up\n>> getting stalled because its non-trivial to implement and thus isn't\n>> feasible for the benefit it brings.\n\n> I do still have ambitions to make that happen, but you're right that it's\n> a major undertaking. 
Don't hold your breath.\n\nAnother case posted below.\n\nHere in database are values which could not construct correct daterange.\nThere are thousands of rows. It is hard to find even a way or make an assumption\nhow to get record which cause this error.\n\nIt will be very impressive if database could report the line at database where error is occur.\nVia HINT for example.\nIf this is not possible, then current cursor position or raw data dump in debug mode.\nProbably some memory buffers or so.\n\nAny additional info would be helpful.\n\nThank you.\n\n\nDBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: range lower bound must be less than or equal to range upper bound?CONTEXT: SQL function \"accounting_ready\" statement 1 [for Statement \"SELECT COUNT( * ) FROM (( WITH?target_date AS ( SELECT ?::timestamptz ),?target_order as (? SELECT? usage_range as bill_range,? o.*? FROM ( SELECT * FROM \"order_bt\" WHERE sys_period @> sys_time() ) o? LEFT JOIN period prd on prd.id = o.period_id? LEFT JOIN accounting_ready(? CASE WHEN prd.INTERVAL = '1 mon' THEN date_trunc( 'month', o.docdate ) ELSE o.docdate END,? prd.INTERVAL, (SELECT * FROM target_date)? ) acc ON true? WHERE FALSE? OR ? = 0? OR ? = 1 AND o.id = ?? AND acc.usage AND EXISTS (? SELECT * FROM order_bt prev_order? WHERE sys_period @> sys_time() AND? prev_order.id = o.id AND prev_order.app_period && acc.usage_range? )? AND o.app_period && acc.usage_range? OR ? = 2 AND o.agreement_id = ? and o.period_id = ?? AND acc.usage AND EXISTS (? SELECT * FROM order_bt prev_order? WHERE sys_period @> sys_time() AND? prev_order.id = o.id AND prev_order.app_period && acc.usage_range? )? AND o.app_period && acc.usage_range? OR ? = 3?),?USAGE AS ( SELECT? (ocd.o).id as order_id,? (dense_rank() over (PARTITION BY o.agreement_id, o.id ORDER BY (ocd.ic).consumed_period )) as group_id,? (ocd.c).id as detail_id,? (ocd.c).service_type_id as service_type,? 
(ocd.c).resource_type_id as detail_type,? (ocd.c).amount as detail_amount,? (ocd.c).allocated_resource_id as resource_id,? null as resource_uuid,? null as resource_desc,? rt.unit as resource_unit,? -- count changes. Logic is next: How many times configration for this order is met at this period? count(*) OVER (PARTITION BY (ocd.o).id, (ocd.c).id ) as consumed_count,? (ocd.ic).consumed as consumed_days,? null as consumed_amount,? null as consumed_last,?? a.id as agreement_id,? coalesce( substring( a.docn from '^A?(.*?)(?:\\s*-[^-]*)?$' ), a.docn ) as agreement,? a.docdate as docdate,?? pkg.id as package_id,? pkg.link_1c_id as package_1c_id,? coalesce( pkg.display, pkg.name ) as package,?? coalesce( st.display, st.name, rt.display, rt.name ) as sr_name,? COALESCE( (ocd.p).label, rt.LABEL ) as unit_label,?? coalesce( (ocd.c).sort_order, pd.sort_order ) as sort_order,?? ocd.item_price AS price,? -- We want to display QTY for resources too? coalesce( ocd.item_qty, (ocd.c).amount/rt.unit ) AS qty,? 0 AS month_length,? 0 AS days_count,? o.bill_range,? lower( (ocd.ic).consumed_period ) as consumed_from,? upper( (ocd.ic).consumed_period ) -interval '1sec' as consumed_till,??? ocd.item_suma,?? 0 as discount,? (sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2) AS group_suma,? (sum( ocd.item_cost ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2) AS order_cost?FROM target_order o?LEFT JOIN order_cost_details( o.bill_range ) ocd? ON (ocd.o).id = o.id AND (ocd.ic).consumed_period && o.app_period? LEFT JOIN agreement a ON a.id = o.agreement_id? LEFT JOIN package pkg ON pkg.id = o.package_id? LEFT JOIN package_detail pd ON pd.package_id = (ocd.o).package_id? AND pd.resource_type_id IS NOT DISTINCT FROM (ocd.c).resource_type_id? AND pd.service_type_id IS NOT DISTINCT FROM (ocd.c).service_type_id? LEFT JOIN resource_type rt ON rt.id = (ocd.c).resource_type_id? 
LEFT JOIN service_type st on st.id = (ocd.c).service_type_id?)??SELECT *,? (group_suma/6) ::numeric( 10, 2 ) as group_nds,? (SELECT sum(x) from (SELECT sum( DISTINCT group_suma ) AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_suma,? (SELECT sum(x) from (SELECT sum( DISTINCT (group_suma/6)::numeric( 10, 2 ) ) AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_nds?FROM usage?where ? <> 3 OR consumed_count > 1?ORDER BY? /* put order first then allocated resource without Order */? agreement,? order_id,? bill_range,? group_id,? sort_order nulls last,? detail_type nulls last,? price desc nulls last,? detail_amount desc,? service_type nulls last,? detail_id?)?) \"me\"\" with ParamValues: 1='2020-08-01', 2='2', 3='2', 4=undef, 5='2', 6='3493', 7='10', 8='2', 9='2'] at /home/kes/work/projects/tucha/monkeyman/lib/MaitreD/Controller/Cart.pm line 828\n\n\n--\nBest regards,\nEugen Konkov",
"msg_date": "Tue, 8 Dec 2020 15:43:13 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Feature Request: Report additionally error value"
},
{
"msg_contents": "I format query to read it more easy. See attachment\n\nTuesday, December 8, 2020, 3:43:13 PM, you wrote:\n\n\nHello Tom,\n\n>>> Also would be useful if PG point at query where this bad value was\n>>> calculated or occur.\n\n>> This is not the first time we've seen this request and it usually ends up\n>> getting stalled because its non-trivial to implement and thus isn't\n>> feasible for the benefit it brings.\n\n> I do still have ambitions to make that happen, but you're right that it's\n> a major undertaking. Don't hold your breath.\n\nAnother case posted below.\n\nHere in database are values which could not construct correct daterange.\nThere are thousands of rows. It is hard to find even a way or make an assumption\nhow to get record which cause this error.\n\nIt will be very impressive if database could report the line at database where error is occur.\nVia HINT for example.\nIf this is not possible, then current cursor position or raw data dump in debug mode.\nProbably some memory buffers or so.\n\nAny additional info would be helpful.\n\nThank you.\n\n\nDBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: range lower bound must be less than or equal to range upper bound?CONTEXT: SQL function \"accounting_ready\" statement 1 [for Statement \"SELECT COUNT( * ) FROM (( WITH?target_date AS ( SELECT ?::timestamptz ),?target_order as (? SELECT? usage_range as bill_range,? o.*? FROM ( SELECT * FROM \"order_bt\" WHERE sys_period @> sys_time() ) o? LEFT JOIN period prd on prd.id = o.period_id? LEFT JOIN accounting_ready(? CASE WHEN prd.INTERVAL = '1 mon' THEN date_trunc( 'month', o.docdate ) ELSE o.docdate END,? prd.INTERVAL, (SELECT * FROM target_date)? ) acc ON true? WHERE FALSE? OR ? = 0? OR ? = 1 AND o.id = ?? AND acc.usage AND EXISTS (? SELECT * FROM order_bt prev_order? WHERE sys_period @> sys_time() AND? prev_order.id = o.id AND prev_order.app_period && acc.usage_range? )? 
AND o.app_period && acc.usage_range? OR ? = 2 AND o.agreement_id = ? and o.period_id = ?? AND acc.usage AND EXISTS (? SELECT * FROM order_bt prev_order? WHERE sys_period @> sys_time() AND? prev_order.id = o.id AND prev_order.app_period && acc.usage_range? )? AND o.app_period && acc.usage_range? OR ? = 3?),?USAGE AS ( SELECT? (ocd.o).id as order_id,? (dense_rank() over (PARTITION BY o.agreement_id, o.id ORDER BY (ocd.ic).consumed_period )) as group_id,? (ocd.c).id as detail_id,? (ocd.c).service_type_id as service_type,? (ocd.c).resource_type_id as detail_type,? (ocd.c).amount as detail_amount,? (ocd.c).allocated_resource_id as resource_id,? null as resource_uuid,? null as resource_desc,? rt.unit as resource_unit,? -- count changes. Logic is next: How many times configration for this order is met at this period? count(*) OVER (PARTITION BY (ocd.o).id, (ocd.c).id ) as consumed_count,? (ocd.ic).consumed as consumed_days,? null as consumed_amount,? null as consumed_last,?? a.id as agreement_id,? coalesce( substring( a.docn from '^A?(.*?)(?:\\s*-[^-]*)?$' ), a.docn ) as agreement,? a.docdate as docdate,?? pkg.id as package_id,? pkg.link_1c_id as package_1c_id,? coalesce( pkg.display, pkg.name ) as package,?? coalesce( st.display, st.name, rt.display, rt.name ) as sr_name,? COALESCE( (ocd.p).label, rt.LABEL ) as unit_label,?? coalesce( (ocd.c).sort_order, pd.sort_order ) as sort_order,?? ocd.item_price AS price,? -- We want to display QTY for resources too? coalesce( ocd.item_qty, (ocd.c).amount/rt.unit ) AS qty,? 0 AS month_length,? 0 AS days_count,? o.bill_range,? lower( (ocd.ic).consumed_period ) as consumed_from,? upper( (ocd.ic).consumed_period ) -interval '1sec' as consumed_till,??? ocd.item_suma,?? 0 as discount,? (sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2) AS group_suma,? 
(sum( ocd.item_cost ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2) AS order_cost?FROM target_order o?LEFT JOIN order_cost_details( o.bill_range ) ocd? ON (ocd.o).id = o.id AND (ocd.ic).consumed_period && o.app_period? LEFT JOIN agreement a ON a.id = o.agreement_id? LEFT JOIN package pkg ON pkg.id = o.package_id? LEFT JOIN package_detail pd ON pd.package_id = (ocd.o).package_id? AND pd.resource_type_id IS NOT DISTINCT FROM (ocd.c).resource_type_id? AND pd.service_type_id IS NOT DISTINCT FROM (ocd.c).service_type_id? LEFT JOIN resource_type rt ON rt.id = (ocd.c).resource_type_id? LEFT JOIN service_type st on st.id = (ocd.c).service_type_id?)??SELECT *,? (group_suma/6) ::numeric( 10, 2 ) as group_nds,? (SELECT sum(x) from (SELECT sum( DISTINCT group_suma ) AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_suma,? (SELECT sum(x) from (SELECT sum( DISTINCT (group_suma/6)::numeric( 10, 2 ) ) AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_nds?FROM usage?where ? <> 3 OR consumed_count > 1?ORDER BY? /* put order first then allocated resource without Order */? agreement,? order_id,? bill_range,? group_id,? sort_order nulls last,? detail_type nulls last,? price desc nulls last,? detail_amount desc,? service_type nulls last,? detail_id?)?) \"me\"\" with ParamValues: 1='2020-08-01', 2='2', 3='2', 4=undef, 5='2', 6='3493', 7='10', 8='2', 9='2'] at /home/kes/work/projects/tucha/monkeyman/lib/MaitreD/Controller/Cart.pm line 828\n\n\n--\nBest regards,\nEugen Konkov\n\n\n\n-- \nBest regards,\nEugen Konkov",
"msg_date": "Tue, 8 Dec 2020 15:46:44 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Feature Request: Report additionally error value"
}
] |
[
{
"msg_contents": "Hi,\n\ntesting with sqlsmith on master at 3df51ca8b3 produced one instance of\nthe following error:\n\n ERROR: failed to build any 6-way joins\n\nI can reproduce it on a fresh regression database with the query below.\nThese were last logged in 2015. Back then, it resulted in this commit:\n\n http://git.postgresql.org/pg/commitdiff/cfe30a72fa80528997357cb0780412736767e8c4\n\nregards,\nAndreas\n\nselect * from\n (select sample_1.a as c0\n from fkpart5.fk2 as sample_1) as subq_0,\n lateral (select 1\n from\n (select\n subq_0.c0 as c3,\n subq_5.c0 as c7,\n sample_2.b as c9\n from\n public.brin_test as sample_2,\n lateral (select\n subq_3.c1 as c0\n from\n fkpart5.pk3 as sample_3,\n lateral (select\n sample_2.a as c0,\n sample_3.a as c1\n from\n public.rtest_interface as ref_0\n ) as subq_1,\n lateral (select\n subq_1.c1 as c1\n from\n public.alter_table_under_transition_tables as ref_1\n ) as subq_3\n ) as subq_5) as subq_6\n right join public.gtest30_1 as sample_6\n on (true)\n where subq_6.c7 = subq_6.c3) as subq_7;\n\n\n",
"msg_date": "Sun, 29 Nov 2020 19:40:50 +0100",
"msg_from": "Andreas Seltenreich <seltenreich@gmx.de>",
"msg_from_op": true,
"msg_subject": "[sqlsmith] Planner error on lateral joins"
},
{
"msg_contents": "Andreas Seltenreich <seltenreich@gmx.de> writes:\n> testing with sqlsmith on master at 3df51ca8b3 produced one instance of\n> the following error:\n> ERROR: failed to build any 6-way joins\n\nThanks for the test case! The attached modification to use only\nlongstanding test tables fails back to 9.5, but succeeds in 9.4.\nI've not tried to bisect yet.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 29 Nov 2020 15:39:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Planner error on lateral joins"
},
{
"msg_contents": "I wrote:\n> Andreas Seltenreich <seltenreich@gmx.de> writes:\n>> testing with sqlsmith on master at 3df51ca8b3 produced one instance of\n>> the following error:\n>> ERROR: failed to build any 6-way joins\n\n> Thanks for the test case! The attached modification to use only\n> longstanding test tables fails back to 9.5, but succeeds in 9.4.\n> I've not tried to bisect yet.\n\nSeems to be a missed consideration in add_placeholders_to_joinrel.\nThe attached fixes it for me.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 29 Nov 2020 21:18:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Planner error on lateral joins"
}
] |
[
{
"msg_contents": "Hi,\n\nyesterday's sqlsmith run on master at 3df51ca8b3 also yielded 4855\nqueries failing with ERRORs like the following:\n\n,----\n| ERROR: subplan \"InitPlan 3 (returns $3)\" was not initialized\n| CONTEXT: parallel worker\n`----\n\nThey all run fine with parallel scans off. Below is a recipe that\nreproduces one for me on a fresh regression database.\n\nregards,\nAndreas\n\nset min_parallel_table_scan_size = '8kB';\nset parallel_setup_cost = 1;\nselect\n sample_0.a as c0,\n sample_0.a as c1,\n pg_catalog.pg_stat_get_bgwriter_requested_checkpoints() as c2,\n sample_0.b as c3\nfrom\n public.itest6 as sample_0\nwhere EXISTS (\n select\n cast(coalesce(cast(nullif(sample_1.z,\n sample_1.x) as int4),\n sample_1.x) as int4) as c0,\n pg_catalog.macaddr_cmp(\n cast(case when false then cast(null as macaddr) else cast(null as macaddr) end\n as macaddr),\n cast(cast(null as macaddr) as macaddr)) as c1\n from\n public.insert_tbl as sample_1\n where (select timestamptzcol from public.brintest limit 1 offset 29)\n < pg_catalog.date_larger(\n cast((select d from public.nv_child_2011 limit 1 offset 6)\n as date),\n cast((select pg_catalog.min(filler3) from public.mcv_lists)\n as date)));\n\n\n",
"msg_date": "Sun, 29 Nov 2020 20:58:20 +0100",
"msg_from": "Andreas Seltenreich <seltenreich@gmx.de>",
"msg_from_op": true,
"msg_subject": "[sqlsmith] parallel worker errors \"subplan ... was not initialized\""
},
{
"msg_contents": "Andreas Seltenreich <seltenreich@gmx.de> writes:\n> yesterday's sqlsmith run on master at 3df51ca8b3 also yielded 4855\n> queries failing with ERRORs like the following:\n\n> | ERROR: subplan \"InitPlan 3 (returns $3)\" was not initialized\n> | CONTEXT: parallel worker\n\nThis is somewhat of a known issue [1], but thanks for the additional\ntest case!\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/622580997.37108180.1604080457319.JavaMail.zimbra%40siscobra.com.br\n\n\n",
"msg_date": "Sun, 29 Nov 2020 15:29:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] parallel worker errors \"subplan ... was not\n initialized\""
}
] |
[
{
"msg_contents": "Hi\r\n\r\nFound a possible typo in cost.h\r\n\r\n-/* If you change these, update backend/utils/misc/postgresql.sample.conf */\r\n+/* If you change these, update backend/utils/misc/postgresql.conf.sample */\r\n\r\n\r\nBest regards,\r\nTang",
"msg_date": "Mon, 30 Nov 2020 03:15:45 +0000",
"msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in cost.h"
},
{
"msg_contents": "\n\nOn 2020/11/30 12:15, Tang, Haiying wrote:\n> Hi\n> \n> Found a possible typo in cost.h\n> \n> -/* If you change these, update backend/utils/misc/postgresql.sample.conf */\n> +/* If you change these, update backend/utils/misc/postgresql.conf.sample */\n\nPushed. Thanks!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 30 Nov 2020 12:56:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in cost.h"
}
] |
[
{
"msg_contents": "Hi all\n\nWe have a dynamic_library_path setting to allow postgres to search\nadditional locations for loadable libraries (since 8.1, commit\n761a0bb69b). This permits libraries to be located in alternate\nlocations for loading by shared_preload_libraries, LOAD commands,\nimplicit loading via function calls etc. This searches '$libdir',\nPG_INSTALL_DIR/lib, by default, but can be modified by configuration.\n\nWhen extensions were introduced in 9.1 (d9572c4e3b) no search path\nsupport for extension control files and SQL scripts was added to match\ndynamic_library_path. The only place extension scripts and control\nfiles can be placed is PG_INSTALL_DIR/share/extension .\n\nPractically all loadable modules now come with extension control files\nand usually extension SQL scripts. In practice this means extensions\nare not relocatable.\n\nReasons to care are much the same reasons dynamic_library_path was\nadded. Specific use cases include:\n\n1. Running \"make check\" against a temporary instance of a postgres\ninstall (from packages or source) without needing permission to modify\nthe installation's lib/ and share/extension directories or install the\nextension before testing it.\n2. Letting postgres builds installed in non-default paths load\nextensions installed from public repositories like yum.postgresql.org,\nso long as the custom build can promise to be ABI-compatible with the\nmainline builds.\n\n\n------\n\nDetails:\n\n(1) is useful when building and testing extensions against a postgres\nthat was installed from packages, or when you want to build and check\na new or updated extension before installing it in the local postgres.\n\nRight now the extension must \"make install\" into the active postgres\ninstall first; we have to install the built code before testing it,\nwhich is backwards. 
And the extension tests need to have write privileges\nto lib/ and share/extension, which is undesirable.\n\nPGXS doesn't support a \"temp_install\" mode, so we can't \"make check\"\nthe same way we do for in-tree builds, and it doesn't IMO make a lot\nof sense to do so. It'd be much better to just allow pg_regress,\npg_isolation_regress and TAP tests to specify an extension_search_path\nwhen they initdb and start a new temp instance.\n\n(2) is useful for an alternative PostgreSQL build that's\nbinary-compatible with normal builds in a non-default path, while\ncontinuing to use extensions packaged from public repos. For example,\nto hot-fix a production issue with a patched build until the next\npostgres release includes the fix without overwriting packaged files.\nOr to trial a proposed upgrade before committing to it. Right now if\nyou want to install to a non-default path you have to recompile all\nyour extensions or manually copy files around. Not fun if you use\n(say) PostGIS.\n\nI'm mainly interested in the first use case.\n\n\n",
"msg_date": "Mon, 30 Nov 2020 11:23:29 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "RFC: extension_search_path to supplement dynamic_library_path"
}
] |
[
{
"msg_contents": "Hi all\n\nI recently wrote some notes on interaction between physical\nreplication failover/promotion and logical replication publisher\nand/or standby.\n\nAs you probably all know, right now we don't support physical failover\nfor logical replication publishers at all, either for in-core logical\nreplication or for 3rd party solutions. And while we support physical\nfailover and promotion of subscribers, there turn out to be some\nissues with that too.\n\nI'm not trying to solve those issues right now. But since there are\nvarious out-of-tree replication tools trying to work around these\nlimitations, I wanted to share my knowledge of the various hazards and\nchallenges involved in doing so, so I've written a wiki article on it.\n\nhttps://wiki.postgresql.org/wiki/Logical_replication_and_physical_standby_failover\n\nI tried to address many of these issues with failover slots, but I am\nnot trying to beat that dead horse now. I know that at least some\npeople here are of the opinion that effort shouldn't go into\nlogical/physical replication interoperation anyway - that we should\ninstead address the remaining limitations in logical replication so\nthat it can provide complete HA capabilities without use of physical\nreplication. So for now I'm just trying to save others who go looking\ninto these issues some time and warn them about some of the less\nobvious booby-traps.\n\nI do want to add some info to the logical decoding docs around slot\nfast-forward behaviour and how to write clients to avoid missing or\ndouble-processing transactions. I'd welcome advice on the best way to\ndo that in a manner that would be accepted by this community.\n\n\n",
"msg_date": "Mon, 30 Nov 2020 11:59:38 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Notes on physical replica failover with logical publisher or\n subscriber"
},
{
"msg_contents": "Hi Craig,\n\nOn 2020-11-30 06:59, Craig Ringer wrote:\n> \n> https://wiki.postgresql.org/wiki/Logical_replication_and_physical_standby_failover\n> \n\nThank you for sharing these notes. I have not dealt a lot with \nphysical/logical replication interoperability, so those were mostly new \nproblems for me to know.\n\nOne point from the wiki page, which seems clear enough to me:\n\n```\nLogical slots can fill pg_wal and can't benefit from archiving. Teach \nthe logical decoding page read callback how to use the restore_command \nto retrieve WAL segs temporarily if they're not found in pg_wal...\n```\n\nIt does not look like a big deal to teach logical decoding process to \nuse restore_command, but I have some doubts about how everything will \nperform in the case when we started getting WAL from archive for \ndecoding purposes. If we started using restore_command, then subscriber \nlagged long enough to exceed max_slot_wal_keep_size. Taking into account \nthat getting WAL files from the archive has an additional overhead and \nthat primary continues generating (and archiving) new segments, there is \na possibility for primary to start doing this double duty forever --- \narchive WAL file at first and get it back for decoding when requested.\n\nAnother problem is that there are maybe several active decoders, IIRC, \nso they would have better to communicate in order to avoid fetching the \nsame segment twice.\n\n> \n> I tried to address many of these issues with failover slots, but I am\n> not trying to beat that dead horse now. I know that at least some\n> people here are of the opinion that effort shouldn't go into\n> logical/physical replication interoperation anyway - that we should\n> instead address the remaining limitations in logical replication so\n> that it can provide complete HA capabilities without use of physical\n> replication. 
So for now I'm just trying to save others who go looking\n> into these issues some time and warn them about some of the less\n> obvious booby-traps.\n> \n\nAnother point to add regarding logical replication capabilities to build \nlogical-only HA system --- logical equivalent of pg_rewind. At least I \nhave not noticed anything after brief reading of the wiki page. IIUC, \ncurrently there is no way to quickly return ex-primary (ex-logical \npublisher) into HA-cluster without doing a pg_basebackup, isn't it? It \nseems that we should have the same problem here as with physical \nreplication --- ex-primary may accept some xacts after promotion of new \nprimary, so their history diverges and old primary should be rewound \nbefore being returned as standby (subscriber).\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Mon, 30 Nov 2020 20:34:21 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Notes on physical replica failover with logical publisher or\n subscriber"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 9:30 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> Hi all\n>\n> I recently wrote some notes on interaction between physical\n> replication failover/promotion and logical replication publisher\n> and/or standby.\n>\n> As you probably all know, right now we don't support physical failover\n> for logical replication publishers at all, either for in-core logical\n> replication or for 3rd party solutions. And while we support physical\n> failover and promotion of subscribers, there turn out to be some\n> issues with that too.\n>\n> I'm not trying to solve those issues right now. But since there are\n> various out-of-tree replication tools trying to work around these\n> limitations, I wanted to share my knowledge of the various hazards and\n> challenges involved in doing so, so I've written a wiki article on it.\n>\n> https://wiki.postgresql.org/wiki/Logical_replication_and_physical_standby_failover\n>\n> I tried to address many of these issues with failover slots, but I am\n> not trying to beat that dead horse now. I know that at least some\n> people here are of the opinion that effort shouldn't go into\n> logical/physical replication interoperation anyway - that we should\n> instead address the remaining limitations in logical replication so\n> that it can provide complete HA capabilities without use of physical\n> replication.\n>\n\nWhat is the advantage of using logical replication for HA over\nphysical replication? It sends more WAL in most cases and probably\nrequires more set up in terms of the publisher-subscriber. I see the\nmajor use as upgrades can be easier because it can provide\ncross-version replication and others as mentioned in docs [1]. 
I can\nalso imagine that one might want to use it for scale-out by carefully\nhaving transactions specific to different master nodes or probably by\nhaving a third-party transaction coordinator.\n\n> So for now I'm just trying to save others who go looking\n> into these issues some time and warn them about some of the less\n> obvious booby-traps.\n>\n> I do want to add some info to the logical decoding docs around slot\n> fast-forward behaviour and how to write clients to avoid missing or\n> double-processing transactions.\n>\n\n+1.\n\n> I'd welcome advice on the best way to\n> do that in a manner that would be accepted by this community.\n>\n\nWhy not enhance test_decoding apart from mentioning in docs to explain\nsuch concepts? It might help in some code-coverage as well depending\non what we want to cover.\n\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 18:47:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Notes on physical replica failover with logical publisher or\n subscriber"
}
] |
[
{
"msg_contents": "GCC 4.8.5 does not default to gnu99 or gnu11 like the newer versions.\n---\n src/common/logging.c | 3 ++-\n src/common/unicode_norm.c | 3 ++-\n 2 files changed, 4 insertions(+), 2 deletions(-)\n\ndiff --git a/src/common/logging.c b/src/common/logging.c\nindex f3fc0b8..2ac502a 100644\n--- a/src/common/logging.c\n+++ b/src/common/logging.c\n@@ -112,7 +112,8 @@ pg_logging_init(const char *argv0)\n \n \t\t\tif (colors)\n \t\t\t{\n-\t\t\t\tfor (char *token = strtok(colors, \":\"); token; token = strtok(NULL, \":\"))\n+\t\t\t\tchar *token;\n+\t\t\t\tfor (token = strtok(colors, \":\"); token; token = strtok(NULL, \":\"))\n \t\t\t\t{\n \t\t\t\t\tchar\t *e = strchr(token, '=');\n \ndiff --git a/src/common/unicode_norm.c b/src/common/unicode_norm.c\nindex ab5ce59..0de2e87 100644\n--- a/src/common/unicode_norm.c\n+++ b/src/common/unicode_norm.c\n@@ -519,6 +519,7 @@ unicode_is_normalized_quickcheck(UnicodeNormalizationForm form, const pg_wchar *\n {\n \tuint8\t\tlastCanonicalClass = 0;\n \tUnicodeNormalizationQC result = UNICODE_NORM_QC_YES;\n+\tconst pg_wchar *p;\n \n \t/*\n \t * For the \"D\" forms, we don't run the quickcheck. We don't include the\n@@ -530,7 +531,7 @@ unicode_is_normalized_quickcheck(UnicodeNormalizationForm form, const pg_wchar *\n \tif (form == UNICODE_NFD || form == UNICODE_NFKD)\n \t\treturn UNICODE_NORM_QC_MAYBE;\n \n-\tfor (const pg_wchar *p = input; *p; p++)\n+\tfor (p = input; *p; p++)\n \t{\n \t\tpg_wchar\tch = *p;\n \t\tuint8\t\tcanonicalClass;\n-- \n1.8.3.1\n\n\n\n",
"msg_date": "Sun, 29 Nov 2020 20:33:41 -0800",
"msg_from": "Rosen Penev <rosenp@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] fix compilation with gnu89"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-29 20:33:41 -0800, Rosen Penev wrote:\n> GCC 4.8.5 does not default to gnu99 or gnu11 like the newer versions.\n\nWe require C99 support since postgres 12, and configure should end up\nchoosing flags to make the compiler support that if possible.\n\nAs far as I can tell the code you changed doesn't actually exist in <=\n11, so I don't see what problem it'd fix. And for newer branches you'd\nobviously need a lot more changes than just this...\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Sun, 29 Nov 2020 20:49:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix compilation with gnu89"
},
{
"msg_contents": "Rosen Penev <rosenp@gmail.com> writes:\n> GCC 4.8.5 does not default to gnu99 or gnu11 like the newer versions.\n\nProject policy now is to require C99 support, so the correct solution\nfor using an older compiler is to do something like\n\n./configure CC=\"gcc -std=gnu99\" ...\n\nWe're not going to accept patches to remove declarations-in-for-loops,\nas that feature was actually one of the primary arguments for requiring\nC99 in the first place. (Also, I'm pretty sure that there are already\nconsiderably more than two such uses.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Nov 2020 23:50:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix compilation with gnu89"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-11-29 20:33:41 -0800, Rosen Penev wrote:\n>> GCC 4.8.5 does not default to gnu99 or gnu11 like the newer versions.\n\n> We require C99 support since postgres 12, and configure should end up\n> choosing flags to make the compiler support that if possible.\n\nHmm, yeah, that's a good point ... why didn't configure select\n\"-std=gnu99\" for you? This might boil down to trying to use\nCFLAGS selected for one compiler with a different compiler.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 00:06:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix compilation with gnu89"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nI found that when running command vaccum full in stand-alone mode there will be a core dump.\r\nThe core stack looks like this:\r\n------------------------------------\r\nbackend> vacuum full\r\nTRAP: FailedAssertion(\"IsUnderPostmaster\", File: \"dsm.c\", Line: 439)\r\n./postgres(ExceptionalCondition+0xac)[0x55c664c36913]\r\n./postgres(dsm_create+0x3c)[0x55c664a79fee]\r\n./postgres(GetSessionDsmHandle+0xdc)[0x55c6645f8296]\r\n./postgres(InitializeParallelDSM+0xf9)[0x55c6646c59ca]\r\n./postgres(+0x16bdef)[0x55c664692def]\r\n./postgres(+0x16951e)[0x55c66469051e]\r\n./postgres(btbuild+0xcc)[0x55c6646903ec]\r\n./postgres(index_build+0x322)[0x55c664719749]\r\n./postgres(reindex_index+0x2ee)[0x55c66471a765]\r\n./postgres(reindex_relation+0x1e5)[0x55c66471acca]\r\n./postgres(finish_heap_swap+0x118)[0x55c6647c8db1]\r\n./postgres(+0x2a0848)[0x55c6647c7848]\r\n./postgres(cluster_rel+0x34e)[0x55c6647c727f]\r\n./postgres(+0x3414cf)[0x55c6648684cf]\r\n./postgres(vacuum+0x4f3)[0x55c664866591]\r\n./postgres(ExecVacuum+0x736)[0x55c66486609b]\r\n./postgres(standard_ProcessUtility+0x840)[0x55c664ab74fc]\r\n./postgres(ProcessUtility+0x131)[0x55c664ab6cb5]\r\n./postgres(+0x58ea69)[0x55c664ab5a69]\r\n./postgres(+0x58ec95)[0x55c664ab5c95]\r\n./postgres(PortalRun+0x307)[0x55c664ab5184]\r\n./postgres(+0x587ef6)[0x55c664aaeef6]\r\n./postgres(PostgresMain+0x819)[0x55c664ab3271]\r\n./postgres(main+0x2e1)[0x55c6648f9df5]\r\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f65376192e1]\r\n./postgres(_start+0x2a)[0x55c6645df86a]\r\nAborted (core dumped)\r\n------------------------------------\r\n\r\nI think that when there is a btree index in a table, vacuum full tries to rebuild the btree index in a parallel way.\r\n\r\nThis will launch several workers and each of then will try to apply a dynamic shared memory segment, which is not allowed in stand-alone mode.\r\n\r\nI think it is better not to use btree index build in parallel in stand-alone 
mode. My patch is attached below.\r\n\r\n\r\n\r\n\r\n\r\nBest Regards!\r\nYulin PEI",
"msg_date": "Mon, 30 Nov 2020 07:27:28 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "[PATCH] BUG FIX: Core dump could happen when VACUUM FULL in\n standalone mode"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 5:45 PM Yulin PEI <ypeiae@connect.ust.hk> wrote:\n\n> Hi hackers,\n>\n> I found that when running command vaccum full in stand-alone mode there\n> will be a core dump.\n> The core stack looks like this:\n> ------------------------------------\n> backend> vacuum full\n> TRAP: FailedAssertion(\"IsUnderPostmaster\", File: \"dsm.c\", Line: 439)\n> ./postgres(ExceptionalCondition+0xac)[0x55c664c36913]\n> ./postgres(dsm_create+0x3c)[0x55c664a79fee]\n> ./postgres(GetSessionDsmHandle+0xdc)[0x55c6645f8296]\n> ./postgres(InitializeParallelDSM+0xf9)[0x55c6646c59ca]\n> ./postgres(+0x16bdef)[0x55c664692def]\n> ./postgres(+0x16951e)[0x55c66469051e]\n> ./postgres(btbuild+0xcc)[0x55c6646903ec]\n> ./postgres(index_build+0x322)[0x55c664719749]\n> ./postgres(reindex_index+0x2ee)[0x55c66471a765]\n> ./postgres(reindex_relation+0x1e5)[0x55c66471acca]\n> ./postgres(finish_heap_swap+0x118)[0x55c6647c8db1]\n> ./postgres(+0x2a0848)[0x55c6647c7848]\n> ./postgres(cluster_rel+0x34e)[0x55c6647c727f]\n> ./postgres(+0x3414cf)[0x55c6648684cf]\n> ./postgres(vacuum+0x4f3)[0x55c664866591]\n> ./postgres(ExecVacuum+0x736)[0x55c66486609b]\n> ./postgres(standard_ProcessUtility+0x840)[0x55c664ab74fc]\n> ./postgres(ProcessUtility+0x131)[0x55c664ab6cb5]\n> ./postgres(+0x58ea69)[0x55c664ab5a69]\n> ./postgres(+0x58ec95)[0x55c664ab5c95]\n> ./postgres(PortalRun+0x307)[0x55c664ab5184]\n> ./postgres(+0x587ef6)[0x55c664aaeef6]\n> ./postgres(PostgresMain+0x819)[0x55c664ab3271]\n> ./postgres(main+0x2e1)[0x55c6648f9df5]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f65376192e1]\n> ./postgres(_start+0x2a)[0x55c6645df86a]\n> Aborted (core dumped)\n> ------------------------------------\n>\n> I think that when there is a btree index in a table, vacuum full tries to\n> rebuild the btree index in a parallel way.\n>\n> This will launch several workers and each of then will try to apply a\n> dynamic shared memory segment, which is not allowed in stand-alone 
mode.\n>\n> I think it is better not to use btree index build in parallel in\n> stand-alone mode. My patch is attached below.\n>\n>\n>\nGood catch. This is a bug in parallel index(btree) creation. I could\nreproduce this assertion failure with HEAD even by using CREATE INDEX\ncommand.\n\n- indexRelation->rd_rel->relam == BTREE_AM_OID)\n+ indexRelation->rd_rel->relam == BTREE_AM_OID && IsPostmasterEnvironment\n&& !IsBackendStandAlone())\n\n+#define IsBackendStandAlone() (!IsBootstrapProcessingMode() &&\n!IsPostmasterEnvironment)\n /*\n * Auxiliary-process type identifiers. These used to be in bootstrap.h\n * but it seems saner to have them here, with the ProcessingMode stuff.\n\nI think we can use IsUnderPostmaster instead. If it's false we should not\nenable parallel index creation.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\nOn Mon, Nov 30, 2020 at 5:45 PM Yulin PEI <ypeiae@connect.ust.hk> wrote:\n\n\n\n\n\nHi hackers,\n\nI found that when running command vaccum full in stand-alone mode there will be a core dump.\n\nThe core stack looks like this:\n------------------------------------\nbackend> vacuum full\nTRAP: FailedAssertion(\"IsUnderPostmaster\", File: \"dsm.c\", Line: 
439)\n./postgres(ExceptionalCondition+0xac)[0x55c664c36913]\n./postgres(dsm_create+0x3c)[0x55c664a79fee]\n./postgres(GetSessionDsmHandle+0xdc)[0x55c6645f8296]\n./postgres(InitializeParallelDSM+0xf9)[0x55c6646c59ca]\n./postgres(+0x16bdef)[0x55c664692def]\n./postgres(+0x16951e)[0x55c66469051e]\n./postgres(btbuild+0xcc)[0x55c6646903ec]\n./postgres(index_build+0x322)[0x55c664719749]\n./postgres(reindex_index+0x2ee)[0x55c66471a765]\n./postgres(reindex_relation+0x1e5)[0x55c66471acca]\n./postgres(finish_heap_swap+0x118)[0x55c6647c8db1]\n./postgres(+0x2a0848)[0x55c6647c7848]\n./postgres(cluster_rel+0x34e)[0x55c6647c727f]\n./postgres(+0x3414cf)[0x55c6648684cf]\n./postgres(vacuum+0x4f3)[0x55c664866591]\n./postgres(ExecVacuum+0x736)[0x55c66486609b]\n./postgres(standard_ProcessUtility+0x840)[0x55c664ab74fc]\n./postgres(ProcessUtility+0x131)[0x55c664ab6cb5]\n./postgres(+0x58ea69)[0x55c664ab5a69]\n./postgres(+0x58ec95)[0x55c664ab5c95]\n./postgres(PortalRun+0x307)[0x55c664ab5184]\n./postgres(+0x587ef6)[0x55c664aaeef6]\n./postgres(PostgresMain+0x819)[0x55c664ab3271]\n./postgres(main+0x2e1)[0x55c6648f9df5]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f65376192e1]\n./postgres(_start+0x2a)[0x55c6645df86a]\nAborted (core dumped)\n------------------------------------\n\nI think that when there is a btree index in a table, vacuum full tries to rebuild the btree index in a parallel way.\n\n\nThis will launch several workers and each of then will try to apply a dynamic shared memory segment, which is not allowed in stand-alone mode. \n\nI think it is better not to use btree index build in parallel in stand-alone mode. My patch is attached below.\n\nGood catch. This is a bug in parallel index(btree) creation. 
I could reproduce this assertion failure with HEAD even by using CREATE INDEX command.-\t\tindexRelation->rd_rel->relam == BTREE_AM_OID)+\t\tindexRelation->rd_rel->relam == BTREE_AM_OID && IsPostmasterEnvironment && !IsBackendStandAlone())+#define IsBackendStandAlone()\t(!IsBootstrapProcessingMode() && !IsPostmasterEnvironment) /* * Auxiliary-process type identifiers. These used to be in bootstrap.h * but it seems saner to have them here, with the ProcessingMode stuff.I think we can use IsUnderPostmaster instead. If it's false we should not enable parallel index creation.Regards,-- Masahiko SawadaEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 30 Nov 2020 18:27:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] BUG FIX: Core dump could happen when VACUUM FULL in\n standalone mode"
},
{
"msg_contents": "Yes, I agree because (IsNormalProcessingMode() ) means that current process is not in bootstrap mode and postmaster process will not build index.\r\nSo my new modified patch is attached.\r\n\r\n\r\n________________________________\r\n发件人: Masahiko Sawada <sawada.mshk@gmail.com>\r\n发送时间: 2020年11月30日 17:27\r\n收件人: Yulin PEI <ypeiae@connect.ust.hk>\r\n抄送: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\n主题: Re: [PATCH] BUG FIX: Core dump could happen when VACUUM FULL in standalone mode\r\n\r\n\r\n\r\nOn Mon, Nov 30, 2020 at 5:45 PM Yulin PEI <ypeiae@connect.ust.hk<mailto:ypeiae@connect.ust.hk>> wrote:\r\nHi hackers,\r\n\r\nI found that when running command vaccum full in stand-alone mode there will be a core dump.\r\nThe core stack looks like this:\r\n------------------------------------\r\nbackend> vacuum full\r\nTRAP: FailedAssertion(\"IsUnderPostmaster\", File: \"dsm.c\", Line: 439)\r\n./postgres(ExceptionalCondition+0xac)[0x55c664c36913]\r\n./postgres(dsm_create+0x3c)[0x55c664a79fee]\r\n./postgres(GetSessionDsmHandle+0xdc)[0x55c6645f8296]\r\n./postgres(InitializeParallelDSM+0xf9)[0x55c6646c59ca]\r\n./postgres(+0x16bdef)[0x55c664692def]\r\n./postgres(+0x16951e)[0x55c66469051e]\r\n./postgres(btbuild+0xcc)[0x55c6646903ec]\r\n./postgres(index_build+0x322)[0x55c664719749]\r\n./postgres(reindex_index+0x2ee)[0x55c66471a765]\r\n./postgres(reindex_relation+0x1e5)[0x55c66471acca]\r\n./postgres(finish_heap_swap+0x118)[0x55c6647c8db1]\r\n./postgres(+0x2a0848)[0x55c6647c7848]\r\n./postgres(cluster_rel+0x34e)[0x55c6647c727f]\r\n./postgres(+0x3414cf)[0x55c6648684cf]\r\n./postgres(vacuum+0x4f3)[0x55c664866591]\r\n./postgres(ExecVacuum+0x736)[0x55c66486609b]\r\n./postgres(standard_ProcessUtility+0x840)[0x55c664ab74fc]\r\n./postgres(ProcessUtility+0x131)[0x55c664ab6cb5]\r\n./postgres(+0x58ea69)[0x55c664ab5a69]\r\n./postgres(+0x58ec95)[0x55c664ab5c95]\r\n./postgres(PortalRun+0x307)[0x55c664ab5184]\r\n./postgres(+0x587ef6)[0x55c664aaeef6]\r\n
./postgres(PostgresMain+0x819)[0x55c664ab3271]\r\n./postgres(main+0x2e1)[0x55c6648f9df5]\r\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f65376192e1]\r\n./postgres(_start+0x2a)[0x55c6645df86a]\r\nAborted (core dumped)\r\n------------------------------------\r\n\r\nI think that when there is a btree index in a table, vacuum full tries to rebuild the btree index in a parallel way.\r\n\r\nThis will launch several workers and each of then will try to apply a dynamic shared memory segment, which is not allowed in stand-alone mode.\r\n\r\nI think it is better not to use btree index build in parallel in stand-alone mode. My patch is attached below.\r\n\r\n\r\n\r\nGood catch. This is a bug in parallel index(btree) creation. I could reproduce this assertion failure with HEAD even by using CREATE INDEX command.\r\n\r\n- indexRelation->rd_rel->relam == BTREE_AM_OID)\r\n+ indexRelation->rd_rel->relam == BTREE_AM_OID && IsPostmasterEnvironment && !IsBackendStandAlone())\r\n\r\n+#define IsBackendStandAlone() (!IsBootstrapProcessingMode() && !IsPostmasterEnvironment)\r\n /*\r\n * Auxiliary-process type identifiers. These used to be in bootstrap.h\r\n * but it seems saner to have them here, with the ProcessingMode stuff.\r\n\r\nI think we can use IsUnderPostmaster instead. If it's false we should not enable parallel index creation.\r\n\r\nRegards,\r\n\r\n--\r\nMasahiko Sawada\r\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 30 Nov 2020 11:07:10 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "\n =?gb2312?B?u9i4tDogW1BBVENIXSBCVUcgRklYOiBDb3JlIGR1bXAgY291bGQgaGFwcGVu?=\n =?gb2312?Q?_when_VACUUM_FULL_in_standalone_mode?="
},
{
"msg_contents": "Yulin PEI <ypeiae@connect.ust.hk> writes:\n> Yes, I agree because (IsNormalProcessingMode() ) means that current process is not in bootstrap mode and postmaster process will not build index.\n> So my new modified patch is attached.\n\nThis is a good catch, but the proposed fix still seems pretty random\nand unlike how it's done elsewhere. It seems to me that since\nindex_build() is relying on plan_create_index_workers() to assess\nparallel safety, that's where to check IsUnderPostmaster. Moreover,\nthe existing code in compute_parallel_vacuum_workers (which gets\nthis right) associates the IsUnderPostmaster check with the initial\ncheck on max_parallel_maintenance_workers. So I think that the\nright fix is to adopt the compute_parallel_vacuum_workers coding\nin plan_create_index_workers, and thereby create a model for future\nuses of max_parallel_maintenance_workers to follow.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 30 Nov 2020 13:54:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?gb2312?B?u9i4tDogW1BBVENIXSBCVUcgRklYOiBDb3JlIGR1bXAgY291bGQgaGFwcGVu?=\n =?gb2312?Q?_when_VACUUM_FULL_in_standalone_mode?="
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 3:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Yulin PEI <ypeiae@connect.ust.hk> writes:\n> > Yes, I agree because (IsNormalProcessingMode() ) means that current process is not in bootstrap mode and postmaster process will not build index.\n> > So my new modified patch is attached.\n>\n> This is a good catch, but the proposed fix still seems pretty random\n> and unlike how it's done elsewhere. It seems to me that since\n> index_build() is relying on plan_create_index_workers() to assess\n> parallel safety, that's where to check IsUnderPostmaster. Moreover,\n> the existing code in compute_parallel_vacuum_workers (which gets\n> this right) associates the IsUnderPostmaster check with the initial\n> check on max_parallel_maintenance_workers. So I think that the\n> right fix is to adopt the compute_parallel_vacuum_workers coding\n> in plan_create_index_workers, and thereby create a model for future\n> uses of max_parallel_maintenance_workers to follow.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 1 Dec 2020 08:08:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=E5=9B=9E=E5=A4=8D=3A_=5BPATCH=5D_BUG_FIX=3A_Core_dump_could_happen_?=\n\t=?UTF-8?Q?when_VACUUM_FULL_in_standalone_mode?="
},
{
"msg_contents": "I think you are right after reading code in compute_parallel_vacuum_workers() :)\r\n\r\n________________________________\r\n发件人: Tom Lane <tgl@sss.pgh.pa.us>\r\n发送时间: 2020年12月1日 2:54\r\n收件人: Yulin PEI <ypeiae@connect.ust.hk>\r\n抄送: Masahiko Sawada <sawada.mshk@gmail.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\n主题: Re: 回复: [PATCH] BUG FIX: Core dump could happen when VACUUM FULL in standalone mode\r\n\r\nYulin PEI <ypeiae@connect.ust.hk> writes:\r\n> Yes, I agree because (IsNormalProcessingMode() ) means that current process is not in bootstrap mode and postmaster process will not build index.\r\n> So my new modified patch is attached.\r\n\r\nThis is a good catch, but the proposed fix still seems pretty random\r\nand unlike how it's done elsewhere. It seems to me that since\r\nindex_build() is relying on plan_create_index_workers() to assess\r\nparallel safety, that's where to check IsUnderPostmaster. Moreover,\r\nthe existing code in compute_parallel_vacuum_workers (which gets\r\nthis right) associates the IsUnderPostmaster check with the initial\r\ncheck on max_parallel_maintenance_workers. 
So I think that the\r\nright fix is to adopt the compute_parallel_vacuum_workers coding\r\nin plan_create_index_workers, and thereby create a model for future\r\nuses of max_parallel_maintenance_workers to follow.\r\n\r\n regards, tom lane\r\n\r\n\n\n\n\n\n\n\n\nI think you are right after reading\n code in compute_parallel_vacuum_workers() :)\n\n\n\n\n\n\n发件人: Tom Lane <tgl@sss.pgh.pa.us>\n发送时间: 2020年12月1日 2:54\n收件人: Yulin PEI <ypeiae@connect.ust.hk>\n抄送: Masahiko Sawada <sawada.mshk@gmail.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\n主题: Re: 回复: [PATCH] BUG FIX: Core dump could happen when VACUUM FULL in standalone mode\n \n\n\nYulin PEI <ypeiae@connect.ust.hk> writes:\n> Yes, I agree because (IsNormalProcessingMode() ) means that current process is not in bootstrap mode and postmaster process will not build index.\n> So my new modified patch is attached.\n\nThis is a good catch, but the proposed fix still seems pretty random\nand unlike how it's done elsewhere. It seems to me that since\nindex_build() is relying on plan_create_index_workers() to assess\nparallel safety, that's where to check IsUnderPostmaster. Moreover,\nthe existing code in compute_parallel_vacuum_workers (which gets\nthis right) associates the IsUnderPostmaster check with the initial\ncheck on max_parallel_maintenance_workers. So I think that the\nright fix is to adopt the compute_parallel_vacuum_workers coding\nin plan_create_index_workers, and thereby create a model for future\nuses of max_parallel_maintenance_workers to follow.\n\n regards, tom lane",
"msg_date": "Tue, 1 Dec 2020 01:54:50 +0000",
"msg_from": "Yulin PEI <ypeiae@connect.ust.hk>",
"msg_from_op": true,
"msg_subject": "\n =?gb2312?B?u9i4tDogu9i4tDogW1BBVENIXSBCVUcgRklYOiBDb3JlIGR1bXAgY291bGQg?=\n =?gb2312?Q?happen_when_VACUUM_FULL_in_standalone_mode?="
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed a couple of comments that were obsoleted by commit\n578b229718 which forgot to remove them. Attached fixes that.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Nov 2020 17:21:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "obsolete comment from WITH OIDS days"
},
{
"msg_contents": "On 30/11/2020 10:21, Amit Langote wrote:\n> I noticed a couple of comments that were obsoleted by commit\n> 578b229718 which forgot to remove them. Attached fixes that.\n\nApplied, thanks!\n\n- Heikki\n\n\n",
"msg_date": "Mon, 30 Nov 2020 10:32:43 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: obsolete comment from WITH OIDS days"
}
] |
[
{
"msg_contents": "Hi,\n\nI found that the IncrementalSortPath type is not analyzed in outNode.\nIt does not matter during runtime though.\n\nJust for the convenience of debugging,\nwhen debugging with gdb, we can use \"call pprint(incresortpath)\" to list the node info.\n\nSo I tried to support IncrementalSortPath in outNode.\n\nBest regards,\nhouzj",
"msg_date": "Mon, 30 Nov 2020 09:30:43 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "support IncrementalSortPath type in outNode()"
},
{
"msg_contents": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com> writes:\n> I found IncrementalSortPath type is not analyzed in outNode.\n\nIndeed, that's supposed to be supported. Pushed, thanks for the patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 16:34:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: support IncrementalSortPath type in outNode()"
}
] |
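Editorial note: for readers unfamiliar with outfuncs.c, the gap being fixed in this thread is that outNode() dispatches on a node's tag and prints each known node type's fields; an unhandled tag yields an unhelpful placeholder in pprint() output. The following is a deliberately toy, self-contained sketch of that dispatch pattern — the node layout, tag names, and output format are invented for illustration and do not match PostgreSQL's real WRITE_* macros:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy node hierarchy mimicking how outNode() dispatches on a node tag. */
typedef enum { T_SortPath, T_IncrementalSortPath } NodeTag;

typedef struct SortPath { NodeTag tag; double cost; } SortPath;
typedef struct IncrementalSortPath {
    SortPath spath;      /* "inherits" SortPath, as in the real planner */
    int nPresortedCols;  /* extra field specific to incremental sort */
} IncrementalSortPath;

/* Serialize a node into buf; an unhandled tag gets a placeholder,
 * which is exactly the debugging annoyance the patch above removes
 * for IncrementalSortPath in the real outNode(). */
static void outNode(char *buf, size_t len, const void *node)
{
    NodeTag tag = *(const NodeTag *) node;

    switch (tag)
    {
        case T_SortPath:
            snprintf(buf, len, "{SORTPATH :cost %.2f}",
                     ((const SortPath *) node)->cost);
            break;
        case T_IncrementalSortPath:
        {
            const IncrementalSortPath *isp = node;
            snprintf(buf, len,
                     "{INCREMENTALSORTPATH :cost %.2f :nPresortedCols %d}",
                     isp->spath.cost, isp->nPresortedCols);
            break;
        }
        default:
            snprintf(buf, len, "{UNRECOGNIZED NODE %d}", (int) tag);
    }
}
```

In the real tree, adding the `case` (and its field-printing body) is all that is needed for `call pprint(path)` under gdb to show the node's contents instead of an "unrecognized node" message.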
[
{
"msg_contents": "Hi,\n\nWhile testing the Asynchronous Append feature with TPC-H queries, I found\nthat the push-down JOIN technique is rarely used.\nFor my servers, fdw_tuple_cost = 0.2 and fdw_startup_cost = 100.\nExploring the code, I found in postgres_fdw, estimate_path_cost_size(),\nlines 2908-2909:\nrun_cost += nrows * join_cost.per_tuple;\nnrows = clamp_row_est(nrows * fpinfo->joinclause_sel);\n\nAbove:\nnrows = fpinfo_i->rows * fpinfo_o->rows;\n\nMaybe it is needed to swap lines 2908 and 2909 (see attachment)?\n\nIn my case of two big partitioned tables and a small join result it\nstrongly influenced the choice of the JOIN push-down strategy.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Mon, 30 Nov 2020 21:19:11 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Cost overestimation of foreign JOIN"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> Maybe it is needed to swap lines 2908 and 2909 (see attachment)?\n\nNo; as explained in the comment immediately above here, we're assuming\nthat the join conditions will be applied on the cross product of the\ninput relations.\n\nNow admittedly, that's a worst-case assumption, since it amounts to\nexpecting that the remote server will do the join in the dumbest\npossible nested-loop way. If the remote can use a merge or hash\njoin, for example, the cost is likely to be a lot less. But it is\nnot the job of this code path to outguess the remote planner. It's\ncertainly not appropriate to invent an unprincipled cost estimate\nas a substitute for trying to guess that.\n\nIf you're unhappy with the planning results you get for this,\nwhy don't you have use_remote_estimate turned on?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 12:38:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cost overestimation of foreign JOIN"
},
{
"msg_contents": "On 30.11.2020 22:38, Tom Lane wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>> Maybe it is needed to swap lines 2908 and 2909 (see attachment)?\n> \n> No; as explained in the comment immediately above here, we're assuming\n> that the join conditions will be applied on the cross product of the\n> input relations.\n\nThank you. Now it is clear to me.\n> \n> Now admittedly, that's a worst-case assumption, since it amounts to\n> expecting that the remote server will do the join in the dumbest\n> possible nested-loop way. If the remote can use a merge or hash\n> join, for example, the cost is likely to be a lot less.\n\nMy goal is scaling Postgres on a set of the same servers with same \npostgres instances. If one server uses for the join a hash-join node, i \nthink it is most likely that the other server will also use for this \njoin a hash-join node (Maybe you remember, I also use the statistics \ncopying technique to provide up-to-date statistics on partitions). Tests \nshow good results with such an approach. But maybe this is my special case.\n\n> But it is\n> not the job of this code path to outguess the remote planner. It's\n> certainly not appropriate to invent an unprincipled cost estimate\n> as a substitute for trying to guess that.\n\nAgreed.\n> \n> If you're unhappy with the planning results you get for this,\n> why don't you have use_remote_estimate turned on?\n\nI have a mixed load model. Large queries are suitable for additional \nestimate queries. But for many simple SELECT's that touch a small \nportion of the data, the latency has increased significantly. And I \ndon't know how to switch the use_remote_estimate setting in such case.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Mon, 30 Nov 2020 23:26:23 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Cost overestimation of foreign JOIN"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 11:56 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 30.11.2020 22:38, Tom Lane wrote:\n> > Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> >> Maybe it is needed to swap lines 2908 and 2909 (see attachment)?\n> >\n> > No; as explained in the comment immediately above here, we're assuming\n> > that the join conditions will be applied on the cross product of the\n> > input relations.\n>\n> Thank you. Now it is clear to me.\n> >\n> > Now admittedly, that's a worst-case assumption, since it amounts to\n> > expecting that the remote server will do the join in the dumbest\n> > possible nested-loop way. If the remote can use a merge or hash\n> > join, for example, the cost is likely to be a lot less.\n>\n> My goal is scaling Postgres on a set of the same servers with same\n> postgres instances. If one server uses for the join a hash-join node, i\n> think it is most likely that the other server will also use for this\n> join a hash-join node (Maybe you remember, I also use the statistics\n> copying technique to provide up-to-date statistics on partitions). Tests\n> show good results with such an approach. But maybe this is my special case.\n>\n> > But it is\n> > not the job of this code path to outguess the remote planner. It's\n> > certainly not appropriate to invent an unprincipled cost estimate\n> > as a substitute for trying to guess that.\n>\n> Agreed.\n> >\n> > If you're unhappy with the planning results you get for this,\n> > why don't you have use_remote_estimate turned on?\n>\n> I have a mixed load model. Large queries are suitable for additional\n> estimate queries. But for many simple SELECT's that touch a small\n> portion of the data, the latency has increased significantly. And I\n> don't know how to switch the use_remote_estimate setting in such case.\n\nYou may disable use_remote_estimates for given table or a server. So\nif tables participating in short queries are different from those in\nthe large queries, you could set use_remote_estimate at table level to\nturn it off for the first set. Otherwise, we need a FDW level GUC\nwhich can be turned on/off for a given session or a query.\n\nGenerally use_remote_estimate isn't scalable and there have been\ndiscussions about eliminating the need of it. But no concrete proposal\nhas come yet.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 1 Dec 2020 18:47:50 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cost overestimation of foreign JOIN"
},
{
"msg_contents": "On 12/1/20 6:17 PM, Ashutosh Bapat wrote:\n> On Mon, Nov 30, 2020 at 11:56 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>>\n>> On 30.11.2020 22:38, Tom Lane wrote:\n>>> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>>> If you're unhappy with the planning results you get for this,\n>>> why don't you have use_remote_estimate turned on?\n>>\n>> I have a mixed load model. Large queries are suitable for additional\n>> estimate queries. But for many simple SELECT's that touch a small\n>> portion of the data, the latency has increased significantly. And I\n>> don't know how to switch the use_remote_estimate setting in such case.\n> \n> You may disable use_remote_estimates for given table or a server. So\n> if tables participating in short queries are different from those in\n> the large queries, you could set use_remote_estimate at table level to\n> turn it off for the first set. Otherwise, we need a FDW level GUC\n> which can be turned on/off for a given session or a query.\n\nCurrently I implemented another technique:\n- By default, use_remote_estimate is off.\n- On the estimate_path_cost_size() some estimation criteria is checked. \nIf true, we force remote estimation for this JOIN.\nThis approach solves the push-down problem in my case - TPC-H test with \n6 servers/instances. But it is not so scalable, as i want.\n> \n> Generally use_remote_estimate isn't scalable and there have been\n> discussions about eliminating the need of it. But no concrete proposal\n> has come yet.\n> \nAbove I suggested to use results of cost calculation on local JOIN, \nassuming that in the case of postgres_fdw wrapper very likely, that \nforeign server will use the same type of join (or even better, if it has \nsome index, for example).\nIf this approach is of interest, I can investigate it.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Wed, 2 Dec 2020 15:49:46 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Cost overestimation of foreign JOIN"
}
] |
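Editorial note: the per-table override that Ashutosh describes above can be expressed with postgres_fdw's documented `use_remote_estimate` option, which is settable at server level and overridable per foreign table. The server and table names below are hypothetical; this is a sketch of the configuration, not part of the original thread:

```sql
-- Hypothetical server: ask the remote planner for estimates by default.
CREATE SERVER tpch_node1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'node1', dbname 'tpch', use_remote_estimate 'true');

-- Hypothetical table touched only by short, latency-sensitive queries:
-- override the server-level setting to avoid the extra round trips.
ALTER FOREIGN TABLE small_lookup
    OPTIONS (ADD use_remote_estimate 'false');
```

This matches the mixed-workload situation in the thread: large join queries pay for remote EXPLAINs and get push-down-friendly estimates, while simple lookups keep their low latency.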
[
{
"msg_contents": "This makes toast tables a bit less special and easier to inspect.\n\npostgres=# \\dtS+ pg_toast.pg_toast_2619\n pg_toast | pg_toast_2619 | toast table | pryzbyj | permanent | heap | 56 kB | \n\nThis follows commit from last year:\n| eb5472da9 make \\d pg_toast.foo show its indices ; and, \\d toast show its main table\n\n-- \nJustin",
"msg_date": "Mon, 30 Nov 2020 10:54:36 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "allow to \\dtS+ pg_toast.*"
},
{
"msg_contents": "On Mon, 2020-11-30 at 10:54 -0600, Justin Pryzby wrote:\n> This makes toast tables a bit less special and easier to inspect.\n> \n> postgres=# \\dtS+ pg_toast.pg_toast_2619\n> pg_toast | pg_toast_2619 | toast table | pryzbyj | permanent | heap | 56 kB | \n> \n> This follows commit from last year:\n> | eb5472da9 make \\d pg_toast.foo show its indices ; and, \\d toast show its main table\n\nThis would indeed be convenient.\n\nThe patch passes regression tests.\n\nWhile playing around with it, I found the following oddity:\n\nregression=# \\dtS+ pg_toast.pg_toast_30701\n List of relations\n Schema | Name | Type | Owner | Persistence | Access Method | Size | Description \n----------+----------------+-------------+---------+-------------+---------------+---------+-------------\n pg_toast | pg_toast_30701 | toast table | laurenz | permanent | heap | 0 bytes | \n(1 row)\n\nregression=# \\dtS pg_toast.pg_toast_30701\n List of relations\n Schema | Name | Type | Owner \n----------+----------------+-------------+---------\n pg_toast | pg_toast_30701 | toast table | laurenz\n(1 row)\n\nregression=# \\dt pg_toast.pg_toast_30701\nDid not find any relation named \"pg_toast.pg_toast_30701\".\n\nNow this doesn't seem right. To my understanding, \\dtS should do the same as \\dt,\nexcept that it should also search in \"pg_catalog\" if no schema was provided.\n\nAnother thing that is missing is tab completion for\n\nregression=# \\dtS pg_toast.pg_\n\nThis should work just like for \\d and \\dS.\n\n\nBoth of these effects can also be observed with \\di and toast indexes.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 16:16:52 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: allow to \\dtS+ pg_toast.*"
},
{
"msg_contents": "Thanks for looking\n\nOn Thu, Dec 17, 2020 at 04:16:52PM +0100, Laurenz Albe wrote:\n> On Mon, 2020-11-30 at 10:54 -0600, Justin Pryzby wrote:\n> > This makes toast tables a bit less special and easier to inspect.\n> > \n> > postgres=# \\dtS+ pg_toast.pg_toast_2619\n> > pg_toast | pg_toast_2619 | toast table | pryzbyj | permanent | heap | 56 kB | \n> > \n> > This follows commit from last year:\n> > | eb5472da9 make \\d pg_toast.foo show its indices ; and, \\d toast show its main table\n> \n> This would indeed be convenient.\n> \n> While playing around with it, I found the following oddity:\n> \n> regression=# \\dtS pg_toast.pg_toast_30701\n> pg_toast | pg_toast_30701 | toast table | laurenz\n> \n> regression=# \\dt pg_toast.pg_toast_30701\n> Did not find any relation named \"pg_toast.pg_toast_30701\".\n> \n> Now this doesn't seem right. To my understanding, \\dtS should do the same as \\dt,\n> except that it should also search in \"pg_catalog\" if no schema was provided.\n\nYou mean that if pg_toast.* should be shown if a matching \"pattern\" is given,\neven if \"S\" was not used. I think you're right. The behavior I implemented\nwas intended to provide a bit of historic compatibility towards hiding toast\ntables, but I think it's not needed, since they're not shown anyway unless\nsomeone includes \"S\", specifies the \"pg_toast.\" schema, or pg_toast is in their\nsearch path. See attached.\n\n> Another thing that is missing is tab completion for\n> regression=# \\dtS pg_toast.pg_\n> This should work just like for \\d and \\dS.\n\nTom objected to this in the past, humorously to me:\n\nhttps://www.postgresql.org/message-id/14255.1536781029@sss.pgh.pa.us\nOn Wed, Sep 12, 2018 at 03:37:09PM -0400, Tom Lane wrote:\n> Arthur Zakirov <a.zakirov@postgrespro.ru> writes:\n> > On Sun, Jul 29, 2018 at 07:42:43PM -0500, Justin Pryzby wrote:\n> >>> Actually..another thought: since toast tables may be VACUUM-ed, should I\n> >>> introduce Query_for_list_of_tpmt ?\n> \n> >> I didn't include this one yet though.\n> \n> > I think it could be done by a separate patch.\n> \n> I don't actually think that's a good idea. It's more likely to clutter\n> peoples' completion lists than offer anything they want. Even if someone\n> actually does want to vacuum a toast table, they are not likely to be\n> entering its name via tab completion; they're going to have identified\n> which table they want via some query, and then they'll be doing something\n> like copy-and-paste out of a query result.",
"msg_date": "Fri, 18 Dec 2020 00:58:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: allow to \\dtS+ pg_toast.*"
},
{
"msg_contents": "On Fri, 2020-12-18 at 00:58 -0600, Justin Pryzby wrote:\n> On Thu, Dec 17, 2020 at 04:16:52PM +0100, Laurenz Albe wrote:\n> > On Mon, 2020-11-30 at 10:54 -0600, Justin Pryzby wrote:\n> > > This makes toast tables a bit less special and easier to inspect.\n> > > \n> > > postgres=# \\dtS+ pg_toast.pg_toast_2619\n> > > pg_toast | pg_toast_2619 | toast table | pryzbyj | permanent | heap | 56 kB | \n> > > \n> > > This follows commit from last year:\n> > > | eb5472da9 make \\d pg_toast.foo show its indices ; and, \\d toast show its main table\n> > \n> > This would indeed be convenient.\n> > \n> > While playing around with it, I found the following oddity:\n> > \n> > regression=# \\dtS pg_toast.pg_toast_30701\n> > pg_toast | pg_toast_30701 | toast table | laurenz\n> > \n> > regression=# \\dt pg_toast.pg_toast_30701\n> > Did not find any relation named \"pg_toast.pg_toast_30701\".\n> > \n> > Now this doesn't seem right. To my understanding, \\dtS should do the same as \\dt,\n> > except that it should also search in \"pg_catalog\" if no schema was provided.\n> \n> You mean that if pg_toast.* should be shown if a matching \"pattern\" is given,\n> even if \"S\" was not used. I think you're right. The behavior I implemented\n> was intended to provide a bit of historic compatibility towards hiding toast\n> tables, but I think it's not needed, since they're not shown anyway unless\n> someone includes \"S\", specifies the \"pg_toast.\" schema, or pg_toast is in their\n> search path. See attached.\n\nYes, exactly.\n\nI wonder why the modification in \"listPartitionedTables\" is necessary.\nSurely there cannot be any partitioned toast tables, can there?\n\n> > Another thing that is missing is tab completion for\n> > regression=# \\dtS pg_toast.pg_\n> > This should work just like for \\d and \\dS.\n> \n> Tom objected to this in the past, humorously to me:\n> \n> https://www.postgresql.org/message-id/14255.1536781029@sss.pgh.pa.us\n\nThat was about VACUUM and ANALYZE, and I certainly see the point\nthat this is not a frequent enough use case to warrant guiding\npeople there with tab completion.\n\nBut I don't take that as an argument against tab completion in our case.\nIf I want to know how big the TOAST table of relation 87654 is,\nI think it is convenient if I can tab to\n\n\\dt+ pg_toast.pg_toast_\n\nIf you want to abolish special treatment for TOAST tables in \\dt(S),\ngo all the way.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 12:43:07 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: allow to \\dtS+ pg_toast.*"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 12:43:07PM +0100, Laurenz Albe wrote:\n> On Fri, 2020-12-18 at 00:58 -0600, Justin Pryzby wrote:\n> > On Thu, Dec 17, 2020 at 04:16:52PM +0100, Laurenz Albe wrote:\n> > > On Mon, 2020-11-30 at 10:54 -0600, Justin Pryzby wrote:\n> > > > This makes toast tables a bit less special and easier to inspect.\n> > > > \n> > > > postgres=# \\dtS+ pg_toast.pg_toast_2619\n> > > > pg_toast | pg_toast_2619 | toast table | pryzbyj | permanent | heap | 56 kB | \n> > > > \n> > > > This follows commit from last year:\n> > > > | eb5472da9 make \\d pg_toast.foo show its indices ; and, \\d toast show its main table\n> > > \n> > > This would indeed be convenient.\n> > > \n> > > While playing around with it, I found the following oddity:\n> > > \n> > > regression=# \\dtS pg_toast.pg_toast_30701\n> > > pg_toast | pg_toast_30701 | toast table | laurenz\n> > > \n> > > regression=# \\dt pg_toast.pg_toast_30701\n> > > Did not find any relation named \"pg_toast.pg_toast_30701\".\n> > > \n> > > Now this doesn't seem right. To my understanding, \\dtS should do the same as \\dt,\n> > > except that it should also search in \"pg_catalog\" if no schema was provided.\n> > \n> > You mean that if pg_toast.* should be shown if a matching \"pattern\" is given,\n> > even if \"S\" was not used. I think you're right. The behavior I implemented\n> > was intended to provide a bit of historic compatibility towards hiding toast\n> > tables, but I think it's not needed, since they're not shown anyway unless\n> > someone includes \"S\", specifies the \"pg_toast.\" schema, or pg_toast is in their\n> > search path. See attached.\n> \n> Yes, exactly.\n> \n> I wonder why the modification in \"listPartitionedTables\" is necessary.\n> Surely there cannot be any partitioned toast tables, can there?\n\nThe comment should be removed for consistency.\nAnd I changed the code for consistency with listTables (from which I assume\nlistPartitionedTables was derived - I was involved in the last stages of that\npatch). It doesn't need to exclude pg_catalog or information_schema, either,\nbut it's kept the same for consistency. That part could also be removed.\n\n> > > Another thing that is missing is tab completion for\n> > > regression=# \\dtS pg_toast.pg_\n> > > This should work just like for \\d and \\dS.\n..\n> If I want to know how big the TOAST table of relation 87654 is,\n> I think it is convenient if I can tab to\n> \n> \\dt+ pg_toast.pg_toast_\n\nI agree that it's nice to complete the schema name, but I'm still not convinced\nthis part should be included.\n\nThe way to include pg_toast.pg_toast is if toast relations are included, which\nis exactly what Tom pointed out is usually unhelpful. If you include toast\nrelations, tab completion might give \"pg_toast.pg_toast_14...\" when you wanted\nto paste \"145678\" - you'd need to remove the common suffix that it found.\n\nI considered whether \"toast table\" should be capitalized (as it is for \"\\d\")\nbut I think it should stay lowercase.\n\n-- \nJustin",
"msg_date": "Fri, 18 Dec 2020 11:33:12 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: allow to \\dtS+ pg_toast.*"
},
{
"msg_contents": "On Fri, 2020-12-18 at 11:33 -0600, Justin Pryzby wrote:\n> > > > > This makes toast tables a bit less special and easier to inspect.\n> > \n> > I wonder why the modification in \"listPartitionedTables\" is necessary.\n> > Surely there cannot be any partitioned toast tables, can there?\n> \n> The comment should be removed for consistency.\n> And I changed the code for consistency with listTables (from which I assume\n> listPartitionedTables was derived - I was involved in the last stages of that\n> patch). It doesn't need to exclude pg_catalog or information_schema, either,\n> but it's kept the same for consistency. That part could also be removed.\n\nI don't think that consistency with \"listTables\" is particularly useful here,\nbut I think this is somewhat academic.\n\nI'll leave that for the committer to decide.\n\n> > > > Another thing that is missing is tab completion for\n> > > > regression=# \\dtS pg_toast.pg_\n> > > > This should work just like for \\d and \\dS.\n> \n> I agree that it's nice to complete the schema name, but I'm still not convinced\n> this part should be included.\n> \n> The way to include pg_toast.pg_toast is if toast relations are included, which\n> is exactly what Tom pointed out is usually unhelpful. If you include toast\n> relations, tab completion might give \"pg_toast.pg_toast_14...\" when you wanted\n> to paste \"145678\" - you'd need to remove the common suffix that it found.\n\nAgain a judgement call. I am happy with the way the latest patch does it.\n\n> I considered whether \"toast table\" should be capitalized (as it is for \"\\d\")\n> but I think it should stay lowercase.\n\nThen you should also change the way \\d does it (upper case).\nI think we should be consistent.\n\nI'd use TOAST for both to create no unnecessary change in \\d output.\n\nAnyway, I think that this is ready for committer and will mark it as such.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 21 Dec 2020 15:20:51 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: allow to \\dtS+ pg_toast.*"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Fri, 2020-12-18 at 11:33 -0600, Justin Pryzby wrote:\n>>>> This makes toast tables a bit less special and easier to inspect.\n\nPushed, except for\n\n>>> Another thing that is missing is tab completion for\n>>> regression=# \\dtS pg_toast.pg_\n>>> This should work just like for \\d and \\dS.\n\n>> I agree that it's nice to complete the schema name, but I'm still not convinced\n>> this part should be included.\n\n> Again a judgement call. I am happy with the way the latest patch does it.\n\nI remain pretty much against including toast tables in tab completion,\nso I left out that part.\n\n>> I considered whether \"toast table\" should be capitalized (as it is for \"\\d\")\n>> but I think it should stay lowercase.\n\n> Then you should also change the way \\d does it (upper case).\n> I think we should be consistent.\n\nI think capitalized TOAST is correct, because it's an acronym.\nI also agree that consistency with what \\d shows is important,\nand I have no desire to change \\d's longstanding output.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jan 2021 18:46:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow to \\dtS+ pg_toast.*"
},
{
"msg_contents": "On Tue, Jan 05, 2021 at 06:46:01PM -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Fri, 2020-12-18 at 11:33 -0600, Justin Pryzby wrote:\n> >>>> This makes toast tables a bit less special and easier to inspect.\n> \n> Pushed, except for\n\nThank you\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 5 Jan 2021 17:50:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: allow to \\dtS+ pg_toast.*"
}
] |
[
{
"msg_contents": "Hello,\n\nIn a previous thread [1], we added smarts so that processes running\nCREATE INDEX CONCURRENTLY would not wait for each other.\n\nOne missing piece is adding the same to REINDEX CONCURRENTLY. I've attached patch\n0002 here which does that.\n\nWhy 0002, you ask? That's because preparatory patch 0001 simplifies\nReindexRelationConcurrently somewhat by adding a struct for the indexes\nthat are going to be processed, instead of just a list of Oids.\nThis is a good change in itself because it lets us get rid of duplicative\nopen/close of the index rels in order to obtain some info that's already\nknown at the start.\n\nThe other thing is that it'd be good if we can make VACUUM also ignore\nthe Xmin of processes doing CREATE INDEX CONCURRENTLY and REINDEX\nCONCURRENTLY, when possible. I have two possible ideas to handle this,\nabout which I'll post later.\n\n\n[1] https://postgr.es/m/20200810233815.GA18970@alvherre.pgsql\n\n-- \nÁlvaro Herrera Valdivia, Chile",
"msg_date": "Mon, 30 Nov 2020 16:54:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "{CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "> On Mon, Nov 30, 2020 at 04:54:39PM -0300, Alvaro Herrera wrote:\n>\n> In a previous thread [1], we added smarts so that processes running\n> CREATE INDEX CONCURRENTLY would not wait for each other.\n>\n> One is adding the same to REINDEX CONCURRENTLY. I've attached patch\n> 0002 here which does that.\n>\n> Why 0002, you ask? That's because preparatory patch 0001 simplifies the\n> ReindexRelationConcurrently somewhat by adding a struct to be used of\n> indexes that are going to be processed, instead of just a list of Oids.\n> This is a good change in itself because it let us get rid of duplicative\n> open/close of the index rels in order to obtain some info that's already\n> known at the start.\n\nThanks! The patch looks pretty good to me, after reading it I only have\na few minor comments/questions:\n\n* ReindexIndexInfo sounds a bit weird for me because of the repeating\n part, although I see there is already a similar ReindexIndexCallbackState\n so probably it's fine.\n\n* This one is mostly for me to understand. There are couple of places\n with a commentary that 'PROC_IN_SAFE_IC is not necessary, because the\n transaction only takes a snapshot to do some catalog manipulation'.\n But for some of them I don't immediately see in the relevant code\n anything related to snapshots. E.g. one in DefineIndex is followed by\n WaitForOlderSnapshots (which seems to only do waiting, not taking a\n snapshot), index_set_state_flags and CacheInvalidateRelcacheByRelid.\n Is taking a snapshot hidden somewhere there inside?\n\n\n",
"msg_date": "Fri, 4 Dec 2020 12:38:32 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2020-Dec-04, Dmitry Dolgov wrote:\n\n> * This one is mostly for me to understand. There are couple of places\n> with a commentary that 'PROC_IN_SAFE_IC is not necessary, because the\n> transaction only takes a snapshot to do some catalog manipulation'.\n> But for some of them I don't immediately see in the relevant code\n> anything related to snapshots. E.g. one in DefineIndex is followed by\n> WaitForOlderSnapshots (which seems to only do waiting, not taking a\n> snapshot), index_set_state_flags and CacheInvalidateRelcacheByRelid.\n> Is taking a snapshot hidden somewhere there inside?\n\nWell, they're actually going to acquire an Xid, not a snapshot, so the\ncomment is slightly incorrect; I'll fix it, thanks for pointing that\nout. The point stands: because those transactions are of very short\nduration (at least of very short duration after the point where the XID\nis obtained), it's not necessary to set the PROC_IN_SAFE_IC flag, since\nit won't cause any disruption to other processes.\n\nIt is possible that I copied the wrong comment in DefineIndex. (I only\nnoticed that one after I had mulled over the ones in\nReindexRelationConcurrently, each of which is skipping setting the flag\nfor slightly different reasons.)\n\n\n",
"msg_date": "Fri, 4 Dec 2020 11:37:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThe patch looks good to me. With regards to the code comments, I had pretty similar concerns as already raised by Dmitry; and those have already been answered by you. So this patch is good to go from my side once you have updated the comments per your last email.",
"msg_date": "Tue, 08 Dec 2020 08:17:25 +0000",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 5:18 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> The patch looks good to me. With regards to the code comments, I had pretty similar concerns as already raised by Dmitry; and those have already been answered by you. So this patch is good to go from my side once you have updated the comments per your last email.\n\nFor the 0001 patch, since ReindexIndexInfo is used only within\nReindexRelationConcurrently() I think it’s a function-local structure\ntype. So we can declare it within the function. What do you think?\n\nRegards,\n\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 4 Jan 2021 19:55:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2021-Jan-04, Masahiko Sawada wrote:\n\n> On Tue, Dec 8, 2020 at 5:18 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> >\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: not tested\n> > Spec compliant: not tested\n> > Documentation: not tested\n> >\n> > The patch looks good to me. With regards to the code comments, I had pretty similar concerns as already raised by Dmitry; and those have already been answered by you. So this patch is good to go from my side once you have updated the comments per your last email.\n> \n> For the 0001 patch, since ReindexIndexInfo is used only within\n> ReindexRelationConcurrently() I think it’s a function-local structure\n> type. So we can declare it within the function. What do you think?\n\nThat's a good idea. Pushed with that change, thanks.\n\nThanks also to Dmitry and Hamid for the reviews.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Tue, 12 Jan 2021 17:10:28 -0300",
"msg_from": "Álvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2021-Jan-12, Álvaro Herrera wrote:\n\n> > For the 0001 patch, since ReindexIndexInfo is used only within\n> > ReindexRelationConcurrently() I think it’s a function-local structure\n> > type. So we can declare it within the function. What do you think?\n> \n> That's a good idea. Pushed with that change, thanks.\n\nHere's the other patch, with comments fixed per reviews, and rebased to\naccount for the scope change.\n\n-- \nÁlvaro Herrera Valdivia, Chile\nTulio: oh, para qué servirá este boton, Juan Carlos?\nPolicarpo: No, aléjense, no toquen la consola!\nJuan Carlos: Lo apretaré una y otra vez.",
"msg_date": "Tue, 12 Jan 2021 17:21:29 -0300",
"msg_from": "Álvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2021-Jan-12, Álvaro Herrera wrote:\n\n> On 2021-Jan-12, Álvaro Herrera wrote:\n> \n> > > For the 0001 patch, since ReindexIndexInfo is used only within\n> > > ReindexRelationConcurrently() I think it’s a function-local structure\n> > > type. So we can declare it within the function. What do you think?\n> > \n> > That's a good idea. Pushed with that change, thanks.\n> \n> Here's the other patch, with comments fixed per reviews, and rebased to\n> account for the scope change.\n\nPushed. At the last minute I decided to back off on reverting the flag\nset that DefineIndex has, because there was no point in doing that. I\nalso updated the comment in (two places of) ReindexRelationConcurrently\nabout why we don't do it there. The previous submission had this:\n\n> +\t/*\n> +\t * This transaction doesn't need to set the PROC_IN_SAFE_IC flag, because\n> +\t * it only acquires an Xid to do some catalog manipulations, after the\n> +\t * wait is over.\n> +\t */\n\nbut this fails to point out that the main reason not to do it, is to\navoid having to decide whether to do it or not when some indexes are\nsafe and others aren't. So I changed to:\n\n+ /*\n+ * While we could set PROC_IN_SAFE_IC if all indexes qualified, there's no\n+ * real need for that, because we only acquire an Xid after the wait is\n+ * done, and that lasts for a very short period.\n+ */\n\nThanks!\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)\n\n\n",
"msg_date": "Fri, 15 Jan 2021 10:38:58 -0300",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "So one last remaining improvement was to have VACUUM ignore processes\ndoing CIC and RC to compute the Xid horizon of tuples to remove. I\nthink we can do something simple like the attached patch.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Aprender sin pensar es in�til; pensar sin aprender, peligroso\" (Confucio)",
"msg_date": "Fri, 15 Jan 2021 11:29:26 -0300",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Fri, 15 Jan 2021 at 15:29, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> So one last remaining improvement was to have VACUUM ignore processes\n> doing CIC and RC to compute the Xid horizon of tuples to remove. I\n> think we can do something simple like the attached patch.\n\nRegarding the patch:\n\n> + if (statusFlags & PROC_IN_SAFE_IC)\n> + h->catalog_oldest_nonremovable =\n> + TransactionIdOlder(h->catalog_oldest_nonremovable, xmin);\n> + else\n> + h->data_oldest_nonremovable = h->catalog_oldest_nonremovable =\n> + TransactionIdOlder(h->data_oldest_nonremovable, xmin);\n\nWould this not need to be the following? Right now, it resets\npotentially older h->catalog_oldest_nonremovable (which is set in the\nPROC_IN_SAFE_IC branch).\n\n> + if (statusFlags & PROC_IN_SAFE_IC)\n> + h->catalog_oldest_nonremovable =\n> + TransactionIdOlder(h->catalog_oldest_nonremovable, xmin);\n> + else\n> + {\n> + h->data_oldest_nonremovable =\n> + TransactionIdOlder(h->data_oldest_nonremovable, xmin);\n> + h->catalog_oldest_nonremovable =\n> + TransactionIdOlder(h->catalog_oldest_nonremovable, xmin);\n> + }\n\n\nPrior to reading the rest of my response: I do not understand the\nintricacies of the VACUUM process, nor the systems of db snapshots, so\nit's reasonably possible I misunderstand this.\nBut would this patch not allow for tuples to be created, deleted and\nvacuumed away from a table whilst rebuilding an index on that table,\nresulting in invalid pointers in the index?\n\nExample:\n\n1.) RI starts\n2.) PHASE 2: filling the index:\n2.1.) scanning the heap (live tuple is cached)\n< tuple is deleted\n< last transaction other than RI commits, only snapshot of RI exists\n< vacuum drops the tuple, and cannot remove it from the new index\nbecause this new index is not yet populated.\n2.2.) sorting tuples\n2.3.) index filled with tuples, incl. deleted tuple\n3.) PHASE 3: wait for transactions\n4.) 
PHASE 4: validate does not remove the tuple from the index,\nbecause it is not built to do so: it will only insert new tuples.\nTuples that are marked for deletion are removed from the index only\nthrough VACUUM (and optimistic ALL_DEAD detection).\n\nAccording to my limited knowledge of RI, it requires VACUUM to not run\non the table during the initial index build process (which is\ncurrently guaranteed through the use of a snapshot).\n\n\nRegards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 18 Jan 2021 21:14:03 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2021-Jan-18, Matthias van de Meent wrote:\n\n> Example:\n> \n> 1.) RI starts\n> 2.) PHASE 2: filling the index:\n> 2.1.) scanning the heap (live tuple is cached)\n> < tuple is deleted\n> < last transaction other than RI commits, only snapshot of RI exists\n> < vacuum drops the tuple, and cannot remove it from the new index\n> because this new index is not yet populated.\n> 2.2.) sorting tuples\n> 2.3.) index filled with tuples, incl. deleted tuple\n> 3.) PHASE 3: wait for transactions\n> 4.) PHASE 4: validate does not remove the tuple from the index,\n> because it is not built to do so: it will only insert new tuples.\n> Tuples that are marked for deletion are removed from the index only\n> through VACUUM (and optimistic ALL_DEAD detection).\n> \n> According to my limited knowledge of RI, it requires VACUUM to not run\n> on the table during the initial index build process (which is\n> currently guaranteed through the use of a snapshot).\n\nVACUUM cannot run concurrently with CIC or RI in a table -- both acquire\nShareUpdateExclusiveLock, which conflicts with itself, so this cannot\noccur.\n\nI do wonder if the problem you suggest (or something similar) can occur\nvia HOT pruning, though.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Mon, 18 Jan 2021 17:25:49 -0300",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2021-Jan-18, Matthias van de Meent wrote:\n\n> On Fri, 15 Jan 2021 at 15:29, �lvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Would this not need to be the following? Right now, it resets\n> potentially older h->catalog_oldest_nonremovable (which is set in the\n> PROC_IN_SAFE_IC branch).\n> \n> > + if (statusFlags & PROC_IN_SAFE_IC)\n> > + h->catalog_oldest_nonremovable =\n> > + TransactionIdOlder(h->catalog_oldest_nonremovable, xmin);\n> > + else\n> > + {\n> > + h->data_oldest_nonremovable =\n> > + TransactionIdOlder(h->data_oldest_nonremovable, xmin);\n> > + h->catalog_oldest_nonremovable =\n> > + TransactionIdOlder(h->catalog_oldest_nonremovable, xmin);\n> > + }\n\nOops, you're right.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Mon, 18 Jan 2021 17:27:34 -0300",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Mon, 18 Jan 2021, 21:25 Álvaro Herrera, <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jan-18, Matthias van de Meent wrote:\n>\n> > Example:\n> >\n> > 1.) RI starts\n> > 2.) PHASE 2: filling the index:\n> > 2.1.) scanning the heap (live tuple is cached)\n> > < tuple is deleted\n> > < last transaction other than RI commits, only snapshot of RI exists\n> > < vacuum drops the tuple, and cannot remove it from the new index\n> > because this new index is not yet populated.\n> > 2.2.) sorting tuples\n> > 2.3.) index filled with tuples, incl. deleted tuple\n> > 3.) PHASE 3: wait for transactions\n> > 4.) PHASE 4: validate does not remove the tuple from the index,\n> > because it is not built to do so: it will only insert new tuples.\n> > Tuples that are marked for deletion are removed from the index only\n> > through VACUUM (and optimistic ALL_DEAD detection).\n> >\n> > According to my limited knowledge of RI, it requires VACUUM to not run\n> > on the table during the initial index build process (which is\n> > currently guaranteed through the use of a snapshot).\n>\n> VACUUM cannot run concurrently with CIC or RI in a table -- both acquire\n> ShareUpdateExclusiveLock, which conflicts with itself, so this cannot\n> occur.\n\nYes, you are correct. Vacuum indeed has a ShareUpdateExclusiveLock.\nAre there no other ways that pages are optimistically pruned?\n\nBut the base case still stands, ignoring CIC snapshots in would give\nthe semantic of all_dead to tuples that are actually still considered\nalive in some context, and should not yet be deleted (you're deleting\ndata from an in-use snapshot). 
Any local pruning optimizations using\nall_dead mechanics now cannot be run on the table unless they hold an\nShareUpdateExclusiveLock; though I'm unaware of any such mechanisms\n(other than below).\n\n> I do wonder if the problem you suggest (or something similar) can occur\n> via HOT pruning, though.\n\nIt could not, at least not at the current HEAD, as only one tuple in a\nHOT-chain can be alive at one point, and all indexes point to the root\nof the HOT-chain, which is never HOT-pruned. See also the\nsrc/backend/access/heap/README.HOT.\n\nRegards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 19 Jan 2021 21:59:07 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Tue, 19 Jan 2021 at 21:59, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Mon, 18 Jan 2021, 21:25 Álvaro Herrera, <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Jan-18, Matthias van de Meent wrote:\n> >\n> > > Example:\n> > >\n> > > 1.) RI starts\n> > > 2.) PHASE 2: filling the index:\n> > > 2.1.) scanning the heap (live tuple is cached)\n> > > < tuple is deleted\n> > > < last transaction other than RI commits, only snapshot of RI exists\n> > > < vacuum drops the tuple, and cannot remove it from the new index\n> > > because this new index is not yet populated.\n> > > 2.2.) sorting tuples\n> > > 2.3.) index filled with tuples, incl. deleted tuple\n> > > 3.) PHASE 3: wait for transactions\n> > > 4.) PHASE 4: validate does not remove the tuple from the index,\n> > > because it is not built to do so: it will only insert new tuples.\n> > > Tuples that are marked for deletion are removed from the index only\n> > > through VACUUM (and optimistic ALL_DEAD detection).\n> > >\n> > > According to my limited knowledge of RI, it requires VACUUM to not run\n> > > on the table during the initial index build process (which is\n> > > currently guaranteed through the use of a snapshot).\n> >\n> > VACUUM cannot run concurrently with CIC or RI in a table -- both acquire\n> > ShareUpdateExclusiveLock, which conflicts with itself, so this cannot\n> > occur.\n>\n> Yes, you are correct. Vacuum indeed has a ShareUpdateExclusiveLock.\n> Are there no other ways that pages are optimistically pruned?\n>\n> But the base case still stands, ignoring CIC snapshots in would give\n> the semantic of all_dead to tuples that are actually still considered\n> alive in some context, and should not yet be deleted (you're deleting\n> data from an in-use snapshot). 
Any local pruning optimizations using\n> all_dead mechanics now cannot be run on the table unless they hold an\n> ShareUpdateExclusiveLock; though I'm unaware of any such mechanisms\n> (other than below).\n\nRe-thinking this, and after some research:\n\nIs the behaviour of \"any process that invalidates TIDs in this table\n(that could be in an index on this table) always holds a lock that\nconflicts with CIC/RiC on that table\" a requirement of tableams, or is\nit an implementation-detail?\n\nIf it is a requirement, then this patch is a +1 for me (and that\nrequirement should be documented in such case), otherwise a -1 while\nthere is no mechanism in place to remove concurrently-invalidated TIDs\nfrom CIC-ed/RiC-ed indexes.\n\nThis concurrently-invalidated check could be done through e.g.\nupdating validate_index to have one more phase that removes unknown /\nincorrect TIDs from the index. As a note: index insertion logic would\nthen also have to be able to handle duplicate TIDs in the index.\n\nRegards,\n\nMatthias van de Meent\n\n\nRegards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 21 Jan 2021 21:08:39 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 3:08 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Re-thinking this, and after some research:\n>\n> Is the behaviour of \"any process that invalidates TIDs in this table\n> (that could be in an index on this table) always holds a lock that\n> conflicts with CIC/RiC on that table\" a requirement of tableams, or is\n> it an implementation-detail?\n>\n> If it is a requirement, then this patch is a +1 for me (and that\n> requirement should be documented in such case), otherwise a -1 while\n> there is no mechanism in place to remove concurrently-invalidated TIDs\n> from CIC-ed/RiC-ed indexes.\n\nI don't believe that the table AM machinery presumes that every kind\nof table AM necessarily has to support VACUUM; or at least, I don't\nthink the original version presumed that. However, if you want it to\nwork some other way, there's not really any infrastructure for that,\neither. Like, if you want to run your own background workers to clean\nup your table, you could, but you'd have to arrange that yourself. If\nyou want a SWEEP command or an OPTIMIZE TABLE command instead of\nVACUUM, you'd have to hack the grammar. Likewise, I don't know that\nthere's any guarantee that CLUSTER would work on a table of any old AM\neither, or that it would do anything useful.\n\nBut having said that, if you have a table AM that *does* support\nVACUUM and CLUSTER, I think it would have to treat TIDs pretty much\njust as we do today. Almost by definition, they have to live with the\nway the rest of the system works. TIDs have to be 6 bytes, and lock\nlevels can't be changed, because there's nothing in table AM that\nwould let you tinker with that stuff. The table AM can opt out of that\nmachinery altogether in favor of just throwing an error, but it can't\nmake it work differently unless somebody extends the API. It's\npossible that this might suck a lot for some AMs. 
For instance, if\nyour AM is an in-memory btree, you might want CLUSTER to work in\nplace, rather than by copying the whole relation file, but I doubt\nit's possible to make that work using the APIs as they exist. Maybe\nsomeday we'll have better ones, but right now we don't.\n\nAs far as the specific question asked here, I don't really understand\nhow it could ever work any differently. If you want to invalidate some\nTIDs in such a way that they can be reused by unrelated tuples - in\nthe case of the heap this would be by making the line pointers\nLP_UNUSED - that has to be coordinated with the indexing machinery. I\ncan imagine a table AM that never reuses TIDs - especially if we\neventually have TIDs > 6 bytes - but I can't imagine a table AM that\nreuses TIDs that might still be present in the index without telling\nthe index. And that seems to be what is being proposed here: if CIC or\nRiC could be putting TIDs into indexes while at the same time some of\nthose very same TIDs were being made available for reuse, that would\nbe chaos, and right now we have no mechanism other than the lock level\nto prevent that. We could invent one, maybe, but it doesn't exist\ntoday, and zheap never tried to invent anything like that.\n\nI agree that, if we do this, the comment should make clear that the\nCIC/RiC lock has to conflict with VACUUM to avoid indexing tuples that\nVACUUM is busy marking LP_UNUSED, and that this is precisely because\nother backends will ignore the CIC/RiC snapshot while determining\nwhether a tuple is live. I don't think this is the case.\nHypothetically speaking, suppose we rejected this patch. Then, suppose\nsomeone proposed to replace ShareUpdateExclusiveLock with two new lock\nlevels VacuumTuplesLock and IndexTheRelationLock each of which\nconflicts with itself and all other lock levels with which\nShareUpdateExclusiveLock conflicts, but the two don't conflict with\neach other. 
AFAIK, that patch would be acceptable today and\nunacceptable after this change. While I'm not arguing that as a reason\nto reject this patch, it's not impossible that somebody could: they'd\njust need to argue that the separated lock levels would have greater\nvalue than this patch. I don't think I believe that, but it's not a\ntotally crazy suggestion. My main point though is that this\nmeaningfully changing some assumptions about how the system works, and\nwe should be sure to be clear about that.\n\nI looked at the patch but don't really understand it; the code in this\narea seems to have changed a lot since I last looked at it. So, I'm\njust commenting mostly on the specific question that was asked, and a\nlittle bit on the theory of the patch, but without expressing an\nopinion on the implementation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jan 2021 12:54:26 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Here's an updated patch.\n\nI haven't added commentary or documentation to account for the new\nassumption, per Matthias' comment and Robert's discussion thereof.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)",
"msg_date": "Mon, 22 Feb 2021 12:56:42 -0300",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2021-Feb-22, �lvaro Herrera wrote:\n\n> Here's an updated patch.\n> \n> I haven't added commentary or documentation to account for the new\n> assumption, per Matthias' comment and Robert's discussion thereof.\n\nDone that. I also restructured the code a bit, since one line in\nComputedXidHorizons was duplicative.\n\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://archives.postgresql.org/message-id/482D1632.8010507@sigaev.ru",
"msg_date": "Mon, 22 Feb 2021 21:15:31 -0300",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 9:15 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Feb-22, Álvaro Herrera wrote:\n>\n> > Here's an updated patch.\n> >\n> > I haven't added commentary or documentation to account for the new\n> > assumption, per Matthias' comment and Robert's discussion thereof.\n>\n> Done that. I also restructured the code a bit, since one line in\n> ComputedXidHorizons was duplicative.\n\nI've looked at the v3 patch and it looks good to me. A minor comment is:\n\n+ /* Catalog tables need to consider all backends. */\n+ h->catalog_oldest_nonremovable =\n+ TransactionIdOlder(h->catalog_oldest_nonremovable, xmin);\n+\n\nI think that the above comment is not accurate since we consider only\nbackends on the same database in some cases.\n\nFWIW, just for the record, around the following original code,\n\n /*\n * Check whether there are replication slots requiring an older xmin.\n */\n h->shared_oldest_nonremovable =\n TransactionIdOlder(h->shared_oldest_nonremovable, h->slot_xmin);\n h->data_oldest_nonremovable =\n TransactionIdOlder(h->data_oldest_nonremovable, h->slot_xmin);\n\n /*\n * The only difference between catalog / data horizons is that the slot's\n * catalog xmin is applied to the catalog one (so catalogs can be accessed\n * for logical decoding). Initialize with data horizon, and then back up\n * further if necessary. 
Have to back up the shared horizon as well, since\n * that also can contain catalogs.\n */\n h->shared_oldest_nonremovable_raw = h->shared_oldest_nonremovable;\n h->shared_oldest_nonremovable =\n TransactionIdOlder(h->shared_oldest_nonremovable,\n h->slot_catalog_xmin);\n h->catalog_oldest_nonremovable = h->data_oldest_nonremovable;\n h->catalog_oldest_nonremovable =\n TransactionIdOlder(h->catalog_oldest_nonremovable,\n h->slot_catalog_xmin);\n\nthe patch makes the following change:\n\n@@ -1847,7 +1871,6 @@ ComputeXidHorizons(ComputeXidHorizonsResult *h)\n h->shared_oldest_nonremovable =\n TransactionIdOlder(h->shared_oldest_nonremovable,\n h->slot_catalog_xmin);\n- h->catalog_oldest_nonremovable = h->data_oldest_nonremovable;\n h->catalog_oldest_nonremovable =\n TransactionIdOlder(h->catalog_oldest_nonremovable,\n h->slot_catalog_xmin);\n\nIn the original code, catalog_oldest_nonremovable is initialized with\ndata_oldest_nonremovable that considered slot_xmin. Therefore, IIUC\ncatalog_oldest_nonremovable is the oldest transaction id among\ndata_oldest_nonremovable, slot_xmin, and slot_catalog_xmin. But with\nthe patch, catalog_oldest_nonremovable doesn't consider slot_xmin. It\nseems okay to me as catalog_oldest_nonremovable is interested in only\nslot_catalog_xmin.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 23 Feb 2021 20:03:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On 2021-Feb-23, Masahiko Sawada wrote:\n\n> I've looked at the v3 patch and it looks good to me. A minor comment is:\n> \n> + /* Catalog tables need to consider all backends. */\n> + h->catalog_oldest_nonremovable =\n> + TransactionIdOlder(h->catalog_oldest_nonremovable, xmin);\n> +\n> \n> I think that the above comment is not accurate since we consider only\n> backends on the same database in some cases.\n\nYou're right. Adjusted.\n\n> the patch makes the following change:\n> \n> @@ -1847,7 +1871,6 @@ ComputeXidHorizons(ComputeXidHorizonsResult *h)\n> h->shared_oldest_nonremovable =\n> TransactionIdOlder(h->shared_oldest_nonremovable,\n> h->slot_catalog_xmin);\n> - h->catalog_oldest_nonremovable = h->data_oldest_nonremovable;\n> h->catalog_oldest_nonremovable =\n> TransactionIdOlder(h->catalog_oldest_nonremovable,\n> h->slot_catalog_xmin);\n> \n> In the original code, catalog_oldest_nonremovable is initialized with\n> data_oldest_nonremovable that considered slot_xmin. Therefore, IIUC\n> catalog_oldest_nonremovable is the oldest transaction id among\n> data_oldest_nonremovable, slot_xmin, and slot_catalog_xmin. But with\n> the patch, catalog_oldest_nonremovable doesn't consider slot_xmin. It\n> seems okay to me as catalog_oldest_nonremovable is interested in only\n> slot_catalog_xmin.\n\nI tend to agree -- it seems safe to ignore slot_xmin in that case.\nHowever, tracing through the maze of xmin vs. catalog_xmin transmission\nacross the whole system is not trivial. I changed the patch so that we\nadjust to slot_xmin first, then to slot_catalog_xmin. I *think* the end\neffect should be the same. But if we want to make that claim to save\nthat one assignment, we can make it in a separate commit. However,\ngiven that this comparison and assignment is outside the locked area, I\ndon't think it's worth bothering with.\n\nThanks for the input! 
I have pushed this patch.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hern�ndez-Novich)\n\n\n",
"msg_date": "Tue, 23 Feb 2021 12:26:23 -0300",
"msg_from": "=?iso-8859-1?Q?=C1lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nhere is a scenario that produces an orphaned function (means it does not \nbelong to any namespace):\n\nSession 1:\n\npostgres=# create schema tobeorph;\nCREATE SCHEMA\n\npostgres=# BEGIN;\nBEGIN\n\npostgres=*# CREATE OR REPLACE FUNCTION tobeorph.bdttime() RETURNS TIMESTAMP AS $$\nDECLARE\n outTS TIMESTAMP;\nBEGIN\n perform pg_sleep(10);\n RETURN CURRENT_TIMESTAMP;\nEND;\n$$ LANGUAGE plpgsql volatile;\nCREATE FUNCTION\n\n=> Don't commit\n\nSession 2:\n\npostgres=# drop schema tobeorph;\nDROP SCHEMA\n\nSession 1:\n\npostgres=*# END;\nCOMMIT\n\nFunction is orphaned:\npostgres=# \\df *.bdttime\n List of functions\n Schema | Name | Result data type | Argument data types | Type\n--------+---------+-----------------------------+---------------------+------\n | bdttime | timestamp without time zone | | func\n(1 row)\n\n\nIt appears that an AccessShareLock on the target namespace is not \nacquired when the function is being created (while, for example, it is \nacquired when creating a relation (through \nRangeVarGetAndCheckCreationNamespace())).\n\nWhile CreateFunction() could lock the object the same way \nRangeVarGetAndCheckCreationNamespace() does, that would leave other \nobjects that belong to namespaces unprotected. Locking the schema in \nQualifiedNameGetCreationNamespace() will protect those objects.\n\nPlease find attached a patch that adds a LockDatabaseObject() call in \nQualifiedNameGetCreationNamespace() to prevent such orphaned situations.\n\nI will add this patch to the next commitfest. I look forward to your \nfeedback.\n\nBertrand",
"msg_date": "Mon, 30 Nov 2020 22:07:29 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "[BUG] orphaned function"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> here is a scenario that produces an orphaned function (means it does not \n> belong to any namespace):\n> [ drop schema before committing function creation ]\n\nHm. Historically we've not been too fussed about preventing such race\nconditions, and I wonder just how far is sane to take it. Should we\nacquire lock on the function's argument/result data types? Its language?\n\nGiven the precedent of RangeVarGetAndCheckCreationNamespace, I'm\nwilling to accept this patch's goals as stated. But it feels like\nwe need some clearer shared understanding of which things we are\nwilling to bother with preventing races for, and which we aren't.\n\n> Please find attached a patch that adds a LockDatabaseObject() call in \n> QualifiedNameGetCreationNamespace() to prevent such orphaned situations.\n\nI don't think that actually succeeds in preventing the race, it'll\njust delay your process till the deletion is committed. But you\nalready have the namespaceId. Note the complex retry logic in\nRangeVarGetAndCheckCreationNamespace; without something on the same\norder, you're not closing the hole in any meaningful way. Likely\nwhat this patch should do is refactor that function so that its guts\ncan be used for other object-creation scenarios. (The fact that\nthis is so far from trivial is one reason I'm not in a hurry to\nextend this sort of protection to other dependencies.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Nov 2020 17:29:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "Hi,\n\nOn 11/30/20 11:29 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>> here is a scenario that produces an orphaned function (means it does not\n>> belong to any namespace):\n>> [ drop schema before committing function creation ]\n> Hm. Historically we've not been too fussed about preventing such race\n> conditions, and I wonder just how far is sane to take it. Should we\n> acquire lock on the function's argument/result data types? Its language?\n>\n> Given the precedent of RangeVarGetAndCheckCreationNamespace, I'm\n> willing to accept this patch's goals as stated.\n\nThanks!\n\nForgot to mention an observed side effect: pg_upgrade failing during the \n\"Creating dump of database schemas\" step, with things like:\n\npg_dump: last built-in OID is 16383\npg_dump: reading extensions\npg_dump: identifying extension members\npg_dump: reading schemas\npg_dump: reading user-defined tables\npg_dump: reading user-defined functions\npg_dump: error: schema with OID 16385 does not exist\n\n> But it feels like\n> we need some clearer shared understanding of which things we are\n> willing to bother with preventing races for, and which we aren't.\n>\n>> Please find attached a patch that adds a LockDatabaseObject() call in\n>> QualifiedNameGetCreationNamespace() to prevent such orphaned situations.\n> I don't think that actually succeeds in preventing the race, it'll\n> just delay your process till the deletion is committed.\n\noooh, i was thinking the other way around: it was protecting the other \nsituation (when the drop schema is waiting for the create function to \nfinish).\n\nThe other scenario you are referring to needs to be addressed as well, \nthanks!\n\n> But you\n> already have the namespaceId. 
Note the complex retry logic in\n> RangeVarGetAndCheckCreationNamespace; without something on the same\n> order, you're not closing the hole in any meaningful way.\n\nunderstood with the scenario you were referring to previously (which was \nnot the one I focused on).\n\n> Likely\n> what this patch should do is refactor that function so that its guts\n> can be used for other object-creation scenarios.\nWill have a look.\n> (The fact that\n> this is so far from trivial is one reason I'm not in a hurry to\n> extend this sort of protection to other dependencies.)\n>\n> regards, tom lane\n\nThanks!\n\nBertrand\n\n\n\n",
"msg_date": "Tue, 1 Dec 2020 10:32:17 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "Hi,\n\nOn 12/1/20 10:32 AM, Drouvot, Bertrand wrote:\n> Hi,\n>\n> On 11/30/20 11:29 PM, Tom Lane wrote:\n>> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>>> here is a scenario that produces an orphaned function (means it does \n>>> not\n>>> belong to any namespace):\n>>> [ drop schema before committing function creation ]\n>> Hm. Historically we've not been too fussed about preventing such race\n>> conditions, and I wonder just how far is sane to take it. Should we\n>> acquire lock on the function's argument/result data types? Its \n>> language?\n>>\n>> Given the precedent of RangeVarGetAndCheckCreationNamespace, I'm\n>> willing to accept this patch's goals as stated.\n>\n> Thanks!\n>\n> Forgot to mention an observed side effect: pg_upgrade failing during \n> the \"Creating dump of database schemas\" step, with things like:\n>\n> pg_dump: last built-in OID is 16383\n> pg_dump: reading extensions\n> pg_dump: identifying extension members\n> pg_dump: reading schemas\n> pg_dump: reading user-defined tables\n> pg_dump: reading user-defined functions\n> pg_dump: error: schema with OID 16385 does not exist\n>\n>> But it feels like\n>> we need some clearer shared understanding of which things we are\n>> willing to bother with preventing races for, and which we aren't.\n>>\n>>> Please find attached a patch that adds a LockDatabaseObject() call in\n>>> QualifiedNameGetCreationNamespace() to prevent such orphaned \n>>> situations.\n>> I don't think that actually succeeds in preventing the race, it'll\n>> just delay your process till the deletion is committed.\n>\n> oooh, i was thinking the other way around: it was protecting the other \n> situation (when the drop schema is waiting for the create function to \n> finish).\n>\n> The other scenario you are referring to needs to be addressed as well, \n> thanks!\n>\n>> But you\n>> already have the namespaceId. 
Note the complex retry logic in\n>> RangeVarGetAndCheckCreationNamespace; without something on the same\n>> order, you're not closing the hole in any meaningful way.\n>\n> understood with the scenario you were referring to previously (which \n> was not the one I focused on).\n>\n>> Likely\n>> what this patch should do is refactor that function so that its guts\n>> can be used for other object-creation scenarios.\n> Will have a look.\n>>\nI ended up making more changes in QualifiedNameGetCreationNamespace \n<https://doxygen.postgresql.org/namespace_8c.html#a2cde9c8897e89ae47a99c103c4f96d31>() \nto mimic the retry() logic that is in \nRangeVarGetAndCheckCreationNamespace().\n\nThe attached v2 patch, now protects the function to be orphaned in those \n2 scenarios:\n\nScenario 1: first, session 1 begin create function, then session 2 drops \nthe schema: drop schema is locked and would produce (once the create \nfunction finishes):\n\nbdt=# 2020-12-02 10:12:46.364 UTC [15810] ERROR: cannot drop schema \ntobeorph because other objects depend on it\n2020-12-02 10:12:46.364 UTC [15810] DETAIL: function tobeorph.bdttime() \ndepends on schema tobeorph\n2020-12-02 10:12:46.364 UTC [15810] HINT: Use DROP ... CASCADE to drop \nthe dependent objects too.\n2020-12-02 10:12:46.364 UTC [15810] STATEMENT: drop schema tobeorph;\n\nScenario 2: first, session 1 begin drop schema, then session 2 creates \nthe function: create function is locked and would produce (once the drop \nschema finishes):\n\n2020-12-02 10:14:29.468 UTC [15822] ERROR: schema \"tobeorph\" does not exist\n\nThanks\n\nBertrand",
"msg_date": "Wed, 2 Dec 2020 11:46:35 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: [BUG] orphaned function"
},
{
"msg_contents": "Le 02/12/2020 à 11:46, Drouvot, Bertrand a écrit :\n>\n> I ended up making more changes in QualifiedNameGetCreationNamespace \n> <https://doxygen.postgresql.org/namespace_8c.html#a2cde9c8897e89ae47a99c103c4f96d31>() \n> to mimic the retry() logic that is in \n> RangeVarGetAndCheckCreationNamespace().\n>\n> The attached v2 patch, now protects the function to be orphaned in \n> those 2 scenarios:\n>\n> Scenario 1: first, session 1 begin create function, then session 2 \n> drops the schema: drop schema is locked and would produce (once the \n> create function finishes):\n>\n> bdt=# 2020-12-02 10:12:46.364 UTC [15810] ERROR: cannot drop schema \n> tobeorph because other objects depend on it\n> 2020-12-02 10:12:46.364 UTC [15810] DETAIL: function \n> tobeorph.bdttime() depends on schema tobeorph\n> 2020-12-02 10:12:46.364 UTC [15810] HINT: Use DROP ... CASCADE to \n> drop the dependent objects too.\n> 2020-12-02 10:12:46.364 UTC [15810] STATEMENT: drop schema tobeorph;\n>\n> Scenario 2: first, session 1 begin drop schema, then session 2 creates \n> the function: create function is locked and would produce (once the \n> drop schema finishes):\n>\n> 2020-12-02 10:14:29.468 UTC [15822] ERROR: schema \"tobeorph\" does not \n> exist\n>\n> Thanks\n>\n> Bertrand\n>\n\nHi,\n\nThis is a review for the orphaned function bug fix \nhttps://www.postgresql.org/message-id/flat/5a9daaae-5538-b209-6279-e903c3ea2157@amazon.com\n\n\nContents & Purpose\n==================\n\nThis patch fixes a historical race condition in PostgreSQL that leaves a \nfunction orphaned from its namespace. It happens when a function is created \nin a namespace inside a transaction that is not yet committed and another \nsession drops the namespace. 
The function becomes orphaned and can no \nlonger be used.\n\npostgres=# \\df *.bdttime\n                              List of functions\n Schema |  Name   |      Result data type       | Argument data types | Type\n--------+---------+-----------------------------+---------------------+------\n        | bdttime | timestamp without time zone |                     | func\n\n\nInitial Run\n===========\n\nThe patch applies cleanly to HEAD. The regression tests all pass \nsuccessfully with the new behavior. Given the example of the case to \nreproduce, the second session now waits until the first session has \ncommitted, and once it is done the following error is thrown:\n\n postgres=# drop schema tobeorph;\n ERROR: cannot drop schema tobeorph because other objects depend on it\n DETAIL: function tobeorph.bdttime() depends on schema tobeorph\n HINT: Use DROP ... CASCADE to drop the dependent objects too.\n\nThe function is properly defined in the catalog.\n\n\nComments\n========\n\nAs Tom mentioned, there are still pending DDL concurrency problems. For \nexample, if session 1 executes the following:\n\n CREATE TYPE public.foo as enum ('one', 'two');\n BEGIN;\n CREATE OR REPLACE FUNCTION tobeorph.bdttime(num foo) RETURNS \nTIMESTAMP AS $$\n DECLARE\n outTS TIMESTAMP;\n BEGIN\n perform pg_sleep(10);\n RETURN CURRENT_TIMESTAMP;\n END;\n $$ LANGUAGE plpgsql volatile;\n\nand session 2 drops the data type before session 1 commits, the \nfunction will be declared with an invalid parameter data type.\n\nThe same problem applies if the return type or the procedural language \nis dropped. I have tried to fix that in ProcedureCreate() by using an \nAccessShareLock for the data types and languages involved in the \nfunction declaration. With this patch the race conditions with parameter \ntypes, return types and PL languages are fixed. Only data types and \nprocedural languages with an Oid greater than FirstBootstrapObjectId will \nbe locked. 
But this is probably more complex than this fix so it \nis proposed as a separate patch \n(v1-003-orphaned_function_types_language.patch) so as not to interfere with \nthe application of Bertrand's patch.\n\n\nConclusion\n==========\n\nI tag this patch (v1-002-orphaned_function.patch) as ready for \ncommitter review as it fixes the reported bug.\n\n\n-- \nGilles Darold\nLzLabs GmbH\nhttps://www.lzlabs.com/",
"msg_date": "Thu, 17 Dec 2020 23:12:26 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: [BUG] orphaned function"
},
{
"msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> The same problem applies if the returned type or the procedural language \n> is dropped. I have tried to fix that in ProcedureCreate() by using an \n> AccessSharedLock for the data types and languages involved in the \n> function declaration. With this patch the race condition with parameters \n> types, return types and PL languages are fixed. Only data types and \n> procedural languages with Oid greater than FirstBootstrapObjectId will \n> be locked locked. But this is probably more complex than this fix so it \n> is proposed as a separate patch \n> (v1-003-orphaned_function_types_language.patch) to not interfere with \n> the applying of Bertran's patch.\n\nIndeed. This points up one of the things that is standing in the way\nof any serious consideration of applying this patch. To my mind there\nare two fundamental, somewhat interrelated considerations that haven't\nbeen meaningfully addressed:\n\n1. What set of objects is it worth trying to remove this race condition\nfor, and with respect to what dependencies? Bertrand gave no\njustification for worrying about function-to-namespace dependencies\nspecifically, and you've likewise given none for expanding the patch\nto consider function-to-datatype dependencies. There are dozens more\ncases that could be considered; but I sure don't want to be processing\nanother ad-hoc fix every time someone thinks they're worried about\nanother specific case.\n\nTaking a practical viewpoint, I'm far more worried about dependencies\nof tables than those of any other kind of object. A messed-up function\ndefinition doesn't cost you much: you can just drop and recreate the\nfunction. A table full of data is way more trouble to recreate, and\nindeed the data might be irreplaceable. 
So it seems pretty silly to\npropose that we try to remove race conditions for functions' dependencies\non datatypes without removing the same race condition for tables'\ndependencies on their column datatypes.\n\nBut any of these options lead to the same question: why stop there?\nAn approach that would actually be defensible, perhaps, is to incorporate\nthis functionality into the dependency mechanism: any time we're about to\ncreate a pg_depend or pg_shdepend entry, take out an AccessShareLock on\nthe referenced object, and then check to make sure it still exists.\nThis would improve the dependency mechanism so it prevents creation-time\nrace conditions, not just attempts to drop dependencies of\nalready-committed objects. It would mean that the one patch would be\nthe end of the problem, rather than looking forward to a steady drip of\nnew fixes.\n\n2. How much are we willing to pay for this added protection? The fact\nthat we've gotten along fine without it for years suggests that the\nanswer needs to be \"not much\". But I'm not sure that that actually\nis the answer, especially if we don't have a policy that says \"we'll\nprotect against these race conditions but no other ones\". I think\nthere are possibly-serious costs in three different areas:\n\n* Speed. How much do all these additional lock acquisitions slow\ndown a typical DDL command?\n\n* Number of locks taken per transaction. This'd be particularly an\nissue for pg_restore runs using single-transaction mode: they'd take\nnot only locks on the objects they create, but also a bunch of\nreference-protection locks. It's not very hard to believe that this'd\nmake a difference in whether restoring a large database is possible\nwithout increasing max_locks_per_transaction.\n\n* Risk of deadlock. 
The reference locks themselves should be sharable,\nso maybe there isn't much of a problem, but I want to see this question\nseriously analyzed not just ignored.\n\nObviously, the magnitude of these costs depends a lot on how many\ndependencies we want to remove the race condition for. But that's\nexactly the reason why I don't want a piecemeal approach of fixing\nsome problems now and some other problems later. That's way too\nmuch like the old recipe for boiling a frog: we could gradually get\ninto serious performance problems without anyone ever having stopped\nto consider the issue.\n\nIn short, I think we should either go for a 100% solution if careful\nanalysis shows we can afford it, or else have a reasoned policy\nwhy we are going to close these specific race conditions and no others\n(implying that we'll reject future patches in the same area). We\nhaven't got either thing in this thread as it stands, so I do not\nthink it's anywhere near being ready to commit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Dec 2020 18:26:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "Le 18/12/2020 à 00:26, Tom Lane a écrit :\n> Gilles Darold <gilles@darold.net> writes:\n>> The same problem applies if the returned type or the procedural language\n>> is dropped. I have tried to fix that in ProcedureCreate() by using an\n>> AccessSharedLock for the data types and languages involved in the\n>> function declaration. With this patch the race condition with parameters\n>> types, return types and PL languages are fixed. Only data types and\n>> procedural languages with Oid greater than FirstBootstrapObjectId will\n>> be locked locked. But this is probably more complex than this fix so it\n>> is proposed as a separate patch\n>> (v1-003-orphaned_function_types_language.patch) to not interfere with\n>> the applying of Bertran's patch.\n> Indeed. This points up one of the things that is standing in the way\n> of any serious consideration of applying this patch. To my mind there\n> are two fundamental, somewhat interrelated considerations that haven't\n> been meaningfully addressed:\n>\n> 1. What set of objects is it worth trying to remove this race condition\n> for, and with respect to what dependencies? Bertrand gave no\n> justification for worrying about function-to-namespace dependencies\n> specifically, and you've likewise given none for expanding the patch\n> to consider function-to-datatype dependencies. There are dozens more\n> cases that could be considered; but I sure don't want to be processing\n> another ad-hoc fix every time someone thinks they're worried about\n> another specific case.\n>\n> Taking a practical viewpoint, I'm far more worried about dependencies\n> of tables than those of any other kind of object. A messed-up function\n> definition doesn't cost you much: you can just drop and recreate the\n> function. A table full of data is way more trouble to recreate, and\n> indeed the data might be irreplaceable. 
So it seems pretty silly to\n> propose that we try to remove race conditions for functions' dependencies\n> on datatypes without removing the same race condition for tables'\n> dependencies on their column datatypes.\n>\n> But any of these options lead to the same question: why stop there?\n> An approach that would actually be defensible, perhaps, is to incorporate\n> this functionality into the dependency mechanism: any time we're about to\n> create a pg_depend or pg_shdepend entry, take out an AccessShareLock on\n> the referenced object, and then check to make sure it still exists.\n> This would improve the dependency mechanism so it prevents creation-time\n> race conditions, not just attempts to drop dependencies of\n> already-committed objects. It would mean that the one patch would be\n> the end of the problem, rather than looking forward to a steady drip of\n> new fixes.\n>\n> 2. How much are we willing to pay for this added protection? The fact\n> that we've gotten along fine without it for years suggests that the\n> answer needs to be \"not much\". But I'm not sure that that actually\n> is the answer, especially if we don't have a policy that says \"we'll\n> protect against these race conditions but no other ones\". I think\n> there are possibly-serious costs in three different areas:\n>\n> * Speed. How much do all these additional lock acquisitions slow\n> down a typical DDL command?\n>\n> * Number of locks taken per transaction. This'd be particularly an\n> issue for pg_restore runs using single-transaction mode: they'd take\n> not only locks on the objects they create, but also a bunch of\n> reference-protection locks. It's not very hard to believe that this'd\n> make a difference in whether restoring a large database is possible\n> without increasing max_locks_per_transaction.\n>\n> * Risk of deadlock. 
The reference locks themselves should be sharable,\n> so maybe there isn't much of a problem, but I want to see this question\n> seriously analyzed not just ignored.\n>\n> Obviously, the magnitude of these costs depends a lot on how many\n> dependencies we want to remove the race condition for. But that's\n> exactly the reason why I don't want a piecemeal approach of fixing\n> some problems now and some other problems later. That's way too\n> much like the old recipe for boiling a frog: we could gradually get\n> into serious performance problems without anyone ever having stopped\n> to consider the issue.\n>\n> In short, I think we should either go for a 100% solution if careful\n> analysis shows we can afford it, or else have a reasoned policy\n> why we are going to close these specific race conditions and no others\n> (implying that we'll reject future patches in the same area). We\n> haven't got either thing in this thread as it stands, so I do not\n> think it's anywhere near being ready to commit.\n>\n> \t\t\tregards, tom lane\n\n\nThanks for the review,\n\n\nHonestly, I have never encountered these race conditions in 25 years of \nPostgreSQL and probably never will, but clearly I don't have the Amazon \ndatabase services workload. I'll let Bertrand decide what to do with the \npatch and address the race conditions with a more global approach \nfollowing your advice if more work is done on this topic. I tag the \npatch as \"Waiting on author\".\n\n\nBest regards,\n\n-- \nGilles Darold\nLzLabs GmbH\nhttp://www.lzlabs.com/\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 10:52:39 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "Hi Bertrand,\n\nOn Fri, Dec 18, 2020 at 6:52 PM Gilles Darold <gilles@darold.net> wrote:\n>\n> Le 18/12/2020 à 00:26, Tom Lane a écrit :\n> > Gilles Darold <gilles@darold.net> writes:\n> >> The same problem applies if the returned type or the procedural language\n> >> is dropped. I have tried to fix that in ProcedureCreate() by using an\n> >> AccessSharedLock for the data types and languages involved in the\n> >> function declaration. With this patch the race condition with parameters\n> >> types, return types and PL languages are fixed. Only data types and\n> >> procedural languages with Oid greater than FirstBootstrapObjectId will\n> >> be locked locked. But this is probably more complex than this fix so it\n> >> is proposed as a separate patch\n> >> (v1-003-orphaned_function_types_language.patch) to not interfere with\n> >> the applying of Bertran's patch.\n> > Indeed. This points up one of the things that is standing in the way\n> > of any serious consideration of applying this patch. To my mind there\n> > are two fundamental, somewhat interrelated considerations that haven't\n> > been meaningfully addressed:\n> >\n> > 1. What set of objects is it worth trying to remove this race condition\n> > for, and with respect to what dependencies? Bertrand gave no\n> > justification for worrying about function-to-namespace dependencies\n> > specifically, and you've likewise given none for expanding the patch\n> > to consider function-to-datatype dependencies. There are dozens more\n> > cases that could be considered; but I sure don't want to be processing\n> > another ad-hoc fix every time someone thinks they're worried about\n> > another specific case.\n> >\n> > Taking a practical viewpoint, I'm far more worried about dependencies\n> > of tables than those of any other kind of object. A messed-up function\n> > definition doesn't cost you much: you can just drop and recreate the\n> > function. 
A table full of data is way more trouble to recreate, and\n> > indeed the data might be irreplaceable. So it seems pretty silly to\n> > propose that we try to remove race conditions for functions' dependencies\n> > on datatypes without removing the same race condition for tables'\n> > dependencies on their column datatypes.\n> >\n> > But any of these options lead to the same question: why stop there?\n> > An approach that would actually be defensible, perhaps, is to incorporate\n> > this functionality into the dependency mechanism: any time we're about to\n> > create a pg_depend or pg_shdepend entry, take out an AccessShareLock on\n> > the referenced object, and then check to make sure it still exists.\n> > This would improve the dependency mechanism so it prevents creation-time\n> > race conditions, not just attempts to drop dependencies of\n> > already-committed objects. It would mean that the one patch would be\n> > the end of the problem, rather than looking forward to a steady drip of\n> > new fixes.\n> >\n> > 2. How much are we willing to pay for this added protection? The fact\n> > that we've gotten along fine without it for years suggests that the\n> > answer needs to be \"not much\". But I'm not sure that that actually\n> > is the answer, especially if we don't have a policy that says \"we'll\n> > protect against these race conditions but no other ones\". I think\n> > there are possibly-serious costs in three different areas:\n> >\n> > * Speed. How much do all these additional lock acquisitions slow\n> > down a typical DDL command?\n> >\n> > * Number of locks taken per transaction. This'd be particularly an\n> > issue for pg_restore runs using single-transaction mode: they'd take\n> > not only locks on the objects they create, but also a bunch of\n> > reference-protection locks. 
It's not very hard to believe that this'd\n> > make a difference in whether restoring a large database is possible\n> > without increasing max_locks_per_transaction.\n> >\n> > * Risk of deadlock. The reference locks themselves should be sharable,\n> > so maybe there isn't much of a problem, but I want to see this question\n> > seriously analyzed not just ignored.\n> >\n> > Obviously, the magnitude of these costs depends a lot on how many\n> > dependencies we want to remove the race condition for. But that's\n> > exactly the reason why I don't want a piecemeal approach of fixing\n> > some problems now and some other problems later. That's way too\n> > much like the old recipe for boiling a frog: we could gradually get\n> > into serious performance problems without anyone ever having stopped\n> > to consider the issue.\n> >\n> > In short, I think we should either go for a 100% solution if careful\n> > analysis shows we can afford it, or else have a reasoned policy\n> > why we are going to close these specific race conditions and no others\n> > (implying that we'll reject future patches in the same area). We\n> > haven't got either thing in this thread as it stands, so I do not\n> > think it's anywhere near being ready to commit.\n> >\n> > regards, tom lane\n>\n>\n> Thanks for the review,\n>\n>\n> Honestly I have never met these races condition in 25 years of\n> PostgreSQL and will probably never but clearly I don't have the Amazon\n> database services workload. I let Bertrand decide what to do with the\n> patch and address the races condition with a more global approach\n> following your advices if more work is done on this topic. I tag the\n> patch as \"Waiting on author\".\n\nStatus update for a commitfest entry.\n\nThis patch has been \"Waiting on Author\" and inactive for almost 2\nmonths. Are you planning to work on addressing the review comments Tom\nsent? 
This is categorized as a bug fix on the CF app but I'm going to set\nit to \"Returned with Feedback\" barring objections as the patch got\nfeedback but there has been no activity for a long time.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Feb 2021 11:26:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi Bertrand,\n>\n> On Fri, Dec 18, 2020 at 6:52 PM Gilles Darold <gilles@darold.net> wrote:\n> >\n> > Le 18/12/2020 à 00:26, Tom Lane a écrit :\n> > > Gilles Darold <gilles@darold.net> writes:\n> > >> The same problem applies if the returned type or the procedural language\n> > >> is dropped. I have tried to fix that in ProcedureCreate() by using an\n> > >> AccessSharedLock for the data types and languages involved in the\n> > >> function declaration. With this patch the race condition with parameters\n> > >> types, return types and PL languages are fixed. Only data types and\n> > >> procedural languages with Oid greater than FirstBootstrapObjectId will\n> > >> be locked locked. But this is probably more complex than this fix so it\n> > >> is proposed as a separate patch\n> > >> (v1-003-orphaned_function_types_language.patch) to not interfere with\n> > >> the applying of Bertran's patch.\n> > > Indeed. This points up one of the things that is standing in the way\n> > > of any serious consideration of applying this patch. To my mind there\n> > > are two fundamental, somewhat interrelated considerations that haven't\n> > > been meaningfully addressed:\n> > >\n> > > 1. What set of objects is it worth trying to remove this race condition\n> > > for, and with respect to what dependencies? Bertrand gave no\n> > > justification for worrying about function-to-namespace dependencies\n> > > specifically, and you've likewise given none for expanding the patch\n> > > to consider function-to-datatype dependencies. There are dozens more\n> > > cases that could be considered; but I sure don't want to be processing\n> > > another ad-hoc fix every time someone thinks they're worried about\n> > > another specific case.\n> > >\n> > > Taking a practical viewpoint, I'm far more worried about dependencies\n> > > of tables than those of any other kind of object. 
A messed-up function\n> > > definition doesn't cost you much: you can just drop and recreate the\n> > > function. A table full of data is way more trouble to recreate, and\n> > > indeed the data might be irreplaceable. So it seems pretty silly to\n> > > propose that we try to remove race conditions for functions' dependencies\n> > > on datatypes without removing the same race condition for tables'\n> > > dependencies on their column datatypes.\n> > >\n> > > But any of these options lead to the same question: why stop there?\n> > > An approach that would actually be defensible, perhaps, is to incorporate\n> > > this functionality into the dependency mechanism: any time we're about to\n> > > create a pg_depend or pg_shdepend entry, take out an AccessShareLock on\n> > > the referenced object, and then check to make sure it still exists.\n> > > This would improve the dependency mechanism so it prevents creation-time\n> > > race conditions, not just attempts to drop dependencies of\n> > > already-committed objects. It would mean that the one patch would be\n> > > the end of the problem, rather than looking forward to a steady drip of\n> > > new fixes.\n> > >\n> > > 2. How much are we willing to pay for this added protection? The fact\n> > > that we've gotten along fine without it for years suggests that the\n> > > answer needs to be \"not much\". But I'm not sure that that actually\n> > > is the answer, especially if we don't have a policy that says \"we'll\n> > > protect against these race conditions but no other ones\". I think\n> > > there are possibly-serious costs in three different areas:\n> > >\n> > > * Speed. How much do all these additional lock acquisitions slow\n> > > down a typical DDL command?\n> > >\n> > > * Number of locks taken per transaction. This'd be particularly an\n> > > issue for pg_restore runs using single-transaction mode: they'd take\n> > > not only locks on the objects they create, but also a bunch of\n> > > reference-protection locks. 
It's not very hard to believe that this'd\n> > > make a difference in whether restoring a large database is possible\n> > > without increasing max_locks_per_transaction.\n> > >\n> > > * Risk of deadlock. The reference locks themselves should be sharable,\n> > > so maybe there isn't much of a problem, but I want to see this question\n> > > seriously analyzed not just ignored.\n> > >\n> > > Obviously, the magnitude of these costs depends a lot on how many\n> > > dependencies we want to remove the race condition for. But that's\n> > > exactly the reason why I don't want a piecemeal approach of fixing\n> > > some problems now and some other problems later. That's way too\n> > > much like the old recipe for boiling a frog: we could gradually get\n> > > into serious performance problems without anyone ever having stopped\n> > > to consider the issue.\n> > >\n> > > In short, I think we should either go for a 100% solution if careful\n> > > analysis shows we can afford it, or else have a reasoned policy\n> > > why we are going to close these specific race conditions and no others\n> > > (implying that we'll reject future patches in the same area). We\n> > > haven't got either thing in this thread as it stands, so I do not\n> > > think it's anywhere near being ready to commit.\n> > >\n> > > regards, tom lane\n> >\n> >\n> > Thanks for the review,\n> >\n> >\n> > Honestly I have never met these races condition in 25 years of\n> > PostgreSQL and will probably never but clearly I don't have the Amazon\n> > database services workload. I let Bertrand decide what to do with the\n> > patch and address the races condition with a more global approach\n> > following your advices if more work is done on this topic. I tag the\n> > patch as \"Waiting on author\".\n>\n> Status update for a commitfest entry.\n>\n> This patch has been \"Waiting on Author\" and inactive for almost 2\n> months. Are you planning to work on address the review comments Tom\n> sent? 
This is categorized as bug fixes on CF app but I'm going to set\n> it to \"Returned with Feedback\" barring objections as the patch got\n> feedback but there has been no activity for a long time.\n>\n\nThis patch, which you submitted to this CommitFest, has been awaiting\nyour attention for more than one month. As such, we have moved it to\n\"Returned with Feedback\" and removed it from the reviewing queue.\nDepending on timing, this may be reversible, so let us know if there\nare extenuating circumstances. In any case, you are welcome to address\nthe feedback you have received, and resubmit the patch to the next\nCommitFest.\n\nThank you for contributing to PostgreSQL.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Feb 2021 22:37:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "Hi Sawada,\n\nOn 2/1/21 2:37 PM, Masahiko Sawada wrote:\n> On Mon, Feb 1, 2021 at 11:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> Status update for a commitfest entry.\n>>\n>> This patch has been \"Waiting on Author\" and inactive for almost 2\n>> months. Are you planning to work on address the review comments Tom\n>> sent? This is categorized as bug fixes on CF app but I'm going to set\n>> it to \"Returned with Feedback\" barring objections as the patch got\n>> feedback but there has been no activity for a long time.\n>>\n> This patch, which you submitted to this CommitFest, has been awaiting\n> your attention for more than one month. As such, we have moved it to\n> \"Returned with Feedback\" and removed it from the reviewing queue.\n> Depending on timing, this may be reversable, so let us know if there\n> are extenuating circumstances. In any case, you are welcome to address\n> the feedback you have received, and resubmit the patch to the next\n> CommitFest.\n>\nI was planning to reply to Tom's comments later this week (sorry for the \ndelay).\n\nPlease let me know if it's possible to put it back into the reviewing \nqueue (if not I'll open a new thread).\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Mon, 1 Feb 2021 15:06:07 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:06 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi Sawada,\n>\n> On 2/1/21 2:37 PM, Masahiko Sawada wrote:\n> > On Mon, Feb 1, 2021 at 11:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >> Status update for a commitfest entry.\n> >>\n> >> This patch has been \"Waiting on Author\" and inactive for almost 2\n> >> months. Are you planning to work on address the review comments Tom\n> >> sent? This is categorized as bug fixes on CF app but I'm going to set\n> >> it to \"Returned with Feedback\" barring objections as the patch got\n> >> feedback but there has been no activity for a long time.\n> >>\n> > This patch, which you submitted to this CommitFest, has been awaiting\n> > your attention for more than one month. As such, we have moved it to\n> > \"Returned with Feedback\" and removed it from the reviewing queue.\n> > Depending on timing, this may be reversable, so let us know if there\n> > are extenuating circumstances. In any case, you are welcome to address\n> > the feedback you have received, and resubmit the patch to the next\n> > CommitFest.\n> >\n> I had in mind to reply to Tom's comments later this week (sorry for the\n> delay).\n>\n> Please let me know if it's possible to put it back to the reviewing\n> queue (if not I'll open a new thread).\n\nThank you for letting me know.\n\nI've moved this patch to the next commitfest. Please check the\nstatus on the CF app.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Feb 2021 23:17:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "Hi,\n\n+ Jim and Nathan.\n\nOn 12/18/20 12:26 AM, Tom Lane wrote:\n> Gilles Darold <gilles@darold.net> writes:\n>> The same problem applies if the returned type or the procedural language\n>> is dropped. I have tried to fix that in ProcedureCreate() by using an\n>> AccessSharedLock for the data types and languages involved in the\n>> function declaration. With this patch the race condition with parameters\n>> types, return types and PL languages are fixed. Only data types and\n>> procedural languages with Oid greater than FirstBootstrapObjectId will\n>> be locked locked. But this is probably more complex than this fix so it\n>> is proposed as a separate patch\n>> (v1-003-orphaned_function_types_language.patch) to not interfere with\n>> the applying of Bertran's patch.\n> Indeed. This points up one of the things that is standing in the way\n> of any serious consideration of applying this patch. To my mind there\n> are two fundamental, somewhat interrelated considerations that haven't\n> been meaningfully addressed:\n>\n> 1. What set of objects is it worth trying to remove this race condition\n> for, and with respect to what dependencies? Bertrand gave no\n> justification for worrying about function-to-namespace dependencies\n> specifically,\n\nI focused on this one because this is the most (if not the only) one \nwe have observed so far.\n\nThose orphaned functions break things like pg_dump and pg_upgrade, and \nthat's how we discovered them.\n\nThat being said, I agree that there is no reason to focus only on these \nand we should instead try to \"fix\" them all.\n> and you've likewise given none for expanding the patch\n> to consider function-to-datatype dependencies. 
There are dozens more\n> cases that could be considered; but I sure don't want to be processing\n> another ad-hoc fix every time someone thinks they're worried about\n> another specific case.\n>\n> Taking a practical viewpoint, I'm far more worried about dependencies\n> of tables than those of any other kind of object. A messed-up function\n> definition doesn't cost you much: you can just drop and recreate the\n> function. A table full of data is way more trouble to recreate, and\n> indeed the data might be irreplaceable. So it seems pretty silly to\n> propose that we try to remove race conditions for functions' dependencies\n> on datatypes without removing the same race condition for tables'\n> dependencies on their column datatypes.\n>\n> But any of these options lead to the same question: why stop there?\n> An approach that would actually be defensible, perhaps, is to incorporate\n> this functionality into the dependency mechanism: any time we're about to\n> create a pg_depend or pg_shdepend entry, take out an AccessShareLock on\n> the referenced object, and then check to make sure it still exists.\n> This would improve the dependency mechanism so it prevents creation-time\n> race conditions, not just attempts to drop dependencies of\n> already-committed objects. 
It would mean that the one patch would be\n> the end of the problem, rather than looking forward to a steady drip of\n> new fixes.\n\nAgree that working with pg_depend and pg_shdepend is the way to go.\n\nInstead of using the locking machinery (and then make the one that would \ncurrently produce the orphaned object waiting), Jim proposed another \napproach: make use of special snapshot (like a dirty one depending of \nthe case).\n\nThat way instead of locking we could instead report an error, something \nlike this:\n\npostgres=# drop schema tobeorph;\nERROR: cannot drop schema tobeorph because other objects that are \ncurrently under creation depend on it\nDETAIL: function under creation tobeorph.bdttime() depends on schema \ntobeorph\n\n>\n> 2. How much are we willing to pay for this added protection? The fact\n> that we've gotten along fine without it for years suggests that the\n> answer needs to be \"not much\". But I'm not sure that that actually\n> is the answer, especially if we don't have a policy that says \"we'll\n> protect against these race conditions but no other ones\". I think\n> there are possibly-serious costs in three different areas:\n>\n> * Speed. How much do all these additional lock acquisitions slow\n> down a typical DDL command?\n>\n> * Number of locks taken per transaction. This'd be particularly an\n> issue for pg_restore runs using single-transaction mode: they'd take\n> not only locks on the objects they create, but also a bunch of\n> reference-protection locks. It's not very hard to believe that this'd\n> make a difference in whether restoring a large database is possible\n> without increasing max_locks_per_transaction.\n>\n> * Risk of deadlock. 
The reference locks themselves should be sharable,\n> so maybe there isn't much of a problem, but I want to see this question\n> seriously analyzed not just ignored.\nAll of this should be mitigated with the \"special snapshot\" approach.\n>\n> Obviously, the magnitude of these costs depends a lot on how many\n> dependencies we want to remove the race condition for. But that's\n> exactly the reason why I don't want a piecemeal approach of fixing\n> some problems now and some other problems later. That's way too\n> much like the old recipe for boiling a frog: we could gradually get\n> into serious performance problems without anyone ever having stopped\n> to consider the issue.\n>\n> In short, I think we should either go for a 100% solution if careful\n> analysis shows we can afford it, or else have a reasoned policy\n> why we are going to close these specific race conditions and no others\n> (implying that we'll reject future patches in the same area).\n\nAgree.\n\nWhat about taking the 100% solution way with the \"special snapshot\" \napproach?\n\nIf you think that could make sense then we (Jim, Nathan and I) can work \non a patch with this approach and come back with a proposal.\n\nBertrand\n\n\n",
"msg_date": "Tue, 2 Feb 2021 09:15:05 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "On 2021-Feb-02, Drouvot, Bertrand wrote:\n\n> On 12/18/20 12:26 AM, Tom Lane wrote:\n\n> > But any of these options lead to the same question: why stop there?\n> > An approach that would actually be defensible, perhaps, is to incorporate\n> > this functionality into the dependency mechanism: any time we're about to\n> > create a pg_depend or pg_shdepend entry, take out an AccessShareLock on\n> > the referenced object, and then check to make sure it still exists.\n> > This would improve the dependency mechanism so it prevents creation-time\n> > race conditions, not just attempts to drop dependencies of\n> > already-committed objects.\n> \n> Agree that working with pg_depend and pg_shdepend is the way to go.\n> \n> Instead of using the locking machinery (and then make the one that\n> would currently produce the orphaned object waiting), Jim proposed\n> another approach: make use of special snapshot (like a dirty one\n> depending of the case).\n> \n> That way instead of locking we could instead report an error,\n> something like this:\n> \n> postgres=# drop schema tobeorph;\n> ERROR:� cannot drop schema tobeorph because other objects that are currently under creation depend on it\n> DETAIL:� function under creation tobeorph.bdttime() depends on schema tobeorph\n\nSounds like an idea worth trying. What are the semantics of that\nspecial snapshot? Why do we need a special snapshot at all -- doesn't\nCatalogSnapshot serve?\n\nThis item is classified as a bug-fix in the commitfest, but it doesn't\nsound like something we can actually back-patch. In that light, maybe\nit's not an actual bugfix, but a new safety feature.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Sat, 27 Mar 2021 11:13:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "Hi Alvaro,\n\nThanks for the feedback!\n\nOn 3/27/21 3:13 PM, Alvaro Herrera wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On 2021-Feb-02, Drouvot, Bertrand wrote:\n>\n>> On 12/18/20 12:26 AM, Tom Lane wrote:\n>>> But any of these options lead to the same question: why stop there?\n>>> An approach that would actually be defensible, perhaps, is to incorporate\n>>> this functionality into the dependency mechanism: any time we're about to\n>>> create a pg_depend or pg_shdepend entry, take out an AccessShareLock on\n>>> the referenced object, and then check to make sure it still exists.\n>>> This would improve the dependency mechanism so it prevents creation-time\n>>> race conditions, not just attempts to drop dependencies of\n>>> already-committed objects.\n>> Agree that working with pg_depend and pg_shdepend is the way to go.\n>>\n>> Instead of using the locking machinery (and then make the one that\n>> would currently produce the orphaned object waiting), Jim proposed\n>> another approach: make use of special snapshot (like a dirty one\n>> depending of the case).\n>>\n>> That way instead of locking we could instead report an error,\n>> something like this:\n>>\n>> postgres=# drop schema tobeorph;\n>> ERROR: cannot drop schema tobeorph because other objects that are currently under creation depend on it\n>> DETAIL: function under creation tobeorph.bdttime() depends on schema tobeorph\n> Sounds like an idea worth trying.\n\nGreat, if no objections is coming then I'll work on a patch proposal.\n\n\n> What are the semantics of that\n> special snapshot? 
Why do we need a special snapshot at all -- doesn't\n> CatalogSnapshot serve?\n\nYes, the CatalogSnapshot should serve the need.\n\nBy \"special\" I meant \"dirty\" for example.\n\n>\n> This item is classified as a bug-fix in the commitfest, but it doesn't\n> sound like something we can actually back-patch.\n\nWhy couldn't it be back-patchable?\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Mon, 29 Mar 2021 13:20:24 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] orphaned function"
},
{
"msg_contents": "On 3/29/21 7:20 AM, Drouvot, Bertrand wrote:\n>>\n>> This item is classified as a bug-fix in the commitfest, but it doesn't\n>> sound like something we can actually back-patch.\n\nSince it looks like a new patch is required and this probably should not \nbe classified as a bug fix, marking Returned with Feedback.\n\nPlease resubmit to the next CF when you have a new patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:55:17 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] orphaned function"
}
] |
[
{
"msg_contents": "Hi,\nIn our testPgRegressTrigger test log, I saw the following (this was for a\nrelatively old version of PG):\n\n197859 [ts-1]\n ../../../../../../src/postgres/src/backend/commands/indexcmds.c:1062:22:\nruntime error: null pointer passed as argument 2, which is declared to\nnever be null\n197860 [ts-1]\n /opt/yb-build/brew/linuxbrew-20181203T161736v9/include/string.h:43:28:\nnote: nonnull attribute specified here\n197861 [ts-1] #0 0xacbd0f in DefineIndex\n$YB_SRC_ROOT/src/postgres/src/backend/commands/../../../../../../src/postgres/src/backend/commands/indexcmds.c:1062:4\n197862 [ts-1] #1 0x11441e0 in ProcessUtilitySlow\n$YB_SRC_ROOT/src/postgres/src/backend/tcop/../../../../../../src/postgres/src/backend/tcop/utility.c:1436:7\n197863 [ts-1] #2 0x114141f in standard_ProcessUtility\n$YB_SRC_ROOT/src/postgres/src/backend/tcop/../../../../../../src/postgres/src/backend/tcop/utility.c:962:4\n197864 [ts-1] #3 0x1140b65 in YBProcessUtilityDefaultHook\n$YB_SRC_ROOT/src/postgres/src/backend/tcop/../../../../../../src/postgres/src/backend/tcop/utility.c:3574:3\n197865 [ts-1] #4 0x7f47d4950eac in pgss_ProcessUtility\n$YB_SRC_ROOT/src/postgres/contrib/pg_stat_statements/../../../../../src/postgres/contrib/pg_stat_statements/pg_stat_statements.c:1120:4\n\nThis was the line runtime error was raised:\n\n memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n\n From RelationBuildPartitionDesc we can see that:\n\n\tif (nparts > 0)\n\t{\n\t\tPartitionBoundInfo boundinfo;\n\t\tint\t\t *mapping;\n\t\tint\t\t\tnext_index = 0;\n\n\t\tresult->oids = (Oid *) palloc0(nparts * sizeof(Oid));\n\nThe cause was oids field was not assigned due to nparts being 0.\nThis is verified by additional logging added just prior to the memcpy call.\n\nI want to get the community's opinion on whether a null check should be\nadded prior to the memcpy() call.\n\nCheers\n\nHi,In our testPgRegressTrigger test log, I saw the following (this was for a relatively old version of PG):197859 
[ts-1] ../../../../../../src/postgres/src/backend/commands/indexcmds.c:1062:22: runtime error: null pointer passed as argument 2, which is declared to never be null197860 [ts-1] /opt/yb-build/brew/linuxbrew-20181203T161736v9/include/string.h:43:28: note: nonnull attribute specified here197861 [ts-1] #0 0xacbd0f in DefineIndex $YB_SRC_ROOT/src/postgres/src/backend/commands/../../../../../../src/postgres/src/backend/commands/indexcmds.c:1062:4197862 [ts-1] #1 0x11441e0 in ProcessUtilitySlow $YB_SRC_ROOT/src/postgres/src/backend/tcop/../../../../../../src/postgres/src/backend/tcop/utility.c:1436:7197863 [ts-1] #2 0x114141f in standard_ProcessUtility $YB_SRC_ROOT/src/postgres/src/backend/tcop/../../../../../../src/postgres/src/backend/tcop/utility.c:962:4197864 [ts-1] #3 0x1140b65 in YBProcessUtilityDefaultHook $YB_SRC_ROOT/src/postgres/src/backend/tcop/../../../../../../src/postgres/src/backend/tcop/utility.c:3574:3197865 [ts-1] #4 0x7f47d4950eac in pgss_ProcessUtility $YB_SRC_ROOT/src/postgres/contrib/pg_stat_statements/../../../../../src/postgres/contrib/pg_stat_statements/pg_stat_statements.c:1120:4This was the line runtime error was raised: memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);From RelationBuildPartitionDesc we can see that:\tif (nparts > 0)\n\t{\n\t\tPartitionBoundInfo boundinfo;\n\t\tint\t\t *mapping;\n\t\tint\t\t\tnext_index = 0;\n\n\t\tresult->oids = (Oid *) palloc0(nparts * sizeof(Oid));The cause was oids field was not assigned due to nparts being 0.This is verified by additional logging added just prior to the memcpy call.I want to get the community's opinion on whether a null check should be added prior to the memcpy() call.Cheers",
"msg_date": "Mon, 30 Nov 2020 14:29:11 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "runtime error copying oids field"
},
{
"msg_contents": "On 2020-Nov-30, Zhihong Yu wrote:\n\n> This was the line runtime error was raised:\n> \n> memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n> \n> From RelationBuildPartitionDesc we can see that:\n> \n> \tif (nparts > 0)\n> \t{\n> \t\tPartitionBoundInfo boundinfo;\n> \t\tint\t\t *mapping;\n> \t\tint\t\t\tnext_index = 0;\n> \n> \t\tresult->oids = (Oid *) palloc0(nparts * sizeof(Oid));\n> \n> The cause was oids field was not assigned due to nparts being 0.\n> This is verified by additional logging added just prior to the memcpy call.\n> \n> I want to get the community's opinion on whether a null check should be\n> added prior to the memcpy() call.\n\nAs far as I understand, we do want to avoid memcpy's of null pointers;\nsee [1].\n\nIn this case I think it'd be sane to skip the complete block, not just\nthe memcpy, something like\n\ndiff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\nindex ca24620fd0..d35deb433a 100644\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -1163,15 +1163,17 @@ DefineIndex(Oid relationId,\n \n \tif (partitioned)\n \t{\n+\t\tPartitionDesc partdesc;\n+\n \t\t/*\n \t\t * Unless caller specified to skip this step (via ONLY), process each\n \t\t * partition to make sure they all contain a corresponding index.\n \t\t *\n \t\t * If we're called internally (no stmt->relation), recurse always.\n \t\t */\n-\t\tif (!stmt->relation || stmt->relation->inh)\n+\t\tpartdesc = RelationGetPartitionDesc(rel);\n+\t\tif ((!stmt->relation || stmt->relation->inh) && partdesc->nparts > 0)\n \t\t{\n-\t\t\tPartitionDesc partdesc = RelationGetPartitionDesc(rel);\n \t\t\tint\t\t\tnparts = partdesc->nparts;\n \t\t\tOid\t\t *part_oids = palloc(sizeof(Oid) * nparts);\n \t\t\tbool\t\tinvalidate_parent = false;\n\n[1] https://www.postgresql.org/message-id/flat/20200904023648.GB3426768%40rfd.leadboat.com\n\n\n",
"msg_date": "Mon, 30 Nov 2020 20:01:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: runtime error copying oids field"
},
{
"msg_contents": "Hi,\nSee attached patch which is along the line Alvaro outlined.\n\nCheers\n\nOn Mon, Nov 30, 2020 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2020-Nov-30, Zhihong Yu wrote:\n>\n> > This was the line runtime error was raised:\n> >\n> > memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n> >\n> > From RelationBuildPartitionDesc we can see that:\n> >\n> > if (nparts > 0)\n> > {\n> > PartitionBoundInfo boundinfo;\n> > int *mapping;\n> > int next_index = 0;\n> >\n> > result->oids = (Oid *) palloc0(nparts * sizeof(Oid));\n> >\n> > The cause was oids field was not assigned due to nparts being 0.\n> > This is verified by additional logging added just prior to the memcpy\n> call.\n> >\n> > I want to get the community's opinion on whether a null check should be\n> > added prior to the memcpy() call.\n>\n> As far as I understand, we do want to avoid memcpy's of null pointers;\n> see [1].\n>\n> In this case I think it'd be sane to skip the complete block, not just\n> the memcpy, something like\n>\n> diff --git a/src/backend/commands/indexcmds.c\n> b/src/backend/commands/indexcmds.c\n> index ca24620fd0..d35deb433a 100644\n> --- a/src/backend/commands/indexcmds.c\n> +++ b/src/backend/commands/indexcmds.c\n> @@ -1163,15 +1163,17 @@ DefineIndex(Oid relationId,\n>\n> if (partitioned)\n> {\n> + PartitionDesc partdesc;\n> +\n> /*\n> * Unless caller specified to skip this step (via ONLY),\n> process each\n> * partition to make sure they all contain a corresponding\n> index.\n> *\n> * If we're called internally (no stmt->relation), recurse\n> always.\n> */\n> - if (!stmt->relation || stmt->relation->inh)\n> + partdesc = RelationGetPartitionDesc(rel);\n> + if ((!stmt->relation || stmt->relation->inh) &&\n> partdesc->nparts > 0)\n> {\n> - PartitionDesc partdesc =\n> RelationGetPartitionDesc(rel);\n> int nparts = partdesc->nparts;\n> Oid *part_oids = palloc(sizeof(Oid)\n> * nparts);\n> bool invalidate_parent = false;\n>\n> [1]\n> 
https://www.postgresql.org/message-id/flat/20200904023648.GB3426768%40rfd.leadboat.com\n>",
"msg_date": "Mon, 30 Nov 2020 15:24:42 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: runtime error copying oids field"
},
{
"msg_contents": "Alvaro, et al:\nPlease let me know how to proceed with the patch.\n\nRunning test suite with the patch showed no regression.\n\nCheers\n\nOn Mon, Nov 30, 2020 at 3:24 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> See attached patch which is along the line Alvaro outlined.\n>\n> Cheers\n>\n> On Mon, Nov 30, 2020 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>\n>> On 2020-Nov-30, Zhihong Yu wrote:\n>>\n>> > This was the line runtime error was raised:\n>> >\n>> > memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n>> >\n>> > From RelationBuildPartitionDesc we can see that:\n>> >\n>> > if (nparts > 0)\n>> > {\n>> > PartitionBoundInfo boundinfo;\n>> > int *mapping;\n>> > int next_index = 0;\n>> >\n>> > result->oids = (Oid *) palloc0(nparts * sizeof(Oid));\n>> >\n>> > The cause was oids field was not assigned due to nparts being 0.\n>> > This is verified by additional logging added just prior to the memcpy\n>> call.\n>> >\n>> > I want to get the community's opinion on whether a null check should be\n>> > added prior to the memcpy() call.\n>>\n>> As far as I understand, we do want to avoid memcpy's of null pointers;\n>> see [1].\n>>\n>> In this case I think it'd be sane to skip the complete block, not just\n>> the memcpy, something like\n>>\n>> diff --git a/src/backend/commands/indexcmds.c\n>> b/src/backend/commands/indexcmds.c\n>> index ca24620fd0..d35deb433a 100644\n>> --- a/src/backend/commands/indexcmds.c\n>> +++ b/src/backend/commands/indexcmds.c\n>> @@ -1163,15 +1163,17 @@ DefineIndex(Oid relationId,\n>>\n>> if (partitioned)\n>> {\n>> + PartitionDesc partdesc;\n>> +\n>> /*\n>> * Unless caller specified to skip this step (via ONLY),\n>> process each\n>> * partition to make sure they all contain a\n>> corresponding index.\n>> *\n>> * If we're called internally (no stmt->relation),\n>> recurse always.\n>> */\n>> - if (!stmt->relation || stmt->relation->inh)\n>> + partdesc = RelationGetPartitionDesc(rel);\n>> + if 
((!stmt->relation || stmt->relation->inh) &&\n>> partdesc->nparts > 0)\n>> {\n>> - PartitionDesc partdesc =\n>> RelationGetPartitionDesc(rel);\n>> int nparts = partdesc->nparts;\n>> Oid *part_oids =\n>> palloc(sizeof(Oid) * nparts);\n>> bool invalidate_parent = false;\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/flat/20200904023648.GB3426768%40rfd.leadboat.com\n>>\n>\n",
"msg_date": "Mon, 30 Nov 2020 19:49:34 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: runtime error copying oids field"
},
{
"msg_contents": "On 2020-Nov-30, Zhihong Yu wrote:\n\n> Alvaro, et al:\n> Please let me know how to proceed with the patch.\n> \n> Running test suite with the patch showed no regression.\n\nThat's good to hear. I'll get it pushed today. Thanks for following\nup.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 10:01:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: runtime error copying oids field"
},
{
"msg_contents": "On 2020-Dec-01, Alvaro Herrera wrote:\n\n> On 2020-Nov-30, Zhihong Yu wrote:\n> \n> > Alvaro, et al:\n> > Please let me know how to proceed with the patch.\n> > \n> > Running test suite with the patch showed no regression.\n> \n> That's good to hear. I'll get it pushed today. Thanks for following\n> up.\n\nDone, thanks for reporting this.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 12:02:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: runtime error copying oids field"
}
] |
[
{
"msg_contents": "I've been investigating parallelizing certain correlated subqueries,\nand during that work stumbled across the fact that\nset_rel_consider_parallel disallows parallel query on what seems like\na fairly simple case.\n\nConsider this query:\n\nselect t.unique1\nfrom tenk1 t\njoin lateral (select t.unique1 from tenk1 offset 0) l on true;\n\nCurrent set_rel_consider_parallel sets consider_parallel=false on the\nsubquery rel because it has a limit/offset. That restriction makes a\nlot of sense when we have a subquery whose results conceptually need\nto be \"shared\" (or at least be the same) across multiple workers\n(indeed the relevant comment in that function notes that cases where\nwe could prove a unique ordering would also qualify, but punts on\nimplementing that due to complexity). But if the subquery is LATERAL,\nthen no such conceptual restriction.\n\nIf we change the code slightly to allow considering parallel query\neven in the face of LIMIT/OFFSET for LATERAL subqueries, then our\nquery above changes from the following plan:\n\n Nested Loop\n Output: t.unique1\n -> Gather\n Output: t.unique1\n Workers Planned: 2\n -> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 t\n Output: t.unique1\n -> Gather\n Output: NULL::integer\n Workers Planned: 2\n -> Parallel Index Only Scan using tenk1_hundred on public.tenk1\n Output: NULL::integer\n\nto this plan:\n\n Gather\n Output: t.unique1\n Workers Planned: 2\n -> Nested Loop\n Output: t.unique1\n -> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 t\n Output: t.unique1\n -> Index Only Scan using tenk1_hundred on public.tenk1\n Output: NULL::integer\n\nThe code change itself is quite simple (1 line). 
As far as I can tell\nwe don't need to expressly check parallel safety of the limit/offset\nexpressions; that appears to happen elsewhere (and that makes sense\nsince the RTE_RELATION case doesn't check those clauses either).\n\nIf I'm missing something about the safety of this (or any other\nissue), I'd appreciate the feedback.\n\nJames",
"msg_date": "Mon, 30 Nov 2020 19:00:26 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 7:00 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> I've been investigating parallelizing certain correlated subqueries,\n> and during that work stumbled across the fact that\n> set_rel_consider_parallel disallows parallel query on what seems like\n> a fairly simple case.\n>\n> Consider this query:\n>\n> select t.unique1\n> from tenk1 t\n> join lateral (select t.unique1 from tenk1 offset 0) l on true;\n>\n> Current set_rel_consider_parallel sets consider_parallel=false on the\n> subquery rel because it has a limit/offset. That restriction makes a\n> lot of sense when we have a subquery whose results conceptually need\n> to be \"shared\" (or at least be the same) across multiple workers\n> (indeed the relevant comment in that function notes that cases where\n> we could prove a unique ordering would also qualify, but punts on\n> implementing that due to complexity). But if the subquery is LATERAL,\n> then no such conceptual restriction.\n>\n> If we change the code slightly to allow considering parallel query\n> even in the face of LIMIT/OFFSET for LATERAL subqueries, then our\n> query above changes from the following plan:\n>\n> Nested Loop\n> Output: t.unique1\n> -> Gather\n> Output: t.unique1\n> Workers Planned: 2\n> -> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 t\n> Output: t.unique1\n> -> Gather\n> Output: NULL::integer\n> Workers Planned: 2\n> -> Parallel Index Only Scan using tenk1_hundred on public.tenk1\n> Output: NULL::integer\n>\n> to this plan:\n>\n> Gather\n> Output: t.unique1\n> Workers Planned: 2\n> -> Nested Loop\n> Output: t.unique1\n> -> Parallel Index Only Scan using tenk1_unique1 on public.tenk1 t\n> Output: t.unique1\n> -> Index Only Scan using tenk1_hundred on public.tenk1\n> Output: NULL::integer\n>\n> The code change itself is quite simple (1 line). 
As far as I can tell\n> we don't need to expressly check parallel safety of the limit/offset\n> expressions; that appears to happen elsewhere (and that makes sense\n> since the RTE_RELATION case doesn't check those clauses either).\n>\n> If I'm missing something about the safety of this (or any other\n> issue), I'd appreciate the feedback.\n\nNote that near the end of grouping planner we have a similar check:\n\nif (final_rel->consider_parallel && root->query_level > 1 &&\n !limit_needed(parse))\n\nguarding copying the partial paths from the current rel to the final\nrel. I haven't managed to come up with a test case that exposes that\nthough since simple examples like the one above get converted into a\nJOIN, so we're not in grouping_planner for a subquery. Making the\nsubquery above correlated results in us getting to that point, but\nisn't currently marked as parallel safe for other reasons (because it\nhas params), so that's not a useful test. I'm not sure if there are\ncases where we can't convert to a join but also don't involve params;\nhaven't thought about it a lot though.\n\nJames\n\n\n",
"msg_date": "Tue, 1 Dec 2020 08:43:38 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "Brian's email didn't keep the relevant headers, and so didn't show up\nas a reply, so I've pasted it here and am replying in this original\nthread. You can find the original at [1].\n\nOn Sun, Dec 6, 2020 at 7:34 PM Brian Davis <brian@brianlikespostgres.com> wrote:\n>\n> > Note that near the end of grouping planner we have a similar check:\n> >\n> > if (final_rel->consider_parallel && root->query_level > 1 &&\n> > !limit_needed(parse))\n> >\n> > guarding copying the partial paths from the current rel to the final\n> > rel. I haven't managed to come up with a test case that exposes that\n>\n> Played around with this a bit, here's a non-correlated subquery that gets us to that if statement\n>\n> DROP TABLE IF EXISTS foo;\n> CREATE TABLE foo (bar int);\n>\n> INSERT INTO foo (bar)\n> SELECT\n> g\n> FROM\n> generate_series(1, 10000) AS g;\n>\n>\n> SELECT\n> (\n> SELECT\n> bar\n> FROM\n> foo\n> LIMIT 1\n> ) AS y\n> FROM\n> foo;\n\nThanks. That triggers the parallel if case for limit -- any additional\nGUCs need to be modified to get that result? 
I assume regardless a\nparallel plan isn't chosen (even if you remove the limit needed\ncheck)?\n\n> I also was thinking about the LATERAL part.\n>\n> I couldn't think of any reason why the uncorrelated subquery's results would need to be shared and therefore the same, when we'll be \"looping\" over each row of the source table, running the subquery anew for each, conceptually.\n>\n> But then I tried this...\n>\n> test=# CREATE TABLE foo (bar int);\n> CREATE TABLE\n> test=#\n> test=# INSERT INTO foo (bar)\n> test-# SELECT\n> test-# g\n> test-# FROM\n> test-# generate_series(1, 10) AS g;\n> INSERT 0 10\n> test=#\n> test=#\n> test=# SELECT\n> test-# foo.bar,\n> test-# lat.bar\n> test-# FROM\n> test-# foo JOIN LATERAL (\n> test(# SELECT\n> test(# bar\n> test(# FROM\n> test(# foo AS foo2\n> test(# ORDER BY\n> test(# random()\n> test(# LIMIT 1\n> test(# ) AS lat ON true;\n> bar | bar\n> -----+-----\n> 1 | 7\n> 2 | 7\n> 3 | 7\n> 4 | 7\n> 5 | 7\n> 6 | 7\n> 7 | 7\n> 8 | 7\n> 9 | 7\n> 10 | 7\n> (10 rows)\n>\n>\n> As you can see, random() is only called once. If postgres were supposed to be running the subquery for each source row, conceptually, it would be a mistake to cache the results of a volatile function like random().\n\nThis was genuinely surprising to me. I think one could argue that this\nis just an optimization (after all -- if there is no correlation, then\nrunning it once is conceptually/safely the same as running it multiple\ntimes), but that doesn't seem to hold water with the volatile function\nin play.\n\nOf course, given the volatile function we'd never parallelize this\nanyway. But we still have to consider the case where the result is\notherwise ordered differently between workers (just by virtue of disk\norder, for example).\n\nI've tried the above query using tenk1 from the regress tests to get a\nbit more data, and, along with modifying several GUCs, can force\nparallel plans. 
However in no case can I get it to execute that\nuncorrelated lateral in multiple workers. That makes me suspicious\nthat there's another check in play here ensuring the lateral subquery\nis executed for each group, and that in the uncorrelated case really\nthat rule still holds -- it's just a single group.\n\n> The docs say: \"When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of the FROM item providing the cross-referenced column(s), or set of rows of multiple FROM items providing the columns, the LATERAL item is evaluated using that row or row set's values of the columns. The resulting row(s) are joined as usual with the rows they were computed from. This is repeated for each row or set of rows from the column source table(s).\"\n>\n> They don't say what happens with LATERAL when there aren't cross-references though. As we expect, adding one does show random() being called once for each source row.\n\nIf my theory above is correct then it's implicit that the row set is\nthe whole previous FROM group.\n\n> test=# SELECT\n> test-# foo.bar,\n> test-# lat.bar\n> test-# FROM\n> test-# foo JOIN LATERAL (\n> test(# SELECT\n> test(# bar\n> test(# FROM\n> test(# foo AS foo2\n> test(# WHERE\n> test(# foo2.bar < foo.bar + 100000\n> test(# ORDER BY\n> test(# random()\n> test(# LIMIT 1\n> test(# ) AS lat ON true;\n> bar | bar\n> -----+-----\n> 1 | 5\n> 2 | 8\n> 3 | 3\n> 4 | 4\n> 5 | 5\n> 6 | 5\n> 7 | 1\n> 8 | 3\n> 9 | 7\n> 10 | 3\n> (10 rows)\n>\n> It seems like to keep the same behavior that exists today, results of LATERAL subqueries would need to be the same if they aren't correlated, and so you couldn't run them in parallel with a limit if the order wasn't guaranteed. 
But I'll be the first to admit that it's easy enough for me to miss a key piece of logic on something like this, so I could be way off base too.\n\nIf it weren't for the volatile function in the example, I think I\ncould argue we could change this behavior (i.e., my theorizing above\noriginally about just being an optimization)...but yes, it seems like\nthis behavior shouldn't change. I can't seem to make it break though\nwith the patch.\n\nWhile I haven't actually tracked down to guarantee this is handled\nelsewhere, a thought experiment -- I think -- shows it must be so.\nHere's why: suppose we don't have a limit here, but the query return\norder is different in different backends. Then we would have the same\nproblem you bring up. In that case this code is already setting\nconsider_parallel=true on the rel. So I don't think we're changing any\nbehavior here.\n\nJames\n\n1: https://www.postgresql.org/message-id/a50766a4-a927-41c4-984c-76e513b6d1c4%40www.fastmail.com\n\n\n",
"msg_date": "Mon, 7 Dec 2020 18:45:50 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On 12/7/20 6:45 PM, James Coleman wrote:\n> On Sun, Dec 6, 2020 at 7:34 PM Brian Davis <brian@brianlikespostgres.com> wrote:\n>>\n>> Played around with this a bit, here's a non-correlated subquery that gets us to that if statement\n> \n> While I haven't actually tracked down to guarantee this is handled\n> elsewhere, a thought experiment -- I think -- shows it must be so.\n> Here's why: suppose we don't have a limit here, but the query return\n> order is different in different backends. Then we would have the same\n> problem you bring up. In that case this code is already setting\n> consider_parallel=true on the rel. So I don't think we're changing any\n> behavior here.\n\nSo it looks like you and Brian are satisfied that this change is not \nallowing bad behavior.\n\nSeems like an obvious win. Hopefully we can get some other concurring \nopinions.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 10 Mar 2021 12:31:36 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 10:46 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> While I haven't actually tracked down to guarantee this is handled\n> elsewhere, a thought experiment -- I think -- shows it must be so.\n> Here's why: suppose we don't have a limit here, but the query return\n> order is different in different backends. Then we would have the same\n> problem you bring up. In that case this code is already setting\n> consider_parallel=true on the rel. So I don't think we're changing any\n> behavior here.\n>\n\nAFAICS, the patch seems very reasonable and specifically targets\nlateral subqueries with limit/offset. It seems like the uncorrelated\ncase is the only real concern.\nI generally agree that the current patch is probably not changing any\nbehavior in the uncorrelated case (and like yourself, haven't yet\nfound a case for which it breaks), but I'm not sure Brian's concerns\ncan be ruled out entirely.\n\nHow about a minor update to the patch to make it slightly more\nrestrictive, to exclude the case when there are no lateral\ncross-references, so we'll be allowing parallelism only when we know\nthe lateral subquery will be evaluated anew for each source row?\nI was thinking of the following patch modification:\n\nBEFORE:\n- if (limit_needed(subquery))\n+ if (!rte->lateral && limit_needed(subquery))\n\nAFTER:\n- if (limit_needed(subquery))\n+ if ((!rte->lateral || bms_is_empty(rel->lateral_relids)) &&\n+ limit_needed(subquery))\n\n\nThoughts?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 28 May 2021 11:01:31 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Thu, May 27, 2021 at 9:01 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Dec 8, 2020 at 10:46 AM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > While I haven't actually tracked down to guarantee this is handled\n> > elsewhere, a thought experiment -- I think -- shows it must be so.\n> > Here's why: suppose we don't have a limit here, but the query return\n> > order is different in different backends. Then we would have the same\n> > problem you bring up. In that case this code is already setting\n> > consider_parallel=true on the rel. So I don't think we're changing any\n> > behavior here.\n> >\n>\n> AFAICS, the patch seems very reasonable and specifically targets\n> lateral subqueries with limit/offset. It seems like the uncorrelated\n> case is the only real concern.\n> I generally agree that the current patch is probably not changing any\n> behavior in the uncorrelated case (and like yourself, haven't yet\n> found a case for which it breaks), but I'm not sure Brian's concerns\n> can be ruled out entirely.\n>\n> How about a minor update to the patch to make it slightly more\n> restrictive, to exclude the case when there are no lateral\n> cross-references, so we'll be allowing parallelism only when we know\n> the lateral subquery will be evaluated anew for each source row?\n> I was thinking of the following patch modification:\n>\n> BEFORE:\n> - if (limit_needed(subquery))\n> + if (!rte->lateral && limit_needed(subquery))\n>\n> AFTER:\n> - if (limit_needed(subquery))\n> + if ((!rte->lateral || bms_is_empty(rel->lateral_relids)) &&\n> + limit_needed(subquery))\n>\n>\n> Thoughts?\n\nApologies for the delayed response; this seems fine to me. I've\nattached patch v2.\n\nThanks,\nJames Coleman",
"msg_date": "Fri, 16 Jul 2021 15:16:10 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 3:16 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 9:01 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Tue, Dec 8, 2020 at 10:46 AM James Coleman <jtc331@gmail.com> wrote:\n> > >\n> > > While I haven't actually tracked down to guarantee this is handled\n> > > elsewhere, a thought experiment -- I think -- shows it must be so.\n> > > Here's why: suppose we don't have a limit here, but the query return\n> > > order is different in different backends. Then we would have the same\n> > > problem you bring up. In that case this code is already setting\n> > > consider_parallel=true on the rel. So I don't think we're changing any\n> > > behavior here.\n> > >\n> >\n> > AFAICS, the patch seems very reasonable and specifically targets\n> > lateral subqueries with limit/offset. It seems like the uncorrelated\n> > case is the only real concern.\n> > I generally agree that the current patch is probably not changing any\n> > behavior in the uncorrelated case (and like yourself, haven't yet\n> > found a case for which it breaks), but I'm not sure Brian's concerns\n> > can be ruled out entirely.\n> >\n> > How about a minor update to the patch to make it slightly more\n> > restrictive, to exclude the case when there are no lateral\n> > cross-references, so we'll be allowing parallelism only when we know\n> > the lateral subquery will be evaluated anew for each source row?\n> > I was thinking of the following patch modification:\n> >\n> > BEFORE:\n> > - if (limit_needed(subquery))\n> > + if (!rte->lateral && limit_needed(subquery))\n> >\n> > AFTER:\n> > - if (limit_needed(subquery))\n> > + if ((!rte->lateral || bms_is_empty(rel->lateral_relids)) &&\n> > + limit_needed(subquery))\n> >\n> >\n> > Thoughts?\n>\n> Apologies for the delayed response; this seems fine to me. I've\n> attached patch v2.\n\nGreg,\n\nDo you believe this is now ready for committer?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Wed, 3 Nov 2021 09:49:20 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Thu, Nov 4, 2021 at 12:49 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> Greg,\n>\n> Do you believe this is now ready for committer?\n>\n\nThe patch LGTM.\nI have set the status to \"Ready for Committer\".\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 4 Nov 2021 13:06:41 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> The patch LGTM.\n> I have set the status to \"Ready for Committer\".\n\nI don't really see why this patch is even a little bit safe.\nThe argument for it seems to be that a lateral subquery will\nnecessarily be executed in such a way that each complete iteration\nof the subquery, plus joining to its outer rel, happens within a\nsingle worker ... but where is the guarantee of that? Once\nyou've marked the rel as parallel-safe, the planner is free to\nconsider all sorts of parallel join structures. I'm afraid this\nwould be easily broken as soon as you look at cases with three or\nmore rels. Or maybe even just two. The reason for the existing\nrestriction boils down to this structure being unsafe:\n\n Gather\n NestLoop\n Scan ...\n Limit\n Scan ...\n\nand I don't see how the existence of a lateral reference\nmakes it any safer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jan 2022 17:30:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Tue, Jan 4, 2022 at 5:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Nancarrow <gregn4422@gmail.com> writes:\n> > The patch LGTM.\n> > I have set the status to \"Ready for Committer\".\n>\n> I don't really see why this patch is even a little bit safe.\n> The argument for it seems to be that a lateral subquery will\n> necessarily be executed in such a way that each complete iteration\n> of the subquery, plus joining to its outer rel, happens within a\n> single worker ... but where is the guarantee of that? Once\n> you've marked the rel as parallel-safe, the planner is free to\n> consider all sorts of parallel join structures. I'm afraid this\n> would be easily broken as soon as you look at cases with three or\n> more rels. Or maybe even just two. The reason for the existing\n> restriction boils down to this structure being unsafe:\n>\n> Gather\n> NestLoop\n> Scan ...\n> Limit\n> Scan ...\n>\n> and I don't see how the existence of a lateral reference\n> makes it any safer.\n\nThanks for taking a look. I'm not following how the structure you\nposited is inherently unsafe when it's a lateral reference. That\nlimit/scan (if lateral) has to be being executed per tuple in the\nouter scan, right? And if it's a unique execution per tuple, then\nconsistency across tuples (that are in different workers) can't be a\nconcern.\n\nIs there a scenario I'm missing where lateral can currently be\nexecuted in that way in that structure (or a different one)?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 4 Jan 2022 21:59:21 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Tue, Jan 4, 2022 at 9:59 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Tue, Jan 4, 2022 at 5:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Greg Nancarrow <gregn4422@gmail.com> writes:\n> > > The patch LGTM.\n> > > I have set the status to \"Ready for Committer\".\n> >\n> > I don't really see why this patch is even a little bit safe.\n> > The argument for it seems to be that a lateral subquery will\n> > necessarily be executed in such a way that each complete iteration\n> > of the subquery, plus joining to its outer rel, happens within a\n> > single worker ... but where is the guarantee of that? Once\n> > you've marked the rel as parallel-safe, the planner is free to\n> > consider all sorts of parallel join structures. I'm afraid this\n> > would be easily broken as soon as you look at cases with three or\n> > more rels. Or maybe even just two. The reason for the existing\n> > restriction boils down to this structure being unsafe:\n> >\n> > Gather\n> > NestLoop\n> > Scan ...\n> > Limit\n> > Scan ...\n> >\n> > and I don't see how the existence of a lateral reference\n> > makes it any safer.\n>\n> Thanks for taking a look. I'm not following how the structure you\n> posited is inherently unsafe when it's a lateral reference. That\n> limit/scan (if lateral) has to be being executed per tuple in the\n> outer scan, right? And if it's a unique execution per tuple, then\n> consistency across tuples (that are in different workers) can't be a\n> concern.\n>\n> Is there a scenario I'm missing where lateral can currently be\n> executed in that way in that structure (or a different one)?\n\nTo expand on this:\n\nSuppose lateral is not in play. Then if we have a plan like:\n\n Gather\n NestLoop\n Scan X\n Limit\n Scan Y\n\nBecause we have the result \"X join Limit(Y)\" we need \"Limit(Y)\" to be\nconsistent across all of the possible executions of \"Limit(Y)\" (i.e.,\nin each worker it executes in). 
That means (absent infrastructure for\nguaranteeing a unique ordering) we obviously can't parallelize the\ninner side of the join as the limit may be applied in different ways\nin each worker's execution.\n\nNow suppose lateral is in play. Then (given the same plan) instead of\nour result being \"X join Limit(Y)\" the result is \"X join Limit(Y sub\nX)\", that is, each row in X is joined to a unique invocation of\n\"Limit(Y)\". In this case we are already conceivably getting different\nresults for each execution of the subquery \"Limit(Y)\" even if we're\nnot running those executions across multiple workers. Whether we've\noptimized such a subquery into running a single execution per row in X\nor have managed to optimize it in such a way that a single execution\nof \"Limit(Y)\" may be shared by multiple rows in X makes no difference\nbecause there are no relational guarantees that that is the case\n(conceptually each row in X gets its own results for \"Limit(Y)\").\n\nI've not been able to come up with a scenario where this doesn't hold\n-- even if part of the join or the subquery execution happens in a\ndifferent worker. I believe that for there to be a parallel safety\nproblem here you'd need to have a given subquery execution for a row\nin X be executed multiple times. Alternatively I've been trying to\nreason about whether there might be a scenario where a 3rd rel is\ninvolved (i.e., the inner rel is itself a join rel), but as far as I\ncan tell the moment we end up with a join structure such that the\nlateral rel is the one with the limit we'd be back to being safe again\nfor the reasons claimed earlier.\n\nIf there's something obvious (or not so obvious) I'm missing, I'd\nappreciate a counterexample, but I'm currently unable to falsify my\noriginal claim that the lateral reference is a fundamental difference\nthat allows us to consider this parallel safe.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Thu, 13 Jan 2022 18:26:01 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Tue, Jan 4, 2022 at 9:59 PM James Coleman <jtc331@gmail.com> wrote:\n>> On Tue, Jan 4, 2022 at 5:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I don't really see why this patch is even a little bit safe.\n\n> Suppose lateral is not in play. Then if we have a plan like:\n\n> Gather\n> NestLoop\n> Scan X\n> Limit\n> Scan Y\n\n> Because we have the result \"X join Limit(Y)\" we need \"Limit(Y)\" to be\n> consistent across all of the possible executions of \"Limit(Y)\" (i.e.,\n> in each worker it executes in). That means (absent infrastructure for\n> guaranteeing a unique ordering) we obviously can't parallelize the\n> inner side of the join as the limit may be applied in different ways\n> in each worker's execution.\n\n> Now suppose lateral is in play. Then (given the same plan) instead of\n> our result being \"X join Limit(Y)\" the result is \"X join Limit(Y sub\n> X)\", that is, each row in X is joined to a unique invocation of\n> \"Limit(Y)\".\n\nThis argument seems to be assuming that Y is laterally dependent on X,\nbut the patch as written will take *any* lateral dependency as a\nget-out-of-jail-free card. If what we have is \"Limit(Y sub Z)\" where\nZ is somewhere else in the query tree, it's not apparent to me that\nyour argument holds.\n\nBut more generally, I don't think you've addressed the fundamental\nconcern, which is that a query involving Limit is potentially\nnondeterministic (if it lacks a fully-deterministic ORDER BY),\nso that different workers could get different answers from it if\nthey're using a plan type that permits that to happen. (See the\noriginal discussion that led to 75f9c4ca5, at [1].) I do not see\nhow a lateral dependency removes that hazard. 
The bug report that\nstarted the original discussion hit the problem because it\ngenerated a plan like\n\nGather\n -> Hash Semi Join\n -> Parallel Seq Scan\n -> Hash\n -> Limit\n -> Seq Scan\n\nWe didn't make the submitter drill down far enough to verify\nexactly why he got nondeterministic results from the Limit, but\nI suppose the reason was that the table was big enough to trigger\n\"synchronize_seqscans\" behavior, allowing different workers to\nread different parts of that table. Now, that particular case\ndidn't have any lateral dependency, but if there was one it'd\njust have resulted in changing the hash join to a nestloop join,\nand the nondeterminism hazard would be exactly the same AFAICS.\n\n> In this case we are already conceivably getting different\n> results for each execution of the subquery \"Limit(Y)\" even if we're\n> not running those executions across multiple workers.\n\nThat seems to be about the same argument Andres made initially\nin the old thread, but we soon shot that down as not being the\nlevel of guarantee we want to provide. There's nothing in the\nSQL standard that says that\n select * from events where account in\n (select account from events\n where data->>'page' = 'success.html' limit 3);\n(the original problem query) shall execute the sub-query\nonly once, but people expect it to act that way.\n\nIf you want to improve this area, my feeling is that it'd be\nbetter to look into what was speculated about in the old\nthread: LIMIT doesn't create nondeterminism if the query has\nan ORDER BY that imposes a unique row ordering, ie\norder-by-primary-key. We didn't have planner infrastructure\nthat would allow checking that cheaply in 2018, but maybe\nthere is some now?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/153417684333.10284.11356259990921828616%40wrigleys.postgresql.org\n\n\n",
"msg_date": "Tue, 01 Mar 2022 17:35:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/2851/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Aug 2022 14:05:41 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Tue, Mar 1, 2022 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> But more generally, I don't think you've addressed the fundamental\n> concern, which is that a query involving Limit is potentially\n> nondeterministic (if it lacks a fully-deterministic ORDER BY),\n> so that different workers could get different answers from it if\n> they're using a plan type that permits that to happen. (See the\n> original discussion that led to 75f9c4ca5, at [1].) I do not see\n> how a lateral dependency removes that hazard.\n> ...\n> > In this case we are already conceivably getting different\n> > results for each execution of the subquery \"Limit(Y)\" even if we're\n> > not running those executions across multiple workers.\n>\n> That seems to be about the same argument Andres made initially\n> in the old thread, but we soon shot that down as not being the\n> level of guarantee we want to provide. There's nothing in the\n> SQL standard that says that\n> select * from events where account in\n> (select account from events\n> where data->>'page' = 'success.html' limit 3);\n> (the original problem query) shall execute the sub-query\n> only once, but people expect it to act that way.\n\nI understand that particular case being a bit of a gotcha (i.e., what\nwe would naturally expect exceeds what the spec says).\n\nBut in the case where there's correlation via LATERAL we already don't\nguarantee unique executions for a given set of params into the lateral\nsubquery execution, right? For example, suppose we have:\n\n select *\n from foo\n left join lateral (\n select n\n from bar\n where bar.a = foo.a\n limit 1\n ) on true\n\nand suppose that foo.a (in the table scan) returns these values in\norder: 1, 2, 1. 
In that case we'll execute the lateral subquery 3\nseparate times rather than attempting to order the results of foo.a\nsuch that we can re-execute the subquery only when the param changes\nto a new unique value (and we definitely don't cache the results to\nguarantee unique subquery executions).\n\nSo, assuming that we can guarantee we're talking about the proper\nlateral reference (not something unrelated as you pointed out earlier)\nthen I don't believe we're actually changing even implicit guarantees\nabout how the query executes. I think that's probably true both in\ntheory (I claim that once someone declares a lateral join they're\ndefinitionally expecting a subquery execution per outer row) and in\npractice (e.g., your hypothesis about synchronized seq scans would\napply here also to subsequent executions of the subquery).\n\nIs there still something I'm missing about the concern you have?\n\n> If you want to improve this area, my feeling is that it'd be\n> better to look into what was speculated about in the old\n> thread: LIMIT doesn't create nondeterminism if the query has\n> an ORDER BY that imposes a unique row ordering, ie\n> order-by-primary-key. We didn't have planner infrastructure\n> that would allow checking that cheaply in 2018, but maybe\n> there is some now?\n\nAssuming what I argued earlier holds true for lateral [at minimum when\ncorrelated] subquery execution, then I believe your suggestion here is\northogonal and would expand the use cases even more. For example, if\nwe were able to guarantee a unique result set (including order), then\nwe could allow parallelizing subqueries even if they're not lateral\nand correlated.\n\nJames Coleman\n\n\n",
"msg_date": "Mon, 19 Sep 2022 15:58:33 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 3:58 PM James Coleman <jtc331@gmail.com> wrote:\n> But in the case where there's correlation via LATERAL we already don't\n> guarantee unique executions for a given set of params into the lateral\n> subquery execution, right? For example, suppose we have:\n>\n> select *\n> from foo\n> left join lateral (\n> select n\n> from bar\n> where bar.a = foo.a\n> limit 1\n> ) on true\n>\n> and suppose that foo.a (in the table scan) returns these values in\n> order: 1, 2, 1. In that case we'll execute the lateral subquery 3\n> separate times rather than attempting to order the results of foo.a\n> such that we can re-execute the subquery only when the param changes\n> to a new unique value (and we definitely don't cache the results to\n> guarantee unique subquery executions).\n\nI think this is true, but I don't really understand why we should\nfocus on LATERAL here. What we really need, and I feel like we've\ntalked about this before, is a way to reason about where parameters\nare set and used. Your sample query gets a plan like this:\n\n Nested Loop Left Join (cost=0.00..1700245.00 rows=10000 width=8)\n -> Seq Scan on foo (cost=0.00..145.00 rows=10000 width=4)\n -> Limit (cost=0.00..170.00 rows=1 width=4)\n -> Seq Scan on bar (cost=0.00..170.00 rows=1 width=4)\n Filter: (foo.a = a)\n\nIf this were to occur inside a larger plan tree someplace, it would be\nOK to insert a Gather node above the Nested Loop node without doing\nanything further, because then the parameter that stores foo.a would\nbe both set and used in the worker. If you wanted to insert a Gather\nat any other place in this plan, things get more complicated. But just\nbecause you have LATERAL doesn't mean that you have this problem,\nbecause if you delete the \"limit 1\" then the subqueries get flattened\ntogether and the parameter disappears, and if you delete the lateral\nreference (i.e. 
WHERE foo.a = bar.a) then there's still a subquery but\nit no longer refers to an outer parameter. And on the flip side just\nbecause you don't have LATERAL doesn't mean that you don't have this\nproblem. e.g. the query could instead be:\n\nselect *, (select n from bar where bar.a = foo.a limit 1) from foo;\n\n...which I think is pretty much equivalent to your formulation and has\nthe same problem as far as parallel query is concerned, but does\nnot involve the LATERAL keyword.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 16:29:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 4:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 3:58 PM James Coleman <jtc331@gmail.com> wrote:\n> > But in the case where there's correlation via LATERAL we already don't\n> > guarantee unique executions for a given set of params into the lateral\n> > subquery execution, right? For example, suppose we have:\n> >\n> > select *\n> > from foo\n> > left join lateral (\n> > select n\n> > from bar\n> > where bar.a = foo.a\n> > limit 1\n> > ) on true\n> >\n> > and suppose that foo.a (in the table scan) returns these values in\n> > order: 1, 2, 1. In that case we'll execute the lateral subquery 3\n> > separate times rather than attempting to order the results of foo.a\n> > such that we can re-execute the subquery only when the param changes\n> > to a new unique value (and we definitely don't cache the results to\n> > guarantee unique subquery executions).\n>\n> I think this is true, but I don't really understand why we should\n> focus on LATERAL here. What we really need, and I feel like we've\n> talked about this before, is a way to reason about where parameters\n> are set and used.\n\nYes, over in the thread \"Parallelize correlated subqueries that\nexecute within each worker\" [1] which was meant to build on top of\nthis work (though is technically separate). Your bringing it up here\ntoo is making me wonder if we can combine the two and instead of\nalways allowing subqueries with LIMIT instead (like the other patch\ndoes) delay final determination of parallel safety of rels and paths\n(perhaps, as that thread is discussing, until gather/gather merge path\ncreation).\n\nSide note: I'd kinda like a way (maybe a development GUC or even a\ncompile time option) to have EXPLAIN show where params are set. 
If\nsomething like that exists, egg on my face I guess, but I'd love to\nknow about it.\n\n> Your sample query gets a plan like this:\n>\n> Nested Loop Left Join (cost=0.00..1700245.00 rows=10000 width=8)\n> -> Seq Scan on foo (cost=0.00..145.00 rows=10000 width=4)\n> -> Limit (cost=0.00..170.00 rows=1 width=4)\n> -> Seq Scan on bar (cost=0.00..170.00 rows=1 width=4)\n> Filter: (foo.a = a)\n>\n> If this were to occur inside a larger plan tree someplace, it would be\n> OK to insert a Gather node above the Nested Loop node without doing\n> anything further, because then the parameter that stores foo.a would\n> be both set and used in the worker. If you wanted to insert a Gather\n> at any other place in this plan, things get more complicated. But just\n> because you have LATERAL doesn't mean that you have this problem,\n> because if you delete the \"limit 1\" then the subqueries get flattened\n> together and the parameter disappears,\n\nFor future reference in this email thread when you remove the \"limit\n1\" this is the plan you get:\n\n Merge Right Join (cost=372.18..815.71 rows=28815 width=8)\n Merge Cond: (bar.a = foo.a)\n -> Sort (cost=158.51..164.16 rows=2260 width=8)\n Sort Key: bar.a\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=8)\n -> Sort (cost=179.78..186.16 rows=2550 width=4)\n Sort Key: foo.a\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n\nJust to make sure I'm following: by \"doesn't mean that you have this\nproblem\" you mean \"doesn't mean you have this limitation on parallel\nquery\"?\n\nThe \"flattening out\" (conversion to join between two table scans\nexecuted a single time each) is an interesting case because I consider\nthat to be \"just\" a performance optimization, and therefore I don't\nthink anything about the guarantees a user expects should change. 
But\ninterestingly: it does end up providing stronger guarantees about the\nquery results than it would if the conversion didn't happen (the\nsubquery results in only a single table scan whereas without the\nchange a scan per outer row means it's *possible* to get different\nresults across different outer rows even with the same join key\nvalue).\n\n> and if you delete the lateral\n> reference (i.e. WHERE foo.a = bar.a) then there's still a subquery but\n> it no longer refers to an outer parameter. And on the flip side just\n> because you don't have LATERAL doesn't mean that you don't have this\n> problem. e.g. the query could instead be:\n>\n> select *, (select n from bar where bar.a = foo.a limit 1) from foo;\n>\n> ...which I think is pretty much equivalent to your formulation and has\n> the same problem as far as parallel query as your formulation but does\n> not involve the LATERAL keyword.\n\nYes, that's a good point too. I need to play with these examples and\nconfirm whether lateral_relids gets set in that case. IIRC when that's\nset isn't exactly the same as whether or not the LATERAL keyword is\nused, and I should clarify that my claims here are meant to be about\nwhen we execute it that way regardless of the keyword usage. The\nkeyword usage I'd assumed just made it easier to talk about, but maybe\nyou're implying that it's actually generating confusion.\n\nJames Coleman\n\n1: https://www.postgresql.org/message-id/CA%2BTgmoYXm2NCLt1nikWfYj1_r3%3DfsoNCHCtDVdN7X1uX_xuXgw%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 22 Sep 2022 17:19:42 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 5:19 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 4:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Sep 19, 2022 at 3:58 PM James Coleman <jtc331@gmail.com> wrote:\n> > > But in the case where there's correlation via LATERAL we already don't\n> > > guarantee unique executions for a given set of params into the lateral\n> > > subquery execution, right? For example, suppose we have:\n> > >\n> > > select *\n> > > from foo\n> > > left join lateral (\n> > > select n\n> > > from bar\n> > > where bar.a = foo.a\n> > > limit 1\n> > > ) on true\n> > >\n> > > and suppose that foo.a (in the table scan) returns these values in\n> > > order: 1, 2, 1. In that case we'll execute the lateral subquery 3\n> > > separate times rather than attempting to order the results of foo.a\n> > > such that we can re-execute the subquery only when the param changes\n> > > to a new unique value (and we definitely don't cache the results to\n> > > guarantee unique subquery executions).\n> >\n> > I think this is true, but I don't really understand why we should\n> > focus on LATERAL here. What we really need, and I feel like we've\n> > talked about this before, is a way to reason about where parameters\n> > are set and used.\n>\n> Yes, over in the thread \"Parallelize correlated subqueries that\n> execute within each worker\" [1] which was meant to build on top of\n> this work (though is technically separate). 
Your bringing it up here\n> too is making me wonder if we can combine the two and instead of\n> always allowing subqueries with LIMIT instead (like the other patch\n> does) delay final determination of parallel safety of rels and paths\n> (perhaps, as that thread is discussing, until gather/gather merge path\n> creation).\n\nUpon further investigation and thought I believe it *might* be\npossible to do what I'd wondered about above (delay final\ndetermination of parallel safety of the rel until later on in planning\nby marking e.g. rels as tentatively safe and re-evaluating that as we\ngo) as my original patch did on the referenced thread, but that thread\nalso ended up with a much simpler proposed approach that still moved\nfinal parallel safety determination to later in the planner, but it\ndid it by checking (in generate_gather_paths and\ngenerate_useful_gather_paths) whether that point in the plan tree\nsupplies the params required by the partial path.\n\nSo the current approach in that other thread is orthogonal to (if\ncomplementary in some queries) the current question, which is \"must we\nimmediately disallow parallelism on a rel that has a limit?\"\n\nTom's concern about my arguments about special casing lateral was:\n\n> This argument seems to be assuming that Y is laterally dependent on X,\n> but the patch as written will take *any* lateral dependency as a\n> get-out-of-jail-free card. If what we have is \"Limit(Y sub Z)\" where\n> Z is somewhere else in the query tree, it's not apparent to me that\n> your argument holds.\n\nI do see now that there was a now obvious flaw in the original patch:\nrte->lateral may well be set to true even in cases where there's no\nactual lateral dependency. That being said I don't see a need to\ndetermine if the current subtree provides the required lateral\ndependency, for the following reasons:\n\n1. 
We don't want to set rel->consider_parallel to false immediately\nbecause that will then poison everything higher in the tree, despite\nthe fact that it may well be that it's only higher up the plan tree\nthat the lateral dependency is provided. Regardless of the level in\nthe plan tree at which the param is provided we are going to execute\nthe subquery (even in serial execution) once per outer tuple at the\npoint in the join tree where the lateral join lies.\n2. We're *not* at this point actually checking parallel safety of a\ngiven path (i.e., is the path parallel safe at this point given the\nparams currently provided), we're only checking to see if the rel\nitself should be entirely excluded from consideration for parallel\nplans at any point in the future.\n3. We already know that the question of whether or not a param is\nprovided can't be the concern here since it isn't under consideration\nin the existing code path. That is, if a subquery doesn't have a limit\nthen this particular check won't determine that the subquery's\nexistence should result in setting rel->consider_parallel to false\nbecause of any params it may or may not contain.\n4. It is already the case that a subplan using exec params in the\nwhere clause will not be considered parallel safe in path generation.\n\nI believe the proper check for when to exclude subqueries with limit\nclauses is (as in the attached patch) prohibiting a limit when\nrel->lateral_relids is empty. Here are several examples of queries and\nplans and how this code plays out to attempt to validate that\nhypothesis:\n\n select *\n from foo\n left join lateral (\n select n\n from bar\n where bar.a = foo.a\n limit 1\n ) on true;\n\n Nested Loop Left Join (cost=0.00..8928.05 rows=2550 width=8)\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n -> Limit (cost=0.00..3.48 rows=1 width=4)\n -> Seq Scan on bar (cost=0.00..38.25 rows=11 width=4)\n Filter: (a = foo.a)\n\nThis was my base case for the past few emails. 
It hits the modified\ncode path with rte->lateral = true and\nbms_num_members(rel->lateral_relids) = 1. The patch would allow the\nsubquery rel.consider_parallel to be set to true, however with solely\nthis patch we still won't put the limit subquery within the\nparallelized portion of the plan because of the exec param used in the\nwhere clause.\n\n select *\n from foo\n left join lateral (\n select foo.a\n from bar\n limit 1\n ) on true;\n\n Nested Loop Left Join (cost=0.00..97.78 rows=2550 width=8)\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n -> Limit (cost=0.00..0.01 rows=1 width=4)\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=4)\n\nIn this case the lateral reference is only in the target list of the\nsubquery, and this form is where the patch kicks in to allow placing\nthe gather node above the whole join tree (thus executing the limit\nsubquery within each worker).\n\nselect *, (select n from bar where bar.a = foo.a limit 1) from foo;\n\n Seq Scan on foo (cost=0.00..8902.55 rows=2550 width=8)\n SubPlan 1\n -> Limit (cost=0.00..3.48 rows=1 width=4)\n -> Seq Scan on bar (cost=0.00..38.25 rows=11 width=4)\n Filter: (a = foo.a)\n\nRobert wondered if this was effectively the same thing, and it\ndefinitely, in my opinion, ought to be the same--in terms of the\nresults you expect--as my original example. However this example\ndoesn't appear to hit this code path at all. 
We also already\nparallelize this form of query (both when the outer tuple reference is\nin the subquery's target list and when it's in the subquery's where\nclause).\n\n select *\n from foo\n left join lateral (\n select n\n from bar\n where bar.a = foo.a\n ) on true;\n\n Merge Right Join (cost=372.18..815.71 rows=28815 width=8)\n Merge Cond: (bar.a = foo.a)\n -> Sort (cost=158.51..164.16 rows=2260 width=8)\n Sort Key: bar.a\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=8)\n -> Sort (cost=179.78..186.16 rows=2550 width=4)\n Sort Key: foo.a\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n\nRemoving the limit results in the correlated subquery being rewritten\nas a join. Because this rewrite removes the correlation (at least in\nexecution) this query also doesn't hit the modified code path at all.\nI do find this one interesting because it's one of these examples of\nhow we already provide different guarantees around correlated subquery\nexecution (even when executing serially): when there's a limit here\nthe subquery executes multiple times (which means, for example, that\nthe results of the scan may be returned in a different order) but\nwithout the limit it executes a single time (so the order is\nguaranteed).\n\n select *\n from foo\n left join lateral (\n select n\n from bar\n ) on true;\n\n Nested Loop Left Join (cost=0.00..140795.50 rows=5763000 width=8)\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=4)\n\nIf we remove the correlation but retain the lateral keyword we also\ndon't hit this code path. 
By the way, the plan is also the same if you\nremove the useless lateral keyword in this query.\n\n select *\n from foo\n where foo.a in (\n select bar.a from bar limit 1\n );\n\n Hash Semi Join (cost=0.03..42.37 rows=13 width=4)\n Hash Cond: (foo.a = bar.a)\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=0.01..0.01 rows=1 width=4)\n -> Limit (cost=0.00..0.01 rows=1 width=4)\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=4)\n\nThis is the query form Tom referenced from a bug report that\noriginally brought in the current code that excludes all subqueries\nwith limits from parallelization. This form does indeed hit the code\nblock in question, but rte->lateral is false and\nbms_num_members(rel->lateral_relids) is 0 so it is unaffected by this\npatch.\n\n select *\n from foo\n left join (\n select n\n from bar\n limit 1\n ) on true;\n\n Nested Loop Left Join (cost=0.00..67.39 rows=2550 width=8)\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n -> Materialize (cost=0.00..0.02 rows=1 width=4)\n -> Limit (cost=0.00..0.01 rows=1 width=4)\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=4)\n\nThis query also has rte->lateral is false and\nbms_num_members(rel->lateral_relids) is 0 when reaching the line of\ncode changed in this patch. The interesting thing about this query is\nthat if you set enable_material = off the materialize node goes away,\nand we do not attempt to rewrite the query into something that would\nexecute the subquery only once, so this is a case where we already\ndon't provide the theoretically strongest possible guarantees, though\none could reasonably argue it's a bit artificial with materialization\nturned off.\n\nIn summary we already allow parallelizing this type of execution\npattern if the subquery is in the select clause; we apply stricter\nstandards to all subqueries in from clauses. 
I believe the prohibition\non parallelizing subqueries with limits in from clauses was\nunnecessarily restrictive when it was added. When we have lateral\ndependencies we execute the query in the same way we would when the\nsubquery is in the target list (i.e., per tuple at that point in the\njoin tree), and so we should be able to parallelize those subqueries\nin the from clause just like we already do when they are in the target\nlist.\n\nIn the attached series the first patch adds a bunch of new tests to\nshow a bunch of permutations of queries. Most of the added test\nqueries don't end up changing with the 2nd patch applied (the actual\ncode changes) so that you can easily see the narrow scope of what's\naffected. I don't envision most of the tests sticking around if this\nis committed, but hopefully it provides a helpful way to evaluate the\nsemantics of the change I'm proposing.\n\nThanks,\nJames Coleman",
"msg_date": "Sat, 24 Sep 2022 20:35:52 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 5:19 PM James Coleman <jtc331@gmail.com> wrote:\n> > Your sample query gets a plan like this:\n> >\n> > Nested Loop Left Join (cost=0.00..1700245.00 rows=10000 width=8)\n> > -> Seq Scan on foo (cost=0.00..145.00 rows=10000 width=4)\n> > -> Limit (cost=0.00..170.00 rows=1 width=4)\n> > -> Seq Scan on bar (cost=0.00..170.00 rows=1 width=4)\n> > Filter: (foo.a = a)\n> >\n> > If this were to occur inside a larger plan tree someplace, it would be\n> > OK to insert a Gather node above the Nested Loop node without doing\n> > anything further, because then the parameter that stores foo.a would\n> > be both set and used in the worker. If you wanted to insert a Gather\n> > at any other place in this plan, things get more complicated. But just\n> > because you have LATERAL doesn't mean that you have this problem,\n> > because if you delete the \"limit 1\" then the subqueries get flattened\n> > together and the parameter disappears,\n>\n> For future reference in this email thread when you remove the \"limit\n> 1\" this is the plan you get:\n>\n> Merge Right Join (cost=372.18..815.71 rows=28815 width=8)\n> Merge Cond: (bar.a = foo.a)\n> -> Sort (cost=158.51..164.16 rows=2260 width=8)\n> Sort Key: bar.a\n> -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=8)\n> -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> Sort Key: foo.a\n> -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n>\n> Just to make sure I'm following: by \"doesn't mean that you have this\n> problem\" you mean \"doesn't mean you have this limitation on parallel\n> query\"?\n\nI'm talking specifically about whether there's a parameter. The\nproblem here isn't created by LATERAL, but by parameters. 
In the\nnested loop plan, there's a parameter that's storing foo.a, and the\nstorage location that holds that parameter value is in backend-private\nmemory, so you can't set the value in the leader and then use it in a\nworker, and that restricts where you can insert a Gather node. But in\nthe Merge Join plan (or if you got a Hash Join plan) there is no\nparameter. So the fact that parameter storage isn't shared between\nleaders and workers does not matter.\n\n> Yes, that's a good point too. I need to play with these examples and\n> confirm whether lateral_relids gets set in that case. IIRC when that's\n> set isn't exactly the same as whether or not the LATERAL keyword is\n> used, and I should clarify that my claims here are meant to be about\n> when we execute it that way regardless of the keyword usage. The\n> keyword usage I'd assumed just made it easier to talk about, but maybe\n> you're implying that it's actually generating confusion.\n\nYes, I think so.\n\nStepping back a bit, commit 75f9c4ca5a8047d7a9cfbc7d51a610933d04dc7f\nintroduced the code that is at issue here, and it took the position\nthat limit/offset should be marked parallel-restricted because they're\nnondeterministic. I'm not sure that's really quite the right thing.\nThe original idea behind having things be parallel-restricted was that\nthere might be stuff which depends on state in the leader that is not\nreplicated to or shared with the workers, and thus it must be executed\nin the leader to get the right answer. Here, however, there is no such\nproblem. Something like LIMIT/OFFSET that is nondeterministic is\nperfectly safe to execute in a worker, or in the leader. It's just not\nsafe to execute the same computation more than once and assume that we\nwill get the same answer each time. 
But that's really a different\nproblem.\n\nAnd the problem that you've run into here, I think, is that if a Limit\nnode is getting fed a parameter from higher up in the query plan, then\nit's not really the same computation being performed every time, and\nthus the non-determinism doesn't matter, and thus the\nparallel-restriction goes away, but the code doesn't know that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 10:37:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 10:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Sep 22, 2022 at 5:19 PM James Coleman <jtc331@gmail.com> wrote:\n> > > Your sample query gets a plan like this:\n> > >\n> > > Nested Loop Left Join (cost=0.00..1700245.00 rows=10000 width=8)\n> > > -> Seq Scan on foo (cost=0.00..145.00 rows=10000 width=4)\n> > > -> Limit (cost=0.00..170.00 rows=1 width=4)\n> > > -> Seq Scan on bar (cost=0.00..170.00 rows=1 width=4)\n> > > Filter: (foo.a = a)\n> > >\n> > > If this were to occur inside a larger plan tree someplace, it would be\n> > > OK to insert a Gather node above the Nested Loop node without doing\n> > > anything further, because then the parameter that stores foo.a would\n> > > be both set and used in the worker. If you wanted to insert a Gather\n> > > at any other place in this plan, things get more complicated. But just\n> > > because you have LATERAL doesn't mean that you have this problem,\n> > > because if you delete the \"limit 1\" then the subqueries get flattened\n> > > together and the parameter disappears,\n> >\n> > For future reference in this email thread when you remove the \"limit\n> > 1\" this is the plan you get:\n> >\n> > Merge Right Join (cost=372.18..815.71 rows=28815 width=8)\n> > Merge Cond: (bar.a = foo.a)\n> > -> Sort (cost=158.51..164.16 rows=2260 width=8)\n> > Sort Key: bar.a\n> > -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=8)\n> > -> Sort (cost=179.78..186.16 rows=2550 width=4)\n> > Sort Key: foo.a\n> > -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n> >\n> > Just to make sure I'm following: by \"doesn't mean that you have this\n> > problem\" you mean \"doesn't mean you have this limitation on parallel\n> > query\"?\n>\n> I'm talking specifically about whether there's a parameter. The\n> problem here isn't created by LATERAL, but by parameters. 
In the\n> nested loop plan, there's a parameter that's storing foo.a, and the\n> storage location that holds that parameter value is in backend-private\n> memory, so you can't set the value in the leader and then use it in a\n> worker, and that restricts where you can insert a Gather node. But in\n> the Merge Join plan (or if you got a Hash Join plan) there is no\n> parameter. So the fact that parameter storage isn't shared between\n> leaders and workers does not matter.\n\nAh, yes, right.\n\n> > Yes, that's a good point too. I need to play with these examples and\n> > confirm whether lateral_relids gets set in that case. IIRC when that's\n> > set isn't exactly the same as whether or not the LATERAL keyword is\n> > used, and I should clarify that my claims here are meant to be about\n> > when we execute it that way regardless of the keyword usage. The\n> > keyword usage I'd assumed just made it easier to talk about, but maybe\n> > you're implying that it's actually generating confusion.\n>\n> Yes, I think so.\n>\n> Stepping back a bit, commit 75f9c4ca5a8047d7a9cfbc7d51a610933d04dc7f\n> introduced the code that is at issue here, and it took the position\n> that limit/offset should be marked parallel-restricted because they're\n> nondeterministic. I'm not sure that's really quite the right thing.\n> The original idea behind having things be parallel-restricted was that\n> there might be stuff which depends on state in the leader that is not\n> replicated to or shared with the workers, and thus it must be executed\n> in the leader to get the right answer. Here, however, there is no such\n> problem. Something like LIMIT/OFFSET that is nondeterministic is\n> perfectly safe to execute in a worker, or in the leader. It's just not\n> safe to execute the same computation more than once and assume that we\n> will get the same answer each time. 
But that's really a different\n> problem.\n>\n> And the problem that you've run into here, I think, is that if a Limit\n> node is getting fed a parameter from higher up in the query plan, then\n> it's not really the same computation being performed every time, and\n> thus the non-determinism doesn't matter, and thus the\n> parallel-restriction goes away, but the code doesn't know that.\n\nCorrect. I think the simplest description of the proper limitation is\nthat we can't parallelize when either:\n1. The param comes from outside the worker, or\n2. The \"same\" param value from the same tuple is computed in multiple\nworkers (as distinct from the same value from different outer tuples).\nOr, to put it positively, we can parallelize when:\n1. There are no params, or\n2. The param is uniquely generated and used inside a single worker.\n\nI believe the followup email I sent (with an updated patch) details\nthe correctness of that approach; I'd be interested in your feedback\non that (I recognize it's quite a long email, though...) if you get a\nchance.\n\nThanks for your review on this so far,\nJames Coleman\n\n\n",
"msg_date": "Mon, 26 Sep 2022 11:17:01 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
},
{
"msg_contents": "This patch has been \"Needs Review\" and bouncing from CF to CF. It\nactually looks like it got quite a bit of design discussion and while\nJames Coleman seems convinced of its safety it doesn't sound like Tom\nLane and Robert Haas are yet and it doesn't look like that's going to\nhappen in this CF.\n\nI think I'm going to mark this Returned With Feedback on the basis\nthat it did actually get a significant amount of discussion and no\nfurther review seems to be forthcoming. Perhaps a more rigorous\napproach of proving the correctness or perhaps an\neasier-to-reason-about set of constraints would make it easier to\nreach a consensus?\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/42/2851/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Thu, 23 Mar 2023 14:42:51 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
}
] |
[
{
"msg_contents": "If you quit the editor without saving, the current query buffer\nor the last executed SQL statement gets run.\n\nThis can be annoying and disruptive, and it requires you to\nempty the file and save it if you don't want to execute anything.\n\nBut when editing a script, it is a clear POLA violation:\n\ntest=> \\! cat q.sql\nSELECT 99;\n\ntest=> SELECT 42;\n ?column? \n----------\n 42\n(1 row)\n\ntest=> \\e q.sql\n[quit the editor without saving]\n ?column? \n----------\n 42\n(1 row)\n\nThis is pretty bad: you either have to re-run the previous statement\nor you have to empty your script file. Both are unappealing.\n\nI have been annoyed about this myself, and I have heard other people\ncomplain about it, so I propose to clear the query buffer if the\neditor exits without modifying the file.\n\nThis behavior is much more intuitive for me.\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 01 Dec 2020 04:38:35 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Confusing behavior of psql's \\e"
},
{
"msg_contents": "On 11/30/20 22:38, Laurenz Albe wrote:\n\n> I have been annoyed about this myself, and I have heard other people\n> complain about it, so I propose to clear the query buffer if the\n> editor exits without modifying the file.\n\nOr how about keeping the query buffer content unchanged, but not\nexecuting it? Then it's still there if you have another idea about\nediting it. It's easy enough to \\r if you really want it gone.\n\nOne downside I can foresee is that I could form a habit of doing\nsomething that would be safe in psql version x but would bite me\none day if I'm in psql version x-- for some reason.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 1 Dec 2020 11:03:35 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Tue, 2020-12-01 at 11:03 -0500, Chapman Flack wrote:\n> On 11/30/20 22:38, Laurenz Albe wrote:\n> > I have been annoyed about this myself, and I have heard other prople\n> > complain about it, so I propose to clear the query buffer if the\n> > editor exits without modifying the file.\n> \n> Or how about keeping the query buffer content unchanged, but not\n> executing it? Then it's still there if you have another idea about\n> editing it. It's easy enough to \\r if you really want it gone.\n\nWhat if the buffer was empty? Would you want to get the previous\nquery in the query buffer? I'd assume not...\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 01 Dec 2020 17:21:11 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On 12/01/20 11:21, Laurenz Albe wrote:\n> On Tue, 2020-12-01 at 11:03 -0500, Chapman Flack wrote:\n>>> complain about it, so I propose to clear the query buffer if the\n>>> editor exits without modifying the file.\n>>\n>> Or how about keeping the query buffer content unchanged, but not\n>> executing it? Then it's still there if you have another idea about\n>> editing it. It's easy enough to \\r if you really want it gone.\n> \n> What if the buffer was empty? Would you want to get the previous\n> query in the query buffer? I'd assume not...\n\nI took your proposal to be specifically about what happens if the editor\nis exited with no change to the buffer, and in that case, I would suggest\nmaking no change to the buffer, but not re-executing it.\n\nIf the editor is exited after deliberately emptying the editor buffer,\nI would expect that to be treated as emptying the query buffer.\n\nI don't foresee any case that would entail bringing a /previous/ query\nback into the query buffer.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 1 Dec 2020 11:34:19 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Tue, 2020-12-01 at 11:34 -0500, Chapman Flack wrote:\n> On 12/01/20 11:21, Laurenz Albe wrote:\n> > On Tue, 2020-12-01 at 11:03 -0500, Chapman Flack wrote:\n> > > > I propose to clear the query buffer if the\n> > > > editor exits without modifying the file.\n> > > \n> > > Or how about keeping the query buffer content unchanged, but not\n> > > executing it? Then it's still there if you have another idea about\n> > > editing it. It's easy enough to \\r if you really want it gone.\n> > \n> > What if the buffer was empty? Would you want to get the previous\n> > query in the query buffer? I'd assume not...\n> \n> I took your proposal to be specifically about what happens if the editor\n> is exited with no change to the buffer, and in that case, I would suggest\n> making no change to the buffer, but not re-executing it.\n> \n> If the editor is exited after deliberately emptying the editor buffer,\n> I would expect that to be treated as emptying the query buffer.\n> \n> I don't foresee any case that would entail bringing a /previous/ query\n> back into the query buffer.\n\nI see I'll have to try harder.\n\nThe attached patch changes the behavior as follows:\n\n- If the current query buffer is edited, and you quit the editor,\n the query buffer is retained. 
This is as it used to be.\n\n- If the query buffer is empty and you run \\e, the previous query\n is edited (as it used to be), but quitting the editor will empty\n the query buffer and execute nothing.\n\n- Similarly, if you \"\\e file\" and quit the editor, nothing will\n be executed and the query buffer is emptied.\n\n- The same behavior change applies to \\ef and \\ev.\n There is no need to retain the definition in the query buffer,\n you can always run \\ef or \\ev again.\n\nI consider this a bug fix, but one that shouldn't be backpatched.\nRe-executing the previous query if the editor is quit is\nannoying at least and dangerous at worst.\n\nI'll add this patch to the next commitfest.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 16 Dec 2020 10:45:44 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Wed, 2020-12-16 at 10:45 +0100, Laurenz Albe wrote:\r\n> I consider this a bug fix, but one that shouldn't be backpatched.\r\n> Re-executing the previous query if the editor is quit is\r\n> annoying at least and dangerous at worst.\r\n\r\nI like that this patch also clears the query buffer in the error case.\r\n(For example, if I save the file but then decide I want to cancel\r\nexecution, the only choice is to issue an abortive :cq from Vim. The\r\ncurrent master-branch behavior is to just dump me back onto a\r\ncontinuation prompt, and I have to manually \\r the buffer. With this\r\npatch, I'm returned to an initial prompt with a clear buffer. Very\r\nnice.)\r\n\r\nSome unexpected behavior I saw when testing this patch: occasionally I\r\nwould perform a bare \\e, save the temporary file, and quit, only to\r\nfind that nothing was executed. What's happening is, I'm saving the\r\nfile too quickly, and the timestamp check (which has only second\r\nprecision) is failing! This isn't a problem in practice for the\r\nexplicit-filename case, because you probably didn't create the file\r\nwithin the last second, but the temporary files are always zero seconds\r\nold by definition. I could see this tripping up some people.\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 3 Mar 2021 00:07:22 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "Thanks for testing!\n\nOn Wed, 2021-03-03 at 00:07 +0000, Jacob Champion wrote:\n> On Wed, 2020-12-16 at 10:45 +0100, Laurenz Albe wrote:\n> \n> > I consider this a bug fix, but one that shouldn't be backpatched.\n> > Re-executing the previous query if the editor is quit is\n> > annoying at least and dangerous at worst.\n> \n> I like that this patch also clears the query buffer in the error case.\n> (For example, if I save the file but then decide I want to cancel\n> execution, the only choice is to issue an abortive :cq from Vim. The\n> current master-branch behavior is to just dump me back onto a\n> continuation prompt, and I have to manually \\r the buffer. With this\n> patch, I'm returned to an initial prompt with a clear buffer. Very\n> nice.)\n> \n> Some unexpected behavior I saw when testing this patch: occasionally I\n> would perform a bare \\e, save the temporary file, and quit, only to\n> find that nothing was executed. What's happening is, I'm saving the\n> file too quickly, and the timestamp check (which has only second\n> precision) is failing! This isn't a problem in practice for the\n> explicit-filename case, because you probably didn't create the file\n> within the last second, but the temporary files are always zero seconds\n> old by definition. I could see this tripping up some people.\n\nThat is because psql compares the file modification time before and\nafter the edit, and the \"st_mtime\" in \"struct stat\" has a precision of\nseconds. Some operating systems and file systems provide a finer granularity,\nbut for example PostgreSQL's _pgstat64 that is used on Windows doesn't.\n\nThis is no new behavior, and I think this is rare enough that we don't\nhave to bother. I had to define a vim macro to :wq the file fast\nenough to reproduce your observation...\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 03 Mar 2021 17:08:22 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Wed, 2021-03-03 at 17:08 +0100, Laurenz Albe wrote:\r\n> This is no new behavior, and I think this is rare enough that we don't\r\n> have to bother.\r\n\r\nI agree that it's not new behavior, but this patch exposes that\r\nbehavior for the temporary file case, because you've fixed the bug that\r\ncaused the timestamp check to not matter. As far as I can tell, you\r\ncan't run into that race on the master branch for temporary files, and\r\nyou won't run into it in practice for explicit filenames.\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 3 Mar 2021 17:12:22 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Wed, 2021-03-03 at 17:12 +0000, Jacob Champion wrote:\n> On Wed, 2021-03-03 at 17:08 +0100, Laurenz Albe wrote:\n> > This is no new behavior, and I think this is rare enough that we don't\n> > have to bother.\n> \n> I agree that it's not new behavior, but this patch exposes that\n> behavior for the temporary file case, because you've fixed the bug that\n> caused the timestamp check to not matter. As far as I can tell, you\n> can't run into that race on the master branch for temporary files, and\n> you won't run into it in practice for explicit filenames.\n\nActually, the timestamp check *did* matter before.\nThe code in \"do_edit\" has:\n\n[after the editor has been called]\nif (!error && before.st_mtime != after.st_mtime)\n{\n [read file back into query_buf]\n}\n\nThis is pre-existing code. I just added an else branch:\n\nelse\n{\n [discard query_buf if we were editing a script, function or view]\n}\n\nSo if you do your \"modify and save the file in less than a second\"\ntrick with the old code, you would end up with the old, unmodified\ndata in the query buffer.\n\nI would say that the old behavior is worse in that case, and\ndiscarding the query buffer is better.\n\nI am not against fixing or improving the behavior, but given the\nfact that we can't portably get less than second precision, it\nseems impossible. For good measure, I have added a check if the\nfile size has changed.\n\nI don't think we can or have to do better than that.\nFew people are skilled enough to modify and save a file in less\nthan a second, and I don't think there have been complaints\nabout that behavior so far.\n\nAttached is version 2 of the patch, with the added file size\ncheck and a pgindent run.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 04 Mar 2021 17:41:21 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Thu, 2021-03-04 at 17:41 +0100, Laurenz Albe wrote:\r\n> So if you do your \"modify and save the file in less than a second\"\r\n> trick with the old code, you would end up with the old, unmodified\r\n> data in the query buffer.\r\n\r\nSorry, I was unclear in my first post -- I'm not modifying the\r\ntemporary file. Just saving and quitting with :wq, which is much easier\r\nto do in less than a second.\r\n\r\n> I would say that the old behavior is worse in that case, and\r\n> discarding the query buffer is better.\r\n\r\nFor the case where you've modified the buffer, I agree, and I'm not\r\narguing otherwise.\r\n\r\n> I am not against fixing or improving the behavior, but given the\r\n> fact that we can't portably get less than second precision, it\r\n> seems impossible.\r\n\r\nYou could backdate the temporary file, so that any save is guaranteed\r\nto move the timestamp forward. That should work even if the filesystem\r\nhas extremely poor precision.\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 4 Mar 2021 16:51:55 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Thu, 2021-03-04 at 16:51 +0000, Jacob Champion wrote:\n> > I am not against fixing or improving the behavior, but given the\n> > fact that we can't portably get less than second precision, it\n> > seems impossible.\n> \n> You could backdate the temporary file, so that any save is guaranteed\n> to move the timestamp forward. That should work even if the filesystem\n> has extremely poor precision.\n\nAh, of course, that is the way to go.\nAttached is version 3 which does it like this.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 05 Mar 2021 13:38:32 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nVery nice quality-of-life improvement. Thanks!\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Mon, 08 Mar 2021 20:44:28 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Thu, 2021-03-04 at 16:51 +0000, Jacob Champion wrote:\n>> You could backdate the temporary file, so that any save is guaranteed\n>> to move the timestamp forward. That should work even if the filesystem\n>> has extremely poor precision.\n\n> Ah, of course, that is the way to go.\n\nI took a quick look at this. I don't have an opinion yet about the\nquestion of changing the when-to-discard-the-buffer rules, but I agree\nthat trying to get rid of the race condition inherent in the existing\nfile mtime test would be a good idea. However, I've got some\nportability-related gripes about how you are doing the latter:\n\n1. There is no principled reason to assume that the epoch date is in the\npast. IIRC, Postgres' timestamp epoch of 2000-01-01 was in the future\nat the time we set it. More relevant to the immediate issue, I clearly\nrecall a discussion at Red Hat in which one of the principal glibc\nmaintainers (likely Ulrich Drepper, though I'm not quite sure) argued\nthat 32-bit time_t could be used indefinitely by redefining the epoch\nforward by 2^32 seconds every often; which would require intervals of\ncirca 68 years in which time_t was seen as a negative offset from a\nfuture epoch date, rather than an unsigned offset from a past date.\nNow, I thought he was nuts then and I still think that 32-bit hardware\nwill be ancient history by 2038 ... but there may be systems that do it\nlike that. glibc hates ABI breakage.\n\n2. Putting an enormously old date on a file that was just created will\ngreatly confuse onlookers, some of whom (such as backup or antivirus\ndaemons) might not react pleasantly.\n\nBetween #1 and #2, it's clearly worth the extra one or two lines of\ncode to set the file dates to, say, \"time(NULL) - 1\", rather than\nassuming that zero is good enough.\n\n3. I wonder about the portability of utime(2). 
I see that we are using\nit to cause updates of socket and lock file times, but our expectations\nfor it there are rock-bottom low. I think that failing the edit command\nif utime() fails is an overreaction even with optimistic assumptions about\nits reliability. Doing that makes things strictly worse than simply not\ndoing anything, because 99% of the time this refinement is unnecessary.\n\nIn short, I think the relevant code ought to be more like\n\n\telse\n\t{\n\t struct utimbuf ut;\n\n\t ut.modtime = ut.actime = time(NULL) - 1;\n\t (void) utime(fname, &ut);\n\t}\n\n(plus some comments of course)\n\n\t\t\tregards, tom lane\n\nPS: I seem to recall that some Microsoft filesystems have 2-second\nresolution on file mod times, so maybe it needs to be \"time(NULL) - 2\"?\n\n\n",
"msg_date": "Wed, 10 Mar 2021 00:13:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 6:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> PS: I seem to recall that some Microsoft filesystems have 2-second\n> resolution on file mod times, so maybe it needs to be \"time(NULL) - 2\"?\n>\n\nYou are thinking about FAT:\n\nhttps://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfiletime#remarks\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Mar 10, 2021 at 6:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nPS: I seem to recall that some Microsoft filesystems have 2-second\nresolution on file mod times, so maybe it needs to be \"time(NULL) - 2\"?You are thinking about FAT:https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfiletime#remarksRegards,Juan José Santamaría Flecha",
"msg_date": "Wed, 10 Mar 2021 11:16:18 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Wed, 2021-03-10 at 00:13 -0500, Tom Lane wrote:\n> I agree\n> that trying to get rid of the race condition inherent in the existing\n> file mtime test would be a good idea. However, I've got some\n> portability-related gripes about how you are doing the latter:\n> \n> 1. There is no principled reason to assume that the epoch date is in the\n> past. IIRC, Postgres' timestamp epoch of 2000-01-01 was in the future\n> at the time we set it. More relevant to the immediate issue, I clearly\n> recall a discussion at Red Hat in which one of the principal glibc\n> maintainers (likely Ulrich Drepper, though I'm not quite sure) argued\n> that 32-bit time_t could be used indefinitely by redefining the epoch\n> forward by 2^32 seconds every often; which would require intervals of\n> circa 68 years in which time_t was seen as a negative offset from a\n> future epoch date, rather than an unsigned offset from a past date.\n> Now, I thought he was nuts then and I still think that 32-bit hardware\n> will be ancient history by 2038 ... but there may be systems that do it\n> like that. glibc hates ABI breakage.\n> \n> 2. Putting an enormously old date on a file that was just created will\n> greatly confuse onlookers, some of whom (such as backup or antivirus\n> daemons) might not react pleasantly.\n> \n> Between #1 and #2, it's clearly worth the extra one or two lines of\n> code to set the file dates to, say, \"time(NULL) - 1\", rather than\n> assuming that zero is good enough.\n> \n> 3. I wonder about the portability of utime(2). I see that we are using\n> it to cause updates of socket and lock file times, but our expectations\n> for it there are rock-bottom low. I think that failing the edit command\n> if utime() fails is an overreaction even with optimistic assumptions about\n> its reliability. 
Doing that makes things strictly worse than simply not\n> doing anything, because 99% of the time this refinement is unnecessary.\n> \n> In short, I think the relevant code ought to be more like\n> \n> \telse\n> \t{\n> \t struct utimbuf ut;\n> \n> \t ut.modtime = ut.actime = time(NULL) - 1;\n> \t (void) utime(fname, &ut);\n> \t}\n> \n> (plus some comments of course)\n> \n> \t\t\tregards, tom lane\n> \n> PS: I seem to recall that some Microsoft filesystems have 2-second\n> resolution on file mod times, so maybe it needs to be \"time(NULL) - 2\"?\n\nThanks for taking a look!\n\nTaken together, these arguments are convincing.\n\nDone like that in the attached patch version 4.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 10 Mar 2021 11:32:02 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Done like that in the attached patch version 4.\n\nI pushed the race-condition-fixing part of this, since that's an\nunarguable bug fix and hence seems OK to back-patch. (I added a\ncheck on change of file size, because why not.)\n\nAttached is the rest, just to keep the cfbot happy.\n\nI don't think this is quite committable yet. The division of\nlabor between do_edit() and its callers seems to need more\nthought: in particular, I see that \\ef now fails to print\n\"No changes\" when I would expect it to. But the real question\nis whether there is any non-error condition in which do_edit\nshould not set the query_buffer, either to the edit result\nor empty. If we're going to improve the header comment for\ndo_edit, I would expect it to specify what happens to the\nquery_buf, and the fact that I can't write a concise spec\nleads me to think that a bit more design effort is needed.\n\nAlso, somewhat cosmetic, but: I feel like the hunk that is\nsetting discard_on_quit in exec_command_edit is assuming\nmore than it ought to about what copy_previous_query will do.\nPerhaps it's worth making copy_previous_query return a bool\nindicating whether it copied previous_buf, and then\nexec_command_edit becomes\n\n\tdiscard_on_quit = copy_previous_query(query_buf, previous_buf);\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 12 Mar 2021 13:12:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Fri, 2021-03-12 at 13:12 -0500, Tom Lane wrote:\n> I pushed the race-condition-fixing part of this, since that's an\n> unarguable bug fix and hence seems OK to back-patch. (I added a\n> check on change of file size, because why not.)\n\nThank you!\n\n> Attached is the rest, just to keep the cfbot happy.\n\nThanks for that as well.\n\n> I don't think this is quite committable yet. The division of\n> labor between do_edit() and its callers seems to need more\n> thought: in particular, I see that \\ef now fails to print\n> \"No changes\" when I would expect it to.\n\nHm. My idea was that \"No changes\" is short for \"No changes to\nthe query buffer\" and is printed only if you quit the editor,\nbut the query buffer is retained.\n\nBut it also makes sense to interpret it as \"No changes to the\nfunction or view definition\", so I have put it back according\nto your expectations.\n\n> But the real question\n> is whether there is any non-error condition in which do_edit\n> should not set the query_buffer, either to the edit result\n> or empty.\n\nI don't follow. That's exactly what happens...\nBut I guess the confusion is due to my inadequate comments.\n\n> If we're going to improve the header comment for\n> do_edit, I would expect it to specify what happens to the\n> query_buf, and the fact that I can't write a concise spec\n> leads me to think that a bit more design effort is needed.\n\nI have described the fate of the query buffer in the function\ncomment. 
I hope it is clearer now.\n\n> Also, somewhat cosmetic, but: I feel like the hunk that is\n> setting discard_on_quit in exec_command_edit is assuming\n> more than it ought to about what copy_previous_query will do.\n> Perhaps it's worth making copy_previous_query return a bool\n> indicating whether it copied previous_buf, and then\n> exec_command_edit becomes\n> \n> discard_on_quit = copy_previous_query(query_buf, previous_buf);\n\nThat is a clear improvement, and I have done it like that.\n\n\nPerhaps I should restate the problem the patch is trying to solve:\n\ntest=> INSERT INTO dummy VALUES (DEFAULT);\nINSERT 0 1\ntest=> -- quit the editor during the following\ntest=> \\e script.sql\nINSERT 0 1\n\nOuch. A second line was inserted into \"dummy\".\n\nThe same happens with plain \\e: if I edit the previous query\nand quit, I don't expect it to be executed again.\nCurrently, the only way to achieve that is to empty the temporary\nfile and then save it, which is annoying.\n\n\nAttached is version 6.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 15 Mar 2021 17:26:41 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Attached is version 6.\n\nPushed with some mostly-cosmetic fiddling.\n\nOne thing I changed that wasn't cosmetic is that as you had it,\nthe behavior of \"\\e file\" varied depending on whether the query\nbuffer had been empty, which surely seems like a bad idea.\nI made it do discard_on_quit always in that case. I think there\nmight be a case for discard_on_quit = false always, ie maybe\nthe user wanted to load the file into the query buffer as-is.\nBut it seems like a pretty weak case --- you'd be more likely\nto just use \\i for that situation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Apr 2021 17:43:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Confusing behavior of psql's \\e"
},
{
"msg_contents": "On Sat, 2021-04-03 at 17:43 -0400, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > Attached is version 6.\n> \n> Pushed with some mostly-cosmetic fiddling.\n> \n> One thing I changed that wasn't cosmetic is that as you had it,\n> the behavior of \"\\e file\" varied depending on whether the query\n> buffer had been empty, which surely seems like a bad idea.\n> I made it do discard_on_quit always in that case. I think there\n> might be a case for discard_on_quit = false always, ie maybe\n> the user wanted to load the file into the query buffer as-is.\n> But it seems like a pretty weak case --- you'd be more likely\n> to just use \\i for that situation.\n\nThanks for that!\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 06 Apr 2021 11:17:58 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Confusing behavior of psql's \\e"
}
] |
[
{
"msg_contents": "Hi all\n\nI'd like to share the attached PG_LSN.pm module that I use when\nwriting TAP tests. I suggest that it be considered for inclusion in\ncore.\n\nIt defines a Perl datatype PG_LSN with operator support, so you can\nwrite things like\n\n cmp_ok($got_lsn, \"<\", $expected_lsn, \"testname\")\n\nin TAP tests and get sensible results without any concern for LSN\nrepresentation details, locale, etc. You can subtract LSNs to get a\nbyte difference too.\n\nIt's small but I've found it handy.",
"msg_date": "Tue, 1 Dec 2020 12:03:41 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "TAP test utility module 'PG_LSN.pm'"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 12:03:41PM +0800, Craig Ringer wrote:\n> I'd like to share the attached PG_LSN.pm module that I use when\n> writing TAP tests. I suggest that it be considered for inclusion in\n> core.\n> \n> It defines a Perl datatype PG_LSN with operator support, so you can\n> write things like\n> \n> cmp_ok($got_lsn, \"<\", $expected_lsn, \"testname\")\n> \n> in TAP tests and get sensible results without any concern for LSN\n> representation details, locale, etc. You can subtract LSNs to get a\n> byte difference too.\n\nIn my experience, any TAP tests making use of LSN operations can just\nlet the backend do the maths, so I am not much a fan of duplicating\nthat stuff in a perl module. Wouldn't it be better to add an\nequivalent of your add() function in the backend then? That could\nalso get tested in the main regression test suite.\n--\nMichael",
"msg_date": "Tue, 1 Dec 2020 14:58:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP test utility module 'PG_LSN.pm'"
},
{
"msg_contents": "On Tue, 1 Dec 2020 at 13:58, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 01, 2020 at 12:03:41PM +0800, Craig Ringer wrote:\n> > I'd like to share the attached PG_LSN.pm module that I use when\n> > writing TAP tests. I suggest that it be considered for inclusion in\n> > core.\n> >\n> > It defines a Perl datatype PG_LSN with operator support, so you can\n> > write things like\n> >\n> > cmp_ok($got_lsn, \"<\", $expected_lsn, \"testname\")\n> >\n> > in TAP tests and get sensible results without any concern for LSN\n> > representation details, locale, etc. You can subtract LSNs to get a\n> > byte difference too.\n>\n> In my experience, any TAP tests making use of LSN operations can just\n> let the backend do the maths, so I am not much a fan of duplicating\n> that stuff in a perl module. Wouldn't it be better to add an\n> equivalent of your add() function in the backend then? That could\n> also get tested in the main regression test suite.\n\nI find it convenient not to have as much log spam. Also, IIRC I needed\nit at some points where the target backend(s) were down while I was\ntesting something related to a backend that was still in recovery and\nnot yet available for querying.\n\nI don't really mind though, I'm just sharing what I have found useful.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 14:11:08 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TAP test utility module 'PG_LSN.pm'"
},
{
"msg_contents": "\n\nOn 2020/12/01 14:58, Michael Paquier wrote:\n> On Tue, Dec 01, 2020 at 12:03:41PM +0800, Craig Ringer wrote:\n>> I'd like to share the attached PG_LSN.pm module that I use when\n>> writing TAP tests. I suggest that it be considered for inclusion in\n>> core.\n>>\n>> It defines a Perl datatype PG_LSN with operator support, so you can\n>> write things like\n>>\n>> cmp_ok($got_lsn, \"<\", $expected_lsn, \"testname\")\n>>\n>> in TAP tests and get sensible results without any concern for LSN\n>> representation details, locale, etc. You can subtract LSNs to get a\n>> byte difference too.\n> \n> In my experience, any TAP tests making use of LSN operations can just\n> let the backend do the maths, so I am not much a fan of duplicating\n> that stuff in a perl module.\n\nAgreed.\n\n> Wouldn't it be better to add an\n> equivalent of your add() function in the backend then?\n\nYou mean the same function as the commit 9bae7e4cde provided?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 1 Dec 2020 15:14:06 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP test utility module 'PG_LSN.pm'"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 03:14:06PM +0900, Fujii Masao wrote:\n> You mean the same function as the commit 9bae7e4cde provided?\n\nCompletely forgot about this one, thanks.\n\n/me hides\n--\nMichael",
"msg_date": "Tue, 1 Dec 2020 15:27:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TAP test utility module 'PG_LSN.pm'"
}
] |
[
{
"msg_contents": "Hi\n\nSince seawasp's bleeding edge LLVM installation moved to \"trunk\n20201114 c8f4e06b 12.0.0\" ~16 days ago, it has been red. Further\nupdates didn't help it and it's now on \"trunk 20201127 6ee22ca6\n12.0.0\". I wonder if there is something in Fabien's scripting that\nneeds to be tweaked, perhaps a symlink name or similar. I don't\nfollow LLVM development but I found my way to a commit[1] around the\nright time that mentions breaking up the OrcJIT library, so *shrug*\nmaybe that's a clue.\n\n+ERROR: could not load library\n\"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\":\nlibLLVMOrcJIT.so.12git: cannot open shared object file: No such file\nor directory\n\n[1] https://github.com/llvm/llvm-project/commit/1d0676b54c4e3a517719220def96dfdbc26d8048\n\n\n",
"msg_date": "Tue, 1 Dec 2020 17:35:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-01 17:35:49 +1300, Thomas Munro wrote:\n> Since seawasp's bleeding edge LLVM installation moved to \"trunk\n> 20201114 c8f4e06b 12.0.0\" ~16 days ago, it has been red. Further\n> updates didn't help it and it's now on \"trunk 20201127 6ee22ca6\n> 12.0.0\". I wonder if there is something in Fabien's scripting that\n> needs to be tweaked, perhaps a symlink name or similar. I don't\n> follow LLVM development but I found my way to a commit[1] around the\n> right time that mentions breaking up the OrcJIT library, so *shrug*\n> maybe that's a clue.\n> \n> +ERROR: could not load library\n> \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\":\n> libLLVMOrcJIT.so.12git: cannot open shared object file: No such file\n> or directory\n\nIt's a change in how LLVM dependencies are declared\ninternally. Previously the 'native' component was - unintentionally -\ntransitively included via the 'orcjit' component, but now that's not the\ncase anymore.\n\nThe attached patch should fix it, I think?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 30 Nov 2020 22:49:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Hello Thomas,\n\n> Since seawasp's bleeding edge LLVM installation moved to \"trunk\n> 20201114 c8f4e06b 12.0.0\" ~16 days ago, it has been red. Further\n> updates didn't help it and it's now on \"trunk 20201127 6ee22ca6\n> 12.0.0\". I wonder if there is something in Fabien's scripting that\n> needs to be tweaked, perhaps a symlink name or similar.\n\nThe compiler compilation script is quite straightforward (basically, get \nsources, configure and compile), even for a such a moving target…\n\nThe right approach is to wait for some time before looking at the issue, \ntypically one week for the next recompilation, in case the problem \nevaporates, so you were right not to jump on it right away:-)\n\nAndres investigated a few days ago, managed to reproduce the issue \nlocally, and has one line patch. I'm unsure if it should be prevently \nback-patched, though.\n\n-- \nFabien.",
"msg_date": "Tue, 1 Dec 2020 21:04:44 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-01 21:04:44 +0100, Fabien COELHO wrote:\n> Andres investigated a few days ago, managed to reproduce the issue locally,\n> and has one line patch. I'm unsure if it should be prevently back-patched,\n> though.\n\nI see no reason not to backpatch - it's more correct for past versions\nof LLVM as well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Dec 2020 12:08:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-01 12:08:10 -0800, Andres Freund wrote:\n> On 2020-12-01 21:04:44 +0100, Fabien COELHO wrote:\n> > Andres investigated a few days ago, managed to reproduce the issue locally,\n> > and has one line patch. I'm unsure if it should be prevently back-patched,\n> > though.\n> \n> I see no reason not to backpatch - it's more correct for past versions\n> of LLVM as well.\n\nI pushed that now.\n\n- Andres\n\n\n",
"msg_date": "Mon, 7 Dec 2020 19:38:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-07 19:38:19 -0800, Andres Freund wrote:\n> On 2020-12-01 12:08:10 -0800, Andres Freund wrote:\n> > On 2020-12-01 21:04:44 +0100, Fabien COELHO wrote:\n> > > Andres investigated a few days ago, managed to reproduce the issue locally,\n> > > and has one line patch. I'm unsure if it should be prevently back-patched,\n> > > though.\n> > \n> > I see no reason not to backpatch - it's more correct for past versions\n> > of LLVM as well.\n> \n> I pushed that now.\n\nI hadn't checked that before, but for the last few days there's been a\ndifferent failure than the one I saw earlier:\n\n+ERROR: could not load library \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\": libLLVMOrcJIT.so.12git: cannot open shared object file: No such file or directory\n\nwhereas what I fixed is about:\n\n+ERROR: could not load library \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\": /home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so: undefined symbol: LLVMInitializeX86Target\n\nChanged somewhere between\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2020-11-20%2009%3A17%3A10\nand\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2020-11-21%2023%3A17%3A11\n\nThe \"no such file\" error seems more like a machine local issue to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Dec 2020 22:35:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "\nHello Andres,\n\n> I hadn't checked that before, but for the last few days there's been a\n> different failure than the one I saw earlier:\n>\n> +ERROR: could not load library \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\": libLLVMOrcJIT.so.12git: cannot open shared object file: No such file or directory\n>\n> whereas what I fixed is about:\n>\n> +ERROR: could not load library \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\": /home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so: undefined symbol: LLVMInitializeX86Target\n>\n> Changed somewhere between\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2020-11-20%2009%3A17%3A10\n> and\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2020-11-21%2023%3A17%3A11\n>\n> The \"no such file\" error seems more like a machine local issue to me.\n\nI'll look into it when I have time, which make take some time. Hopefully \nover the holidays.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 11 Dec 2020 20:45:12 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "On 2020-Dec-11, Fabien COELHO wrote:\n\n> > I hadn't checked that before, but for the last few days there's been a\n> > different failure than the one I saw earlier:\n> > \n> > +ERROR: could not load library \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\": libLLVMOrcJIT.so.12git: cannot open shared object file: No such file or directory\n\n> > The \"no such file\" error seems more like a machine local issue to me.\n> \n> I'll look into it when I have time, which make take some time. Hopefully\n> over the holidays.\n\nThis is still happening ... Any chance you can have a look at it?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Fri, 15 Jan 2021 14:06:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Dec-11, Fabien COELHO wrote:\n>> I'll look into it when I have time, which make take some time. Hopefully\n>> over the holidays.\n\n> This is still happening ... Any chance you can have a look at it?\n\nIf you don't have time to debug it, perhaps you could just disable\nthe buildfarm animal till you do. It's cluttering the buildfarm\nfailure report without providing useful info ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jan 2021 12:43:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "\nHello Alvaro,\n\n>>> The \"no such file\" error seems more like a machine local issue to me.\n>>\n>> I'll look into it when I have time, which make take some time. Hopefully\n>> over the holidays.\n>\n> This is still happening ... Any chance you can have a look at it?\n\nIndeed. I'll try to look (again) into it soon. I had a look but did not \nfind anything obvious in the short time frame I had. Last two months were \na little overworked for me so I let slip quite a few things. If you want \nto disable the animal as Tom suggests, do as you want.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 18 Jan 2021 21:29:53 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 09:29:53PM +0100, Fabien COELHO wrote:\n> >>>The \"no such file\" error seems more like a machine local issue to me.\n> >>\n> >>I'll look into it when I have time, which make take some time. Hopefully\n> >>over the holidays.\n> >\n> >This is still happening ... Any chance you can have a look at it?\n> \n> Indeed. I'll try to look (again) into it soon. I had a look but did not find\n> anything obvious in the short time frame I had. Last two months were a\n> little overworked for me so I let slip quite a few things. If you want to\n> disable the animal as Tom suggests, do as you want.\n\nPerhaps he was suggesting that you (buildfarm owner) disable the cron job that\ninitiates new runs. That's what I do when one of my animals needs my\nintervention.\n\n\n",
"msg_date": "Mon, 25 Jan 2021 03:40:51 +0000",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Mon, Jan 18, 2021 at 09:29:53PM +0100, Fabien COELHO wrote:\n>> ... Last two months were a\n>> little overworked for me so I let slip quite a few things. If you want to\n>> disable the animal as Tom suggests, do as you want.\n\n> Perhaps he was suggesting that you (buildfarm owner) disable the cron job that\n> initiates new runs. That's what I do when one of my animals needs my\n> intervention.\n\nIndeed. I'm not sure there even is a provision to block an animal on the\nbuildfarm-server side. If there is, you'd have to request that it be\nmanually undone after you get around to fixing the animal. Frankly,\nif I were the BF admin, I would be in about as much hurry to do that\nas you've been to fix it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Jan 2021 23:06:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 8:45 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > +ERROR: could not load library \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\": libLLVMOrcJIT.so.12git: cannot open shared object file: No such file or directory\n\nBonjour Fabien,\n\nHere is the creation of llvmjit.so:\n\ng++ ... -o llvmjit.so ... -L/home/fabien/clgtk/lib ... -lLLVMOrcJIT ...\n\nThat'd be from llvm-config --ldflags or similar, from this binary:\n\nchecking for llvm-config... (cached) /home/fabien/clgtk/bin/llvm-config\n\nSo what does ls -slap /home/fabien/clgtk/lib show? What does ldd\n/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\nshow? What do you have in seawap's LD_LIBRARY_PATH, and is the\nproblem that you need to add /home/fabien/clgtk/lib to it (or is it\nsupposed to be making it into the rpath, or is it in your ld.so.conf)?\n\nPS Could you try blowing away the accache directory so we can rule out\nbad cached configure stuff?\n\n\n",
"msg_date": "Mon, 15 Feb 2021 18:32:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Hello Thomas,\n\nThanks for looking at this, I'm currently far behind on many things and \nnot very responsive:-(\n\n> Here is the creation of llvmjit.so:\n>\n> g++ ... -o llvmjit.so ... -L/home/fabien/clgtk/lib ... -lLLVMOrcJIT ...\n>\n> That'd be from llvm-config --ldflags or similar, from this binary:\n>\n> checking for llvm-config... (cached) /home/fabien/clgtk/bin/llvm-config\n>\n> So what does ls -slap /home/fabien/clgtk/lib show?\n\nPlenty files and links, eg:\n\n 0 lrwxrwxrwx 1 fabien fabien 21 janv. 23 09:40 /home/fabien/clgtk/lib/libLLVMMCJIT.so -> libLLVMMCJIT.so.12git\n 2140 -rw-r--r-- 1 fabien fabien 2190824 janv. 23 09:28 /home/fabien/clgtk/lib/libLLVMMCJIT.so.12git\n 0 lrwxrwxrwx 1 fabien fabien 22 janv. 23 09:40 /home/fabien/clgtk/lib/libLLVMOrcJIT.so -> libLLVMOrcJIT.so.12git\n 40224 -rw-r--r-- 1 fabien fabien 41187208 janv. 23 09:34 /home/fabien/clgtk/lib/libLLVMOrcJIT.so.12git\n\nHmmm, clang recompilation has been failing for some weeks. Argh, this is \ndue to the recent \"master\" to \"main\" branch renaming.\n\n> What does ldd\n> /home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/tmp_install/home/fabien/pg/build-farm-11/buildroot/HEAD/inst/lib/postgresql/llvmjit.so\n\nNo such file of directory:-) because the directory is cleaned up.\n\n> show? What do you have in seawap's LD_LIBRARY_PATH, and is the\n> problem that you need to add /home/fabien/clgtk/lib to it\n\nArgh. Would it be so stupid? :-( I thought the configuration stuff would\nmanage the link path automatically, which may be quite na�ve, indeed.\n\n> PS Could you try blowing away the accache directory so we can rule out\n> bad cached configure stuff?\n\nHmmm. I've tried that before. I can do it again.\n\nI've added an explicit LD_LIBRARY_PATH, which will be triggered at some \npoint later.\n\n-- \nFabien Coelho - CRI, MINES ParisTech",
"msg_date": "Mon, 15 Feb 2021 10:05:32 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "Hi,\n\nOn 2021-02-15 10:05:32 +0100, Fabien COELHO wrote:\n> > show? What do you have in seawap's LD_LIBRARY_PATH, and is the\n> > problem that you need to add /home/fabien/clgtk/lib to it\n> \n> Argh. Would it be so stupid? :-( I thought the configuration stuff would\n> manage the link path automatically, which may be quite na�ve, indeed.\n\nThe only way to do that is to add an rpath annotation, and e.g. distros\ndon't like that - there's some security implications.\n\n> I've added an explicit LD_LIBRARY_PATH, which will be triggered at some\n> point later.\n\nYou can also do something like LDFLAGS=\"$LDFLAGS -Wl,-rpath,$(llvm-config --libdir)\"\nor such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Feb 2021 10:01:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "\n>> I've added an explicit LD_LIBRARY_PATH, which will be triggered at some\n>> point later.\n\nThis seems to have fixed the issue.\n\nI'm sorry for the noise and quite baffled anyway, because according to my \nchange logs it does not seem that I modified anything from my side about \nthe dynamic library path when compiling with clang. At least I do not see \na trace of that.\n\n> You can also do something like LDFLAGS=\"$LDFLAGS -Wl,-rpath,$(llvm-config --libdir)\"\n> or such.\n\nI've resorted to just hardcode LD_LIBRARY_PATH alongside PATH when \ncompiling with clang in my buildfarm script. Thanks for the tip anyway.\n\nAnd thanks Thomas for pointing out the fix!\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 15 Feb 2021 20:41:39 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
},
{
"msg_contents": "\nOn 1/24/21 11:06 PM, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> On Mon, Jan 18, 2021 at 09:29:53PM +0100, Fabien COELHO wrote:\n>>> ... Last two months were a\n>>> little overworked for me so I let slip quite a few things. If you want to\n>>> disable the animal as Tom suggests, do as you want.\n>> Perhaps he was suggesting that you (buildfarm owner) disable the cron job that\n>> initiates new runs. That's what I do when one of my animals needs my\n>> intervention.\n> Indeed. I'm not sure there even is a provision to block an animal on the\n> buildfarm-server side. If there is, you'd have to request that it be\n> manually undone after you get around to fixing the animal. Frankly,\n> if I were the BF admin, I would be in about as much hurry to do that\n> as you've been to fix it.\n>\n> \t\t\t\n\n\nIt's actually very easy, but it's something I usually reserve for people\nwho are very unresponsive to emails. As noted, disabling the crontab\nentry on the client side is a preferable solution.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 15 Feb 2021 14:50:34 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: PG vs LLVM 12 on seawasp, next round"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nCommitfest 2020-11 is officially closed now.\nMany thanks to everyone who participated by posting patches, reviewing \nthem, committing and sharing ideas in discussions!\n\nToday, me and Georgios will move the remaining items to the next CF or \nreturn them with feedback. We're planning to leave Ready For Committer \ntill the end of the week, to make them more visible and let them get the \nattention they deserve. And finally in the weekend I'll gather and share \nsome statistics.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 1 Dec 2020 13:05:54 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Commitfest 2020-11 is closed"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 01:05:54PM +0300, Anastasia Lubennikova wrote:\n> Commitfest 2020-11 is officially closed now.\n> Many thanks to everyone who participated by posting patches, reviewing them,\n> committing and sharing ideas in discussions!\n\nThanks Anastasia and Georgios for working on that!\n--\nMichael",
"msg_date": "Wed, 2 Dec 2020 16:42:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> Commitfest 2020-11 is officially closed now.\n> Many thanks to everyone who participated by posting patches, reviewing \n> them, committing and sharing ideas in discussions!\n\nThanks for all the hard work!\n\n> Today, me and Georgios will move the remaining items to the next CF or \n> return them with feedback. We're planning to leave Ready For Committer \n> till the end of the week, to make them more visible and let them get the \n> attention they deserve.\n\nThis is actually a bit problematic, because now the cfbot is ignoring\nthose patches (or if it's not, I don't know where it's displaying the\nresults). Please go ahead and move the remaining open patches, or\nelse re-open the CF if that's possible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Dec 2020 15:59:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 10:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This is actually a bit problematic, because now the cfbot is ignoring\n> those patches (or if it's not, I don't know where it's displaying the\n> results). Please go ahead and move the remaining open patches, or\n> else re-open the CF if that's possible.\n\nAs of quite recently, Travis CI doesn't seem to like cfbot's rate of\nbuild jobs. Recently it's been taking a very long time to post\nresults for new patches and taking so long to get around to retesting\npatches for bitrot that the my \"delete old results after a week\" logic\nstarted wiping out some results while they are still the most recent,\nleading to the blank spaces where the results are supposed to be.\nD'oh. I'm looking into a couple of options.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 10:15:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On 12/2/20 4:15 PM, Thomas Munro wrote:\n> On Thu, Dec 3, 2020 at 10:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This is actually a bit problematic, because now the cfbot is ignoring\n>> those patches (or if it's not, I don't know where it's displaying the\n>> results). Please go ahead and move the remaining open patches, or\n>> else re-open the CF if that's possible.\n> \n> As of quite recently, Travis CI doesn't seem to like cfbot's rate of\n> build jobs. Recently it's been taking a very long time to post\n> results for new patches and taking so long to get around to retesting\n> patches for bitrot that the my \"delete old results after a week\" logic\n> started wiping out some results while they are still the most recent,\n> leading to the blank spaces where the results are supposed to be.\n> D'oh. I'm looking into a couple of options.\n\npgBackRest test runs have gone from ~17 minutes to 3-6 hours over the \nlast two months.\n\nAlso keep in mind that travis-ci.org will be shut down at the end of the \nmonth. Some users who have migrated to travis-ci.com have complained \nthat it is not any faster, but I have not tried myself (yet).\n\nDepending on how you have Github organized migrating to travis-ci.com \nmay be bit tricky because it requires full access to all private \nrepositories in your account and orgs administrated by your account.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 2 Dec 2020 16:51:33 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 10:51 AM David Steele <david@pgmasters.net> wrote:\n> On 12/2/20 4:15 PM, Thomas Munro wrote:\n> > On Thu, Dec 3, 2020 at 10:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> This is actually a bit problematic, because now the cfbot is ignoring\n> >> those patches (or if it's not, I don't know where it's displaying the\n> >> results). Please go ahead and move the remaining open patches, or\n> >> else re-open the CF if that's possible.\n> >\n> > As of quite recently, Travis CI doesn't seem to like cfbot's rate of\n> > build jobs. Recently it's been taking a very long time to post\n> > results for new patches and taking so long to get around to retesting\n> > patches for bitrot that the my \"delete old results after a week\" logic\n> > started wiping out some results while they are still the most recent,\n> > leading to the blank spaces where the results are supposed to be.\n> > D'oh. I'm looking into a couple of options.\n>\n> pgBackRest test runs have gone from ~17 minutes to 3-6 hours over the\n> last two months.\n\nOuch.\n\n> Also keep in mind that travis-ci.org will be shut down at the end of the\n> month. Some users who have migrated to travis-ci.com have complained\n> that it is not any faster, but I have not tried myself (yet).\n\nOh.\n\n> Depending on how you have Github organized migrating to travis-ci.com\n> may be bit tricky because it requires full access to all private\n> repositories in your account and orgs administrated by your account.\n\nI'm experimenting with Github's built in CI. All other ideas welcome.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 11:13:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On 12/2/20 5:13 PM, Thomas Munro wrote:\n> On Thu, Dec 3, 2020 at 10:51 AM David Steele <david@pgmasters.net> wrote:\n>> On 12/2/20 4:15 PM, Thomas Munro wrote:\n>>> On Thu, Dec 3, 2020 at 10:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> This is actually a bit problematic, because now the cfbot is ignoring\n>>>> those patches (or if it's not, I don't know where it's displaying the\n>>>> results). Please go ahead and move the remaining open patches, or\n>>>> else re-open the CF if that's possible.\n>>>\n>>> As of quite recently, Travis CI doesn't seem to like cfbot's rate of\n>>> build jobs. Recently it's been taking a very long time to post\n>>> results for new patches and taking so long to get around to retesting\n>>> patches for bitrot that the my \"delete old results after a week\" logic\n>>> started wiping out some results while they are still the most recent,\n>>> leading to the blank spaces where the results are supposed to be.\n>>> D'oh. I'm looking into a couple of options.\n>>\n>> pgBackRest test runs have gone from ~17 minutes to 3-6 hours over the\n>> last two months.\n> \n> Ouch.\n\nYeah.\n\n>> Also keep in mind that travis-ci.org will be shut down at the end of the\n>> month. Some users who have migrated to travis-ci.com have complained\n>> that it is not any faster, but I have not tried myself (yet).\n> \n> Oh.\n\nYeah.\n\n>> Depending on how you have Github organized migrating to travis-ci.com\n>> may be bit tricky because it requires full access to all private\n>> repositories in your account and orgs administrated by your account.\n> \n> I'm experimenting with Github's built in CI. All other ideas welcome.\n\nWe're leaning towards Github actions ourselves. The only thing that \nmakes us want to stay with Travis (at least for some tests) is the \nsupport for the ppc64le, arm64, and s390x architectures. 
s390x in \nparticular since it is the only big-endian architecture we have access to.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 2 Dec 2020 17:18:21 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "\nOn 12/2/20 5:13 PM, Thomas Munro wrote:\n>\n> I'm experimenting with Github's built in CI. All other ideas welcome.\n\n\n\nI'd look very closely at gitlab.\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Wed, 2 Dec 2020 17:36:06 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "st 2. 12. 2020 v 23:36 odesílatel Andrew Dunstan <andrew@dunslane.net> napsal:\n>\n>\n> On 12/2/20 5:13 PM, Thomas Munro wrote:\n> >\n> > I'm experimenting with Github's built in CI. All other ideas welcome.\n>\n>\n>\n> I'd look very closely at gitlab.\n\nI was about to respond before with the same idea. Feel free to ping\nregarding another CI. Also there is possibility to move to GitHub\nActions (free open source CI). I got some experience with that as\nwell.\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n\n\n",
"msg_date": "Thu, 3 Dec 2020 00:02:14 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On 12/02/20 16:51, David Steele wrote:\n> Depending on how you have Github organized migrating to travis-ci.com may be\n> bit tricky because it requires full access to all private repositories in\n> your account and orgs administrated by your account.\n\nPL/Java just began using travis-ci.com this summer at the conclusion of\nour GSoC project, and Thomas had been leery of the full-access-to-everything\nrequirement, but that turned out to have been an old way it once worked.\nThe more current way involves installation as a GitHub app into a particular\nrepository, and it did not come with excessive access requirements.\n\nThat being said, we got maybe three months of use out of it all told.\nOn 2 November, they announced a \"new pricing model\"[1],[2], and since\n24 November it has no longer run PL/Java tests, logging a \"does not have\nenough credits\" message[3] instead. So I rather hastily put a GitHub\nActions workflow together to plug the hole.\n\nApparently there is a way for OSS projects to ask nicely for an allocation\nof some credits that might be available.\n\nRegards,\n-Chap\n\n\n[1]\nhttps://www.jeffgeerling.com/blog/2020/travis-cis-new-pricing-plan-threw-wrench-my-open-source-works\n[2] https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing\n[3] https://travis-ci.com/github/tada/pljava/requests\n\n\n",
"msg_date": "Wed, 2 Dec 2020 21:36:14 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 12:02 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n> st 2. 12. 2020 v 23:36 odesílatel Andrew Dunstan <andrew@dunslane.net> napsal:\n> > On 12/2/20 5:13 PM, Thomas Munro wrote:\n> > > I'm experimenting with Github's built in CI. All other ideas welcome.\n> >\n> > I'd look very closely at gitlab.\n>\n> I was about to respond before with the same idea. Feel free to ping\n> regarding another CI. Also there is possibility to move to GitHub\n> Actions (free open source CI). I got some experience with that as\n> well.\n\nI spent today getting something working on Github just to try it out,\nand started a new thread[1] about that. I've not tried Gitlab and\nhave no opinion about that; if someone has a working patch similar to\nthat, I'd definitely be interested to take a look. I've looked at\nsome others. For what cfbot is doing (namely: concentrating the work\nof hundreds into one \"account\" via hundreds of branches), the spin-up\ntimes and concurrency limits are a bit of a problem on many of them.\nFWIW I think they're all wonderful for offering this service to open\nsource projects and I'm grateful that they do it!\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2By_SHVQcU3CPokmJxuHp1niebCjq4XzZizf8SR9ZdQRQ%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 19:45:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On 02.12.2020 23:59, Tom Lane wrote:\n> Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n>> Commitfest 2020-11 is officially closed now.\n>> Many thanks to everyone who participated by posting patches, reviewing\n>> them, committing and sharing ideas in discussions!\n> Thanks for all the hard work!\n>\n>> Today, me and Georgios will move the remaining items to the next CF or\n>> return them with feedback. We're planning to leave Ready For Committer\n>> till the end of the week, to make them more visible and let them get the\n>> attention they deserve.\n> This is actually a bit problematic, because now the cfbot is ignoring\n> those patches (or if it's not, I don't know where it's displaying the\n> results). Please go ahead and move the remaining open patches, or\n> else re-open the CF if that's possible.\n>\n> \t\t\tregards, tom lane\n\nOh, I wasn't aware of that. Thank you for the reminder.\n\nNow all patches are moved to the next CF.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 3 Dec 2020 12:54:33 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On 2020-12-02 23:13, Thomas Munro wrote:\n> I'm experimenting with Github's built in CI. All other ideas welcome.\n\nYou can run Linux builds on AppVeyor, too.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 13:29:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "\nOn 12/3/20 4:54 AM, Anastasia Lubennikova wrote:\n> On 02.12.2020 23:59, Tom Lane wrote:\n>> Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n>>> Commitfest 2020-11 is officially closed now.\n>>> Many thanks to everyone who participated by posting patches, reviewing\n>>> them, committing and sharing ideas in discussions!\n>> Thanks for all the hard work!\n>>\n>>> Today, me and Georgios will move the remaining items to the next CF or\n>>> return them with feedback. We're planning to leave Ready For Committer\n>>> till the end of the week, to make them more visible and let them get\n>>> the\n>>> attention they deserve.\n>> This is actually a bit problematic, because now the cfbot is ignoring\n>> those patches (or if it's not, I don't know where it's displaying the\n>> results). Please go ahead and move the remaining open patches, or\n>> else re-open the CF if that's possible.\n>>\n>> regards, tom lane\n>\n> Oh, I wasn't aware of that. Thank you for the reminder.\n>\n> Now all patches are moved to the next CF.\n>\n\nMaybe this needs to be added to the instructions for CF managers so the\nworkflow is clear.\n\n\ncheers\n\n\nandrew\n\n\n\n\n",
"msg_date": "Thu, 3 Dec 2020 08:28:39 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 2:36 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 12/2/20 5:13 PM, Thomas Munro wrote:\n> >\n> > I'm experimenting with Github's built in CI. All other ideas welcome.\n>\n>\n> I'd look very closely at gitlab.\n>\n\n+1.\n\nWhy:\n- having a great experience for more than 2 years\n- if needed, there is an open-source version of it\n- it's possible to set up your own CI [custom] runners even when you're\nworking with their SaaS\n- finally, it's on Postgres itself\n\nOn Wed, Dec 2, 2020 at 2:36 PM Andrew Dunstan <andrew@dunslane.net> wrote:\nOn 12/2/20 5:13 PM, Thomas Munro wrote:\n>\n> I'm experimenting with Github's built in CI. All other ideas welcome.\n\nI'd look very closely at gitlab.+1.Why:- having a great experience for more than 2 years- if needed, there is an open-source version of it- it's possible to set up your own CI [custom] runners even when you're working with their SaaS- finally, it's on Postgres itself",
"msg_date": "Thu, 3 Dec 2020 09:03:47 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 1:29 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 2020-12-02 23:13, Thomas Munro wrote:\n> > I'm experimenting with Github's built in CI. All other ideas welcome.\n>\n> You can run Linux builds on AppVeyor, too.\n\nYeah. This would be the easiest change to make, because I already\nhave configuration files, cfbot already knows how to talk to Appveyor\nto collect results, and the build results are public URLs. So I'm\nleaning towards this option in the short term, so that cfbot keeps\nworking after December 31. We can always review the options later.\nAppveyor's free-for-open-source plan has no cap on minutes, but limits\nconcurrent jobs to 1.\n\nGitlab's free-for-open-source plan is based on metered CI minutes, but\ncfbot is a bit too greedy for the advertised limits. An annual\nallotment of 50,000 minutes would run out some time in February\nassuming we rebase each of ~250 branches every few days.\n\nGitHub Actions has very generous resource limits, nice UX and API, etc\netc, so it's tempting, but its build log URLs can only be accessed by\npeople with accounts. That limits their usefulness when discussing\ntest failures on our public mailing list. I thought about teaching\ncfbot to pull down the build logs and host them on its own web server,\nbut that feels like going against the grain.\n\n\n",
"msg_date": "Wed, 9 Dec 2020 09:46:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2020-11 is closed"
}
] |
[
{
"msg_contents": "Hi,\n\nIn function plan_union_children(), \nI found the lcons and list_delete_first here is easy to be replaced by lappend and list_delete_last.\n\nAnd I also found a previous commit do similar thing, so I try to improve this one.\n\nPrevious commit:\nd97b714a219959a50f9e7b37ded674f5132f93f3\n\nBest regards,\nhouzj",
"msg_date": "Tue, 1 Dec 2020 10:52:11 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Avoid using lcons and list_delete_first in plan_union_children()"
},
{
"msg_contents": "On 01/12/2020 12:52, Hou, Zhijie wrote:\n> Hi,\n> \n> In function plan_union_children(),\n> I found the lcons and list_delete_first here is easy to be replaced by lappend and list_delete_last.\n> \n> And I also found a previous commit do similar thing, so I try to improve this one.\n> \n> Previous commit:\n> d97b714a219959a50f9e7b37ded674f5132f93f3\n\nThis doesn't matter much in practice. I was able to measure a \nperformance difference with a very extreme example: \"SELECT 1 UNION \nSELECT 2 UNION ... UNION SELECT 10000\". That was about 30% faster with \nthis patch.\n\nI don't see any downside, though. And like the commit message of \nd97b714a21, it's better to have good examples than bad ones in the code \nbase, if someone copy-pastes it to somewhere where it matters.\n\nBut then again, I'm not excited about changing these one by one. If this \nis worth changing, what about the one in simplify_or_arguments()? Or in \nCopyMultiInsertInfoFlush()?\n\nPerhaps it would make sense to invent a new Queue abstraction for this. \nOr maybe it would be overkill..\n\n- Heikki\n\n\n",
"msg_date": "Tue, 1 Dec 2020 18:17:24 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Avoid using lcons and list_delete_first in plan_union_children()"
}
] |
[
{
"msg_contents": "Hi,\n\nI think we can pass CURSOR_OPT_PARALLEL_OK to pg_plan_query() for\nrefresh mat view so that parallelism can be considered for the SELECT\npart of the previously created mat view. The refresh mat view queries\ncan be faster in cases where SELECT is parallelized.\n\nAttaching a small patch. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 1 Dec 2020 17:34:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 5:34 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I think we can pass CURSOR_OPT_PARALLEL_OK to pg_plan_query() for\n> refresh mat view so that parallelism can be considered for the SELECT\n> part of the previously created mat view. The refresh mat view queries\n> can be faster in cases where SELECT is parallelized.\n>\n> Attaching a small patch. Thoughts?\n>\n\nAdded this to commitfest, in case it is useful -\nhttps://commitfest.postgresql.org/31/2856/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 18:12:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi\r\n\r\n> Added this to commitfest, in case it is useful -\r\n> https://commitfest.postgresql.org/31/2856/\r\n\r\nI have an issue about the safety of enable parallel select.\r\n\r\nI checked the [parallel insert into select] patch.\r\nhttps://commitfest.postgresql.org/31/2844/\r\nIt seems parallel select is not allowed when target table is temporary table.\r\n\r\n+\t/*\r\n+\t * We can't support table modification in parallel-mode if it's a foreign\r\n+\t * table/partition (no FDW API for supporting parallel access) or a\r\n+\t * temporary table.\r\n+\t */\r\n+\tif (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||\r\n+\t\tRelationUsesLocalBuffers(rel))\r\n+\t{\r\n+\t\ttable_close(rel, lockmode);\r\n+\t\tcontext->max_hazard = PROPARALLEL_UNSAFE;\r\n+\t\treturn context->max_hazard;\r\n+\t}\r\n\r\nFor Refresh MatView.\r\nif CONCURRENTLY is specified, It will builds new data in temp tablespace:\r\n\tif (concurrent)\r\n\t{\r\n\t\ttableSpace = GetDefaultTablespace(RELPERSISTENCE_TEMP, false);\r\n\t\trelpersistence = RELPERSISTENCE_TEMP;\r\n\t}\r\n\r\nFor the above case, should we still consider parallelism ?\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n",
"msg_date": "Tue, 22 Dec 2020 11:23:48 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 4:53 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> I have an issue about the safety of enable parallel select.\n>\n> I checked the [parallel insert into select] patch.\n> https://commitfest.postgresql.org/31/2844/\n> It seems parallel select is not allowed when target table is temporary table.\n>\n> + /*\n> + * We can't support table modification in parallel-mode if it's a foreign\n> + * table/partition (no FDW API for supporting parallel access) or a\n> + * temporary table.\n> + */\n> + if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||\n> + RelationUsesLocalBuffers(rel))\n> + {\n> + table_close(rel, lockmode);\n> + context->max_hazard = PROPARALLEL_UNSAFE;\n> + return context->max_hazard;\n> + }\n>\n> For Refresh MatView.\n> if CONCURRENTLY is specified, It will builds new data in temp tablespace:\n> if (concurrent)\n> {\n> tableSpace = GetDefaultTablespace(RELPERSISTENCE_TEMP, false);\n> relpersistence = RELPERSISTENCE_TEMP;\n> }\n>\n> For the above case, should we still consider parallelism ?\n\nThanks for taking a look at the patch.\n\nThe intention of the patch is to just enable the parallel mode while\nplanning the select part of the materialized view, but the insertions\ndo happen in the leader backend itself. That way even if there's\ntemporary tablespace gets created, we have no problem.\n\nThis patch can be thought as a precursor to parallelizing inserts in\nrefresh matview. I'm thinking to follow the design of parallel inserts\nin ctas [1] i.e. pushing the dest receiver down to the workers once\nthat gets reviewed and finalized. 
At that time, we should take care of\nnot pushing inserts down to workers if temporary tablespace gets\ncreated.\n\nIn summary, as far as this patch is considered we don't have any\nproblem with temporary tablespace getting created with CONCURRENTLY\noption.\n\nI'm planning to add a few test cases to cover this patch in\nmatview.sql and post a v2 patch soon.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWbQ95gS0z1viQC3qFVnMGAz7dcLjno9GdZv%2Bu9RAU9eQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 22 Dec 2020 18:19:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "> Thanks for taking a look at the patch.\r\n> \r\n> The intention of the patch is to just enable the parallel mode while planning\r\n> the select part of the materialized view, but the insertions do happen in\r\n> the leader backend itself. That way even if there's temporary tablespace\r\n> gets created, we have no problem.\r\n> \r\n> This patch can be thought as a precursor to parallelizing inserts in refresh\r\n> matview. I'm thinking to follow the design of parallel inserts in ctas [1]\r\n> i.e. pushing the dest receiver down to the workers once that gets reviewed\r\n> and finalized. At that time, we should take care of not pushing inserts\r\n> down to workers if temporary tablespace gets created.\r\n> \r\n> In summary, as far as this patch is considered we don't have any problem\r\n> with temporary tablespace getting created with CONCURRENTLY option.\r\n> \r\n> I'm planning to add a few test cases to cover this patch in matview.sql\r\n> and post a v2 patch soon.\r\n\r\nThanks for your explanation!\r\nYou are right that temporary tablespace does not affect current patch.\r\n\r\nFor the testcase:\r\nI noticed that you have post a mail about add explain support for REFRESH MATERIALIZED VIEW.\r\nDo you think we can combine these two features in one thread ?\r\n\r\nPersonally, The testcase with explain info will be better.\r\n\r\nBest regards,\r\nhouzj\r\n\n\n",
"msg_date": "Wed, 23 Dec 2020 03:44:30 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 9:14 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > Thanks for taking a look at the patch.\n> >\n> > The intention of the patch is to just enable the parallel mode while planning\n> > the select part of the materialized view, but the insertions do happen in\n> > the leader backend itself. That way even if there's temporary tablespace\n> > gets created, we have no problem.\n> >\n> > This patch can be thought as a precursor to parallelizing inserts in refresh\n> > matview. I'm thinking to follow the design of parallel inserts in ctas [1]\n> > i.e. pushing the dest receiver down to the workers once that gets reviewed\n> > and finalized. At that time, we should take care of not pushing inserts\n> > down to workers if temporary tablespace gets created.\n> >\n> > In summary, as far as this patch is considered we don't have any problem\n> > with temporary tablespace getting created with CONCURRENTLY option.\n> >\n> > I'm planning to add a few test cases to cover this patch in matview.sql\n> > and post a v2 patch soon.\n>\n> Thanks for your explanation!\n> You are right that temporary tablespace does not affect current patch.\n>\n> For the testcase:\n> I noticed that you have post a mail about add explain support for REFRESH MATERIALIZED VIEW.\n> Do you think we can combine these two features in one thread ?\n\nYeah, it is at [1] and on some initial analysis ISTM that\nexplain/explain analyze RMV requires a good amount of code\nrearrangement in ExecRefreshMatView(). IMO these two features can be\nkept separate because they serve different purposes.\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACU71s91G1EOzo-Xx7Z4mvF0dKq-mYeP5u4nikJWxPNRSA%40mail.gmail.com\n\n> Personally, The testcase with explain info will be better.\n\nYeah without explain analyze we can not show whether the parallelism\nis picked in the test cases. 
What we could do is that we can add a\nplain RMV test case in write_parallel.sql after CMV so that at least\nwe can be ensured that the parallelism will be picked because of the\nenforcement there. We can always see the parallelism for the select\npart of explain analyze CMV in write_parallel.sql and the same select\nquery gets planned even in RMV cases.\n\nIMO, the patch in this thread can go with test case addition to\nwrite_parallel.sql. since it is very small.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 24 Dec 2020 14:09:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "> Yeah without explain analyze we can not show whether the parallelism is\r\n> picked in the test cases. What we could do is that we can add a plain RMV\r\n> test case in write_parallel.sql after CMV so that at least we can be ensured\r\n> that the parallelism will be picked because of the enforcement there. We\r\n> can always see the parallelism for the select part of explain analyze CMV\r\n> in write_parallel.sql and the same select query gets planned even in RMV\r\n> cases.\r\n> \r\n> IMO, the patch in this thread can go with test case addition to\r\n> write_parallel.sql. since it is very small.\r\n> \r\n> Thoughts?\r\n\r\nYes, agreed.\r\n\r\nBest regards,\r\nhouzj\r\n\n\n",
"msg_date": "Wed, 30 Dec 2020 02:33:43 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 8:03 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > Yeah without explain analyze we can not show whether the parallelism is\n> > picked in the test cases. What we could do is that we can add a plain RMV\n> > test case in write_parallel.sql after CMV so that at least we can be ensured\n> > that the parallelism will be picked because of the enforcement there. We\n> > can always see the parallelism for the select part of explain analyze CMV\n> > in write_parallel.sql and the same select query gets planned even in RMV\n> > cases.\n> >\n> > IMO, the patch in this thread can go with test case addition to\n> > write_parallel.sql. since it is very small.\n> >\n> > Thoughts?\n>\n> Yes, agreed.\n\nThanks. Added the test case. Attaching v2 patch. Please have a look.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 30 Dec 2020 09:19:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On 30-12-2020 04:49, Bharath Rupireddy wrote:\n> On Wed, Dec 30, 2020 at 8:03 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>>> Yeah without explain analyze we can not show whether the parallelism is\n>>> picked in the test cases. What we could do is that we can add a plain RMV\n>>> test case in write_parallel.sql after CMV so that at least we can be ensured\n>>> that the parallelism will be picked because of the enforcement there. We\n>>> can always see the parallelism for the select part of explain analyze CMV\n>>> in write_parallel.sql and the same select query gets planned even in RMV\n>>> cases.\n>>>\n>>> IMO, the patch in this thread can go with test case addition to\n>>> write_parallel.sql. since it is very small.\n>>>\n>>> Thoughts?\n>>\n>> Yes, agreed.\n> \n> Thanks. Added the test case. Attaching v2 patch. Please have a look.\n> \n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n> \n\nHi,\n\nLooks good to me and a nice simple improvement.\n\nPasses everything according to http://cfbot.cputube.org/ so marked it \ntherefore as ready for committer.\n\nCheers,\nLuc\n\n\n",
"msg_date": "Mon, 4 Jan 2021 09:04:16 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\npasses according to http://cfbot.cputube.org/\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Mon, 04 Jan 2021 08:04:34 +0000",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Jan 4, 2021 at 9:05 PM Luc Vlaming <luc@swarm64.com> wrote:\n> The new status of this patch is: Ready for Committer\n\nI think the comments above this might as well be removed, because they\naren't very convincing:\n\n+-- Allow parallel planning of the underlying query for refresh materialized\n+-- view. We can be ensured that parallelism will be picked because of the\n+-- enforcement done at the beginning of the test.\n+refresh materialized view parallel_mat_view;\n\nIf you just leave the REFRESH command, at least it'll be exercised,\nand I know you have a separate CF entry to add EXPLAIN support for\nREFRESH. So I'd just rip these weasel words out and then in a later\ncommit you can add the EXPLAIN there where it's obviously missing.\n\nWhile reading some back history, I saw that commit e9baa5e9 introduced\nparallelism for CREATE M V, but REFRESH was ripped out of the original\npatch by Robert, who said:\n\n> The problem with a case like REFRESH MATERIALIZED VIEW is that there's\n> nothing to prevent something that gets run in the course of the query\n> from trying to access the view (and the heavyweight lock won't prevent\n> that, due to group locking). That's probably a stupid thing to do,\n> but it can't be allowed to break the world. The other cases are safe\n> from that particular problem because the table doesn't exist yet.\n\nHmmm.\n\n\n",
"msg_date": "Mon, 15 Mar 2021 18:08:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 10:39 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Jan 4, 2021 at 9:05 PM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> While reading some back history, I saw that commit e9baa5e9 introduced\n> parallelism for CREATE M V, but REFRESH was ripped out of the original\n> patch by Robert, who said:\n>\n> > The problem with a case like REFRESH MATERIALIZED VIEW is that there's\n> > nothing to prevent something that gets run in the course of the query\n> > from trying to access the view (and the heavyweight lock won't prevent\n> > that, due to group locking). That's probably a stupid thing to do,\n> > but it can't be allowed to break the world. The other cases are safe\n> > from that particular problem because the table doesn't exist yet.\n>\n> Hmmm.\n>\n\nI am not sure but we might want to add this in comments of the refresh\nmaterialize view function so that it would be easier for future\nreaders to understand why we have not enabled parallelism for this\ncase.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 15 Mar 2021 11:58:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 10:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> While reading some back history, I saw that commit e9baa5e9 introduced\n> parallelism for CREATE M V, but REFRESH was ripped out of the original\n> patch by Robert, who said:\n>\n> > The problem with a case like REFRESH MATERIALIZED VIEW is that there's\n> > nothing to prevent something that gets run in the course of the query\n> > from trying to access the view (and the heavyweight lock won't prevent\n> > that, due to group locking). That's probably a stupid thing to do,\n> > but it can't be allowed to break the world. The other cases are safe\n> > from that particular problem because the table doesn't exist yet.\n\nPlease correct me if my understanding of the above comment (from the\ncommit e9baa5e9) is wrong - even if the leader opens the matview\nrelation in exclusive mode, because of group locking(in case we allow\nparallel workers to feed in the data to the new heap that gets created\nfor RMV, see ExecRefreshMatView->make_new_heap), can other sessions\nstill access the matview relation with older data?\n\nI performed below testing to prove myself wrong for the above understanding:\nsession 1:\n1) added few rows to the table t1 on which the mv1 is defined;\n2) refresh materialized view mv1;\n\nsession 2:\n1) select count(*) from mv1; ---> this query is blocked until\nsession 1's step (2) is completed and gives the latest result even if\nthe underlying data-generating query runs select part in parallel.\n\nIs there anything I'm missing here?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Mar 2021 12:55:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 8:25 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > The problem with a case like REFRESH MATERIALIZED VIEW is that there's\n> > > nothing to prevent something that gets run in the course of the query\n> > > from trying to access the view (and the heavyweight lock won't prevent\n> > > that, due to group locking). That's probably a stupid thing to do,\n> > > but it can't be allowed to break the world. The other cases are safe\n> > > from that particular problem because the table doesn't exist yet.\n>\n> Please correct me if my understanding of the above comment (from the\n> commit e9baa5e9) is wrong - even if the leader opens the matview\n> relation in exclusive mode, because of group locking(in case we allow\n> parallel workers to feed in the data to the new heap that gets created\n> for RMV, see ExecRefreshMatView->make_new_heap), can other sessions\n> still access the matview relation with older data?\n>\n> I performed below testing to prove myself wrong for the above understanding:\n> session 1:\n> 1) added few rows to the table t1 on which the mv1 is defined;\n> 2) refresh materialized view mv1;\n>\n> session 2:\n> 1) select count(*) from mv1; ---> this query is blocked until\n> session 1's step (2) is completed and gives the latest result even if\n> the underlying data-generating query runs select part in parallel.\n>\n> Is there anything I'm missing here?\n\nI think he was talking about things like functions that try to access\nthe mv from inside the same query, in a worker. I haven't figured out\nexactly which hazards he meant. 
I thought about wrong-relfilenode\nhazards and combocid hazards, but considering the way this thing\nalways inserts into a fresh table before performing merge or swap\nsteps later, I don't yet see why this is different from any other\ninsert-from-select-with-gather.\n\nWith the script below you can reach this error in the leader:\n\npostgres=# refresh materialized view CONCURRENTLY mv;\nERROR: cannot start commands during a parallel operation\nCONTEXT: SQL function \"crazy\"\n\nBut that's reachable on master too, even if you change crazy() to do\n\"select 42 from pg_class limit 1\" instead of reading mv, when\nperforming a CREATE M V without NO DATA. :-( Without parallel\nleader participation it runs to completion.\n\n===8<===\n\nset parallel_leader_participation = off;\nset parallel_setup_cost = 0;\nset min_parallel_table_scan_size = 0;\nset parallel_tuple_cost = 0;\n\ndrop table if exists t cascade;\n\ncreate table t (i int);\ninsert into t select generate_series(1, 100000);\ncreate materialized view mv as select 42::int i;\ncreate or replace function crazy() returns int as $$ select i from mv\nlimit 1; $$ language sql parallel safe;\ndrop materialized view mv;\ncreate materialized view mv as select i + crazy() i from t with no data;\ncreate unique index on mv(i);\n\nrefresh materialized view mv;\nrefresh materialized view concurrently mv;\n\nbegin;\nrefresh materialized view mv;\nrefresh materialized view mv;\ncommit;\n\nbegin;\nrefresh materialized view concurrently mv;\nrefresh materialized view concurrently mv;\ncommit;\n\n===8<===\n\nPS, off-topic observation made while trying to think of ways to break\nyour patch: I noticed that REFRESH CONCURRENTLY spends a lot of time\nin refresh_by_match_merge()'s big FULL JOIN. That is separate from\nthe view query that you're parallelising here, and is used to perform\nthe merge between a temporary table and the target MV table. 
I hacked\nthe code a bit so that it wasn't scanning a temporary table\n(unparallelisable), and tried out the Parallel Hash Full Join patch\nwhich I intend to commit soon. This allowed REFRESH CONCURRENTLY to\ncomplete much faster. Huzzah! Unfortunately that query also does\nORDER BY tid; I guess we could remove that to skip a Sort and use\nGather instead of the more expensive Gather Merge, and hopefully soon\na pushed-down Insert.\n\n\n",
"msg_date": "Tue, 16 Mar 2021 14:41:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 2:41 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Mar 15, 2021 at 8:25 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > The problem with a case like REFRESH MATERIALIZED VIEW is that there's\n> > > > nothing to prevent something that gets run in the course of the query\n> > > > from trying to access the view (and the heavyweight lock won't prevent\n> > > > that, due to group locking). That's probably a stupid thing to do,\n> > > > but it can't be allowed to break the world. The other cases are safe\n> > > > from that particular problem because the table doesn't exist yet.\n> >\n> > Please correct me if my understanding of the above comment (from the\n> > commit e9baa5e9) is wrong - even if the leader opens the matview\n> > relation in exclusive mode, because of group locking(in case we allow\n> > parallel workers to feed in the data to the new heap that gets created\n> > for RMV, see ExecRefreshMatView->make_new_heap), can other sessions\n> > still access the matview relation with older data?\n> >\n> > I performed below testing to prove myself wrong for the above understanding:\n> > session 1:\n> > 1) added few rows to the table t1 on which the mv1 is defined;\n> > 2) refresh materialized view mv1;\n> >\n> > session 2:\n> > 1) select count(*) from mv1; ---> this query is blocked until\n> > session 1's step (2) is completed and gives the latest result even if\n> > the underlying data-generating query runs select part in parallel.\n> >\n> > Is there anything I'm missing here?\n>\n> I think he was talking about things like functions that try to access\n> the mv from inside the same query, in a worker. I haven't figured out\n> exactly which hazards he meant. 
I thought about wrong-relfilenode\n> hazards and combocid hazards, but considering the way this thing\n> always inserts into a fresh table before performing merge or swap\n> steps later, I don't yet see why this is different from any other\n> insert-from-select-with-gather.\n\nI asked Robert if he had some hazard in mind that we haven't already\ndiscussed here when he wrote that, and didn't recall any. I think\nwe're OK here.\n\nI added the \"concurrently\" variant to the regression test, just to get\nit exercised too.\n\nThe documentation needed a small tweak where we have a list of\ndata-writing commands that are allowed to use parallelism. That run\nof sentences was getting a bit tortured so I converted it into a\nbullet list; I hope I didn't upset the documentation style police.\n\nPushed. Thanks for working on this! This is really going to fly with\nINSERT pushdown. The 3 merge queries used by CONCURRENTLY will take\nsome more work.\n\n\n",
"msg_date": "Wed, 17 Mar 2021 15:08:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consider Parallelism While Planning For REFRESH MATERIALIZED VIEW"
}
] |
[
{
"msg_contents": "Hackers,\n\nGiven that we have already heard about 3 separate minor release regressions\nin the most recent update I'm putting forth for discussion whether an\noff-schedule release should be done. I probably wouldn't care as much if\nit wasn't for the fact that the same release fixes two CVEs.\n\nhttps://www.postgresql.org/message-id/flat/1b8561412e8a4f038d7a491c8b922788%40smhi.se\n(notification queue)\nhttps://www.postgresql.org/message-id/flat/16746-44b30e2edf4335d4%40postgresql.org\n(\\connect)\nhttps://www.postgresql.org/message-id/flat/831E420C-D9D5-4563-ABA4-F05F03E6D7DE%40guidance.nl#718d1785b6533f13b992c800f3bd6647\n(create table like)\n\nDavid J.",
"msg_date": "Tue, 1 Dec 2020 11:44:40 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Off-Schedule Minor Release Consideration?"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Given that we have already heard about 3 separate minor release regressions\n> in the most recent update I'm putting forth for discussion whether an\n> off-schedule release should be done. I probably wouldn't care as much if\n> it wasn't for the fact that the same release fixes two CVEs.\n\nThe release team had a discussion about this. We're leaning towards\nnot doing anything out of the ordinary, but are open to public input.\nTo recap a bit:\n\nActually there are four known regressions, because there are two\ndifferent CREATE TABLE LIKE bugs:\n* CREATE TABLE LIKE gets confused if new table masks old one\n (see f7f83a55b)\n* CREATE TABLE LIKE fails to handle a self-referential foreign key\n defined in the CREATE, if it depends on an index imported by LIKE\n (see 97390fe8a)\n* psql \\c with a connstring containing a password ignores the password\n (see 7e5e1bba0)\n* LISTEN/NOTIFY can fail to empty its SLRU queue in v12 and earlier\n (see 9c83b54a9)\n\nWe felt that the first three of these are at best minor annoyances.\n\nThe LISTEN/NOTIFY issue is potentially more serious, in that it\ncould cause a service outage for a site that makes heavy use of\nNOTIFY. However, the race is between two listeners, so in the\ncommon (I think) usage pattern of only one listener there's no\nissue. Even if you hit the race condition, you don't have a problem\nuntil 8GB more NOTIFY traffic has been sent, which seems like a lot.\nThe complainants in that bug report indicated they'd overrun that\namount in a week or two, but that seems like unusually heavy usage.\n\nThe other aspect that seems to mitigate the seriousness of the\nLISTEN/NOTIFY issue is that a server restart is enough to clear\nthe queue. 
Given that you'd also need a server restart to install\na hypothetical off-cycle release, it's easy to see that there's\nno benefit except to sites that anticipate logging more than 16GB\nof NOTIFY traffic (and thus needing two or more restarts) before\nFebruary.\n\nSo we are leaning to the position that the severity of these issues\ndoesn't justify the work involved in putting out an off-cycle release.\nBut if there are people out there with high-NOTIFY-traffic servers\nwho would be impacted, please speak up!\n\n(BTW, the calendar is also a bit unfriendly to doing an off-cycle\nrelease. It's probably already too late to plan a Dec 10 release,\nand by the week after that a lot of people will be in holiday mode.\nIf you start thinking about January, it's not clear why bother,\nsince we'll have minor releases in February anyway.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Dec 2020 14:50:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Off-Schedule Minor Release Consideration?"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been using libpq to access postgres from within a system that uses an\nedge-triggered epoll in its event-loop. The instructions on\nhttps://www.postgresql.org/docs/current/libpq-async.html are pretty good,\nbut I've run into a bit of an edge case that would be much easier to handle\nif the interfaces exposed by libpq were a little more ergonomic. If it\nmakes a difference, I'm running with single-row mode enabled on the client.\n\nSpecifically, PQconsumeInput returns no information that allows the caller\nto distinguish whether there might be more data available to be read on the\nnetwork socket if PQconsumeInput were to be called again (it just returns 0\non error and 1 otherwise), and PQisBusy doesn't return false until a full\nrow's worth of data has been read by PQconsumeInput. This is a bad\ncombination if a result set contains rows larger than PGconn's default\nbuffer size, since calling PQconsumeInput followed by PQisBusy can suggest\nthat we need to wait on the socket's readability even if there's already\nmore data available to be read on the socket.\n\nIf more detail helps, here's a slightly more detailed summary based on my\ntrawling through the code:\n\n* The recommended pattern for processing responses to async commands is to\nwait until the socket is readable, then call PQconsumeInput, then check\nPQisBusy. 
If PQisBusy returns true, the docs suggest waiting on the socket\nagain.\n* When PQconsumeInput is called, it doubles the PGconn's buffer size if\nless than 8k of space is unused in it.\n* PQconsumeInput will then read either until its remaining buffer space\nfills up or until the socket has no more data in it ready to be read.\n* If the buffer fills up, PQconsumeInput will return to the caller even if\nthere's still more data to be read\n* PQconsumeInput's return value only indicates whether or not there was an\nerror, not whether any data was read.\n* PQisBusy will return true unless the buffer contains an entire row; it\ndoes not actually check the status of the socket.\n* If the PGconn's buffer wasn't large enough to fit an entire row in it\nwhen you called PQconsumeInput, PQisBusy will return true, suggesting that\nyou ought to wait on socket readability, when really the right thing to do\nwould be to call PQconsumeInput again (potentially multiple times) until\nthe buffer finally grows to be large enough to contain the whole row before\nPQisBusy can return false.\n\nThis can be worked around by making a poll() syscall on the socket with\ntimeout 0 before handing the socket off to epoll, but libpq could make this\ncase easier to deal with with a slightly different interface.\n\nThe function that PQconsumeInput uses internally, pqReadData, has a much\nmore helpful interface -- it returns 1 if it successfully read at least 1\nbyte, 0 if no data was available, or -1 if there was an error. If\nPQconsumeInput had a similar range of return values, this ability to\ndistinguish between whether we read data or not would be enough information\nto know whether we ought to call PQconsumeInput again when PQisBusy returns\ntrue.\n\nAs for what we can realistically do about it, I imagine it may be\ndisruptive to add an additional possible return value to PQconsumeInput\n(e.g. return 1 if at least one byte was read or 2 if no data was\navailable). 
And I'm not sure edge-triggered polling is a use case the\ncommunity cares enough about to justify adding a separate method that does\nbasically the same thing as PQconsumeInput but with more information\nconveyed by its return value.\n\nSo maybe there isn't any change worth making here, but I wanted to at least\nmention the problem, since it was pretty disappointing to me that calling\nPQconsumeInput followed by PQisBusy and then waiting on readability\noccasionally caused a hang on large rows. I'd expect other developers using\nedge-triggered epoll to run into problems like this too. If there's a\nbackwards-compatible way of making it less error prone, it'd be nice to do\nso.\n\nAlex",
"msg_date": "Tue, 1 Dec 2020 14:37:16 -0600",
"msg_from": "Alex Robinson <alexdwanerobinson@gmail.com>",
"msg_from_op": true,
"msg_subject": "libpq async command processing methods are difficult to use with\n edge-triggered epoll"
}
] |
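The workaround described in the thread above can be sketched as a small helper. This is a hypothetical function (the name `drain_connection` and its return-value convention are not from libpq or the original mail), shown under the assumption of an established non-blocking `PGconn`; error handling is minimal:

```c
#include <poll.h>
#include <libpq-fe.h>

/*
 * Hypothetical helper: consume input until either a whole result is
 * buffered or the socket is genuinely drained, so that re-arming an
 * edge-triggered epoll watch cannot miss already-readable bytes.
 * Returns -1 on connection error, 0 when PQgetResult will not block,
 * and 1 when it is safe to wait for EPOLLIN.
 */
static int
drain_connection(PGconn *conn)
{
    for (;;)
    {
        if (!PQconsumeInput(conn))
            return -1;          /* connection failure */
        if (!PQisBusy(conn))
            return 0;           /* a complete result is buffered */

        /*
         * PQisBusy said "wait", but the buffer may simply have been too
         * small for the row: verify the socket really is drained with a
         * zero-timeout poll() before trusting that advice.
         */
        struct pollfd pfd = {.fd = PQsocket(conn), .events = POLLIN};

        if (poll(&pfd, 1, 0) <= 0)
            return 1;           /* no bytes pending (or poll error) */
        /* more bytes are available right now: loop and consume again */
    }
}
```

With this shape, a caller waits on the socket only after the helper returns 1, which avoids the hang described above when a row outgrows PGconn's buffer.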
[
{
"msg_contents": "Peter,\n\nWhile working on the pg_amcheck code for [1], I discovered an unexpected deficiency in the way btree indexes are checked. So far as I can tell from the docs [2], the deficiency does not violate any promises that amcheck is making, but I found it rather surprising all the same. To reproduce:\n\n1) Create a (possibly empty) table and btree index over the table.\n2) Flush buffers and backup a copy of the heap relation file.\n3) Load (more) data into the table.\n4) Flushing buffers as needed, revert the heap relation file to the backup previously taken.\n5) Run bt_index_check and bt_index_parent_check with and without heapallindexed and/or rootdescend. Note that the index passes all checks.\n6) Run a SQL query that uses a sequential scan on the table and observe no errors.\n7) Run a SQL query that uses an index scan on the table and see that it errors with something like:\n\n ERROR: could not read block 0 in file \"base/13097/16391\": read only 0 of 8192 bytes\n\nI found it surprising that even when precisely zero of the tids in the index exist in the table the index checks all come back clean. The heapallindexed check is technically running as advertised, checking that all of the zero tuples in the heap are present in the index. That is a pretty useless check under this condition, though. 
Is an \"indexallheaped\" option (by some less crazy name) needed?\n\nUsers might also run into this problem when a heap relation file gets erroneously shortened by some number of blocks but not fully truncated, or perhaps with torn page writes.\n\nHave you already considered and rejected an \"indexallheaped\" type check?\n\n\n\nBackground\n-------\n\nI have up until recently been focused on corruption caused by twiddling the bits within heap and index relation pages, but real-world user error, file system error, and perhaps race conditions in the core postgresql code seem at least as likely to result in missing or incorrect versions of blocks of relation files rather than individual bytes within those blocks being wrong. Per our discussions in [3], not all corruptions that can be created under laboratory conditions are equally likely to occur in the wild, and it may be reasonable to only harden the amcheck code against corruptions that are more likely to happen in actual practice.\n\nTo make it easier for tap tests to cover common corruption type scenarios, I have been extending PostgresNode.pm with functions to perform these kinds of file system corruptions. I expect to post that work in another thread soon. I am not embedding any knowledge of the internal structure of heap, index, or toast relations in PostgresNode, only creating functions to archive versions of files and perform full or partial reversions of them later.\n\nThe ultimate goal of this work is to have sufficient regression tests to demonstrate that pg_amcheck can be run with default options against a system corrupted in these common ways without crashing, and with reasonable likelihood of detecting these common corruptions. Users might understand that hard-to-detect corruption will go unnoticed, but it would be harder to explain to them why, immediately after getting a clean bill of health on their system, a query bombed out with the sort of error shown above.\n\n\n[1] https://commitfest.postgresql.org/31/2670/\n[2] https://www.postgresql.org/docs/13/amcheck.html\n[3] https://www.postgresql.org/message-id/flat/CAH2-WznaU6HcahLV4Hg-DnhEmW8DuSdYfn3vfWXoj3Me9jq%3DsQ%40mail.gmail.com#0691475da5e9163d21b13fc415095801\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 1 Dec 2020 12:39:02 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "room for improvement in amcheck btree checking?"
},
{
"msg_contents": "On 2020-Dec-01, Mark Dilger wrote:\n\n> 7) Run a SQL query that uses an index scan on the table and see that it errors with something like:\n> \n> ERROR: could not read block 0 in file \"base/13097/16391\": read only 0 of 8192 bytes\n> \n> I found it surprising that even when precisely zero of the tids in the\n> index exist in the table the index checks all come back clean.\n\nYeah, I've seen this kind of symptom in production databases (indexes\npointing to non-existent heap pages).\n\nI think one useful cross-check that amcheck could do is verify that if\na heap page is referenced from the index, then the heap page must exist.\nOtherwise, it's a future index corruption of sorts: the old index\nentries will point to the wrong new heap tuples as soon as the table\ngrows again.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 20:26:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: room for improvement in amcheck btree checking?"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 12:39 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I found it surprising that even when precisely zero of the tids in the index exist in the table the index checks all come back clean. The heapallindexed check is technically running as advertised, checking that all of the zero tuples in the heap are present in the index. That is a pretty useless check under this condition, though. Is a \"indexallheaped\" option (by some less crazy name) needed?\n>\n> Users might also run into this problem when a heap relation file gets erroneously shortened by some number of blocks but not fully truncated, or perhaps with torn page writes.\n\nIt would probably be possible to detect this exact condition (though\nnot other variants of the more general problem) relatively easily.\nPerhaps by doing something with RelationOpenSmgr() with the heap\nrelation, while applying a little knowledge of what must be true about\nthe heap relation given what is known about the index relation. (I'm\nnot 100% sure that that's possible, but offhand it seems like it\nprobably is.)\n\nSee also: commit 6754fe65a4c, which tightened things up in this area\nfor the index relation itself.\n\n> Have you already considered and rejected a \"indexallheaped\" type check?\n\nActually, I pointed out that we should do something along these lines\nvery recently:\n\nhttps://postgr.es/m/CAH2-Wz=dy--FG5iJ0kPcQumS0W5g+xQED3t-7HE+UqAK_hmLTw@mail.gmail.com\n\nI'd be happy to see you pursue it.\n\n> Background\n> -------\n>\n> I have up until recently been focused on corruption caused by twiddling the bits within heap and index relation pages, but real-world user error, file system error, and perhaps race conditions in the core postgresql code seem at least as likely to result in missing or incorrect versions of blocks of relation files rather than individual bytes within those blocks being wrong. 
Per our discussions in [3], not all corruptions that can be created under laboratory conditions are equally likely to occur in the wild, and it may be reasonable to only harden the amcheck code against corruptions that are more likely to happen in actual practice.\n\nRight. I also like to harden amcheck in ways that happen to be easy,\nespecially when it seems like it might kind of work as a broad\ndefense that doesn't consider any particular scenario. For example,\nhardening to make sure an index tuple's lp_len matches\nIndexTupleSize() for the tuple proper (this also kind of works as\nstorage format documentation). I am concerned about both costs and\nbenefits.\n\nFWIW, this is a perfect example of the kind of hardening that makes\nsense to me. Clearly this kind of failure is a reasonably plausible\nscenario (probably even a known real world scenario with catalog\ncorruption), and clearly we could do something about it that's pretty\nsimple and obviously low risk. It easily meets the standard that I\nmight apply here.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 1 Dec 2020 16:38:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: room for improvement in amcheck btree checking?"
},
{
"msg_contents": "\n\n> On Dec 1, 2020, at 4:38 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Tue, Dec 1, 2020 at 12:39 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I found it surprising that even when precisely zero of the tids in the index exist in the table the index checks all come back clean. The heapallindexed check is technically running as advertised, checking that all of the zero tuples in the heap are present in the index. That is a pretty useless check under this condition, though. Is a \"indexallheaped\" option (by some less crazy name) needed?\n>> \n>> Users might also run into this problem when a heap relation file gets erroneously shortened by some number of blocks but not fully truncated, or perhaps with torn page writes.\n> \n> It would probably be possible to detect this exact condition (though\n> not other variants of the more general problem) relatively easily.\n> Perhaps by doing something with RelationOpenSmgr() with the heap\n> relation, while applying a little knowledge of what must be true about\n> the heap relation given what is known about the index relation. (I'm\n> not 100% sure that that's possible, but offhand it seems like it\n> probably is.)\n> \n> See also: commit 6754fe65a4c, which tightened things up in this area\n> for the index relation itself.\n\nYes, some of the test code I've been playing with already hit the error messages introduced in that commit.\n\n>> Have you already considered and rejected a \"indexallheaped\" type check?\n> \n> Actually, I pointed out that we should do something along these lines\n> very recently:\n> \n> https://postgr.es/m/CAH2-Wz=dy--FG5iJ0kPcQumS0W5g+xQED3t-7HE+UqAK_hmLTw@mail.gmail.com\n\nRight.\n\n> I'd be happy to see you pursue it.\n\nI'm not sure how much longer I'll be pursuing corruption checkers before switching to another task. 
Certainly, I'm interested in pursuing this if time permits.\n\n>> Background\n>> -------\n>> \n>> I have up until recently been focused on corruption caused by twiddling the bits within heap and index relation pages, but real-world user error, file system error, and perhaps race conditions in the core postgresql code seem at least as likely to result in missing or incorrect versions of blocks of relation files rather than individual bytes within those blocks being wrong. Per our discussions in [3], not all corruptions that can be created under laboratory conditions are equally likely to occur in the wild, and it may be reasonable to only harden the amcheck code against corruptions that are more likely to happen in actual practice.\n> \n> Right. I also like to harden amcheck in ways that happen to be easy,\n> especially when it seems like it might kind of work as a broad\n> defense that doesn't consider any particular scenario. For example,\n> hardening to make sure the an index tuple's lp_len matches\n> IndexTupleSize() for the tuple proper (this also kind of works as\n> storage format documentation). I am concerned about both costs and\n> benefits.\n> \n> FWIW, this is a perfect example of the kind of hardening that makes\n> sense to me. Clearly this kind of failure is a reasonably plausible\n> scenario (probably even a known real world scenario with catalog\n> corruption), and clearly we could do something about it that's pretty\n> simple and obviously low risk. It easily meets the standard that I\n> might apply here.\n\nOk, it seems we're in general agreement about that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 1 Dec 2020 18:03:48 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: room for improvement in amcheck btree checking?"
}
] |
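The cross-check discussed in the thread above — that every heap block referenced from the index must exist — could look roughly like the following. This is an illustrative sketch only, not actual amcheck code: the function name and reporting details are invented, and it assumes PostgreSQL backend headers (postgres.h, access/itup.h, utils/rel.h, storage/bufmgr.h) and an already-open heap relation.

```c
/*
 * Illustrative sketch (hypothetical, not amcheck source): while walking
 * the index, report any index tuple whose heap TID points past the
 * current end of the heap relation.
 */
static void
check_tid_within_heap(Relation heaprel, IndexTuple itup)
{
    BlockNumber heap_nblocks = RelationGetNumberOfBlocks(heaprel);
    BlockNumber tidblock = ItemPointerGetBlockNumber(&itup->t_tid);

    if (tidblock >= heap_nblocks)
        ereport(ERROR,
                (errcode(ERRCODE_INDEX_CORRUPTED),
                 errmsg("index tuple points to block %u, but heap \"%s\" has only %u blocks",
                        tidblock,
                        RelationGetRelationName(heaprel),
                        heap_nblocks)));
}
```

This detects the "heap reverted to a shorter file" scenario from the first message, though not blocks that exist but hold the wrong contents.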
[
{
"msg_contents": "Hey, all,\n\nIt is currently slightly difficult to determine how many background worker\nprocesses are currently running, which is useful when trying to manage\nthe max_worker_processes parameter. It seems the best bet is to use the\nbackend_type column and filter out the many types that are defined in\nmiscinit.c:\n\nhttps://github.com/postgres/postgres/blob/REL_13_1/src/backend/utils/init/miscinit.c#L201-L253\n\nI would like to propose adding a simple column isbgworker, that simply\nstores the value of the expression `beentry->st_backendType == B_BG_WORKER`,\nwhich is used in pg_stat_get_activity.\n\nhttps://github.com/postgres/postgres/blob/REL_13_1/src/backend/utils/adt/pgstatfuncs.c#L854-L867\n\nSuch a column would make it easier to determine a suitable value for the\nmax_worker_processes parameter. Similar internal resource parameters all\nseem to have more straightforward ways to gauge their current usage:\n\nmax_wal_senders:\n -> COUNT(*) FROM pg_stat_replication\n\nmax_parallel_workers:\n -> COUNT(*) FROM pg_stat_activity WHERE backend_type = 'parallel worker'\n\nmax_replication_slots:\n -> COUNT(*) FROM pg_replication_slots\n\nmax_connections:\n -> COUNT(*) FROM pg_stat_activity WHERE backend_type = 'client backend'\n\n\nThoughts? I think it should be pretty easy to implement, and it would\nalso be beneficial to update the documentation for all of the above\nparameters with notes about how to determine their current usage.\n\n\nThanks,\nPaul\n\n\n(Note: I asked a question related to this on pgsql-general:\nhttps://www.postgresql.org/message-id/CACqFVBaH7OPT-smiE0xG6b_KVGkWNNhZ2-EoLNrbzLFSUgN2eQ%40mail.gmail.com\n)\n",
"msg_date": "Tue, 1 Dec 2020 13:39:19 -0800",
"msg_from": "Paul Martinez <paulmtz@google.com>",
"msg_from_op": true,
"msg_subject": "Proposal: Adding isbgworker column to pg_stat_activity"
},
{
"msg_contents": "Hello,\n\nI just noticed that this proposal from 2020 didn't get any backers:\n\nOn 2020-Dec-01, Paul Martinez wrote:\n\n> It is currently slightly difficult to determine how many background worker\n> processes are currently running, which is useful when trying to manage\n> the max_worker_processes parameter. It seems the best bet is to use the\n> backend_type column and filter out the many types that are defined in\n> miscinit.c:\n> \n> https://github.com/postgres/postgres/blob/REL_13_1/src/backend/utils/init/miscinit.c#L201-L253\n> \n> I would like to propose adding a simple column isbgworker, that simply\n> stores the value of the expression `beentry->st_backendType == B_BG_WORKER`,\n> which is used in pg_stat_get_activity.\n\nHaving recently had the chance to interact with this --and not at all\nhappy about how it went-- I think we should do this, or something like\nit. It should be easy to filter out or in processes which correspond to\nbgworkers in pg_stat_activity. One simple idea would be to prefix\nsomething to backend_type (IIRC we used to do that), but a separate flag\nis probably more appropriate.\n\nAre you up for writing a patch? I'm guessing it should be easy.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n",
"msg_date": "Tue, 25 Oct 2022 10:33:31 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Adding isbgworker column to pg_stat_activity"
}
] |
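Until a column like the proposed isbgworker exists, the filtering approach from the first message can be written out explicitly. This is a hypothetical query, not from the original mail: the exclusion list below follows the fixed backend_type labels as of PostgreSQL 13 and may differ in other versions, and it assumes every session whose backend_type is not a fixed label is a background worker reporting its bgw_type (which includes parallel workers and logical replication processes).

```sql
-- Approximate count of sessions occupying max_worker_processes slots
-- (hypothetical; label list is version-dependent).
SELECT count(*) AS background_workers
FROM pg_stat_activity
WHERE backend_type NOT IN ('client backend', 'autovacuum launcher',
                           'autovacuum worker', 'background writer',
                           'checkpointer', 'startup', 'walreceiver',
                           'walsender', 'walwriter', 'archiver');
```

The brittleness of maintaining this list by hand is exactly the motivation for a dedicated flag.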
[
{
"msg_contents": "For about a week, eelpout has been failing the pg_basebackup test\nmore often than not, but only in the 9.5 and 9.6 branches:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=eelpout&br=REL9_6_STABLE\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=eelpout&br=REL9_5_STABLE\n\nThe failures all look pretty alike:\n\n# Running: pg_basebackup -D /home/tmunro/build-farm/buildroot/REL9_6_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/tmp_test_jJOm/backupxs -X stream\npg_basebackup: could not send copy-end packet: server closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\npg_basebackup: child process exited with exit code 1\nnot ok 44 - pg_basebackup -X stream runs\n\nWhat shows up in the postmaster log is\n\n2020-12-02 09:04:53.064 NZDT [29536:1] [unknown] LOG: connection received: host=[local]\n2020-12-02 09:04:53.065 NZDT [29536:2] [unknown] LOG: replication connection authorized: user=tmunro\n2020-12-02 09:04:53.175 NZDT [29537:1] [unknown] LOG: connection received: host=[local]\n2020-12-02 09:04:53.178 NZDT [29537:2] [unknown] LOG: replication connection authorized: user=tmunro\n2020-12-02 09:05:42.860 NZDT [29502:2] LOG: using stale statistics instead of current ones because stats collector is not responding\n2020-12-02 09:05:53.074 NZDT [29542:1] LOG: using stale statistics instead of current ones because stats collector is not responding\n2020-12-02 09:05:53.183 NZDT [29537:3] pg_basebackup LOG: terminating walsender process due to replication timeout\n2020-12-02 09:05:53.183 NZDT [29537:4] pg_basebackup LOG: disconnection: session time: 0:01:00.008 user=tmunro database= host=[local]\n2020-12-02 09:06:33.996 NZDT [29536:3] pg_basebackup LOG: disconnection: session time: 0:01:40.933 user=tmunro database= host=[local]\n\nThe \"using stale statistics\" gripes seem to be from autovacuum, so they\nmay be unrelated to the problem; but they 
suggest that the system\nis under very heavy load, or else that there's some kernel-level issue.\nNote however that some of the failures don't have those messages, and\nI also see those messages in some runs that didn't fail.\n\nPerhaps this is just a question of the machine being too slow to complete\nthe test, in which case we ought to raise wal_sender_timeout. But it's\nweird that it would've started to fail just now, because I don't really\nsee any changes in those branches that would explain a week-old change\nin the test runtime.\n\nAny thoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Dec 2020 17:36:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Recent eelpout failures on 9.x branches"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 11:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps this is just a question of the machine being too slow to complete\n> the test, in which case we ought to raise wal_sender_timeout. But it's\n> weird that it would've started to fail just now, because I don't really\n> see any changes in those branches that would explain a week-old change\n> in the test runtime.\n\nUnfortunately, eelpout got kicked off the nice shiny ARM server it was\nrunning on so last week I moved it to a rack mounted Raspberry Pi. It\nseems to be totally I/O starved causing some timeouts to be reached,\nand I'm looking into fixing that by adding fast storage. This may\ntake a few days. Sorry for the noise.\n\n\n",
"msg_date": "Wed, 2 Dec 2020 11:50:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recent eelpout failures on 9.x branches"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Unfortunately, eelpout got kicked off the nice shiny ARM server it was\n> running on so last week I moved it to a rack mounted Raspberry Pi. It\n> seems to be totally I/O starved causing some timeouts to be reached,\n> and I'm looking into fixing that by adding fast storage. This may\n> take a few days. Sorry for the noise.\n\nAh-hah. Now that I look, eelpout is very clearly slower overall\nthan it was a couple weeks ago, so all is explained.\n\nIt might still be reasonable to raise wal_sender_timeout in the\nbuildfarm environment, though. We usually try to make sure that\nbuildfarm timeouts border on ridiculous, not just because of\nunderpowered critters but also for cases like CLOBBER_CACHE_ALWAYS\nanimals.\n\nI'm also wondering a bit why the issue isn't affecting the newer\nbranches. It's certainly not because we made the test shorter ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Dec 2020 18:07:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Recent eelpout failures on 9.x branches"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 06:07:17PM -0500, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Unfortunately, eelpout got kicked off the nice shiny ARM server it was\n> > running on so last week I moved it to a rack mounted Raspberry Pi. It\n> > seems to be totally I/O starved causing some timeouts to be reached,\n> > and I'm looking into fixing that by adding fast storage. This may\n> > take a few days. Sorry for the noise.\n> \n> Ah-hah. Now that I look, eelpout is very clearly slower overall\n> than it was a couple weeks ago, so all is explained.\n> \n> It might still be reasonable to raise wal_sender_timeout in the\n> buildfarm environment, though. We usually try to make sure that\n> buildfarm timeouts border on ridiculous, not just because of\n> underpowered critters but also for cases like CLOBBER_CACHE_ALWAYS\n> animals.\n\nMy buildfarm animals override these:\n\n extra_config =>{\n DEFAULT => [\n \"authentication_timeout = '600s'\",\n \"wal_receiver_timeout = '18000s'\",\n \"wal_sender_timeout = '18000s'\",\n ],\n },\n build_env =>{\n PGCTLTIMEOUT => 18000,\n },\n\nEach of those timeouts caused failures before I changed it. For animals fast\nenough to make them irrelevant, I've not yet encountered a disadvantage.\n\n> I'm also wondering a bit why the issue isn't affecting the newer\n> branches. It's certainly not because we made the test shorter ...\n\nThat is peculiar.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 19:51:31 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Recent eelpout failures on 9.x branches"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm also wondering a bit why the issue isn't affecting the newer\n> branches. It's certainly not because we made the test shorter ...\n\nI looked at htop while it was building the 9.x branches and saw\npg_basebackup sitting in D state waiting for glacial I/O. Perhaps\nthis was improved by the --no-sync option that got sprinkled around\nthe place in later releases?\n\n\n",
"msg_date": "Wed, 2 Dec 2020 19:56:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recent eelpout failures on 9.x branches"
}
] |
[
{
"msg_contents": "Hi,\n\nSomeone raised an interesting point recently on pg_stat_kcache extension for\nhandling nested statements, which also applies to pg_stat_statements.\n\nThe root issue is that when pg_stat_statements tracks nested statements,\nthere's no way to really make sense of the counters, as top level statements\nwill also account for underlying statements. Using a naive example:\n\n=# CREATE FUNCTION f1() RETURNS VOID AS $$BEGIN PERFORM pg_sleep(5); END; $$ LANGUAGE plpgsql;\nCREATE FUNCTION\n\n=# SELECT pg_stat_statements_reset();\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\n=# SELECT f1();\n f1\n----\n\n(1 row)\n\n=# select sum(total_exec_time) from pg_stat_statements;\n sum\n--------------\n 10004.403601\n(1 row)\n\nIt looks like there was 10s of total execution time over all statements, which\ndoesn't really make sense.\n\nIt's of course possible to avoid that using track = top, but tracking all\nnested statements is usually quite useful, so it would be better to find a way\nto address that problem.\n\nThe only idea I have for that is to add a new field to entry key, for instance\nis_toplevel. The immediate cons is obviously that it could amplify quite a lot\nthe number of entries tracked, so people may need to increase\npg_stat_statements.max to avoid slowdown if that makes them reach frequent\nentry eviction.\n\nShould we address the problem, and in that case do you see a better way for\nthat, or should we just document this behavior?\n\n\n",
"msg_date": "Wed, 2 Dec 2020 12:05:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 8:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> Someone raised an interested point recently on pg_stat_kcache extension for\n> handling nested statements, which also applies to pg_stat_statements.\n...\n\n> The only idea I have for that is to add a new field to entry key, for\n> instance\n> is_toplevel.\n\n\nThis particular problem often bothered me when dealing with\npg_stat_statements contents operating under \"track = all\" (especially when\nperforming the aggregated analysis, like you showed).\n\nI think the idea of having a flag to distinguish the top-level entries is\ngreat.\n\n\n> The immediate cons is obviously that it could amplify quite a lot\n> the number of entries tracked, so people may need to increase\n> pg_stat_statements.max to avoid slowdown if that makes them reach frequent\n> entry eviction.\n>\n\nIf all top-level records in pg_stat_statements have \"true\" in the new\ncolumn (is_toplevel), how would this lead to the need to increase\npg_stat_statements.max? The number of records would remain the same, as\nbefore extending pg_stat_statements.",
"msg_date": "Tue, 1 Dec 2020 22:08:06 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 10:08:06PM -0800, Nikolay Samokhvalov wrote:\n> On Tue, Dec 1, 2020 at 8:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > Someone raised an interested point recently on pg_stat_kcache extension for\n> > handling nested statements, which also applies to pg_stat_statements.\n> >\n> ...\n> \n> > The only idea I have for that is to add a new field to entry key, for\n> > instance\n> > is_toplevel.\n> \n> \n> This particular problem often bothered me when dealing with\n> pg_stat_statements contents operating under \"track = all\" (especially when\n> performing the aggregated analysis, like you showed).\n> \n> I think the idea of having a flag to distinguish the top-level entries is\n> great.\n> \n\nOk!\n\n> > The immediate cons is obviously that it could amplify quite a lot\n> > the number of entries tracked, so people may need to increase\n> > pg_stat_statements.max to avoid slowdown if that makes them reach frequent\n> > entry eviction.\n> >\n> \n> If all top-level records in pg_stat_statements have \"true\" in the new\n> column (is_toplevel), how would this lead to the need to increase\n> pg_stat_statements.max? The number of records would remain the same, as\n> before extending pg_stat_statements.\n\nIf the same query is getting executed both at top level and as a nested\nstatement, two entries will then be created. That's probably unlikely for\nthings like RI trigger queries, but I don't know what to expect for client\napplication queries.\n\n\n",
"msg_date": "Wed, 2 Dec 2020 14:32:41 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "\n\nOn 2020/12/02 15:32, Julien Rouhaud wrote:\n> On Tue, Dec 01, 2020 at 10:08:06PM -0800, Nikolay Samokhvalov wrote:\n>> On Tue, Dec 1, 2020 at 8:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>>> Someone raised an interested point recently on pg_stat_kcache extension for\n>>> handling nested statements, which also applies to pg_stat_statements.\n>>>\n>> ...\n>>\n>>> The only idea I have for that is to add a new field to entry key, for\n>>> instance\n>>> is_toplevel.\n>>\n>>\n>> This particular problem often bothered me when dealing with\n>> pg_stat_statements contents operating under \"track = all\" (especially when\n>> performing the aggregated analysis, like you showed).\n>>\n>> I think the idea of having a flag to distinguish the top-level entries is\n>> great.\n>>\n> \n> Ok!\n> \n>>> The immediate cons is obviously that it could amplify quite a lot\n>>> the number of entries tracked, so people may need to increase\n>>> pg_stat_statements.max to avoid slowdown if that makes them reach frequent\n>>> entry eviction.\n>>>\n>>\n>> If all top-level records in pg_stat_statements have \"true\" in the new\n>> column (is_toplevel), how would this lead to the need to increase\n>> pg_stat_statements.max? The number of records would remain the same, as\n>> before extending pg_stat_statements.\n> \n> If the same query is getting executed both at top level and as a nested\n> statement, two entries will then be created. That's probably unlikely for\n> things like RI trigger queries, but I don't know what to expect for client\n> application queries.\n\nJust an idea: instead of a boolean is_toplevel flag, what about\ncounting the number of times the statement is executed\nat top level, and also at nested level?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 2 Dec 2020 15:52:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 03:52:37PM +0900, Fujii Masao wrote:\n> \n> On 2020/12/02 15:32, Julien Rouhaud wrote:\n> > On Tue, Dec 01, 2020 at 10:08:06PM -0800, Nikolay Samokhvalov wrote:\n> > > On Tue, Dec 1, 2020 at 8:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > \n> > > > Someone raised an interested point recently on pg_stat_kcache extension for\n> > > > handling nested statements, which also applies to pg_stat_statements.\n> > > > \n> > > ...\n> > > \n> > > > The only idea I have for that is to add a new field to entry key, for\n> > > > instance\n> > > > is_toplevel.\n> \n> [...]\n> \n> Just idea; instead of boolean is_toplevel flag, what about\n> counting the number of times when the statement is executed\n> in toplevel, and also in nested level?\n\nAh, indeed that would avoid extraneous entries. FTR we would also need that\nfor the planning part.\n\nThe cons I can see is that it'll make the counters harder to process (unless we\nprovide a specific view for the top-level statements only for instance), and\nthat it assumes that doing a simple division is representative enough for the\ntop level/nested repartition. This might be quite off for in some cases, e.g.\nbig stored procedures due to lack of autovacuum, but that can't be worse than\nwhat we currently have.\n\n\n",
"msg_date": "Wed, 2 Dec 2020 16:24:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "Hi,\n\na crazy idea:\n- add a parent_statement_id column that would be NULL for top level queries\n- build statement_id for nested queries based on the merge of:\n a/ current_statement_id and parent one\nor\n b/ current_statement_id and nested level.\n\nthis would offer the ability to track counters at any depth level ;o)\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 2 Dec 2020 06:41:36 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "Hello\n\n> - add a parent_statement_id column that would be NULL for top level queries\n\nWill generate too much entries... Every FK for each different delete/insert, for example.\nBut very useful for databases with a lot of stored procedures to find where this query is called. May be new mode track = tree? Use NULL to indicate a top-level query (same as with track=tree) and some constant for any nested queries when track = all.\n\nAlso, currently a top statement will account buffers usage for underlying statements?\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 02 Dec 2020 17:13:56 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 10:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Dec 01, 2020 at 10:08:06PM -0800, Nikolay Samokhvalov wrote:\n> > If all top-level records in pg_stat_statements have \"true\" in the new\n> > column (is_toplevel), how would this lead to the need to increase\n> > pg_stat_statements.max? The number of records would remain the same, as\n> > before extending pg_stat_statements.\n>\n> If the same query is getting executed both at top level and as a nested\n> statement, two entries will then be created. That's probably unlikely for\n> things like RI trigger queries, but I don't know what to expect for client\n> application queries.\n>\n\nRight, but this is how things already work. The extra field you've proposed\nwon't increase the number of records so it shouldn't affect how users\nchoose pg_stat_statements.max.\n\nOn Tue, Dec 1, 2020 at 10:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Tue, Dec 01, 2020 at 10:08:06PM -0800, Nikolay Samokhvalov wrote:> If all top-level records in pg_stat_statements have \"true\" in the new\n> column (is_toplevel), how would this lead to the need to increase\n> pg_stat_statements.max? The number of records would remain the same, as\n> before extending pg_stat_statements.\n\nIf the same query is getting executed both at top level and as a nested\nstatement, two entries will then be created. That's probably unlikely for\nthings like RI trigger queries, but I don't know what to expect for client\napplication queries.Right, but this is how things already work. The extra field you've proposed won't increase the number of records so it shouldn't affect how users choose pg_stat_statements.max.",
"msg_date": "Wed, 2 Dec 2020 06:23:54 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 06:23:54AM -0800, Nikolay Samokhvalov wrote:\n> On Tue, Dec 1, 2020 at 10:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > On Tue, Dec 01, 2020 at 10:08:06PM -0800, Nikolay Samokhvalov wrote:\n> > > If all top-level records in pg_stat_statements have \"true\" in the new\n> > > column (is_toplevel), how would this lead to the need to increase\n> > > pg_stat_statements.max? The number of records would remain the same, as\n> > > before extending pg_stat_statements.\n> >\n> > If the same query is getting executed both at top level and as a nested\n> > statement, two entries will then be created. That's probably unlikely for\n> > things like RI trigger queries, but I don't know what to expect for client\n> > application queries.\n> >\n> \n> Right, but this is how things already work. The extra field you've proposed\n> won't increase the number of records so it shouldn't affect how users\n> choose pg_stat_statements.max.\n\nThe extra field I've proposed would increase the number of records, as it needs\nto be a part of the key. The only alternative would be what Fujii-san\nmentioned, i.e. to split plans and calls (for instance plans_toplevel,\nplans_nested, calls_toplevel, calls_nested) and let users compute an\napproximate value for toplevel counters.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 09:00:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "Hi Julien,\n\n> The extra field I've proposed would increase the number of records, as it\n> needs\nto be a part of the key. \n\nTo get an increase in the number of records that means that the same\nstatement \nwould appear at top level AND nested level. This seems a corner case with\nvery low \n(neglectible) occurence rate. Did I miss something ?\n\nRegards\nPAscal \n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Thu, 3 Dec 2020 01:16:12 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "Hello\n\n> To get an increase in the number of records that means that the same\n> statement\n> would appear at top level AND nested level. This seems a corner case with\n> very low\n> (neglectible) occurence rate.\n\n+1\nI think splitting fields into plans_toplevel / plans_nested will be less convenient. And more code with higher chance of copypaste errors\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 03 Dec 2020 11:40:22 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 05:13:56PM +0300, Sergei Kornilov wrote:\n> Hello\n> \n> > - add a parent_statement_id column that would be NULL for top level queries\n> \n> Will generate too much entries... Every FK for each different delete/insert, for example.\n> But very useful for databases with a lot of stored procedures to find where this query is called. May be new mode track = tree? Use NULL to indicate a top-level query (same as with track=tree) and some constant for any nested queries when track = all.\n\nMaybe pg_stat_statements isn't the best tool for that use case. For the record\nthe profiler in plpgsql_check can now track queryid for each statement inside\na function, so you can match pg_stat_statements entries. That's clearly not\nperfect as dynamic queries could generate different queryid, but that's a\nstart.\n\n> Also, currently a top statement will account buffers usage for underlying statements?\n\nI think so.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 16:52:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 11:40:22AM +0300, Sergei Kornilov wrote:\n> Hello\n> \n> > To get an increase in the number of records that means that the same\n> > statement\n> > would appear at top level AND nested level. This seems a corner case with\n> > very low\n> > (neglectible) occurence rate.\n> \n> +1\n> I think splitting fields into plans_toplevel / plans_nested will be less convenient. And more code with higher chance of copypaste errors\n\nAs I mentioned in a previous message, I really have no idea if that would be a\ncorner case or not. For instance with native partitioning, the odds to have\nmany different query executed both at top level and as a nested statement may\nbe quite higher.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 16:53:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 04:53:59PM +0800, Julien Rouhaud wrote:\n> On Thu, Dec 03, 2020 at 11:40:22AM +0300, Sergei Kornilov wrote:\n> > Hello\n> > \n> > > To get an increase in the number of records that means that the same\n> > > statement\n> > > would appear at top level AND nested level. This seems a corner case with\n> > > very low\n> > > (neglectible) occurence rate.\n> > \n> > +1\n> > I think splitting fields into plans_toplevel / plans_nested will be less convenient. And more code with higher chance of copypaste errors\n> \n> As I mentioned in a previous message, I really have no idea if that would be a\n> corner case or not. For instance with native partitioning, the odds to have\n> many different query executed both at top level and as a nested statement may\n> be quite higher.\n\nThe consensus seems to be adding a new boolean toplevel flag in the entry key,\nso PFA a patch implementing that. Note that the key now has padding, so\nmemset() calls are required.",
"msg_date": "Fri, 4 Dec 2020 16:15:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "Hello\n\nSeems we need also change PGSS_FILE_HEADER.\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 04 Dec 2020 12:06:10 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 12:06:10PM +0300, Sergei Kornilov wrote:\n> Hello\n> \n> Seems we need also change PGSS_FILE_HEADER.\n\nIndeed, thanks! v2 attached.",
"msg_date": "Fri, 4 Dec 2020 18:09:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 06:09:13PM +0800, Julien Rouhaud wrote:\n> On Fri, Dec 04, 2020 at 12:06:10PM +0300, Sergei Kornilov wrote:\n> > Hello\n> > \n> > Seems we need also change PGSS_FILE_HEADER.\n> \n> Indeed, thanks! v2 attached.\n\nThere was a conflict on PGSS_FILE_HEADER since some recent commit, v3 attached.",
"msg_date": "Sun, 27 Dec 2020 16:38:57 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "Hi,\n\nThanks for making the patch to add \"toplevel\" column in \npg_stat_statements.\nThis is a review comment.\n\nThere hasn't been any discussion, at least that I've been able to find.\nSo, +1 to change the status to \"Ready for Committer\".\n\n\n1. submission/feature review\n=============================\n\nThis patch can be applied cleanly to the current master branch(ed4367).\nI tested with `make check-world` and I checked there is no fail.\n\nThis patch has reasonable documents and tests.\nA \"toplevel\" column of pg_stat_statements view is documented and\nfollowing tests are added.\n- pg_stat_statements.track option : 'top' and 'all'\n- query type: normal query and nested query(pl/pgsql)\n\nI tested the \"update\" command can work.\npostgres=# ALTER EXTENSION pg_stat_statements UPDATE TO '1.10';\n\nAlthough the \"toplevel\" column of all queries which already stored is \n'false',\nwe have to decide the default. I think 'false' is ok.\n\n\n2. usability review\n====================\n\nThis patch solves the problem we can't get to know\nwhich query is top-level if pg_stat_statements.track = 'all'.\n\nThis leads that we can analyze with aggregated queries.\n\nThere is some use-case.\nFor example, we can know the sum of total_exec_time of queries\neven if nested queries are executed.\n\nWe can know how efficiently a database can use CPU resource for queries\nusing the sum of the total_exec_time, and the exec_user_time and\nexec_system_time in pg_stat_kcache which is the extension gathering\nos resources.\n\nAlthough one concern is whether only top-level is enough or not,\nI couldn't come up with any use-case to use nested level, so I think \nit's ok.\n\n\n3. coding review\n=================\n\nAlthough I had two concerns, I think they are no problem.\nSo, this patch looks good to me.\n\nFollowing were my concerns.\n\nA. 
the risk of too many same queries is duplicate.\n\nSince this patch adds a \"top\" member in the hash key,\nit leads to store duplicated same query which \"top\" is false and true.\n\nThis concern is already discussed and I agreed to the consensus\nit seems unlikely to have the same queries being executed both\nat the top level and as nested statements.\n\nB. add a argument of the pg_stat_statements_reset().\n\nNow, pg_stat_statements supports a flexible reset feature.\n> pg_stat_statements_reset(userid Oid, dbid Oid, queryid bigint)\n\nAlthough I wondered whether we need to add a top-level flag to the \narguments,\nI couldn't come up with any use-case to reset only top-level queries or \nnot top-level queries.\n\n\n4. others\n==========\n\nThese comments are not related to this patch.\n\nA. about the topic of commitfests\n\nSince I think this feature is for monitoring,\nit's better to change the topic from \"System Administration\"\nto \"Monitoring & Control\".\n\n\nB. check logic whether a query is top-level or not in pg_stat_kcache\n\nThis patch uses the only exec_nested_level to check whether a query is \ntop-level or not.\nThe reason is nested_level is less useful and I agree.\n\nBut, pg_stat_kcache uses plan_nested_level too.\nAlthough the check result is the same, it's better to change it\ncorresponding to this patch after it's committed.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 19 Jan 2021 17:55:18 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 19, 2021 at 4:55 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> Thanks for making the patch to add \"toplevel\" column in\n> pg_stat_statements.\n> This is a review comment.\n\nThanks a lot for the thorough review!\n\n> I tested the \"update\" command can work.\n> postgres=# ALTER EXTENSION pg_stat_statements UPDATE TO '1.10';\n>\n> Although the \"toplevel\" column of all queries which already stored is\n> 'false',\n> we have to decide the default. I think 'false' is ok.\n\nMmm, I'm not sure that I understand this result. The \"toplevel\" value\nis tracked by the C code loaded at startup, so it should have a\ncorrect value even if you used to have the extension in a previous\nversion, it's just that you can't access the toplevel field before\ndoing the ALTER EXTENSION pg_stat_statements UPDATE. There's also a\nchange in the magic number, so you can't use the stored stat file from\na version before this patch.\n\nI also fail to reproduce this behavior. I did the following:\n\n- start postgres with pg_stat_statements v1.10 (so with this patch) in\nshared_preload_libraries\n- CREATE EXTENSION pg_stat_statements VERSION '1.9';\n- execute a few queries\n- ALTER EXTENSION pg_stat_statements UPDATE;\n- SELECT * FROM pg_stat_statements reports the query with toplvel to TRUE\n\nCan you share a way to reproduce your findings?\n\n> 2. usability review\n> ====================\n> [...]\n> Although one concern is whether only top-level is enough or not,\n> I couldn't come up with any use-case to use nested level, so I think\n> it's ok.\n\nI agree, I don't see how tracking statistics per nesting level would\nreally help. The only additional use case I see would tracking\ntriggers/FK query execution, but the nesting level won't help with\nthat.\n\n> 3. coding review\n> =================\n> [...]\n> B. 
add a argument of the pg_stat_statements_reset().\n>\n> Now, pg_stat_statements supports a flexible reset feature.\n> > pg_stat_statements_reset(userid Oid, dbid Oid, queryid bigint)\n>\n> Although I wondered whether we need to add a top-level flag to the\n> arguments,\n> I couldn't come up with any use-case to reset only top-level queries or\n> not top-level queries.\n\nAh, I didn't think of the reset function. I also fail to see a\nreasonable use case to provide a switch for the reset function.\n\n> 4. others\n> ==========\n>\n> These comments are not related to this patch.\n>\n> A. about the topic of commitfests\n>\n> Since I think this feature is for monitoring,\n> it's better to change the topic from \"System Administration\"\n> to \"Monitoring & Control\".\n\nI agree, thanks for the change!\n\n> B. check logic whether a query is top-level or not in pg_stat_kcache\n>\n> This patch uses the only exec_nested_level to check whether a query is\n> top-level or not.\n> The reason is nested_level is less useful and I agree.\n\nDo you mean plan_nest_level is less useful?\n\n> But, pg_stat_kcache uses plan_nested_level too.\n> Although the check result is the same, it's better to change it\n> corresponding to this patch after it's committed.\n\nI agree, we should be consistent here. I'll take care of the needed\nchanges if and when this patch is commited!\n\n\n",
"msg_date": "Wed, 20 Jan 2021 17:14:54 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On 2021-01-20 18:14, Julien Rouhaud wrote:\n> On Tue, Jan 19, 2021 at 4:55 PM Masahiro Ikeda \n> <ikedamsh@oss.nttdata.com> wrote:\n>> I tested the \"update\" command can work.\n>> postgres=# ALTER EXTENSION pg_stat_statements UPDATE TO '1.10';\n>> \n>> Although the \"toplevel\" column of all queries which already stored is\n>> 'false',\n>> we have to decide the default. I think 'false' is ok.\n> \n> Mmm, I'm not sure that I understand this result. The \"toplevel\" value\n> is tracked by the C code loaded at startup, so it should have a\n> correct value even if you used to have the extension in a previous\n> version, it's just that you can't access the toplevel field before\n> doing the ALTER EXTENSION pg_stat_statements UPDATE. There's also a\n> change in the magic number, so you can't use the stored stat file from\n> a version before this patch.\n> \n> I also fail to reproduce this behavior. I did the following:\n> \n> - start postgres with pg_stat_statements v1.10 (so with this patch) in\n> shared_preload_libraries\n> - CREATE EXTENSION pg_stat_statements VERSION '1.9';\n> - execute a few queries\n> - ALTER EXTENSION pg_stat_statements UPDATE;\n> - SELECT * FROM pg_stat_statements reports the query with toplvel to \n> TRUE\n> \n> Can you share a way to reproduce your findings?\n\nSorry for making you confused. I can't reproduce although I tried now.\nI think my procedure was something wrong. So, please ignore this \ncomment, sorry about that.\n\n\n>> B. check logic whether a query is top-level or not in pg_stat_kcache\n>> \n>> This patch uses the only exec_nested_level to check whether a query is\n>> top-level or not.\n>> The reason is nested_level is less useful and I agree.\n> \n> Do you mean plan_nest_level is less useful?\n\nI think so. Anyway, it's important to correspond core's implementation\nbecause pg_stat_statements and pg_stat_kcache are used at the same time.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 20 Jan 2021 19:39:18 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 6:15 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Tue, Jan 19, 2021 at 4:55 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n> >\n> > Thanks for making the patch to add \"toplevel\" column in\n> > pg_stat_statements.\n> > This is a review comment.\n>\n> Thanks a lot for the thorough review!\n>\n> > I tested the \"update\" command can work.\n> > postgres=# ALTER EXTENSION pg_stat_statements UPDATE TO '1.10';\n> >\n> > Although the \"toplevel\" column of all queries which already stored is\n> > 'false',\n> > we have to decide the default. I think 'false' is ok.\n>\n> Mmm, I'm not sure that I understand this result. The \"toplevel\" value\n> is tracked by the C code loaded at startup, so it should have a\n> correct value even if you used to have the extension in a previous\n> version, it's just that you can't access the toplevel field before\n> doing the ALTER EXTENSION pg_stat_statements UPDATE. There's also a\n> change in the magic number, so you can't use the stored stat file from\n> a version before this patch.\n>\n> I also fail to reproduce this behavior. I did the following:\n>\n> - start postgres with pg_stat_statements v1.10 (so with this patch) in\n> shared_preload_libraries\n> - CREATE EXTENSION pg_stat_statements VERSION '1.9';\n> - execute a few queries\n> - ALTER EXTENSION pg_stat_statements UPDATE;\n> - SELECT * FROM pg_stat_statements reports the query with toplvel to TRUE\n>\n> Can you share a way to reproduce your findings?\n>\n> > 2. usability review\n> > ====================\n> > [...]\n> > Although one concern is whether only top-level is enough or not,\n> > I couldn't come up with any use-case to use nested level, so I think\n> > it's ok.\n>\n> I agree, I don't see how tracking statistics per nesting level would\n> really help. The only additional use case I see would tracking\n> triggers/FK query execution, but the nesting level won't help with\n> that.\n>\n> > 3. coding review\n> > =================\n> > [...]\n> > B. add a argument of the pg_stat_statements_reset().\n> >\n> > Now, pg_stat_statements supports a flexible reset feature.\n> > > pg_stat_statements_reset(userid Oid, dbid Oid, queryid bigint)\n> >\n> > Although I wondered whether we need to add a top-level flag to the\n> > arguments,\n> > I couldn't come up with any use-case to reset only top-level queries or\n> > not top-level queries.\n>\n> Ah, I didn't think of the reset function. I also fail to see a\n> reasonable use case to provide a switch for the reset function.\n>\n> > 4. others\n> > ==========\n> >\n> > These comments are not related to this patch.\n> >\n> > A. about the topic of commitfests\n> >\n> > Since I think this feature is for monitoring,\n> > it's better to change the topic from \"System Administration\"\n> > to \"Monitoring & Control\".\n>\n> I agree, thanks for the change!\n\nI've changed the topic accordingly.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 21 Jan 2021 00:43:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Sun, Dec 27, 2020 at 9:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Dec 04, 2020 at 06:09:13PM +0800, Julien Rouhaud wrote:\n> > On Fri, Dec 04, 2020 at 12:06:10PM +0300, Sergei Kornilov wrote:\n> > > Hello\n> > >\n> > > Seems we need also change PGSS_FILE_HEADER.\n> >\n> > Indeed, thanks! v2 attached.\n>\n> There was a conflict on PGSS_FILE_HEADER since some recent commit, v3 attached.\n\n- *\n- * Right now, this structure contains no padding. If you add any, make sure\n- * to teach pgss_store() to zero the padding bytes. Otherwise, things will\n- * break, because pgss_hash is created using HASH_BLOBS, and thus tag_hash\n- * is used to hash this.\n\nI don't think removing this comment completely is a good idea. Right\nnow it ends up there, yes, but eventually it might reach the same\nstate again. I think it's better to reword it based on the current\nsituation while keeping the part about it having to be zeroed for\npadding. And maybe along with a comment at the actual memset() sites\nas well?\n\nAFAICT, it's going to store two copies of the query in the query text\nfile (pgss_query_texts.stat)? Can we find a way around that? Maybe by\nlooking up the entry with the flag set to the other value, and then\nreusing that?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 6 Mar 2021 18:56:49 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 12:43:22AM +0900, Masahiko Sawada wrote:\n> On Wed, Jan 20, 2021 at 6:15 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I agree, thanks for the change!\n> \n> I've changed the topic accordingly.\n\nThanks Sawada-san! I thought that I took care of that but I somehow missed it.\n\n\n",
"msg_date": "Sun, 7 Mar 2021 15:28:10 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 06:56:49PM +0100, Magnus Hagander wrote:\n> On Sun, Dec 27, 2020 at 9:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> - *\n> - * Right now, this structure contains no padding. If you add any, make sure\n> - * to teach pgss_store() to zero the padding bytes. Otherwise, things will\n> - * break, because pgss_hash is created using HASH_BLOBS, and thus tag_hash\n> - * is used to hash this.\n> \n> I don't think removing this comment completely is a good idea. Right\n> now it ends up there, yes, but eventually it might reach the same\n> state again. I think it's better to reword it based on the current\n> situation while keeping the part about it having to be zeroed for\n> padding. And maybe along with a comment at the actual memset() sites\n> as well?\n\nAgreed, I'll take care of that.\n\n> AFAICT, it's going to store two copies of the query in the query text\n> file (pgss_query_texts.stat)? Can we find a way around that? Maybe by\n> looking up the entry with the flag set to the other value, and then\n> reusing that?\n\nYes I was a bit worried about the possible extra text entry. I kept things\nsimple until now as the general opinion was that entries existing for both top\nlevel and nested level should be very rare so adding extra code and overhead to\nspare a few query texts wouldn't be worth it.\n\nI think that using a flag might be a bit expensive, as we would have to make\nsure that at least one of the possible two entries has it. So if there are 2\nentries and the one with the flag is evicted, we would have to transfer the\nflag to the other one, and check the existence of the flag when allocating a new\nentry. And all of that has to be done holding an exclusive lock on pgss->lock.\n\nMaybe having a new hash table (without the toplevel flag) for those query texts\nmight be better, or maybe pgss performance is already so terrible when you have\nto regularly evict entries that it wouldn't make any real difference.\n\nShould I try to add some extra code to make sure that we only store the query\ntext once, or should I document that there might be duplicates, but we expect\nthat to be very rare?\n\n\n",
"msg_date": "Sun, 7 Mar 2021 15:39:36 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Sun, Mar 7, 2021 at 8:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 06, 2021 at 06:56:49PM +0100, Magnus Hagander wrote:\n> > On Sun, Dec 27, 2020 at 9:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > - *\n> > - * Right now, this structure contains no padding. If you add any, make sure\n> > - * to teach pgss_store() to zero the padding bytes. Otherwise, things will\n> > - * break, because pgss_hash is created using HASH_BLOBS, and thus tag_hash\n> > - * is used to hash this.\n> >\n> > I don't think removing this comment completely is a good idea. Right\n> > now it ends up there, yes, but eventually it might reach the same\n> > state again. I think it's better to reword it based on the current\n> > situation while keeping the part about it having to be zeroed for\n> > padding. And maybe along with a comment at the actual memset() sites\n> > as well?\n>\n> Agreed, I'll take care of that.\n>\n> > AFAICT, it's going to store two copies of the query in the query text\n> > file (pgss_query_texts.stat)? Can we find a way around that? Maybe by\n> > looking up the entry with the flag set to the other value, and then\n> > reusing that?\n>\n> Yes I was a bit worried about the possible extra text entry. I kept things\n> simple until now as the general opinion was that entries existing for both top\n> level and nested level should be very rare so adding extra code and overhead to\n> spare a few query texts wouldn't be worth it.\n>\n> I think that using a flag might be a bit expensive, as we would have to make\n> sure that at least one of the possible two entries has it. So if there are 2\n> entries and the one with the flag is evicted, we would have to transfer the\n> flag to the other one, and check the existence of the flag when allocating a new\n> entry. And all of that has to be done holding an exclusive lock on pgss->lock.\n\nYeah, we'd certainly want to minimize things. But what if they both\nhave the flag at that point? Then we wouldn't have to care on\neviction? But yes, for new allocations we'd have to look up if the\nquery existed with the other value of the flag, and copy it over in\nthat case.\n\n> Maybe having a new hash table (without the toplevel flag) for those query text\n> might be better, or maybe pgss performance is already so terrible when you have\n> to regularly evict entries that it wouldn't make any real difference.\n>\n> Should I try to add some extra code to make sure that we only store the query\n> text once, or should I document that there might be duplicate, but we expect\n> that to be very rare?\n\nIf we expect it to be rare, I think it might be reasonable to just\ndocument that. But do we really have a strong argument for it being\nrare?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 8 Mar 2021 18:03:59 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Mon, Mar 08, 2021 at 06:03:59PM +0100, Magnus Hagander wrote:\n> On Sun, Mar 7, 2021 at 8:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Yes I was a bit worried about the possible extra text entry. I kept things\n> > simple until now as the general opinion was that entries existing for both top\n> > level and nested level should be very rare so adding extra code and overhead to\n> > spare a few query texts wouldn't be worth it.\n> >\n> > I think that using a flag might be a bit expensive, as we would have to make\n> > sure that at least one of the possible two entries has it. So if there are 2\n> > entries and the one with the flag is evicted, we would have to transfer the\n> > flag to the other one, and check the existence of the flag when allocatin a new\n> > entry. And all of that has to be done holding an exclusive lock on pgss->lock.\n> \n> Yeah, we'd certainly want to minimize things. But what if they both\n> have the flag at that point? Then we wouldn't have to care on\n> eviction? But yes, for new allications we'd have to look up if the\n> query existed with the other value of the flag, and copy it over in\n> that case.\n\nI think that we might be able to handle that without a flag. The only thing\nthat would need to be done is when creating an entry, look for an existing\nentry with the opposite flag, and if there's simply use the same\n(query_offset, query_len) info. This doesn't sound that expensive.\n\nThe real pain point will be that the garbage collection phase\nwill become way more expensive as it will now have to somehow maintain that\nknowledge, which will require additional lookups for each entry. I'm a bit\nconcerned about that, especially with the current heuristic to schedule garbage\ncollection. For now, need_qc_qtext says that we have to do it if the extent is\nmore than 512 (B) * pgss_max. 
This probably doesn't work well for people using\nORM as they tend to generate gigantic SQL queries.\n\nIf we implement query text deduplication, should we add another GUC for that\n\"512\" magic value so that people can minimize the gc overhead if they know they\nhave gigantic queries, or simply don't mind bigger qtext file?\n\n> > Maybe having a new hash table (without the toplevel flag) for those query text\n> > might be better, or maybe pgss performance is already so terrible when you have\n> > to regularly evict entries that it wouldn't make any real difference.\n> >\n> > Should I try to add some extra code to make sure that we only store the query\n> > text once, or should I document that there might be duplicate, but we expect\n> > that to be very rare?\n> \n> If we expect it to be rare, I think it might be reasonable to just\n> document that. But do we really have a strong argument for it being\n> rare?\n\nI don't think that anyone really had a strong argument, mostly gut\nfeeling. Note that pg_stat_kcache already implemented that toplevel flags, so\nif people are using that extension in a recent version they might have some\nfigures to show. I'll ping some people that I know are using it.\n\nOne good argument would be that gigantic queries generated by ORM should always\nbe executed as top level statements.\n\nI previously tried with the postgres regression tests, which clearly isn't a\nrepresentative workload, and as far as I can see the vast majority of queries\nexecuted both as top level and nested level are DDL implying recursion (e.g. a\nCREATE TABLE with underlying index creation).\n\n\n",
"msg_date": "Tue, 9 Mar 2021 10:39:54 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 3:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Mar 08, 2021 at 06:03:59PM +0100, Magnus Hagander wrote:\n> > On Sun, Mar 7, 2021 at 8:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > Yes I was a bit worried about the possible extra text entry. I kept things\n> > > simple until now as the general opinion was that entries existing for both top\n> > > level and nested level should be very rare so adding extra code and overhead to\n> > > spare a few query texts wouldn't be worth it.\n> > >\n> > > I think that using a flag might be a bit expensive, as we would have to make\n> > > sure that at least one of the possible two entries has it. So if there are 2\n> > > entries and the one with the flag is evicted, we would have to transfer the\n> > > flag to the other one, and check the existence of the flag when allocatin a new\n> > > entry. And all of that has to be done holding an exclusive lock on pgss->lock.\n> >\n> > Yeah, we'd certainly want to minimize things. But what if they both\n> > have the flag at that point? Then we wouldn't have to care on\n> > eviction? But yes, for new allications we'd have to look up if the\n> > query existed with the other value of the flag, and copy it over in\n> > that case.\n>\n> I think that we might be able to handle that without a flag. The only thing\n> that would need to be done is when creating an entry, look for an existing\n> entry with the opposite flag, and if there's simply use the same\n> (query_offset, query_len) info. This doesn't sound that expensive.\n\nThat's basically what I was trying to say :)\n\n\n> The real pain point will be that the garbage collection phase\n> will become way more expensive as it will now have to somehow maintain that\n> knowledge, which will require additional lookups for each entry. I'm a bit\n> concerned about that, especially with the current heuristic to schedule garbage\n> collection. 
For now, need_qc_qtext says that we have to do it if the extent is\n> more than 512 (B) * pgss_max. This probably doesn't work well for people using\n> ORM as they tend to generate gigantic SQL queries.\n\nRight, the cost would be mostly on the GC side. I've never done any\nprofiling to see how big of a thing that is in systems today -- have\nyou?\n\n\n> If we implement query text deduplication, should we add another GUC for that\n> \"512\" magic value so that people can minimize the gc overhead if they know they\n> have gigantic queries, or simply don't mind bigger qtext file?\n>\n> > > Maybe having a new hash table (without the toplevel flag) for those query text\n> > > might be better, or maybe pgss performance is already so terrible when you have\n> > > to regularly evict entries that it wouldn't make any real difference.\n> > >\n> > > Should I try to add some extra code to make sure that we only store the query\n> > > text once, or should I document that there might be duplicate, but we expect\n> > > that to be very rare?\n> >\n> > If we expect it to be rare, I think it might be reasonable to just\n> > document that. But do we really have a strong argument for it being\n> > rare?\n>\n> I don't that think that anyone really had a strong argument, mostly gut\n> feeling. Note that pg_stat_kcache already implemented that toplevel flags, so\n> if people are using that extension in a recent version they might have some\n> figures to show. I'll ping some people that I know are using it.\n\nGreat -- data always wins over gut feelings :)\n\n\n> One good argument would be that gigantic queries generated by ORM should always\n> be executed as top level statements.\n\nYes, that's true. And it probably holds as a more generic case as\nwell, that is the queries that are likely to show up both top-level\nand lower-level are more likely to be relatively simple ones. 
(Except\nfor example during the development of functions/procs where they're\noften executed top level as well to test etc, but that's not the most\nimportant case to optimize for)\n\n\n> I previously tried with the postgres regression tests, which clearly isn't a\n> representative workload, and as far as I can see the vast majority of queries\n> executed bost as top level and nested level are DDL implying recursion (e.g. a\n> CREATE TABLE with underlying index creation).\n\nI think the PostgreSQL regression tests are so far from a real world\nworkload that the input in this case has a value of exactly zero.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 16 Mar 2021 12:55:45 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 12:55:45PM +0100, Magnus Hagander wrote:\n> On Tue, Mar 9, 2021 at 3:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I think that we might be able to handle that without a flag. The only thing\n> > that would need to be done is when creating an entry, look for an existing\n> > entry with the opposite flag, and if there's simply use the same\n> > (query_offset, query_len) info. This doesn't sound that expensive.\n> \n> That's basically what I was trying to say :)\n\nOh ok sorry :)\n\n> > The real pain point will be that the garbage collection phase\n> > will become way more expensive as it will now have to somehow maintain that\n> > knowledge, which will require additional lookups for each entry. I'm a bit\n> > concerned about that, especially with the current heuristic to schedule garbage\n> > collection. For now, need_qc_qtext says that we have to do it if the extent is\n> > more than 512 (B) * pgss_max. This probably doesn't work well for people using\n> > ORM as they tend to generate gigantic SQL queries.\n> \n> Right, the cost would be mostly on the GC side. I've never done any\n> profiling to see how big of a thing that is in systems today -- have\n> you?\n\nI didn't, but I don't see how it could be anything but ridiculously impacting.\nit's basically preventing any query from being planned or executed on the whole\ninstance the time needed to read the previous qtext file, and write all entries\nstill needed.\n\n> > I don't that think that anyone really had a strong argument, mostly gut\n> > feeling. Note that pg_stat_kcache already implemented that toplevel flags, so\n> > if people are using that extension in a recent version they might have some\n> > figures to show. I'll ping some people that I know are using it.\n> \n> Great -- data always wins over gut feelings :)\n\nSo I asked some friends that have latest pg_stat_kcache installed on some\npreproduction environment configured to track nested queries. 
There isn't a\nhigh throughput but the activity should still be representative of the\nproduction queries. There are a lot of applications plugged there, around 20\ndatabases and quite a lot of PL code.\n\nAfter a few days, here are the statistics:\n\n- total of ~ 9500 entries\n- ~ 900 entries for nested statements\n- ~ 35 entries existing for both top level and nested statements\n\nSo the duplicates account for less than 4% of the nested statements, and less\nthan 0.5% of the whole entries.\n\nI wish I had more reports, but if this one is representative enough then it\nseems that trying to avoid storing duplicated queries wouldn't be worth it.\n\n> > One good argument would be that gigantic queries generated by ORM should always\n> > be executed as top level statements.\n> \n> Yes, that's true. And it probably holds as a more generic case as\n> well, that is the queries that are likely to show up both top-level\n> and lower-level are more likely to be relatively simple ones. (Except\n> for example during the development of functions/procs where they're\n> often executed top level as well to test etc, but that's not the most\n> important case to optimize for)\n\nAgreed.\n\n> > I previously tried with the postgres regression tests, which clearly isn't a\n> > representative workload, and as far as I can see the vast majority of queries\n> > executed bost as top level and nested level are DDL implying recursion (e.g. a\n> > CREATE TABLE with underlying index creation).\n> \n> I think the PostgreSQL regression tests are so far from a real world\n> workload that the input in this case has a value of exactly zero.\n\nI totally agree, but that's the only one I had at that time :) Still it wasn't\nentirely useless as I didn't realize before that some DDL would lead to\nduplicated entries.\n\n\n",
"msg_date": "Tue, 16 Mar 2021 23:35:30 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 4:34 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Mar 16, 2021 at 12:55:45PM +0100, Magnus Hagander wrote:\n> > On Tue, Mar 9, 2021 at 3:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > I think that we might be able to handle that without a flag. The only thing\n> > > that would need to be done is when creating an entry, look for an existing\n> > > entry with the opposite flag, and if there's simply use the same\n> > > (query_offset, query_len) info. This doesn't sound that expensive.\n> >\n> > That's basically what I was trying to say :)\n>\n> Oh ok sorry :)\n>\n> > > The real pain point will be that the garbage collection phase\n> > > will become way more expensive as it will now have to somehow maintain that\n> > > knowledge, which will require additional lookups for each entry. I'm a bit\n> > > concerned about that, especially with the current heuristic to schedule garbage\n> > > collection. For now, need_qc_qtext says that we have to do it if the extent is\n> > > more than 512 (B) * pgss_max. This probably doesn't work well for people using\n> > > ORM as they tend to generate gigantic SQL queries.\n> >\n> > Right, the cost would be mostly on the GC side. I've never done any\n> > profiling to see how big of a thing that is in systems today -- have\n> > you?\n>\n> I didn't, but I don't see how it could be anything but ridiculously impacting.\n> it's basically preventing any query from being planned or executed on the whole\n> instance the time needed to read the previous qtext file, and write all entries\n> still needed.\n>\n> > > I don't that think that anyone really had a strong argument, mostly gut\n> > > feeling. Note that pg_stat_kcache already implemented that toplevel flags, so\n> > > if people are using that extension in a recent version they might have some\n> > > figures to show. 
I'll ping some people that I know are using it.\n> >\n> > Great -- data always wins over gut feelings :)\n>\n> So I asked some friends that have latest pg_stat_kcache installed on some\n> preproduction environment configured to track nested queries. There isn't a\n> high throughput but the activity should still be representative of the\n> production queries. There are a lot of applications plugged there, around 20\n> databases and quite a lot of PL code.\n>\n> After a few days, here are the statistics:\n>\n> - total of ~ 9500 entries\n> - ~ 900 entries for nested statements\n> - ~ 35 entries existing for both top level and nested statements\n>\n> So the duplicates account for less than 4% of the nested statements, and less\n> than 0.5% of the whole entries.\n>\n> I wish I had more reports, but if this one is representative enough then it\n> seems that trying to avoid storing duplicated queries wouldn't be worth it.\n\nI agree. If those numbers are indeed representative, it seems\nbetter to pay that overhead than to pay the overhead of trying to\nde-dupe it.\n\nLet's hope they are :)\n\n\nLooking through it again my feeling said the toplevel column should go\nafter the queryid and not before, but I'm not going to open up a\nbikeshed over that.\n\nI've added in a comment to cover that one that you removed (if you did\nsend an updated patch as you said, then I missed it -- sorry), and\napplied the rest.\n\nThanks!\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:30:53 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 10:30:53AM +0200, Magnus Hagander wrote:\n> \n> I agree. If those numbers are indeed representative, it seems\n> better to pay that overhead than to pay the overhead of trying to\n> de-dupe it.\n> \n> Let's hope they are :)\n\n:)\n\n> Looking through it again my feeling said the toplevel column should go\n> after the queryid and not before, but I'm not going to open up a\n> bikeshed over that.\n> \n> I've added in a comment to cover that one that you removed (if you did\n> send an updated patch as you said, then I missed it -- sorry), and\n> applied the rest.\n\nOops, somehow I totally forgot to send the new patch, sorry :(\n\nWhile looking at the patch, I unfortunately just realized that I unnecessarily\nbumped the version to 1.10, as 1.9 was already new as of pg14. Honestly I have\nno idea why I used 1.10 at that time. Version numbers are not a scarce\nresource but maybe it would be better to keep 1.10 for a future major postgres\nversion?\n\nIf yes, PFA a patch to merge 1.10 in 1.9.",
"msg_date": "Thu, 8 Apr 2021 20:05:05 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 2:04 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Apr 08, 2021 at 10:30:53AM +0200, Magnus Hagander wrote:\n> >\n> > I agree. If those numbers are indeed representable, it seems like\n> > better to pay that overhead than to pay the overhead of trying to\n> > de-dupe it.\n> >\n> > Let's hope they are :)\n>\n> :)\n>\n> > Looking through ti again my feeling said the toplevel column should go\n> > after the queryid and not before, but I'm not going to open up a\n> > bikeshed over that.\n> >\n> > I've added in a comment to cover that one that you removed (if you did\n> > send an updated patch as you said, then I missed it -- sorry), and\n> > applied the rest.\n>\n> Oops, somehow I totally forgot to send the new patch, sorry :(\n>\n> While looking at the patch, I unfortunately just realize that I unnecessarily\n> bumped the version to 1.10, as 1.9 was already new as of pg14. Honestly I have\n> no idea why I used 1.10 at that time. Version numbers are not a scarce\n> resource but maybe it would be better to keep 1.10 for a future major postgres\n> version?\n>\n> If yes, PFA a patch to merge 1.10 in 1.9.\n\nI actually thought I looked at that, but clearly I was confused one\nway or another.\n\nI think you're right, it's cleaner to merge it into 1.9, so applied and pushed.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 15:18:09 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 03:18:09PM +0200, Magnus Hagander wrote:\n> On Thu, Apr 8, 2021 at 2:04 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > If yes, PFA a patch to merge 1.10 in 1.9.\n> \n> I actually thought I looked at that, but clearly I was confused one\n> way or another.\n> \n> I think you're right, it's cleaner to merge it into 1.9, so applied and pushed.\n\nThanks Magnus!\n\n\n",
"msg_date": "Thu, 8 Apr 2021 22:54:09 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements oddity with track = all"
}
]
[
{
"msg_contents": "Hello,\n\n\nALTER TABLE SET LOGGED/UNLOGGED on a partitioned table completes successfully, but nothing changes. I expected the underlying partitions to be changed accordingly. The attached patch fixes this.\n\n\nRegards\nTakayuki Tsunakawa",
"msg_date": "Wed, 2 Dec 2020 08:21:42 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table does\n nothing silently"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 1:52 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> ALTER TABLE SET LOGGED/UNLOGED on a partitioned table completes successfully, but nothing changes.\n>\n\nYeah, true.\n\n>\n> I expected the underlying partitions are changed accordingly. The attached patch fixes this.\n>\n\nMy thoughts:\n1) What happens if a partitioned table has a foreign partition along\nwith few other local partitions[1]? Currently, if we try to set\nlogged/unlogged of a foreign table, then an \"ERROR: \"XXXX\" is not a\ntable\" is thrown. This makes sense. With your patch also we see the\nsame error, but I'm not quite sure, whether it is setting the parent\nand local partitions to logged/unlogged and then throwing the error?\nOr do we want the parent relation and the all the local partitions\nbecome logged/unlogged and give a warning saying foreign table can not\nbe made logged/unlogged?\n2) What is the order in which the parent table and the partitions are\nmade logged/unlogged? Is it that first the parent table and then all\nthe partitions? What if an error occurs as in above point for a\nforeign partition or a partition having foreign key reference to a\nlogged table? During the rollback of the txn, will we undo the setting\nlogged/unlogged?\n3) Say, I have two logged tables t1 and t2. I altered t1 to be\nunlogged, and then I attach logged table t2 as a partition to t1, then\nwhat's the expectation? While attaching the partition, should we also\nmake t2 as unlogged?\n4) Currently, if we try to set logged/unlogged of a foreign table,\nthen an \"ERROR: \"XXXX\" is not a table\" is thrown. I also see that, in\ngeneral ATWrongRelkindError() throws an error saying the given\nrelation is not of expected types, but it doesn't say what is the\ngiven relation kind. 
Should we improve ATWrongRelkindError() by\npassing the actual relation type along with the allowed relation types\nto show a bit more informative error message, something like \"ERROR:\n\"XXXX\" is a foreign table\" with a hint \"Allowed relation types are\ntable, view, index.\"\n5) Coming to the patch, it is missing to add test cases.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Dec 2020 15:57:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 3:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Dec 2, 2020 at 1:52 PM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > ALTER TABLE SET LOGGED/UNLOGED on a partitioned table completes successfully, but nothing changes.\n> >\n>\n> Yeah, true.\n>\n> >\n> > I expected the underlying partitions are changed accordingly. The attached patch fixes this.\n> >\n>\n> My thoughts:\n> 1) What happens if a partitioned table has a foreign partition along\n> with few other local partitions[1]? Currently, if we try to set\n> logged/unlogged of a foreign table, then an \"ERROR: \"XXXX\" is not a\n> table\" is thrown. This makes sense. With your patch also we see the\n> same error, but I'm not quite sure, whether it is setting the parent\n> and local partitions to logged/unlogged and then throwing the error?\n> Or do we want the parent relation and the all the local partitions\n> become logged/unlogged and give a warning saying foreign table can not\n> be made logged/unlogged?\n> 2) What is the order in which the parent table and the partitions are\n> made logged/unlogged? Is it that first the parent table and then all\n> the partitions? What if an error occurs as in above point for a\n> foreign partition or a partition having foreign key reference to a\n> logged table? During the rollback of the txn, will we undo the setting\n> logged/unlogged?\n> 3) Say, I have two logged tables t1 and t2. I altered t1 to be\n> unlogged, and then I attach logged table t2 as a partition to t1, then\n> what's the expectation? While attaching the partition, should we also\n> make t2 as unlogged?\n> 4) Currently, if we try to set logged/unlogged of a foreign table,\n> then an \"ERROR: \"XXXX\" is not a table\" is thrown. 
I also see that, in\n> general ATWrongRelkindError() throws an error saying the given\n> relation is not of expected types, but it doesn't say what is the\n> given relation kind. Should we improve ATWrongRelkindError() by\n> passing the actual relation type along with the allowed relation types\n> to show a bit more informative error message, something like \"ERROR:\n> \"XXXX\" is a foreign table\" with a hint \"Allowed relation types are\n> table, view, index.\"\n> 5) Coming to the patch, it is missing to add test cases.\n>\n\nSorry, I missed to add the foreign partition use case, specified as\n[1] in my previous mail.\n\n[1]\nCREATE TABLE tbl1 (\n a INT,\n b INT,\n c TEXT default 'stuff',\n d TEXT,\n e TEXT\n) PARTITION BY LIST (b);\n\nCREATE FOREIGN TABLE foreign_part_tbl1_1(c TEXT, b INT, a INT, e TEXT,\nd TEXT) SERVER loopback;\nCREATE TABLE part_tbl1_2 (a INT, c TEXT, b INT, d TEXT, e TEXT);\n\nALTER TABLE tbl1 ATTACH PARTITION foreign_part_tbl1_1 FOR VALUES IN(1);\nALTER TABLE tbl1 ATTACH PARTITION part_tbl1_2 FOR VALUES IN(2);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Dec 2020 16:01:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> 1) What happens if a partitioned table has a foreign partition along\r\n> with few other local partitions[1]? Currently, if we try to set\r\n> logged/unlogged of a foreign table, then an \"ERROR: \"XXXX\" is not a\r\n> table\" is thrown. This makes sense. With your patch also we see the\r\n> same error, but I'm not quite sure, whether it is setting the parent\r\n> and local partitions to logged/unlogged and then throwing the error?\r\n> Or do we want the parent relation and the all the local partitions\r\n> become logged/unlogged and give a warning saying foreign table can not\r\n> be made logged/unlogged?\r\n\r\nGood point, thanks. I think the foreign partitions should be ignored, otherwise the user would have to ALTER each local partition manually or detach the foreign partitions temporarily. Done like that.\r\n\r\n\r\n> 2) What is the order in which the parent table and the partitions are\r\n> made logged/unlogged? Is it that first the parent table and then all\r\n> the partitions? What if an error occurs as in above point for a\r\n> foreign partition or a partition having foreign key reference to a\r\n> logged table? During the rollback of the txn, will we undo the setting\r\n> logged/unlogged?\r\n\r\nThe parent is not changed because it does not have storage.\r\nIf some partition has undesirable foreign key relationship, the entire ALTER command fails. All the effects are undone when the transaction rolls back.\r\n\r\n\r\n> 3) Say, I have two logged tables t1 and t2. I altered t1 to be\r\n> unlogged, and then I attach logged table t2 as a partition to t1, then\r\n> what's the expectation? While attaching the partition, should we also\r\n> make t2 as unlogged?\r\n\r\nThe attached partition retains its property.\r\n\r\n\r\n> 4) Currently, if we try to set logged/unlogged of a foreign table,\r\n> then an \"ERROR: \"XXXX\" is not a table\" is thrown. 
I also see that, in\r\n> general ATWrongRelkindError() throws an error saying the given\r\n> relation is not of expected types, but it doesn't say what is the\r\n> given relation kind. Should we improve ATWrongRelkindError() by\r\n> passing the actual relation type along with the allowed relation types\r\n> to show a bit more informative error message, something like \"ERROR:\r\n> \"XXXX\" is a foreign table\" with a hint \"Allowed relation types are\r\n> table, view, index.\"\r\n\r\nAh, maybe that's a bit more friendly. But I don't think it's worth bothering to mess ATWrongRelkindError() with a long switch statement to map a relation kind to its string representation. Anyway, I'd like it to be a separate topic.\r\n\r\n\r\n> 5) Coming to the patch, it is missing to add test cases.\r\n\r\nYes, added in the revised patch.\r\n\r\nI added this to the next CF.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa",
"msg_date": "Fri, 4 Dec 2020 02:52:03 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 8:22 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > 1) What happens if a partitioned table has a foreign partition along\n> > with few other local partitions[1]? Currently, if we try to set\n> > logged/unlogged of a foreign table, then an \"ERROR: \"XXXX\" is not a\n> > table\" is thrown. This makes sense. With your patch also we see the\n> > same error, but I'm not quite sure, whether it is setting the parent\n> > and local partitions to logged/unlogged and then throwing the error?\n> > Or do we want the parent relation and the all the local partitions\n> > become logged/unlogged and give a warning saying foreign table can not\n> > be made logged/unlogged?\n>\n> Good point, thanks. I think the foreign partitions should be ignored, otherwise the user would have to ALTER each local partition manually or detach the foreign partitions temporarily. Done like that.\n>\n>\n> > 2) What is the order in which the parent table and the partitions are\n> > made logged/unlogged? Is it that first the parent table and then all\n> > the partitions? What if an error occurs as in above point for a\n> > foreign partition or a partition having foreign key reference to a\n> > logged table? During the rollback of the txn, will we undo the setting\n> > logged/unlogged?\n>\n> The parent is not changed because it does not have storage.\n> If some partition has undesirable foreign key relationship, the entire ALTER command fails. All the effects are undone when the transaction rolls back.\n>\n>\n> > 3) Say, I have two logged tables t1 and t2. I altered t1 to be\n> > unlogged, and then I attach logged table t2 as a partition to t1, then\n> > what's the expectation? 
While attaching the partition, should we also\n> > make t2 as unlogged?\n>\n> The attached partition retains its property.\n>\n>\n> > 4) Currently, if we try to set logged/unlogged of a foreign table,\n> > then an \"ERROR: \"XXXX\" is not a table\" is thrown. I also see that, in\n> > general ATWrongRelkindError() throws an error saying the given\n> > relation is not of expected types, but it doesn't say what is the\n> > given relation kind. Should we improve ATWrongRelkindError() by\n> > passing the actual relation type along with the allowed relation types\n> > to show a bit more informative error message, something like \"ERROR:\n> > \"XXXX\" is a foreign table\" with a hint \"Allowed relation types are\n> > table, view, index.\"\n>\n> Ah, maybe that's a bit more friendly. But I don't think it's worth bothering to mess ATWrongRelkindError() with a long switch statement to map a relation kind to its string representation. Anyway, I'd like it to be a separate topic.\n>\n>\n> > 5) Coming to the patch, it is missing to add test cases.\n>\n> Yes, added in the revised patch.\n>\n> I added this to the next CF.\n>\n\nThanks! I will review the v2 patch and provide my thoughts.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 4 Dec 2020 08:30:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 8:22 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > 1) What happens if a partitioned table has a foreign partition along\n> > with few other local partitions[1]? Currently, if we try to set\n> > logged/unlogged of a foreign table, then an \"ERROR: \"XXXX\" is not a\n> > table\" is thrown. This makes sense. With your patch also we see the\n> > same error, but I'm not quite sure, whether it is setting the parent\n> > and local partitions to logged/unlogged and then throwing the error?\n> > Or do we want the parent relation and the all the local partitions\n> > become logged/unlogged and give a warning saying foreign table can not\n> > be made logged/unlogged?\n>\n> Good point, thanks. I think the foreign partitions should be ignored, otherwise the user would have to ALTER each local partition manually or detach the foreign partitions temporarily. Done like that.\n>\n\nAgreed.\n\n>\n> > 2) What is the order in which the parent table and the partitions are\n> > made logged/unlogged? Is it that first the parent table and then all\n> > the partitions? What if an error occurs as in above point for a\n> > foreign partition or a partition having foreign key reference to a\n> > logged table? During the rollback of the txn, will we undo the setting\n> > logged/unlogged?\n>\n> The parent is not changed because it does not have storage.\n>\n\nIMHO, we should also change the parent table. Say, I have 2 local\npartitions for a logged table, then I alter that table to\nunlogged(with your patch, parent table doesn't become unlogged whereas\nthe partitions will), and I detach all the partitions for some reason.\nNow, the user would think that he set the parent table to unlogged but\nit didn't happen. 
So, I think we should also change the parent table's\nlogged/unlogged property though it may not have data associated with\nit when it has all the partitions. Thoughts?\n\n>\n> If some partition has undesirable foreign key relationship, the entire ALTER command fails. All the effects are undone when the transaction rolls back.\n>\n\nHmm.\n\n>\n> > 3) Say, I have two logged tables t1 and t2. I altered t1 to be\n> > unlogged, and then I attach logged table t2 as a partition to t1, then\n> > what's the expectation? While attaching the partition, should we also\n> > make t2 as unlogged?\n>\n> The attached partition retains its property.\n>\n\nThis may lead to a situation where a partition table has few\npartitions as logged and few as unlogged. Will it lead to some\nproblems? I'm not quite sure though. I feel while attaching a\npartition to a table, we have to check whether the parent is\nlogged/unlogged and set the partition accordingly. While detaching a\npartition we do not need to do anything, just retain the existing\nstate. I read this from the docs \"Any indexes created on an unlogged\ntable are automatically unlogged as well\". So, on the similar lines we\nalso should automatically make partitions logged/unlogged based on the\nparent table. We may have to also think of cases where there exists a\nmulti level parent child relationship i.e. the table to which a\npartition is being attached may be a child to another parent table.\nThoughts?\n\n>\n> > 5) Coming to the patch, it is missing to add test cases.\n>\n> Yes, added in the revised patch.\n>\n\nI think we can add foreign partition case into postgres_fdw.sql so\nthat the tests will run as part of make check-world.\n\nHow about documenting all these points in alter_table.sgml,\ncreate_table.sgml and create_foreign_table.sgml under the partition\nsection?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 4 Dec 2020 17:22:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> IMHO, we should also change the parent table. Say, I have 2 local\r\n> partitions for a logged table, then I alter that table to\r\n> unlogged(with your patch, parent table doesn't become unlogged whereas\r\n> the partitions will), and I detach all the partitions for some reason.\r\n> Now, the user would think that he set the parent table to unlogged but\r\n> it didn't happen. So, I think we should also change the parent table's\r\n> logged/unlogged property though it may not have data associated with\r\n> it when it has all the partitions. Thoughts?\r\n\r\nI'd like to think that the logged/unlogged property is basically specific to each storage unit, which is a partition here, and ALTER TABLE on a partitioned table conveniently changes the properties of underlying storage units. (In that regard, it's unfortunate that it fails with an ERROR to try to change storage parameters like fillfactor with ALTER TABLE.)\r\n\r\n\r\n> This may lead to a situation where a partition table has few\r\n> partitions as logged and few as unlogged. Will it lead to some\r\n> problems?\r\n\r\nNo, I don't see any problem.\r\n\r\n\r\n> I think we can add foreign partition case into postgres_fdw.sql so\r\n> that the tests will run as part of make check-world.\r\n\r\nI was hesitant to add this test in postgres_fdw, but it seems to be a good place considering that postgres_fdw, and other contrib modules as well, are part of Postgres core.\r\n\r\n\r\n> How about documenting all these points in alter_table.sgml,\r\n> create_table.sgml and create_foreign_table.sgml under the partition\r\n> section?\r\n\r\nI'd like to add a statement in the description of ALTER TABLE SET LOGGED/UNLOGGED as other ALTER actions do. 
I don't want to add a new partition section for all CREATE/ALTER actions in this patch.\r\n\r\nIf there's no objection, I think I'll submit the (hopefully final) revised patch after a few days.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 7 Dec 2020 06:05:58 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 11:36 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > IMHO, we should also change the parent table. Say, I have 2 local\n> > partitions for a logged table, then I alter that table to\n> > unlogged(with your patch, parent table doesn't become unlogged whereas\n> > the partitions will), and I detach all the partitions for some reason.\n> > Now, the user would think that he set the parent table to unlogged but\n> > it didn't happen. So, I think we should also change the parent table's\n> > logged/unlogged property though it may not have data associated with\n> > it when it has all the partitions. Thoughts?\n>\n> I'd like to think that the logged/unlogged property is basically specific to each storage unit, which is a partition here, and ALTER TABLE on a partitioned table conveniently changes the properties of underlying storage units. (In that regard, it's unfortunate that it fails with an ERROR to try to change storage parameters like fillfactor with ALTER TABLE.)\n>\n\nDo you mean to say that if we detach all the partitions(assuming they\nare all unlogged) then the parent table(assuming logged) gets changed\nto unlogged? Does it happen on master? Am I missing something here?\n\n>\n> > I think we can add foreign partition case into postgres_fdw.sql so\n> > that the tests will run as part of make check-world.\n>\n> I was hesitant to add this test in postgres_fdw, but it seems to be a good place considering that postgres_fdw, and other contrib modules as well, are part of Postgres core.\n>\n\n+1 to add tests in postgres_fdw.\n\n>\n> > How about documenting all these points in alter_table.sgml,\n> > create_table.sgml and create_foreign_table.sgml under the partition\n> > section?\n>\n> I'd like to add a statement in the description of ALTER TABLE SET LOGGED/UNLOGGED as other ALTER actions do. 
I don't want to add a new partition section for all CREATE/ALTER actions in this patch.\n>\n\n+1.\n\n>\n> If there's no objection, I think I'll submit the (hopefully final) revised patch after a few days.\n>\n\nPlease do so. Thanks.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Dec 2020 16:12:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "Does \"ALTER TABLE ONLY parent\" work correctly? Namely, do not affect\nexisting partitions, but cause future partitions to acquire the new\nsetting.\n\nThis sounds very much related to previous discussion on REPLICA IDENTITY\nnot propagating to partitions; see\nhttps://postgr.es/m/201902041630.gpadougzab7v@alvherre.pgsql\nparticularly Robert Haas' comments at\nhttp://postgr.es/m/CA+TgmoYjksObOzY8b1U1=BsP_m+702eOf42fqRtQTYio1NunbA@mail.gmail.com\nand downstream discussion from there.\n\n\n\n",
"msg_date": "Mon, 7 Dec 2020 12:23:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> Do you mean to say that if we detach all the partitions(assuming they\r\n> are all unlogged) then the parent table(assuming logged) gets changed\r\n> to unlogged? Does it happen on master? Am I missing something here?\r\n\r\nNo, the parent remains logged in that case both on master and with the patch. I understand this behavior is based on the idea that (1) each storage unit (=partition) is independent, and (2) a partitioned table has no storage so the logged/unlogged setting has no meaning. (I don't know whether there was actually such an idea and whether the feature was implemented based on it, though.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Tue, 8 Dec 2020 01:18:18 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Does \"ALTER TABLE ONLY parent\" work correctly? Namely, do not affect\n> existing partitions, but cause future partitions to acquire the new\n> setting.\n\nYes, it works correctly in the sense that ALTER TABLE ONLY on a partitioned table does nothing because it has no storage and therefore logged/unlogged has no meaning.\n\n\n> This sounds very much related to previous discussion on REPLICA IDENTITY\n> not propagating to partitions; see\n> https://postgr.es/m/201902041630.gpadougzab7v@alvherre.pgsql\n> particularly Robert Haas' comments at\n> http://postgr.es/m/CA+TgmoYjksObOzY8b1U1=BsP_m+702eOf42fqRtQTYio\n> 1NunbA@mail.gmail.com\n> and downstream discussion from there.\n\nThere was a hot discussion. I've read through it. I hope I'm not doing anything strange in light of that discussion.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Tue, 8 Dec 2020 03:09:50 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On 2020-Dec-08, tsunakawa.takay@fujitsu.com wrote:\n\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > Does \"ALTER TABLE ONLY parent\" work correctly? Namely, do not affect\n> > existing partitions, but cause future partitions to acquire the new\n> > setting.\n> \n> Yes, it works correctly in the sense that ALTER TABLE ONLY on a\n> partitioned table does nothing because it has no storage and therefore\n> logged/unlogged has no meaning.\n\nBut what happens when you create another partition after you change the\n\"loggedness\" of the partitioned table?\n\n> > This sounds very much related to previous discussion on REPLICA IDENTITY\n> > not propagating to partitions; see\n> > https://postgr.es/m/201902041630.gpadougzab7v@alvherre.pgsql\n> > particularly Robert Haas' comments at\n> > http://postgr.es/m/CA+TgmoYjksObOzY8b1U1=BsP_m+702eOf42fqRtQTYio\n> > 1NunbA@mail.gmail.com\n> > and downstream discussion from there.\n> \n> There was a hot discussion. I've read through it. I hope I'm not\n> doing strange in light of that discussioin.\n\nWell, my main take from that is we should strive to change all\nsubcommands together, in some principled way, rather than change only\none or some, in arbitrary ways.\n\n\n",
"msg_date": "Tue, 8 Dec 2020 23:48:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> But what happens when you create another partition after you change the\n> \"loggedness\" of the partitioned table?\n\nThe new partition will have a property specified when the user creates it. That is, while the storage property of each storage unit (=partition) is basically independent, ALTER TABLE on a partitioned table is a convenient way to set the same property value to all its underlying storage units.\n\n\n> Well, my main take from that is we should strive to change all\n> subcommands together, in some principled way, rather than change only\n> one or some, in arbitrary ways.\n\nI got an impression from the discussion that some form of consensus on the principle was made and you were trying to create a fix for REPLICA IDENTITY. Do you think the principle was unclear and we should state it first (partly to refresh people's memories)?\n\nI'm kind of for it, but I'm hesitant to create the fix for all ALTER actions, because it may take a lot of time and energy as you were afraid. Can we define the principle, and then create individual fixes independently based on that principle?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 9 Dec 2020 04:23:39 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On 2020-Dec-09, tsunakawa.takay@fujitsu.com wrote:\n\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > But what happens when you create another partition after you change the\n> > \"loggedness\" of the partitioned table?\n> \n> The new partition will have a property specified when the user creates\n> it. That is, while the storage property of each storage unit\n> (=partition) is basically independent, ALTER TABLE on a partitioned\n> table is a convenient way to set the same property value to all its\n> underlying storage units.\n\nWell, that definition seems unfriendly to me. I prefer the stance that\nif you change the value for the parent, then future partitions inherit\nthat value.\n\n> I got an impression from the discussion that some form of consensus on\n> the principle was made and you were trying to create a fix for REPLICA\n> IDENTITY. Do you think the principle was unclear and we should state\n> it first (partly to refresh people's memories)?\n\nI think the principle was sound -- namely, that we should make all those\ncommands recurse by default, and for cases where it matters, the\nparent's setting should determine the default for future children.\n\n> I'm kind of for it, but I'm hesitant to create the fix for all ALTER\n> actions, because it may take a lot of time and energy as you were\n> afraid. Can we define the principle, and then create individual fixes\n> independently based on that principle?\n\nThat seems acceptable to me, as long as we get all changes in the same\nrelease. What we don't want (according to my reading of that\ndiscussion) is to change the semantics of a few subcommands in one\nrelease, and the semantics of a few other subcommands in another\nrelease.\n\n\n",
"msg_date": "Wed, 9 Dec 2020 09:52:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 6:22 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Dec-09, tsunakawa.takay@fujitsu.com wrote:\n>\n> > From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > > But what happens when you create another partition after you change the\n> > > \"loggedness\" of the partitioned table?\n> >\n> > The new partition will have a property specified when the user creates\n> > it. That is, while the storage property of each storage unit\n> > (=partition) is basically independent, ALTER TABLE on a partitioned\n> > table is a convenient way to set the same property value to all its\n> > underlying storage units.\n>\n> Well, that definition seems unfriendly to me. I prefer the stance that\n> if you change the value for the parent, then future partitions inherit\n> that value.\n>\n> > I got an impression from the discussion that some form of consensus on\n> > the principle was made and you were trying to create a fix for REPLICA\n> > IDENTITY. Do you think the principle was unclear and we should state\n> > it first (partly to refresh people's memories)?\n>\n> I think the principle was sound -- namely, that we should make all those\n> commands recurse by default, and for cases where it matters, the\n> parent's setting should determine the default for future children.\n>\n> > I'm kind of for it, but I'm hesitant to create the fix for all ALTER\n> > actions, because it may take a lot of time and energy as you were\n> > afraid. Can we define the principle, and then create individual fixes\n> > independently based on that principle?\n>\n> That seems acceptable to me, as long as we get all changes in the same\n> release. 
What we don't want (according to my reading of that\n> discussion) is to change the semantics of a few subcommands in one\n> release, and the semantics of a few other subcommands in another\n> release.\n>\n\nI'm not sure how many more of such commands exist which require changes.\n\nHow about doing it this way?\n\n1) Have a separate common thread listing the commands and specifying\nclearly the agreed behaviour for such commands.\n2) Whenever the separate patches are submitted(hopefully) by the\nhackers for such commands, after the review we can make a note of that\npatch in the common thread.\n3) Once the patches for all the listed commands are\nreceived(hopefully) , then they can be committed together in one\nrelease.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 9 Dec 2020 18:51:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On 2020-Dec-09, Bharath Rupireddy wrote:\n\n> I'm not sure how many more of such commands exist which require changes.\n\nThe other thread has a list. I think it is complete, but if you want to\ndouble-check, that would be helpful.\n\n> How about doing it this way?\n> \n> 1) Have a separate common thread listing the commands and specifying\n> clearly the agreed behaviour for such commands.\n> 2) Whenever the separate patches are submitted(hopefully) by the\n> hackers for such commands, after the review we can make a note of that\n> patch in the common thread.\n> 3) Once the patches for all the listed commands are\n> received(hopefully) , then they can be committed together in one\n> release.\n\nSounds good. I think this thread is a good place to collect those\npatches, but if you would prefer to have a new thread, feel free to\nstart one (I'd suggest CC'ing me and Tsunakawa-san).\n\n\n",
"msg_date": "Wed, 9 Dec 2020 10:56:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Well, that definition seems unfriendly to me. I prefer the stance that\n> if you change the value for the parent, then future partitions inherit\n> that value.\n\nThat would be right when the storage property is an optional specification such as fillfactor. For example, when I run ALTER TABLE mytable SET (fillfactor = 70) and then CREATE TABLE mytable_p1 PARTITION OF mytable, I find it nice that the fillfactor of mytable_p1 is also 70 (but I won't complain if it isn't, since I can run ALTER TABLE SET on the parent table again.)\n\nOTOH, CREATE TABLE and CREATE UNLOGGED TABLE are explicit requests to create a logged and an unlogged relation respectively. I feel it strange if CREATE TABLE mytable_p1 PARTITION OF mytable creates an unlogged partition.\n\n\n> > I got an impression from the discussion that some form of consensus on\n> > the principle was made and you were trying to create a fix for REPLICA\n> > IDENTITY. Do you think the principle was unclear and we should state\n> > it first (partly to refresh people's memories)?\n> \n> I think the principle was sound -- namely, that we should make all those\n> commands recurse by default, and for cases where it matters, the\n> parent's setting should determine the default for future children.\n\nYeah, recurse by default sounded nice. But I didn't find a consensus related to \"parent's setting should determine the default for future children.\" Could you point me there?\n\n\n> > I'm kind of for it, but I'm hesitant to create the fix for all ALTER\n> > actions, because it may take a lot of time and energy as you were\n> > afraid. Can we define the principle, and then create individual fixes\n> > independently based on that principle?\n> \n> That seems acceptable to me, as long as we get all changes in the same\n> release.
What we don't want (according to my reading of that\n> discussion) is to change the semantics of a few subcommands in one\n> release, and the semantics of a few other subcommands in another\n> release.\n\nRequiring all fixes in one release seems constricting to me... Reading from the following quote in the past discussion, I understood that consistency is a must and a simultaneous release is an ideal. So, I think we can release individual fixes separately. I don't think it will worsen the situation for users at least.\n\n\"try to make them all work the same, ideally in one release, rather than changing them at different times, back-patching sometimes, and having no consistency in the details.\"\n\nBTW, do you think you can continue to work on your REPLICA IDENTITY patch soon?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Thu, 10 Dec 2020 01:26:42 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> I'm not sure how many more of such commands exist which require changes.\r\n> \r\n> How about doing it this way?\r\n> \r\n> 1) Have a separate common thread listing the commands and specifying\r\n> clearly the agreed behaviour for such commands.\r\n> 2) Whenever the separate patches are submitted(hopefully) by the\r\n> hackers for such commands, after the review we can make a note of that\r\n> patch in the common thread.\r\n> 3) Once the patches for all the listed commands are\r\n> received(hopefully) , then they can be committed together in one\r\n> release.\r\n\r\nThat sounds interesting and nice. Having said that, I rather think it's better to release the fixes separately. Of course, we should first confirm the principle of consistency that individual fixes are based on, and which actions need fixing (SET LOGGED/UNLOGGED was missing in the past thread.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 10 Dec 2020 01:39:41 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Wed, Dec 09, 2020 at 09:52:17AM -0300, Alvaro Herrera wrote:\n> On 2020-Dec-09, tsunakawa.takay@fujitsu.com wrote:\n>> The new partition will have a property specified when the user creates\n>> it. That is, while the storage property of each storage unit\n>> (=partition) is basically independent, ALTER TABLE on a partitioned\n>> table is a convenient way to set the same property value to all its\n>> underlying storage units.\n> \n> Well, that definition seems unfriendly to me. I prefer the stance that\n> if you change the value for the parent, then future partitions inherit\n> that value.\n\nThat's indeed more interesting from the user perspective. So +1 from\nme.\n--\nMichael",
"msg_date": "Fri, 25 Dec 2020 16:20:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Wed, Dec 09, 2020 at 10:56:45AM -0300, Alvaro Herrera wrote:\n> Sounds good. I think this thread is a good place to collect those\n> patches, but if you would prefer to have a new thread, feel free to\n> start one (I'd suggest CC'ing me and Tsunakawa-san).\n\nThere is an entry listed in the CF for this thread:\nhttps://commitfest.postgresql.org/31/2857/\n\nAnd it seems to me that waiting on author is most appropriate here, so\nI have changed the entry to reflect that.\n--\nMichael",
"msg_date": "Fri, 25 Dec 2020 16:22:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Michael Paquier <michael@paquier.xyz>\n> On Wed, Dec 09, 2020 at 09:52:17AM -0300, Alvaro Herrera wrote:\n> > Well, that definition seems unfriendly to me. I prefer the stance\n> > that if you change the value for the parent, then future partitions\n> > inherit that value.\n> \n> That's indeed more interesting from the user perspective. So +1 from me.\n\nAs I mentioned below, some properties apply to that, and some don't.\n\n--------------------------------------------------\nThat would be right when the storage property is an optional specification such as fillfactor. For example, when I run ALTER TABLE mytable SET (fillfactor = 70) and then CREATE TABLE mytable_p1 PARTITION OF mytable, I find it nice that the fillfactor of mytable_p1 is also 70 (but I won't complain if it isn't, since I can run ALTER TABLE SET on the parent table again.)\n\nOTOH, CREATE TABLE and CREATE UNLOGGED TABLE are explicit requests to create a logged and an unlogged relation respectively. I feel it strange if CREATE TABLE mytable_p1 PARTITION OF mytable creates an unlogged partition.\n--------------------------------------------------\n\n\nAnyway, I think I'll group ALTER TABLE/INDEX altering actions based on some common factors and suggest what would be a desirable behavior, asking for opinions. I'd like to explore the consensus on the basic policy for fixes. Then, I hope we will be able to work on fixes for each ALTER action in patches that can be released separately. I'd like to resist requiring all fixes to be arranged at once, since that may become a high bar for those who volunteer to fix some of the actions. (Even a committer, Alvaro-san, struggled to fix one action, ALTER TABLE REPLICA IDENTITY.)\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Sat, 26 Dec 2020 01:56:02 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "At Sat, 26 Dec 2020 01:56:02 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Michael Paquier <michael@paquier.xyz>\n> > On Wed, Dec 09, 2020 at 09:52:17AM -0300, Alvaro Herrera wrote:\n> > > Well, that definition seems unfriendly to me. I prefer the stance\n> > > that if you change the value for the parent, then future partitions\n> > > inherit that value.\n> > \n> > That's indeed more interesting from the user perspective. So +1 from me.\n> \n> As I mentioned as below, some properties apply to that, and some don't.\n> \n> --------------------------------------------------\n> That would be right when the storage property is an optional\n> specification such as fillfactor. For example, when I run ALTER\n> TABLE mytable SET (fillfactor = 70) and then CREATE TABLE mytable_p1\n> PARTITION OF mytable, I find it nice that the fillfactor os\n> mytable_p1 is also 70 (but I won't complain if it isn't, since I can\n> run ALTER TABLE SET on the parent table again.)\n> \n> OTOH, CREATE TABLE and CREATE UNLOGGED TABLE is an explicit request\n> to create a logged and unlogged relation respectively. I feel it a\n> strange? if CREATE TABLE mytable_p1 PARTITION OF mytable creates an\n> unlogged partition.\n> --------------------------------------------------\n\n\"CREATE TABLE\" is not \"CREATE LOGGED TABLE\". We can assume that as\n\"CREATE <default logged-ness> TABLE\", where the default logged-ness\nvaries according to context. Or it might have been so since the\nbeginning. Currently we don't have the syntax \"CREATE LOGGED TABLE\",\nbut we can add that syntax.\n\n> Anyway, I think I'll group ALTER TABLE/INDEX altering actions based\n> on some common factors and suggest what would be a desirable\n> behavior, asking for opinions. I'd like to explore the consensus on\n> the basic policy for fixes. Then, I hope we will be able to work on\n> fixes for each ALTER action in patches that can be released\n> separately. 
I'd like to resist requiring all fixes to be arranged\n> at once, since that may become a high bar for those who volunteer to\n> fix some of the actions. (Even a committer Alvaro-san struggled to\n> fix one action, ALTER TABLE REPLICA IDENTITY.)\n\nI'd vote +1 to:\n\n ALTER TABLE/INDEX on a parent recurses to all descendants. ALTER\n TABLE/INDEX ONLY doesn't, and the change takes effect on future\n children.\n\n We pursue releasing all fixes at once, but we might release all fixes\n other than some items that cannot be fixed for some technical reasons\n at the time, like REPLICA IDENTITY.\n\nI'm not sure how long we will wait for the time of release, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 26 Jan 2021 16:20:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned\n table does nothing silently"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> \"CREATE TABLE\" is not \"CREATE LOGGED TABLE\". We can assume that as\n> \"CREATE <default logged-ness> TABLE\", where the default logged-ness\n> varies according to context. Or it might have been so since the beginning.\n> Currently we don't have the syntax \"CREATE LOGGED TABLE\", but we can add\n> that syntax.\n\nYes, I thought of that possibility after a while too. But I felt a bit(?) hesitant to do it considering back-patching. Also, the idea requires that ALTER TABLE ATTACH PARTITION modify the logged-ness property of the target partition and its subpartitions to match that of the parent partitioned table. However, your idea is one candidate worth pursuing, including what, if anything, to back-patch.\n\n\n> We pursue releasing all fixes at once, but we might release all fixes other than\n> some items that cannot be fixed for some technical reasons at the time, like\n> REPLICA IDENTITY.\n> \n> I'm not sure how long we will wait for the time of release, though.\n\nAnyway, we have to define the ideal behavior for each ALTER action based on some comprehensible principle. Yeah... this may become a long, tiring journey. (I anticipate some difficulty and compromise in reaching agreement, as was seen in the past discussion of the fix for ALTER TABLE REPLICA IDENTITY. Scary)\n\nFWIW, I wonder why Postgres decided to allow different logical structure of tables, such as DEFAULT values and constraints, between the parent partitioned table and a child partition. That extra flexibility might stand in the way of consensus. I think it'd be easier to understand if partitions were simply physically independent containers that have identical logical structure.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 27 Jan 2021 05:30:29 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "At Wed, 27 Jan 2021 05:30:29 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > \"CREATE TABLE\" is not \"CREATE LOGGED TABLE\". We can assume that as\n> > \"CREATE <default logged-ness> TABLE\", where the default logged-ness\n> > varies according to context. Or it might have been so since the beginning.\n> > Currently we don't have the syntax \"CREATE LOGGED TABLE\", but we can add\n> > that syntax.\n> \n> Yes, I thought of that possibility after a while too. But I felt a bit(?) hesitant to do it considering back-patching. Also, the idea requires that ALTER TABLE ATTACH PARTITION modify the logged-ness property of the target partition and its subpartitions to match that of the parent partitioned table. However, your idea is one candidate worth pursuing, including what, if anything, to back-patch.\n\nI think this and all possible similar changes won't be back-patched at\nall. It would be a disaster for any established version to change this kind\nof behavior.\n\n> \n> > We pursue releasing all fixes at once, but we might release all fixes other than\n> > some items that cannot be fixed for some technical reasons at the time, like\n> > REPLICA IDENTITY.\n> > \n> > I'm not sure how long we will wait for the time of release, though.\n> \n> Anyway, we have to define the ideal behavior for each ALTER action based on some comprehensible principle. Yeah... this may become a long, tiring journey. (I anticipate some difficulty and compromise in reaching agreement, as was seen in the past discussion of the fix for ALTER TABLE REPLICA IDENTITY. Scary)\n\nIt seems to me that we have agreed that the ideal behavior is for ALTER\nTABLE on a parent to propagate to descendants. An observed objection\nis that such behavior is dangerous for operations that require large\namounts of resources and could lead to failure after a long time has\nelapsed.
The revealed difficulty of REPLICA IDENTITY is regarded as a\ntechnical issue that should be overcome.\n\n> FWIW, I wonder why Postgres decided to allow different logical structure of tables such as DEFAULT values and constraints between the parent partitioned table and a child partition. That extra flexibility might stand in the way to consensus. I think it'd be easy to understand that partitions are simply physically independent containers that have identical logical structure.\n\nI understand that -- once upon a time, the \"traditional\" partitioning\nwas merely a use case of inheritance tables. Parents and children are\nbasically independent of each other, apart from inherited attributes.\nThe nature of table inheritance compels a column added to a parent to\nbe inherited by the children, but other changes need not be propagated. As\ntime went by, partitioning became the major use, and new features added\nsince then reflect that recognition. After the appearance of\nnative partitioning, table inheritance seems to be regarded as a\nlegacy that we need to maintain.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Jan 2021 15:00:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned\n table does nothing silently"
},
{
"msg_contents": "On 1/27/21 1:00 AM, Kyotaro Horiguchi wrote:\n> At Wed, 27 Jan 2021 05:30:29 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\n>> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n>>> \"CREATE TABLE\" is not \"CREATE LOGGED TABLE\". We can assume that as\n>>> \"CREATE <default logged-ness> TABLE\", where the default logged-ness\n>>> varies according to context. Or it might have been so since the beginning.\n>>> Currently we don't have the syntax \"CREATE LOGGED TABLE\", but we can add\n>>> that syntax.\n>>\n>> Yes, I thought of that possibility after a while too. But I felt a bit(?) hesitant to do it considering back-patching. Also, the idea requires ALTER TABLE ATTACH PARTITION will have to modify the logged-ness property of the target partition and its subpartitions with that of the parent partitioned table. However, your idea is one candidate worth pursuing, including whether or not to back-patch what.\n> \n> I think this and all possible similar changes won't be back-patched at\n> all. It is a desaster for any establish version to change this kind\n> of behavior.\n>>\n>>> We pursue relasing all fixes at once but we might release all fixes other than\n>>> some items that cannot be fixed for some technical reasons at the time, like\n>>> REPLICA IDENITTY.\n>>>\n>>> I'm not sure how long we will wait for the time of release, though.\n>>\n>> Anyway, we have to define the ideal behavior for each ALTER action based on some comprehensible principle. Yeah... this may become a long, tiring journey. (I anticipate some difficulty and compromise in reaching agreement, as was seen in the past discussion for the fix for ALTER TABLE REPLICA IDENTITY. Scary)\n> \n> It seems to me that we have agreed that the ideal behavior for ALTER\n> TABLE on a parent to propagate to descendants. 
An observed objection\n> is that this behavior is dangerous for operations that require a large\n> amount of resources and could lead to failure only after a long\n> time. The revealed difficulty of REPLICA IDENTITY is regarded as a\n> technical issue that should be overcome.\n\nThis thread seems to be stalled. Since it represents a bug fix, is there \nsomething we can do in the meantime while a real solution is worked out?\n\nPerhaps just inform the user that nothing was done? Or get something \ninto the documentation?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 25 Mar 2021 12:51:44 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "On Thu, Mar 25, 2021 at 10:21 PM David Steele <david@pgmasters.net> wrote:\n> This thread seems to be stalled. Since it represents a bug fix is there\n> something we can do in the meantime while a real solution is worked out?\n>\n> Perhaps just inform the user that nothing was done? Or get something\n> into the documentation?\n\nAttaching a small patch that emits a warning when the persistence\nsetting of a partitioned table is changed and also added a note into\nthe docs. Please have a look at it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 2 Apr 2021 10:15:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> Attaching a small patch that emits a warning when the persistence setting of a\r\n> partitioned table is changed and also added a note into the docs. Please have a\r\n> look at it.\r\n\r\nThank you. However, I think I'll withdraw this CF entry since I'm not sure I can take time for the time being to research and fix other various ALTER TABLE/INDEX issues. I'll mark this as withdrawn around the middle of next week unless someone wants to continue this.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Fri, 2 Apr 2021 05:25:40 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix] ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table\n does nothing silently"
}
] |
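As a minimal SQL sketch of the behavior this thread reports (table and column names are invented for illustration; this assumes the PostgreSQL 13-era behavior described above, where ALTER TABLE SET LOGGED/UNLOGGED on a partitioned table succeeds but silently changes nothing):

```sql
-- A partitioned table has no storage of its own, so SET UNLOGGED on it
-- completes without error yet alters no relation's persistence.
CREATE TABLE measurements (ts timestamptz, val int) PARTITION BY RANGE (ts);
CREATE TABLE measurements_2021 PARTITION OF measurements
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

ALTER TABLE measurements SET UNLOGGED;   -- succeeds silently, does nothing

-- Make the silent no-op visible: relpersistence stays 'p' (permanent)
-- for the partition instead of becoming 'u' (unlogged).
SELECT relname, relpersistence
  FROM pg_class
 WHERE relname IN ('measurements', 'measurements_2021');

-- Each leaf partition must be altered directly to actually change storage:
ALTER TABLE measurements_2021 SET UNLOGGED;
```

This is a sketch of the reported symptom, not of any proposed fix; the proposals in the thread range from propagating the change to descendants to merely emitting a warning.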
[
{
"msg_contents": "Hi\n\nI just noticed that in\n\n https://www.postgresql.org/docs/13/acronyms.html\n (i.e., doc/src/sgml/acronyms.sgml)\n\nthere is under lemma 'HOT' a link with URL:\n\n \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD\n\nIs this deliberate? Surely pointing 13-docs into HEAD-docs is wrong?\n\nI see no good alternative place to point it to than:\n\n \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;h=68c6709aa885d39f4a77c85b6c9a7c937c1a1518;hb=refs/heads/REL_13_STABLE\n\n\nI made a patch to point there.\n\nBut such links with a hash are probably not maintainable; I don't know \nif a URL can be forged without the hash? Probably not.\n\nOtherwise it may be better to remove the link altogether and just have \nthe acronym lemma: 'HOT' and explanation 'Heap-Only Tuple'.\n\n\nErik Rijkers",
"msg_date": "Wed, 02 Dec 2020 10:39:14 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "wrong link in acronyms.sgml"
},
{
"msg_contents": "On 2020-Dec-02, Erik Rijkers wrote:\n\n> Hi\n> \n> I just noticed that in\n> \n> https://www.postgresql.org/docs/13/acronyms.html\n> (i.e., doc/src/sgml/acronyms.sgml)\n> \n> there is under lemma 'HOT' a link with URL:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD\n> \n> Is this deliberate? Surely pointing 13-docs into HEAD-docs is wrong?\n\nYeah, I noticed this while working on the glossary.\n\nThis link works:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=REL_13_STABLE\nbut the problem with this one is that we'll have to update once per\nrelease.\n\n> Otherwise it may be better to remove the link altogether and just have the\n> acronym lemma: 'HOT' and explanation 'Heap-Only Tuple'.\n\nYeah, I don't think pointing to the source-code README file is a great\nresource. I think starting with 13 we should add an entry in the\nglossary for Heap-Only Tuple, and make the acronym entry point to the\nglossary. In fact, we could do this with several other acronyms too,\nsuch as DDL, DML, GEQO, LSN, MVCC, SQL, TID, WAL. The links to external\nresources can then be put in the glossary definition, if needed.\n\n\n\n",
"msg_date": "Wed, 2 Dec 2020 10:49:19 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: wrong link in acronyms.sgml"
}
] |
[
{
"msg_contents": "While reading about deprecating and removing various things in other \nthreads, I was wondering about how deprecated SELECT INTO is. There are \nvarious source code comments about this, but the SELECT INTO reference \npage only contains soft language like \"recommended\". I'm proposing the \nattached patch to stick a more explicit deprecation notice right at the top.\n\nI also found some gratuitous uses of SELECT INTO in various tests and \ndocumentation (not ecpg or plpgsql of course). Here is a patch to \nadjust those to CREATE TABLE AS.\n\nI don't have a specific plan for removing top-level SELECT INTO \naltogether, but there is a nontrivial amount of code for handling it, so \nthere would be some gain if it could be removed eventually.",
"msg_date": "Wed, 2 Dec 2020 12:54:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "SELECT INTO deprecation"
},
{
"msg_contents": "> On 2 Dec 2020, at 12:54, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> While reading about deprecating and removing various things in other threads, I was wondering about how deprecated SELECT INTO is. There are various source code comments about this, but the SELECT INTO reference page only contains soft language like \"recommended\". I'm proposing the attached patch to stick a more explicit deprecation notice right at the top.\n\n+ This command is deprecated. Use <link\nShould this get similarly strong wording to other deprecated things, where we add\n\"..and may/will eventually be removed\"?\n\n> I also found some gratuitous uses of SELECT INTO in various tests and documentation (not ecpg or plpgsql of course). Here is a patch to adjust those to CREATE TABLE AS.\n\nI didn't scan for others, but the ones included in the 0001 patch all look\nfine and IMO improve readability.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 2 Dec 2020 13:08:47 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On Wed, 2 Dec 2020 at 12:55, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> While reading about deprecating and removing various things in other\n> threads, I was wondering about how deprecated SELECT INTO is. There are\n> various source code comments about this, but the SELECT INTO reference\n> page only contains soft language like \"recommended\". I'm proposing the\n> attached patch to stick a more explicit deprecation notice right at the top.\n>\n> I also found some gratuitous uses of SELECT INTO in various tests and\n> documentation (not ecpg or plpgsql of course). Here is a patch to\n> adjust those to CREATE TABLE AS.\n>\n> I don't have a specific plan for removing top-level SELECT INTO\n> altogether, but there is a nontrivial amount of code for handling it, so\n> there would be some gain if it could be removed eventually.\n>\n\n+1\n\nPavel",
"msg_date": "Wed, 2 Dec 2020 17:14:11 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> While reading about deprecating and removing various things in other\n> threads, I was wondering about how deprecated SELECT INTO is. There are\n> various source code comments about this, but the SELECT INTO reference page\n> only contains soft language like \"recommended\". I'm proposing the attached\n> patch to stick a more explicit deprecation notice right at the top.\n\nI don't see much value in this. Users already have 5 years to adapt\ntheir code to new major versions of PG and that strikes me as plenty\nenough time and is why we support multiple major versions of PG for so\nlong. Users who keep pace and update for each major version aren't\nlikely to have issue making this change since they're already used to\nregularly updating their code for new major versions, while others are\ngoing to complain no matter when we remove it and will ignore any\ndeprecation notices we put out there, so there isn't much point in them.\n\n> I also found some gratuitous uses of SELECT INTO in various tests and\n> documentation (not ecpg or plpgsql of course). 
Here is a patch to adjust\n> those to CREATE TABLE AS.\n\nIf we aren't actually removing SELECT INTO then I don't know that it\nmakes sense to just stop testing it.\n\n> I don't have a specific plan for removing top-level SELECT INTO altogether,\n> but there is a nontrivial amount of code for handling it, so there would be\n> some gain if it could be removed eventually.\n\nWe should either remove it, or remove the comments that it's deprecated,\nnot try to make it more deprecated or try to somehow increase the\nrecommendation to not use it.\n\nI'd recommend we remove it and make note that it's been removed in v14\nin the back branches at the same time, which will give those who\nactually pay attention and care that much more time before v14 comes out\nto update their code (not that I'm all that worried, as they'll also see\nit in the beta release notes too).\n\nThanks,\n\nStephen",
"msg_date": "Wed, 2 Dec 2020 12:58:36 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 12:58:36PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> > While reading about deprecating and removing various things in\n> > other threads, I was wondering about how deprecated SELECT INTO\n> > is. There are various source code comments about this, but the\n> > SELECT INTO reference page only contains soft language like\n> > \"recommended\". I'm proposing the attached patch to stick a more\n> > explicit deprecation notice right at the top.\n> \n> I don't see much value in this. Users already have 5 years to adapt\n> their code to new major versions of PG and that strikes me as plenty\n> enough time and is why we support multiple major versions of PG for\n> so long. Users who keep pace and update for each major version\n> aren't likely to have issue making this change since they're already\n> used to regularly updating their code for new major versions, while\n> others are going to complain no matter when we remove it and will\n> ignore any deprecation notices we put out there, so there isn't much\n> point in them.\n\n+1 for removing it entirely and including this prominently in the\nrelease notes.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 2 Dec 2020 20:57:33 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n>> While reading about deprecating and removing various things in other\n>> threads, I was wondering about how deprecated SELECT INTO is. There are\n>> various source code comments about this, but the SELECT INTO reference page\n>> only contains soft language like \"recommended\". I'm proposing the attached\n>> patch to stick a more explicit deprecation notice right at the top.\n\n> I don't see much value in this.\n\nYeah, if we want to kill it let's just do so. The negative language in\nthe reference page has been there since (at least) 7.1, so people can\nhardly say they didn't see it coming.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Dec 2020 15:35:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 03:35:46PM -0500, Tom Lane wrote:\n> Yeah, if we want to kill it let's just do so. The negative language in\n> the reference page has been there since (at least) 7.1, so people can\n> hardly say they didn't see it coming.\n\n+1. I got to wonder about the impact when migrating applications\nthough. SELECT INTO has a different meaning in Oracle, but SQL server\ncreates a new table like Postgres.\n--\nMichael",
"msg_date": "Thu, 3 Dec 2020 08:54:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "Stephen Frost wrote on 02.12.2020 at 18:58:\n> We should either remove it, or remove the comments that it's deprecated,\n> not try to make it more deprecated or try to somehow increase the\n> recommendation to not use it.\n\n(I am writing from a \"user only\" perspective, not a developer)\n\nI don't see any warning about the syntax being \"deprecated\" in the current manual.\n\nThere is only a note that says that CTAS is \"recommended\" instead of SELECT INTO,\nbut for me that's something entirely different from \"deprecating\" it.\n\nI personally have nothing against removing it, but I still see it used\na lot in questions on various online forums, and I would think that\na lot of people would be very unpleasantly surprised if a feature\ngot removed without any warning (the current \"recommendation\" does not\nconstitute a deprecation or even a removal warning for most people, I guess).\n\nI would vote for a clear deprecation message as suggested by Peter, but I would\nadd \"and will be removed in a future version\" to it.\n\nNot sure if maybe even back-patching that warning would make sense as well, so\nthat users of older versions also get to see that warning.\n\nThen target 15 or 16 as the release for removal, but not 14.\n\nThomas\n\n\n",
"msg_date": "Thu, 3 Dec 2020 09:01:39 +0100",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On 2020-12-02 18:58, Stephen Frost wrote:\n>> I also found some gratuitous uses of SELECT INTO in various tests and\n>> documentation (not ecpg or plpgsql of course). Here is a patch to adjust\n>> those to CREATE TABLE AS.\n> If we aren't actually removing SELECT INTO then I don't know that it\n> makes sense to just stop testing it.\n\nThe point here was, there is still code that actually tests SELECT INTO \nspecifically. But unrelated test code that just wants to set up a quick \ntable with some rows in it ought to use the preferred syntax for doing so.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 13:14:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On 2020-12-03 00:54, Michael Paquier wrote:\n> I got to wonder about the impact when migrating applications\n> though. SELECT INTO has a different meaning in Oracle, but SQL server\n> creates a new table like Postgres.\n\nInteresting. This appears to be the case. SQL Server uses SELECT INTO \nto create a table, and does not appear to have CREATE TABLE AS.\n\nSo maybe we should keep it, but adjust the documentation to point out \nthis use case.\n\n[some snarky comment about AWS Babelfish here ... ;-) ]\n\n\n",
"msg_date": "Thu, 3 Dec 2020 13:16:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Interesting. This appears to be the case. SQL Server uses SELECT INTO \n> to create a table, and does not appear to have CREATE TABLE AS.\n> So maybe we should keep it, but adjust the documentation to point out \n> this use case.\n\nThat argument makes sense, but only if our version is a drop-in\nreplacement for SQL Server's version: if people have to adjust their\ncommands anyway in corner cases, we're not doing them any big favor.\nSo: are the syntax and semantics really a match? Do we have feature\nparity?\n\nAs I recall, a whole lot of the pain we have with INTO has to do\nwith the semantics we've chosen for INTO in a set-operation nest.\nWe think you can write something like\n\n SELECT ... INTO foo FROM ... UNION SELECT ... FROM ...\n\nbut we insist on the INTO being in the first component SELECT.\nI'd like to know exactly how much of that messiness is shared\nby SQL Server.\n\n(FWIW, I think the fact that SELECT INTO means something entirely\ndifferent in plpgsql is a good reason for killing off one version\nor the other. As things stand, it's mighty confusing.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Dec 2020 10:34:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On 2020-12-03 16:34, Tom Lane wrote:\n> As I recall, a whole lot of the pain we have with INTO has to do\n> with the semantics we've chosen for INTO in a set-operation nest.\n> We think you can write something like\n> \n> SELECT ... INTO foo FROM ... UNION SELECT ... FROM ...\n> \n> but we insist on the INTO being in the first component SELECT.\n> I'd like to know exactly how much of that messiness is shared\n> by SQL Server.\n\nOn sqlfiddle.com, this works:\n\nselect a into t3 from t1 union select a from t2;\n\nbut this gets an error:\n\nselect a from t1 union select a into t4 from t2;\n\nSELECT INTO must be the first query in a statement containing a UNION, \nINTERSECT or EXCEPT operator.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 20:26:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On 2020-12-03 20:26, Peter Eisentraut wrote:\n> On 2020-12-03 16:34, Tom Lane wrote:\n>> As I recall, a whole lot of the pain we have with INTO has to do\n>> with the semantics we've chosen for INTO in a set-operation nest.\n>> We think you can write something like\n>>\n>> SELECT ... INTO foo FROM ... UNION SELECT ... FROM ...\n>>\n>> but we insist on the INTO being in the first component SELECT.\n>> I'd like to know exactly how much of that messiness is shared\n>> by SQL Server.\n> \n> On sqlfiddle.com, this works:\n> \n> select a into t3 from t1 union select a from t2;\n> \n> but this gets an error:\n> \n> select a from t1 union select a into t4 from t2;\n> \n> SELECT INTO must be the first query in a statement containing a UNION,\n> INTERSECT or EXCEPT operator.\n\nSo, with that in mind, here is an alternative proposal that points out \nthat SELECT INTO has some use for compatibility.",
"msg_date": "Wed, 9 Dec 2020 21:48:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 09:48:54PM +0100, Peter Eisentraut wrote:\n> On 2020-12-03 20:26, Peter Eisentraut wrote:\n> > On 2020-12-03 16:34, Tom Lane wrote:\n> > > As I recall, a whole lot of the pain we have with INTO has to do\n> > > with the semantics we've chosen for INTO in a set-operation nest.\n> > > We think you can write something like\n> > > \n> > > SELECT ... INTO foo FROM ... UNION SELECT ... FROM ...\n> > > \n> > > but we insist on the INTO being in the first component SELECT.\n> > > I'd like to know exactly how much of that messiness is shared\n> > > by SQL Server.\n> > \n> > On sqlfiddle.com, this works:\n> > \n> > select a into t3 from t1 union select a from t2;\n> > \n> > but this gets an error:\n> > \n> > select a from t1 union select a into t4 from t2;\n> > \n> > SELECT INTO must be the first query in a statement containing a UNION,\n> > INTERSECT or EXCEPT operator.\n> \n> So, with that in mind, here is an alternative proposal that points out that\n> SELECT INTO has some use for compatibility.\n\nDo we really want to carry around confusing syntax for compatibility? I\ndoubt we would ever add INTO now, even for compatibility.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 15 Dec 2020 17:13:25 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On 2020-12-15 23:13, Bruce Momjian wrote:\n> Do we really want to carry around confusing syntax for compatibility? I\n> doubt we would ever add INTO now, even for compatibility.\n\nRight, we would very likely not add it now. But it doesn't seem to \ncause a lot of ongoing maintenance burden, so if there is a use case, \nit's not unreasonable to keep it around. I have no other motive here, I \nwas just curious.\n\n\n",
"msg_date": "Wed, 16 Dec 2020 18:07:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 06:07:08PM +0100, Peter Eisentraut wrote:\n> Right, we would very likely not add it now. But it doesn't seem to cause a\n> lot of ongoing maintenance burden, so if there is a use case, it's not\n> unreasonable to keep it around. I have no other motive here, I was just\n> curious.\n\nFrom the point of view of the code, this would simplify the grammar\nrules of SELECT, which is a good thing. And this would also simplify\nsome code in the rewriter where we create some CreateTableAsStmt\non-the-fly where an IntoClause is specified, but my take is that this\nis not really a burden when it comes to long-term maintenance. And\nthe git history suggests, at least it seems so to me, that this code is not\ntouched much.\n--\nMichael",
"msg_date": "Thu, 17 Dec 2020 10:30:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On 2020-12-17 02:30, Michael Paquier wrote:\n> On Wed, Dec 16, 2020 at 06:07:08PM +0100, Peter Eisentraut wrote:\n>> Right, we would very likely not add it now. But it doesn't seem to cause a\n>> lot of ongoing maintenance burden, so if there is a use case, it's not\n>> unreasonable to keep it around. I have no other motive here, I was just\n>> curious.\n> \n> From the point of view of the code, this would simplify the grammar\n> rules of SELECT, which is a good thing. And this would also simplify\n> some code in the rewriter where we create some CreateTableAsStmt\n> on-the-fly where an IntoClause is specified, but my take is that this\n> is not really a burden when it comes to long-term maintenance. And\n> the git history tells, at least it seems to me so, that this is not\n> manipulated much.\n\nI have committed my small documentation patch that points out \ncompatibility with other implementations.\n\n\n",
"msg_date": "Sat, 30 Jan 2021 11:29:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SELECT INTO deprecation"
},
{
"msg_contents": "On 12/15/20 5:13 PM, Bruce Momjian wrote:\n> On Wed, Dec 9, 2020 at 09:48:54PM +0100, Peter Eisentraut wrote:\n>> On 2020-12-03 20:26, Peter Eisentraut wrote:\n>> > On 2020-12-03 16:34, Tom Lane wrote:\n>> > > As I recall, a whole lot of the pain we have with INTO has to do\n>> > > with the semantics we've chosen for INTO in a set-operation nest.\n>> > > We think you can write something like\n>> > > \n>> > > SELECT ... INTO foo FROM ... UNION SELECT ... FROM ...\n>> > > \n>> > > but we insist on the INTO being in the first component SELECT.\n>> > > I'd like to know exactly how much of that messiness is shared\n>> > > by SQL Server.\n>> > \n>> > On sqlfiddle.com, this works:\n>> > \n>> > select a into t3 from t1 union select a from t2;\n>> > \n>> > but this gets an error:\n>> > \n>> > select a from t1 union select a into t4 from t2;\n>> > \n>> > SELECT INTO must be the first query in a statement containing a UNION,\n>> > INTERSECT or EXCEPT operator.\n>> \n>> So, with that in mind, here is an alternative proposal that points out that\n>> SELECT INTO has some use for compatibility.\n> \n> Do we really want to carry around confusing syntax for compatibility? I\n> doubt we would ever add INTO now, even for compatibility.\n> \n\nIf memory serves the INTO syntax is a result from the first incarnation \nof PL/pgSQL being based on the Oracle PL/SQL syntax. I think it has been \nthere from the very beginning, which makes it likely that by now a lot \nof migrants are using it in rather old code.\n\nI don't think it should be our business to throw wrenches into their \nexisting applications.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Tue, 30 Mar 2021 15:05:51 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO deprecation"
}
] |
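As a compact SQL summary of the thread above (table names are invented; the behaviors shown are those the participants describe for PostgreSQL and, per the sqlfiddle test, SQL Server):

```sql
-- The two equivalent top-level forms under discussion:
SELECT a, b INTO t_copy FROM t_src;              -- the form proposed for deprecation
CREATE TABLE t_copy2 AS SELECT a, b FROM t_src;  -- the recommended form

-- The set-operation corner case Tom Lane raised: INTO is only accepted
-- in the first component SELECT of a UNION/INTERSECT/EXCEPT nest.
SELECT a INTO t3 FROM t1 UNION SELECT a FROM t2;      -- works
-- SELECT a FROM t1 UNION SELECT a INTO t4 FROM t2;   -- error in both systems
```

The matching restriction in both systems is what ultimately motivated keeping SELECT INTO and documenting it as a compatibility feature rather than removing it.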
[
{
"msg_contents": "There are a number of elog(LOG) calls that appear to be user-facing, so \nthey should be ereport()s. This patch changes them. There are more \nelog(LOG) calls remaining, but they all appear to be some kind of \ndebugging support. Also, I changed a few elog(FATAL)s that were nearby, \nbut I didn't specifically look for them.",
"msg_date": "Wed, 2 Dec 2020 14:26:24 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On 2020-Dec-02, Peter Eisentraut wrote:\n\n> There are a number of elog(LOG) calls that appear to be user-facing, so they\n> should be ereport()s. This patch changes them. There are more elog(LOG)\n> calls remaining, but they all appear to be some kind of debugging support.\n> Also, I changed a few elog(FATAL)s that were nearby, but I didn't\n> specifically look for them.\n\n> -\t\telog(LOG, \"WSAIoctl(SIO_KEEPALIVE_VALS) failed: %ui\",\n> -\t\t\t WSAGetLastError());\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"WSAIoctl(SIO_KEEPALIVE_VALS) failed: %ui\",\n> +\t\t\t\t\t\tWSAGetLastError())));\n\nPlease take the opportunity to move the flag name out of the message in\nthis one, also. I do wonder if it'd be a good idea to move the syscall\nname itself out of the message, too; that would reduce the number of\nmessages to translate 50x to just \"%s(%s) failed: %m\" instead of one\nmessage per distinct syscall.\n\nShould fd.c messages do errcode_for_file_access() like elsewhere?\n\nOverall, it looks good to me.\n\nThanks\n\n\n\n",
"msg_date": "Wed, 2 Dec 2020 11:04:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 11:04:45AM -0300, Alvaro Herrera wrote:\n> Please take the opportunity to move the flag name out of the message in\n> this one, also. I do wonder if it'd be a good idea to move the syscall\n> name itself out of the message, too; that would reduce the number of\n> messages to translate 50x to just \"%s(%s) failed: %m\" instead of one\n> message per distinct syscall.\n\n+1.\n\n+ else\n+ ereport(LOG,\n+ (errmsg(\"checkpoint starting:%s%s%s%s%s%s%s%s\",\n\nWould it be better to add a note for translators here, in short that\nall those %s are options related to checkpoint/restartpoints?\n\nThe ones in geqo_erx.c look like debugging information, and the ones\nin win32_shmem.c for segment creation are code paths only used by the\npostmaster.\n--\nMichael",
"msg_date": "Thu, 3 Dec 2020 16:02:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 02:26:24PM +0100, Peter Eisentraut wrote:\n> There are a number of elog(LOG) calls that appear to be user-facing, so they\n> should be ereport()s.\n\n> @@ -8591,25 +8604,46 @@ LogCheckpointEnd(bool restartpoint)\n> \t\t\tCheckpointStats.ckpt_sync_rels;\n> \taverage_msecs = (long) ((average_sync_time + 999) / 1000);\n> \n> -\telog(LOG, \"%s complete: wrote %d buffers (%.1f%%); \"\n> -\t\t \"%d WAL file(s) added, %d removed, %d recycled; \"\n> -\t\t \"write=%ld.%03d s, sync=%ld.%03d s, total=%ld.%03d s; \"\n> -\t\t \"sync files=%d, longest=%ld.%03d s, average=%ld.%03d s; \"\n> -\t\t \"distance=%d kB, estimate=%d kB\",\n\n+1 to this change.\n\n> @@ -1763,7 +1771,8 @@ pq_setkeepalivesidle(int idle, Port *port)\n> #else\n> \tif (idle != 0)\n> \t{\n> -\t\telog(LOG, \"setting the keepalive idle time is not supported\");\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"setting the keepalive idle time is not supported\")));\n\n+1\n\n> --- a/src/backend/optimizer/geqo/geqo_erx.c\n> +++ b/src/backend/optimizer/geqo/geqo_erx.c\n> @@ -420,7 +420,8 @@ edge_failure(PlannerInfo *root, Gene *gene, int index, Edge *edge_table, int num\n> \t\t\t}\n> \t\t}\n> \n> -\t\telog(LOG, \"no edge found via random decision and total_edges == 4\");\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"no edge found via random decision and total_edges == 4\")));\n\nThe user can't act upon this without reading the source code. 
This and the\nother messages proposed in this file are better as elog().\n\n> @@ -343,7 +346,8 @@ PGSharedMemoryCreate(Size size,\n> \t * care.\n> \t */\n> \tif (!CloseHandle(hmap))\n> -\t\telog(LOG, \"could not close handle to shared memory: error code %lu\", GetLastError());\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"could not close handle to shared memory: error code %lu\", GetLastError())));\n\nThe numerous messages proposed in src/backend/port/ files are can't-happen\nevents, and there's little a DBA can do without reading the source code.\nThey're better as elog().\n\n> @@ -6108,8 +6111,9 @@ backend_read_statsfile(void)\n> \t\t\t\t/* Copy because timestamptz_to_str returns a static buffer */\n> \t\t\t\tfiletime = pstrdup(timestamptz_to_str(file_ts));\n> \t\t\t\tmytime = pstrdup(timestamptz_to_str(cur_ts));\n> -\t\t\t\telog(LOG, \"stats collector's time %s is later than backend local time %s\",\n> -\t\t\t\t\t filetime, mytime);\n> +\t\t\t\tereport(LOG,\n> +\t\t\t\t\t\t(errmsg(\"statistics collector's time %s is later than backend local time %s\",\n> +\t\t\t\t\t\t\t\tfiletime, mytime)));\n\n+1\n\n> @@ -769,10 +769,11 @@ StartupReplicationOrigin(void)\n> \t\treplication_states[last_state].remote_lsn = disk_state.remote_lsn;\n> \t\tlast_state++;\n> \n> -\t\telog(LOG, \"recovered replication state of node %u to %X/%X\",\n> -\t\t\t disk_state.roident,\n> -\t\t\t (uint32) (disk_state.remote_lsn >> 32),\n> -\t\t\t (uint32) disk_state.remote_lsn);\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"recovered replication state of node %u to %X/%X\",\n> +\t\t\t\t\t\tdisk_state.roident,\n> +\t\t\t\t\t\t(uint32) (disk_state.remote_lsn >> 32),\n> +\t\t\t\t\t\t(uint32) disk_state.remote_lsn)));\n\n+1\n\n> @@ -1914,7 +1914,8 @@ FileClose(File file)\n> \n> \t\t/* in any case do the unlink */\n> \t\tif (unlink(vfdP->fileName))\n> -\t\t\telog(LOG, \"could not unlink file \\\"%s\\\": %m\", vfdP->fileName);\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg(\"could not unlink file \\\"%s\\\": 
%m\", vfdP->fileName)));\n\n+1\n\n> \n> \t\t/* and last report the stat results */\n> \t\tif (stat_errno == 0)\n> @@ -1922,7 +1923,8 @@ FileClose(File file)\n> \t\telse\n> \t\t{\n> \t\t\terrno = stat_errno;\n> -\t\t\telog(LOG, \"could not stat file \\\"%s\\\": %m\", vfdP->fileName);\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg(\"could not stat file \\\"%s\\\": %m\", vfdP->fileName)));\n\n+1\n\n\nFor the changes I didn't mention explicitly (most of them), I'm -0.5. Many of\nthem \"can't happen\", use source code terms of art, and/or breach guideline\n\"Avoid mentioning called function names, either; instead say what the code was\ntrying to do\" (https://www.postgresql.org/docs/devel/error-style-guide.html).\n\n\n",
"msg_date": "Wed, 2 Dec 2020 23:55:30 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On 2020-12-02 15:04, Alvaro Herrera wrote:\n>> -\t\telog(LOG, \"WSAIoctl(SIO_KEEPALIVE_VALS) failed: %ui\",\n>> -\t\t\t WSAGetLastError());\n>> +\t\tereport(LOG,\n>> +\t\t\t\t(errmsg(\"WSAIoctl(SIO_KEEPALIVE_VALS) failed: %ui\",\n>> +\t\t\t\t\t\tWSAGetLastError())));\n> \n> Please take the opportunity to move the flag name out of the message in\n> this one, also.\n\ndone\n\n> I do wonder if it'd be a good idea to move the syscall\n> name itself out of the message, too; that would reduce the number of\n> messages to translate 50x to just \"%s(%s) failed: %m\" instead of one\n> message per distinct syscall.\n\nSeems useful, but perhaps as a separate project.\n\n> Should fd.c messages do errcode_for_file_access() like elsewhere?\n\nyes, done\n\n\n",
"msg_date": "Fri, 4 Dec 2020 14:34:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On 2020-12-03 08:02, Michael Paquier wrote:\n> + else\n> + ereport(LOG,\n> + (errmsg(\"checkpoint starting:%s%s%s%s%s%s%s%s\",\n> \n> Would it be better to add a note for translators here, in short that\n> all those %s are options related to checkpoint/restartpoints?\n\ndone\n\n> The ones in geqo_erx.c look like debugging information, and the ones\n> in win32_shmem.c for segment creation are code paths only used by the\n> postmaster.\n\nI dialed those back a bit.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 14:34:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On 2020-12-03 08:55, Noah Misch wrote:\n> For the changes I didn't mention explicitly (most of them), I'm -0.5. Many of\n> them \"can't happen\", use source code terms of art, and/or breach guideline\n> \"Avoid mentioning called function names, either; instead say what the code was\n> trying to do\" (https://www.postgresql.org/docs/devel/error-style-guide.html).\n\nThanks for the detailed feedback. I dialed back some of the changes \nbased on your feedback.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 14:35:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 02:34:26PM +0100, Peter Eisentraut wrote:\n> On 2020-12-02 15:04, Alvaro Herrera wrote:\n>> I do wonder if it'd be a good idea to move the syscall\n>> name itself out of the message, too; that would reduce the number of\n>> messages to translate 50x to just \"%s(%s) failed: %m\" instead of one\n>> message per distinct syscall.\n> \n> Seems useful, but perhaps as a separate project.\n\n- elog(LOG, \"getsockname() failed: %m\");\n+ ereport(LOG,\n+ (errmsg(\"getsockname() failed: %m\")));\nFWIW, I disagree with the approach taken by eb93f3a. As of HEAD, it\nis now required to translate all those strings. I think that it would\nhave been better to remove the function names from all those error\nmessages and not require the same pattern to be translated N times.\n--\nMichael",
"msg_date": "Sat, 5 Dec 2020 11:22:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
},
{
"msg_contents": "On 05.12.20 03:22, Michael Paquier wrote:\n> On Fri, Dec 04, 2020 at 02:34:26PM +0100, Peter Eisentraut wrote:\n>> On 2020-12-02 15:04, Alvaro Herrera wrote:\n>>> I do wonder if it'd be a good idea to move the syscall\n>>> name itself out of the message, too; that would reduce the number of\n>>> messages to translate 50x to just \"%s(%s) failed: %m\" instead of one\n>>> message per distinct syscall.\n>>\n>> Seems useful, but perhaps as a separate project.\n> \n> - elog(LOG, \"getsockname() failed: %m\");\n> + ereport(LOG,\n> + (errmsg(\"getsockname() failed: %m\")));\n> FWIW, I disagree with the approach taken by eb93f3a. As of HEAD, it\n> is now required to translate all those strings. I think that it would\n> have been better to remove the function names from all those error\n> messages and not require the same pattern to be translated N times.\n\nI made another pass across this and implemented the requested change.\n\n\n",
"msg_date": "Fri, 23 Apr 2021 14:42:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: convert elog(LOG) calls to ereport"
}
] |
[
{
"msg_contents": "Previous discussions on macOS SIP:\n\nhttps://www.postgresql.org/message-id/flat/561E73AB.8060800%40gmx.net\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoYGi5oR8Rvb2-SY1_WEjd76H5gCqSukxFQ66qR7MewWAQ%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/6a4d6124-41f0-756b-0811-c5c5def7ef4b%402ndquadrant.com\n\nTests that use a temporary installation (i.e., make check) use a \nplatform-specific environment variable to make programs and libraries \nfind shared libraries in the temporary installation rather than in the \nactual configured installation target directory. On macOS, this is \nDYLD_LIBRARY_PATH. Ever since System Integrity Protection (SIP) \narrived, this hasn't worked anymore, because DYLD_LIBRARY_PATH gets \ndisabled by the system (in somewhat obscure ways). The workaround was \nto either disable SIP or to do make install before make check.\n\nThe solution here is to use install_name_tool to edit the shared library \npath in the binaries (programs and libraries) after installation into \nthe temporary prefix.\n\nThe solution is, I think, correct in principle, but the implementation \nis hackish, since it hardcodes the names and versions of the interesting \nshared libraries in an unprincipled way. More refinement is needed \nhere. Ideas welcome.\n\nIn order to allow all that, we need to link everything with the option \n-headerpad_max_install_names so that there is enough room in the \nbinaries to put in the temporary path in place of the original one. \n(The temporary path is strictly longer than the original one.) This \nbloats the binaries, so it might only be appropriate for development \nbuilds and should perhaps be hidden behind some option. Ideas also \nwelcome here.\n\nThe attached patch works for me. You can see that it disables setting \nDYLD_LIBRARY_PATH, but the tests still work. Verification on other \nsystem variantions welcome, of course.\n\nComments welcome. 
I think this direction is promising, but some details \nneed to be addressed.",
"msg_date": "Wed, 2 Dec 2020 15:28:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "macOS SIP, next try"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> The solution here is to use install_name_tool to edit the shared library \n> path in the binaries (programs and libraries) after installation into \n> the temporary prefix.\n\nAh, this indeed seems like a promising direction.\n\n> The solution is, I think, correct in principle, but the implementation \n> is hackish, since it hardcodes the names and versions of the interesting \n> shared libraries in an unprincipled way. More refinement is needed \n> here. Ideas welcome.\n\nYeah, it's pretty hacky. I tried the patch as given, and \"make check\"\nfailed with\n\nerror: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/install_name_tool: can't open file: /Users/tgl/pgsql/tmp_install//Users/tgl/testversion/lib/*.so (No such file or directory)\nmake: *** [temp-install] Error 1\n\nevidently because there are no files named *.so in the \"lib\" directory\nin my build. I removed '$(abs_top_builddir)/tmp_install/$(libdir)'/*.so\nfrom the \"call temp_install_postprocess\" invocation, and then it got\nthrough \"make temp-install\", and most of the core regression tests worked.\nBut a couple of them failed with\n\n+ERROR: could not load library \"/Users/tgl/pgsql/tmp_install/Users/tgl/testversion/lib/postgresql/libpqwalreceiver.so\": dlopen(/Users/tgl/pgsql/tmp_install/Users/tgl/testversion/lib/postgresql/libpqwalreceiver.so, 10): Library not loaded: /Users/tgl/testversion/lib/libpq.5.dylib\n+ Referenced from: /Users/tgl/pgsql/tmp_install/Users/tgl/testversion/lib/postgresql/libpqwalreceiver.so\n+ Reason: image not found\n\nWhat this looks like to me is some confusion about whether \"pgsql\"\nor \"postgresql\" has been injected into the install path or not.\n(This build is using /Users/tgl/testversion as the install --prefix;\nI surmise that your path had \"pgsql\" or \"postgres\" in it already.)\n\nOne plausible way to fix this is to use \"find\" on the install 
tree\nto locate files to modify, instead of hard-wiring knowledge into\nthe \"call temp_install_postprocess\" invocation. A possibly better\nway is to arrange to invoke install_name_tool once per installed\nexecutable or shlib, at the time we do \"make install\" for it.\nNot sure how hard the latter approach is given our Makefiles.\n\nAnother suggestion, after reading \"man install_name_tool\", is\nthat maybe you could avoid having to know the names of the specific\nlibraries if you didn't use per-library '-change' switches, but\ninstead installed the test-installation rpath with the -add_rpath\nswitch. This would be closer to the spirit of the existing test\nprocess anyway.\n\nAside from removing that annoying requirement, this method would\nbound the amount of extra header space needed, so instead of\n\"-headerpad_max_install_names\" we could use \"-headerpad N\" with\nsome not-hugely-generous N. That ought to damp down the executable\nbloat to the point where we'd not have to worry too much about it.\n\n(Pokes around ... it appears that on my old dinosaur prairiedog,\ninstall_name_tool exists and has the -change switch, but not\nanything for rpath. So we'd better investigate how old -add_rpath\nis. Maybe there's another way to set rpath that works further back?\nAt least the -headerpad switches exist there.)\n\nI haven't experimented with any of these ideas, so maybe they won't\nwork for some reason. But I think we ought to push forward looking\nfor a solution, because it's getting harder and harder to disable\nSIP and still have a useful macOS installation :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Jan 2021 22:54:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "\n\n> On Dec 2, 2020, at 6:28 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> Previous discussions on macOS SIP:\n> \n> https://www.postgresql.org/message-id/flat/561E73AB.8060800%40gmx.net\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoYGi5oR8Rvb2-SY1_WEjd76H5gCqSukxFQ66qR7MewWAQ%40mail.gmail.com\n> https://www.postgresql.org/message-id/flat/6a4d6124-41f0-756b-0811-c5c5def7ef4b%402ndquadrant.com\n\nSee also https://www.postgresql.org/message-id/flat/18012hGLG6HJ9pQDkHAMYuwQKg%40sparkpost.com\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Jan 2021 20:03:27 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> See also https://www.postgresql.org/message-id/flat/18012hGLG6HJ9pQDkHAMYuwQKg%40sparkpost.com\n\nYeah. As I recall from that thread and prior ones, the bottleneck is\nreally just /bin/sh: something, either the kernel or sh itself, is\nclearing out DYLD_LIBRARY_PATH when a shell script starts. (Since\nPATH is not reset at the same time, this seems the height of\nstupidity/uselessness; but I digress.) But if \"make\" has a setting\nfor that variable, the setting *is* successfully passed down to\nexecuted programs. Which implies that make doesn't use the shell while\nrunning stuff, which seems odd. But it gives us a possible escape\nhatch.\n\nI experimented a bit, and found that the attached very-quick-hack\npatch is enough to get through \"make check\" and also the isolation\ntests. Unsurprisingly, none of the TAP tests work. I experimented\na bit and it looks like \"prove\" may be failing to pass down\nDYLD_LIBRARY_PATH to the test scripts, which would be annoying but\ncould be worked around (say, by getting TestLib.pm to pull the\nvalue from somewhere).\n\nBased on what I've got here, this avenue probably ends up messier\nand more invasive than the install_name_tool avenue, since we'd\nlikely have to teach quite a few places about explicitly passing\ndown DYLD_LIBRARY_PATH. It does have the advantage that \"make\ncheck\" doesn't have to test executables that are different from\nthe bits we'll really install; but that's probably not worth all\nthat much. Unless someone can think of a cute way to centralize\nthe changes some more, I'm inclined to think this isn't the way\nforward. But it seemed worth posting the results anyway.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 05 Jan 2021 13:24:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "BTW, a couple other things that should be noted here:\n\n* Per observation in a nearby thread, install_name_tool seems to\nbe provided by Apple's \"Command Line Tools\", but not by Xcode.\nThis is also true of bison and flex, but we don't require those\nin a build-from-tarball. So relying on install_name_tool would\nrepresent an expansion of our requirements for end-user builds.\nI don't think this is a fatal objection, not least because the\nCLT are a great deal smaller than Xcode --- so we likely ought to\nencourage people to install just the former. But it's a change.\n\n* By chance I came across a previous thread in which someone\nsuggested use of install_name_tool:\n\nhttps://www.postgresql.org/message-id/flat/CAN-RpxCFWqPXQD8CqP3UBqdMwUgQWLG%2By7vQgxQdJR8aiKB89A%40mail.gmail.com\n\nI griped in that thread that it didn't help for test executables\nthat don't get installed, such as isolationtester, because the\nhack never got applied to them. Re-reading Peter's patch with\nthat in mind, I see he replaces the path in isolationtester\nin-place, which means that it works in \"make check\" but will\nfail in \"make installcheck\". So I'm not sure how to cope\nwith that, but it's a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jan 2021 12:32:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "So, we haven't gotten anywhere satisfying with these proposed technical \nsolutions.\n\nI have since learned that there is a way to disable only the part of SIP \nthat is relevant for us. This seems like a useful compromise, and it \nappears that a number of other open-source projects are following the \nsame route. I suggest the attached documentation patch and then close \nthis issue.",
"msg_date": "Mon, 1 Mar 2021 08:15:42 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I have since learned that there is a way to disable only the part of SIP \n> that is relevant for us. This seems like a useful compromise, and it \n> appears that a number of other open-source projects are following the \n> same route. I suggest the attached documentation patch and then close \n> this issue.\n\nHmm, interesting. Where is it documented what this does?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Mar 2021 09:44:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "On 01.03.21 15:44, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I have since learned that there is a way to disable only the part of SIP\n>> that is relevant for us. This seems like a useful compromise, and it\n>> appears that a number of other open-source projects are following the\n>> same route. I suggest the attached documentation patch and then close\n>> this issue.\n> \n> Hmm, interesting. Where is it documented what this does?\n\nNot really documented AFAICT, but here is a source: \nhttps://developer.apple.com/forums/thread/17452\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:02:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 01.03.21 15:44, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> I have since learned that there is a way to disable only the part of SIP\n>>> that is relevant for us. This seems like a useful compromise, and it\n>>> appears that a number of other open-source projects are following the\n>>> same route. I suggest the attached documentation patch and then close\n>>> this issue.\n\n>> Hmm, interesting. Where is it documented what this does?\n\n> Not really documented AFAICT, but here is a source: \n> https://developer.apple.com/forums/thread/17452\n\nHmm. So I tried this, ie \"csrutil enable --without debug\" in the\nrecovery system, and after rebooting what I see is\n\n$ csrutil status\nSystem Integrity Protection status: unknown (Custom Configuration).\n\nConfiguration:\n Apple Internal: disabled\n Kext Signing: enabled\n Filesystem Protections: disabled\n Debugging Restrictions: enabled\n DTrace Restrictions: enabled\n NVRAM Protections: enabled\n BaseSystem Verification: enabled\n\nThis is an unsupported configuration, likely to break in the future and leave your machine in an unknown state.\n$ \n\nwhich is, shall we say, not the set of options the command appeared\nto select. It does work, in the sense that \"make check\" is able\nto complete without having an installation tree. But really, Apple\nis doing their level best to hang a \"here be dragons\" sign on this.\nI'm not comfortable with recommending it, and I'm about to go\nturn it off again, because I have no damn idea what it really does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 19:36:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: macOS SIP, next try"
},
{
"msg_contents": "On 05.03.21 01:36, Tom Lane wrote:\n> Hmm. So I tried this, ie \"csrutil enable --without debug\" in the\n> recovery system, and after rebooting what I see is\n> \n> $ csrutil status\n> System Integrity Protection status: unknown (Custom Configuration).\n> \n> Configuration:\n> Apple Internal: disabled\n> Kext Signing: enabled\n> Filesystem Protections: disabled\n> Debugging Restrictions: enabled\n> DTrace Restrictions: enabled\n> NVRAM Protections: enabled\n> BaseSystem Verification: enabled\n> \n> This is an unsupported configuration, likely to break in the future and leave your machine in an unknown state.\n> $\n> \n> which is, shall we say, not the set of options the command appeared\n> to select. It does work, in the sense that \"make check\" is able\n> to complete without having an installation tree. But really, Apple\n> is doing their level best to hang a \"here be dragons\" sign on this.\n\nYeah, you'd think --without-debug would make \"Debugging Restrictions\" \ndisabled. And it used to do that in older macOS versions. My \nassessment is that it actually does work, as you also observed, but that \nthe status display is somehow displaying things incorrectly.\n\n> I'm not comfortable with recommending it, and I'm about to go\n> turn it off again, because I have no damn idea what it really does.\n\nOkay. Interested users can find it at their own risk in this thread or \nsimilar ones elsewhere.\n\nI'm going to close this CF entry now. This is apparently as far as we \ncan take it for now.\n\n\n",
"msg_date": "Mon, 8 Mar 2021 10:20:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: macOS SIP, next try"
}
] |
[
{
"msg_contents": "Hi,\n\nPostgreSQL allows writing custom encoding conversion functions between \nany character encodings, using the CREATE CONVERSION command. It's \npretty flexible, you can define default and non-default conversions, and \nthe conversions live in schemas so you can have multiple conversions \ninstalled in a system and you can switch between them by changing \nsearch_path.\n\nHowever:\n\nWe never use non-default conversions for anything. All code that \nperforms encoding conversions only cares about the default ones.\n\nI think this flexibility is kind of ridiculous anyway. If a built-in \nconversion routine doesn't handle some characters correctly, surely we \nshould fix the built-in function, rather than provide a mechanism for \nhaving your own conversion functions. If you truly need to perform the \nconversions differently than the built-in routines do, you can always \nperform the conversion in the client instead.\n\nNote that we don't support adding completely new custom encodings, all \nthis is just for conversions between the built-in encodings that we have.\n\nI propose that we add a notice to the CREATE CONVERSION docs to say that \nit is deprecated, and remove it in a few years.\n\nAny objections? Anyone using custom encoding conversions in production?\n\n- Heikki\n\n\n",
"msg_date": "Wed, 2 Dec 2020 18:04:11 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Deprecate custom encoding conversions"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> I propose that we add a notice to the CREATE CONVERSION docs to say that \n> it is deprecated, and remove it in a few years.\n\nWhile I agree that it's probably not that useful, what would we gain\nby removing it? If you intend to remove those catalogs, what lookup\nmechanism would replace them? We can't exactly drop the encoding\nconversion functionality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Dec 2020 11:18:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "On 02/12/2020 18:18, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> I propose that we add a notice to the CREATE CONVERSION docs to say that\n>> it is deprecated, and remove it in a few years.\n> \n> While I agree that it's probably not that useful, what would we gain\n> by removing it? If you intend to remove those catalogs, what lookup\n> mechanism would replace them? We can't exactly drop the encoding\n> conversion functionality.\n\nI'm thinking of replacing the catalog with a hard-coded 2D array of \nfunction pointers. Or something along those lines.\n\nI had this idea while looking at the encoding conversions performed in \nCOPY. The current conversion functions return a null-terminated, \npalloc'd string, which is a bit awkward for the callers. The caller \nneeds to call strlen() on the result, and it would be nice to reuse the \nsame buffer for all the conversions. And I've got a hunch that it'd be \nfaster to convert the whole 64 kB raw input buffer in one go, rather \nthan convert each line separately, but the current function signature \ndoesn't make that easy either.\n\nSo I'm looking for refactoring the conversion routines to be more \nconvenient for the callers. But the current function signature is \nexposed at the SQL level, thanks to CREATE CONVERSION.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 2 Dec 2020 18:31:53 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 02/12/2020 18:18, Tom Lane wrote:\n>> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>>> I propose that we add a notice to the CREATE CONVERSION docs to say that\n>>> it is deprecated, and remove it in a few years.\n\n>> While I agree that it's probably not that useful, what would we gain\n>> by removing it? If you intend to remove those catalogs, what lookup\n>> mechanism would replace them? We can't exactly drop the encoding\n>> conversion functionality.\n\n> I'm thinking of replacing the catalog with a hard-coded 2D array of \n> function pointers. Or something along those lines.\n\nI like the current structure in which the encoding functions are in\nseparate .so's, so that you don't load the ones you don't need.\nIt's not real clear how to preserve that if we hard-code things.\n\n> So I'm looking for refactoring the conversion routines to be more \n> convenient for the callers. But the current function signature is \n> exposed at the SQL level, thanks to CREATE CONVERSION.\n\nI'd be the first to agree that the current API for conversion functions is\nnot terribly well-designed. But what if we just change it? That can't be\nany worse of a compatibility hit than removing CREATE CONVERSION\naltogether.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Dec 2020 13:02:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "From: Heikki Linnakangas <hlinnaka@iki.fi>\r\n> I propose that we add a notice to the CREATE CONVERSION docs to say that\r\n> it is deprecated, and remove it in a few years.\r\n> \r\n> Any objections? Anyone using custom encoding conversions in production?\r\n\r\nI can't answer deeper questions because I'm not familiar with character sets, but I saw some Japanese web articles that use CREATE CONVERSION to handle external characters. So, I think we may as well retain it. (OTOH, I'm wondering how other DBMSs support external characters and what Postgres should implement it.)\r\n\r\nAlso, the SQL standard has CREATE CHARACTER SET and CREATE TRANSLATION. I don't know how these are useful, but the mechanism of CREATE CONVERSION can be used to support them.\r\n\r\n\r\nCREATE CHARACTER SET <character set name> [ AS ]\r\n\r\n<character set source> [ <collate clause> ]\r\n\r\n\r\nCREATE TRANSLATION <transliteration name> FOR <source character set specification>\r\n\r\nTO <target character set specification> FROM <transliteration source>\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 3 Dec 2020 00:54:56 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Deprecate custom encoding conversions"
},
{
"msg_contents": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> From: Heikki Linnakangas <hlinnaka@iki.fi>\n>> Any objections? Anyone using custom encoding conversions in production?\n\n> I can't answer deeper questions because I'm not familiar with character sets, but I saw some Japanese web articles that use CREATE CONVERSION to handle external characters. So, I think we may as well retain it. (OTOH, I'm wondering how other DBMSs support external characters and what Postgres should implement it.)\n\nI recall a discussion in which someone explained that things are messy for\nJapanese because there's not really just one version of SJIS; there are\nseveral, because various groups extended the original standard in not-\nquite-compatible ways. In turn this means that there's not really one\nwell-defined conversion to/from Unicode --- which is the reason why\nallowing custom conversions seemed like a good idea. I don't know\nwhether anyone over there is actually using custom conversions, but\nI'd be hesitant to just nuke the feature altogether.\n\nHaving said that, it doesn't seem like the conversion API is necessarily\nany more set in stone than any of our other C-level APIs. We break things\nat the C API level whenever there's sufficient reason.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Dec 2020 20:47:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 12:54:56AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> I can't answer deeper questions because I'm not familiar with\n> character sets, but I saw some Japanese web articles that use CREATE\n> CONVERSION to handle external characters. So, I think we may as\n> well retain it. (OTOH, I'm wondering how other DBMSs support\n> external characters and what Postgres should implement it.)\n\nTsunakawa-san, could you post a link to this article, if possible? I\nam curious about their problem and why they used CREATE CONVERSION as\na way to solve it. That's fine even if it is in Japanese.\n--\nMichael",
"msg_date": "Thu, 3 Dec 2020 10:50:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "> I recall a discussion in which someone explained that things are messy for\n> Japanese because there's not really just one version of SJIS; there are\n> several, because various groups extended the original standard in not-\n> quite-compatible ways. In turn this means that there's not really one\n> well-defined conversion to/from Unicode\n\nThat's true.\n\n> --- which is the reason why\n> allowing custom conversions seemed like a good idea. I don't know\n> whether anyone over there is actually using custom conversions, but\n> I'd be hesitant to just nuke the feature altogether.\n\nBy Googling I found an instance which is using CREATE CONVERSION\n(Japanese article).\n\nhttp://grep.blog49.fc2.com/blog-entry-87.html\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 03 Dec 2020 11:27:28 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "From: Michael Paquier <michael@paquier.xyz>\n> Tsunakawa-san, could you post a link to this article, if possible? I am curious\n> about their problem and why they used CREATE CONVERSION as a way to\n> solve it. That's fine even if it is in Japanese.\n\nI just pulled info from my old memory in my previous mail. Now the following links seem to be relevant. (They should be different from what I saw in the past.)\n\nhttps://ml.postgresql.jp/pipermail/pgsql-jp/2007-January/013103.html\n\nhttps://teratail.com/questions/295988\n\nIIRC, the open source Postgres extension EUDC also uses CREATE CONVERSION.\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Thu, 3 Dec 2020 02:48:13 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Deprecate custom encoding conversions"
},
{
"msg_contents": "\n\nOn 2020/12/03 11:48, tsunakawa.takay@fujitsu.com wrote:\n> From: Michael Paquier <michael@paquier.xyz>\n>> Tsunakawa-san, could you post a link to this article, if possible? I am curious\n>> about their problem and why they used CREATE CONVERSION as a way to\n>> solve it. That's fine even if it is in Japanese.\n> \n> I just pulled info from my old memory in my previous mail. Now the following links seem to be relevant. (They should be different from what I saw in the past.)\n> \n> https://ml.postgresql.jp/pipermail/pgsql-jp/2007-January/013103.html\n> \n> https://teratail.com/questions/295988\n> \n> IIRC, the open source Postgres extension EUDC also uses CREATE CONVERSION.\n\nFWIW, about four years ago, for a real project, I wrote\nthe extension [1] that provides yet another encoding conversion\nfrom UTF-8 to EUC_JP, to handle some external characters.\nThis extension uses CREATE CONVERSION.\n\n[1]\nhttps://github.com/MasaoFujii/pg_fallback_utf8_to_euc_jp\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 3 Dec 2020 12:57:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "\n\nOn 2020/12/03 1:04, Heikki Linnakangas wrote:\n> Hi,\n> \n> PostgreSQL allows writing custom encoding conversion functions between any character encodings, using the CREATE CONVERSION command. It's pretty flexible, you can define default and non-default conversions, and the conversions live in schemas so you can have multiple conversions installed in a system and you can switch between them by changing search_path.\n> \n> However:\n> \n> We never use non-default conversions for anything. All code that performs encoding conversions only cares about the default ones.\n\nYes. I had to update pg_conversion.condefault directly so that we can\nuse custom encoding when I registered it. The direct update of\npg_conversion is of course not good thing. So I was wondering\nif we should have something like ALTER CONVERSION SET DEFAULT.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 3 Dec 2020 12:58:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2020/12/03 1:04, Heikki Linnakangas wrote:\n>> We never use non-default conversions for anything. All code that performs encoding conversions only cares about the default ones.\n\n> Yes. I had to update pg_conversion.condefault directly so that we can\n> use custom encoding when I registered it. The direct update of\n> pg_conversion is of course not good thing. So I was wondering\n> if we should have something like ALTER CONVERSION SET DEFAULT.\n\nPerhaps, but what I'd think is more urgent is having some way for\na session to select a non-default conversion for client I/O.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Dec 2020 23:07:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
},
{
"msg_contents": "\n\nOn 2020/12/03 13:07, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> On 2020/12/03 1:04, Heikki Linnakangas wrote:\n>>> We never use non-default conversions for anything. All code that performs encoding conversions only cares about the default ones.\n> \n>> Yes. I had to update pg_conversion.condefault directly so that we can\n>> use custom encoding when I registered it. The direct update of\n>> pg_conversion is of course not good thing. So I was wondering\n>> if we should have something like ALTER CONVERSION SET DEFAULT.\n> \n> Perhaps, but what I'd think is more urgent is having some way for\n> a session to select a non-default conversion for client I/O.\n\n+1\n\nWhat about adding a new libpq parameter like encoding_conversion\n(like client_encoding) so that a client can specify what conversion to use?\nThat setting is sent to the server while establishing the connection.\nProbably we would need to verify that the setting of this new parameter\nis consistent with that of client_encoding.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 3 Dec 2020 13:18:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Deprecate custom encoding conversions"
}
] |
[
{
"msg_contents": "Attached is a patch for key management, which will eventually be part of\ncluster file encryption (CFE), called TDE (Transparent Data Encryption)\nby Oracle. It is an update of Masahiko Sawada's patch from July 31:\n\n\thttps://www.postgresql.org/message-id/CA+fd4k6RJwNvZTro3q2f5HSDd8HgyUc4CuY9U3e6Ran4C6TO4g@mail.gmail.com\n\nSawada-san did all the hard work, and I just redirected the patch. The\ngeneral outline of this CFE feature can be seen here:\n\n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n\nThe currently planned progression for this feature is to allow secure\nretrieval of key encryption keys (KEK) outside of the database, then use\nthose to encrypt data keys that encrypt heap/index/tmpfile files.\n\nThis patch was changed from Masahiko Sawada's version by removing\nreferences to non-cluster file encryption because having SQL-level keys\nstored in this way was considered to have limited usefulness. I have\nalso removed references to SQL-level KEK rotation since that is best done\nas a command-line tool.\n\nIf most people approve of this general approach, and the design\ndecisions made, I would like to apply this in the next few weeks, but\nthis brings complications. The syntax added by this commit might not\nprovide a useful feature until PG 15, so how do we hide it from users. \nI was thinking of not applying the doc changes (or commenting them out)\nand commenting out the --help output.\n\nOnce this patch is applied, I would like to use the /dev/tty file\ndescriptor passing feature for the ssl_passphrase_command parameter, so\nthe ssl passphrase can be prompted from the terminal. (I am attaching\npass_fd.sh as a POC for how file descriptor handling works.) I would\nalso then write the KEK rotation command-line tool. 
After that, we can\nstart working on heap/index/tmpfile encryption using this patch as a\nbase.\n\nHere is an example of the current patch in action, using a KEK like\n'7CE7945EA2F7322536F103E4D7D91C21E52288089EF99D186B9A76D666EBCA66',\nwhich is not echoed to the terminal:\n\n\t$ initdb -R -c '/u/postgres/tmp/pass_fd.sh \"Enter password:\" %R'\n\tThe files belonging to this database system will be owned by user \"postgres\".\n\tThis user must also own the server process.\n\t\n\tThe database cluster will be initialized with locale \"en_US.UTF-8\".\n\tThe default database encoding has accordingly been set to \"UTF8\".\n\tThe default text search configuration will be set to \"english\".\n\t\n\tData page checksums are disabled.\n\tCluster file encryption is enabled.\n\t\n\tfixing permissions on existing directory /u/pgsql/data ... ok\n\tcreating subdirectories ... ok\n\tselecting dynamic shared memory implementation ... posix\n\tselecting default max_connections ... 100\n\tselecting default shared_buffers ... 128MB\n\tselecting default time zone ... America/New_York\n\tcreating configuration files ... ok\n\trunning bootstrap script ...\n-->\tEnter password:ok\n\tperforming post-bootstrap initialization ...\n-->\tEnter password:ok\n\tsyncing data to disk ... ok\n\t\n\tinitdb: warning: enabling \"trust\" authentication for local connections\n\tYou can change this by editing pg_hba.conf or using the option -A, or\n\t--auth-local and --auth-host, the next time you run initdb.\n\t\n\tSuccess. 
You can now start the database server using:\n\t\n\t pg_ctl -D /u/pgsql/data -l logfile start\n\t\n\t$ pg_ctl -R -l /u/pg/server.log start\n\twaiting for server to start...\n-->\tEnter password: done\n\tserver started\n\nA non-matching passphrase looks like this:\n\n\t$ pg_ctl -R -l /u/pg/server.log start\n\twaiting for server to start...\n-->\tEnter password: stopped waiting\n\tpg_ctl: could not start server\n\tExamine the log output.\n\n\t$ tail -2 /u/pg/server.log\n\t2020-12-02 16:32:34.045 EST [13056] FATAL: cluster passphrase does not match expected passphrase\n\t2020-12-02 16:32:34.047 EST [13056] LOG: database system is shut down\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Wed, 2 Dec 2020 16:38:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Proposed patch for key managment"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 04:38:14PM -0500, Bruce Momjian wrote:\n> If most people approve of this general approach, and the design\n> decisions made, I would like to apply this in the next few weeks, but\n> this brings complications. The syntax added by this commit might not\n> provide a useful feature until PG 15, so how do we hide it from users. \n> I was thinking of not applying the doc changes (or commenting them out)\n> and commenting out the --help output.\n\nHere is an updated patch to handle the new hash API introduced by\ncommit 87ae9691d2.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Fri, 4 Dec 2020 21:08:03 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 09:08:03PM -0500, Bruce Momjian wrote:\n> Here is an updated patch to handle the new hash API introduced by\n> commit 87ae9691d2.\n\n+ if (!ossl_initialized)\n+ {\n+#ifdef HAVE_OPENSSL_INIT_SSL\n+ OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL);\n+#else\n+ OPENSSL_config(NULL);\n+ SSL_library_init();\n+ SSL_load_error_strings();\n+#endif\n+ ossl_initialized = true;\nThis is a duplicate of what's done in be-secure-openssl.c, and it does\nnot strike me as a good idea to do that potentially twice.\n\ngit diff --check complains.\n\n+extern bool pg_HMAC_SHA512(const uint8 *key,\n+ const uint8 *in, int inlen,\n+ uint8 *out);\nI think that the split done in this patch makes the HMAC handling in\nthe core code messier:\n- SCRAM makes use of HMAC internally, and we should try to use the\nHMAC of OpenSSL if building with it even for SCRAM.\n- For the first reason, I think that we should also have a fallback\nimplementation.\n- This API layer should not depend directly on the SHA2 used (SCRAM\nuses SHA256 with HMAC).\nFWIW, I got plans to work on that once I am done with the business\naround MD5 and OpenSSL.\n\nThe refactoring done with the ciphers moved from pgcrypto to\nsrc/common/ should be a separate patch. In short, it would be good to\nrework this patch and split it into pieces that are independently\nuseful. This would make the review much easier as well.\n--\nMichael",
"msg_date": "Sat, 5 Dec 2020 11:39:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, 5 Dec 2020 at 11:08, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Dec 2, 2020 at 04:38:14PM -0500, Bruce Momjian wrote:\n> > If most people approve of this general approach, and the design\n> > decisions made, I would like to apply this in the next few weeks, but\n> > this brings complications. The syntax added by this commit might not\n> > provide a useful feature until PG 15, so how do we hide it from users.\n> > I was thinking of not applying the doc changes (or commenting them out)\n> > and commenting out the --help output.\n>\n> Here is an updated patch to handle the new hash API introduced by\n> commit 87ae9691d2.\n>\n\nThank you for updating the patch and moving forward!\n\nI've tried to use this patch on my environment (FreeBSD 12.1) but it\ndoesn't work. I got the following error:\n\n$ bin/initdb -D data -E UTF8 --no-locale -c'echo\n\"1234567890123456789012345678901234567890123456789012345678901234567890\"'\nThe files belonging to this database system will be owned by user \"masahiko\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale \"C\".\nThe default text search configuration will be set to \"english\".\n\nData page checksums are disabled.\nCluster file encryption is enabled.\n\ncreating directory data ... ok\ncreating subdirectories ... ok\nselecting dynamic shared memory implementation ... posix\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default time zone ... Japan\ncreating configuration files ... ok\nrunning bootstrap script ... 
child process was terminated by signal\n10: Bus error\ninitdb: removing data directory \"data\"\n\nThe backtrace is:\n\n(lldb) bt\n* thread #1, name = 'postgres', stop reason = signal SIGBUS: hardware error\n frame #0: 0x0000000000d3b074\npostgres`ResourceArrayRemove(resarr=0x7f7f7f7f7f7f80df,\nvalue=34383251232) at resowner.c:308:23\n frame #1: 0x0000000000d3c0ef\npostgres`ResourceOwnerForgetCryptoHash(owner=0x7f7f7f7f7f7f7f7f,\nhandle=34383251232) at resowner.c:1421:7\n frame #2: 0x0000000000d77c54\npostgres`pg_cryptohash_free(ctx=0x000000080166c720) at\ncryptohash_openssl.c:228:3\n * frame #3: 0x0000000000d6c065\npostgres`kmgr_derive_keys(passphrase=\"1234567890123456789012345678901234567890123456789012345678901234567890\\n\",\npasslen=71, enckey=\"\\x01d\n\n\\x17(\\x93\\xa8id\\xae\\xa5\\x86\\x84\\xb9Y>\\xa84\\x16\\U00000085\\U0000009d'\\r\\x11123456789012345678901234567890\n1234567890123456789012345678901234567890\\n\",\nmackey=\"0\\x1f\\xb4_eg\\x95\\x02\\x9e2\\x0e\\xba\\t\\b^\\x18\\x90U\\xa0\\x8e\\xaei\\x01oYJL\\xa4\\xb3E\\x97a,\\x06h4\\x9fX\\x9eS\\xb2\\x81%^d\\xa4\\x01d\n\n\\x17(\\x93\\xa8id\\xae\\xa5\\x86\\x84\\xb9Y>\\xa84\\x16\\U00000085\\U0000009d'\\r\\x1112345678901234567890123456789\n01234567890123456789012345678901234567890\\n\") at kmgr_utils.c:239:2\n frame #4: 0x0000000000d60810 postgres`BootStrapKmgr at kmgr.c:93:2\n frame #5: 0x0000000000647fa1 postgres`BootStrapXLOG at xlog.c:5359:3\n frame #6: 0x000000000066f034 postgres`AuxiliaryProcessMain(argc=7,\nargv=0x00007fffffffcdb8) at bootstrap.c:450:4\n frame #7: 0x00000000008e9cfb postgres`main(argc=8,\nargv=0x00007fffffffcdb0) at main.c:201:3\n frame #8: 0x000000000051010f postgres`_start(ap=<unavailable>,\ncleanup=<unavailable>) at crt1.c:76:7\n\nInvestigating the issue, it seems we need to initialize the context\nand the state with 0. 
The following change fixes this issue.\n\ndiff --git a/src/common/cryptohash_openssl.c b/src/common/cryptohash_openssl.c\nindex e5233daab6..a45c86fa67 100644\n--- a/src/common/cryptohash_openssl.c\n+++ b/src/common/cryptohash_openssl.c\n@@ -81,6 +81,8 @@ pg_cryptohash_create(pg_cryptohash_type type)\n return NULL;\n }\n\n+ memset(ctx, 0, sizeof(pg_cryptohash_ctx));\n+ memset(state, 0, sizeof(pg_cryptohash_state));\n ctx->data = state;\n ctx->type = type;\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 5 Dec 2020 12:15:13 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, Dec 5, 2020 at 11:39:18AM +0900, Michael Paquier wrote:\n> On Fri, Dec 04, 2020 at 09:08:03PM -0500, Bruce Momjian wrote:\n> > Here is an updated patch to handle the new hash API introduced by\n> > commit 87ae9691d2.\n> \n> + if (!ossl_initialized)\n> + {\n> +#ifdef HAVE_OPENSSL_INIT_SSL\n> + OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL);\n> +#else\n> + OPENSSL_config(NULL);\n> + SSL_library_init();\n> + SSL_load_error_strings();\n> +#endif\n> + ossl_initialized = true;\n> This is a duplicate of what's done in be-secure-openssl.c, and it does\n> not strike me as a good idea to do that potentially twice.\n\nYeah, I kind of wondered about that. In fact, the code from the\noriginal patch would not compile so I got this init code from somewhere\nelse. I have now removed it and it works fine. :-)\n\n> git diff --check complains.\n\nUh, can you be more specific? I don't see any output from that command.\n\n> +extern bool pg_HMAC_SHA512(const uint8 *key,\n> + const uint8 *in, int inlen,\n> + uint8 *out);\n> I think that the split done in this patch makes the HMAC handling in\n> the core code messier:\n> - SCRAM makes use of HMAC internally, and we should try to use the\n> HMAC of OpenSSL if building with it even for SCRAM.\n> - For the first reason, I think that we should also have a fallback\n> implementation.\n> - This API layer should not depend directly on the SHA2 used (SCRAM\n> uses SHA256 with HMAC).\n> FWIW, I got plans to work on that once I am done with the business\n> around MD5 and OpenSSL.\n\nUh, I just kind of kept all that code and didn't modify it. It would be\ngreat if you can help me improve it. I will be using the hash code for\nthe command-line tool that alters the passphrase, so having that in\ncommon/ does help me.\n\n> The refactoring done with the ciphers moved from pgcrypto to\n> src/common/ should be a separate patch. 
In short, it would be good to\n\nUh, I am kind of unclear exactly what was done there since I just took\nthat part of the patch unchanged.\n\n> rework this patch and split it into pieces that are independently\n> useful. This would make the review much easier as well.\n\nI can break out the -R/file descriptor passing part as a separate patch,\nand have the ssl_passphrase_command use that, but that's the only part I\nknow can be useful on its own.\n\nSince the patch is large, I found a way to push the branch to git and\nhow to make a download link that tracks whatever I push to the 'key'\nbranch on my github account. Here is the updated patch link:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 4 Dec 2020 22:32:45 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, Dec 5, 2020 at 12:15:13PM +0900, Masahiko Sawada wrote:\n> diff --git a/src/common/cryptohash_openssl.c b/src/common/cryptohash_openssl.c\n> index e5233daab6..a45c86fa67 100644\n> --- a/src/common/cryptohash_openssl.c\n> +++ b/src/common/cryptohash_openssl.c\n> @@ -81,6 +81,8 @@ pg_cryptohash_create(pg_cryptohash_type type)\n> return NULL;\n> }\n> \n> + memset(ctx, 0, sizeof(pg_cryptohash_ctx));\n> + memset(state, 0, sizeof(pg_cryptohash_state));\n> ctx->data = state;\n> ctx->type = type;\n\nOK, I worked with Sawada-san and added the attached patch. The updated\nfull patch is at the same URL: :-)\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Fri, 4 Dec 2020 22:52:29 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 10:52:29PM -0500, Bruce Momjian wrote:\n> OK, I worked with Sawada-san and added the attached patch. The updated\n> full patch is at the same URL: :-)\n> \n> \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\nOh, I see that you use SHA256 during firstboot, which is why you\nbypass the resowner paths in cryptohash_openssl.c. Wouldn't it be\nbetter to use IsBootstrapProcessingMode() then?\n\n> @@ -72,14 +72,15 @@ pg_cryptohash_create(pg_cryptohash_type type)\n> \tctx = ALLOC(sizeof(pg_cryptohash_ctx));\n> \tif (ctx == NULL)\n> \t\treturn NULL;\n> +\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> \n> \tstate = ALLOC(sizeof(pg_cryptohash_state));\n> \tif (state == NULL)\n> \t{\n> -\t\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> \t\tFREE(ctx);\n> \t\treturn NULL;\n> \t}\n> +\texplicit_bzero(state, sizeof(pg_cryptohash_state));\n\nexplicit_bzero() is used to give the guarantee that any sensitive data\ngets cleaned up after an error or on free. So I think that your use\nis incorrect here for an initialization.\n\n> \tif (state->evpctx == NULL)\n> \t{\n> -\t\texplicit_bzero(state, sizeof(pg_cryptohash_state));\n> -\t\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> #ifndef FRONTEND\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n\nAnd this original position is IMO more correct.\n\nAnyway, at quick glance, the backtrace of upthread is indicating a\ndouble-free with an attempt to free a resource owner that has been\nalready free'd.\n--\nMichael",
"msg_date": "Sat, 5 Dec 2020 21:54:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, Dec 5, 2020 at 09:54:33PM +0900, Michael Paquier wrote:\n> On Fri, Dec 04, 2020 at 10:52:29PM -0500, Bruce Momjian wrote:\n> > OK, I worked with Sawada-san and added the attached patch. The updated\n> > full patch is at the same URL: :-)\n> > \n> > \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n> \n> Oh, I see that you use SHA256 during firstboot, which is why you\n> bypass the resowner paths in cryptohash_openssl.c. Wouldn't it be\n> better to use IsBootstrapProcessingMode() then?\n\nI tried that, but we also use the resource references before the\nresource system is started even in non-bootstrap mode. Maybe we should\nbe creating a resource owner for this, but it is so early in the boot\nprocess I don't know if that is safe.\n\n> > @@ -72,14 +72,15 @@ pg_cryptohash_create(pg_cryptohash_type type)\n> > \tctx = ALLOC(sizeof(pg_cryptohash_ctx));\n> > \tif (ctx == NULL)\n> > \t\treturn NULL;\n> > +\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> > \n> > \tstate = ALLOC(sizeof(pg_cryptohash_state));\n> > \tif (state == NULL)\n> > \t{\n> > -\t\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> > \t\tFREE(ctx);\n> > \t\treturn NULL;\n> > \t}\n> > +\texplicit_bzero(state, sizeof(pg_cryptohash_state));\n> \n> explicit_bzero() is used to give the guarantee that any sensitive data\n> gets cleaned up after an error or on free. So I think that your use\n> is incorrect here for an initialization.\n\nIt turns out what we were missing was a clear of state->resowner in\ncases where the resowner was null. 
I removed the bzero calls completely\nand it now runs fine.\n\n> > \tif (state->evpctx == NULL)\n> > \t{\n> > -\t\texplicit_bzero(state, sizeof(pg_cryptohash_state));\n> > -\t\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> > #ifndef FRONTEND\n> > \t\tereport(ERROR,\n> > \t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> \n> And this original position is IMO more correct.\n\nDo we even need them?\n\n> Anyway, at quick glance, the backtrace of upthread is indicating a\n> double-free with an attempt to free a resource owner that has been\n> already free'd.\n\nI think that is now fixed too. Updated patch at the same URL:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sat, 5 Dec 2020 10:42:05 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sun, 6 Dec 2020 at 00:42, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, Dec 5, 2020 at 09:54:33PM +0900, Michael Paquier wrote:\n> > On Fri, Dec 04, 2020 at 10:52:29PM -0500, Bruce Momjian wrote:\n> > > OK, I worked with Sawada-san and added the attached patch. The updated\n> > > full patch is at the same URL: :-)\n> > >\n> > > https://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n> >\n> > Oh, I see that you use SHA256 during firstboot, which is why you\n> > bypass the resowner paths in cryptohash_openssl.c. Wouldn't it be\n> > better to use IsBootstrapProcessingMode() then?\n>\n> I tried that, but we also use the resource references before the\n> resource system is started even in non-bootstrap mode. Maybe we should\n> be creating a resource owner for this, but it is so early in the boot\n> process I don't know if that is safe.\n>\n> > > @@ -72,14 +72,15 @@ pg_cryptohash_create(pg_cryptohash_type type)\n> > > ctx = ALLOC(sizeof(pg_cryptohash_ctx));\n> > > if (ctx == NULL)\n> > > return NULL;\n> > > + explicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> > >\n> > > state = ALLOC(sizeof(pg_cryptohash_state));\n> > > if (state == NULL)\n> > > {\n> > > - explicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> > > FREE(ctx);\n> > > return NULL;\n> > > }\n> > > + explicit_bzero(state, sizeof(pg_cryptohash_state));\n> >\n> > explicit_bzero() is used to give the guarantee that any sensitive data\n> > gets cleaned up after an error or on free. So I think that your use\n> > is incorrect here for an initialization.\n>\n> It turns out what we were missing was a clear of state->resowner in\n> cases where the resowner was null. 
I removed the bzero calls completely\n> and it now runs fine.\n>\n> > > if (state->evpctx == NULL)\n> > > {\n> > > - explicit_bzero(state, sizeof(pg_cryptohash_state));\n> > > - explicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> > > #ifndef FRONTEND\n> > > ereport(ERROR,\n> > > (errcode(ERRCODE_OUT_OF_MEMORY),\n> >\n> > And this original position is IMO more correct.\n>\n> Do we even need them?\n>\n> > Anyway, at quick glance, the backtrace of upthread is indicating a\n> > double-free with an attempt to free a resource owner that has been\n> > already free'd.\n>\n> I think that is now fixed too. Updated patch at the same URL:\n>\n> https://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\nThank you for updating the patch!\n\nI think we need explicit_bzero() also in freeing the keywrap context.\n\nBTW, when we need -R option pg_ctl command to start the server, how\ncan we start it in the single-user mode?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 7 Dec 2020 09:30:03 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, Dec 05, 2020 at 10:42:05AM -0500, Bruce Momjian wrote:\n> On Sat, Dec 5, 2020 at 09:54:33PM +0900, Michael Paquier wrote:\n>> On Fri, Dec 04, 2020 at 10:52:29PM -0500, Bruce Momjian wrote:\n>>> \tif (state->evpctx == NULL)\n>>> \t{\n>>> -\t\texplicit_bzero(state, sizeof(pg_cryptohash_state));\n>>> -\t\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n>>> #ifndef FRONTEND\n>>> \t\tereport(ERROR,\n>>> \t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n>> \n>> And this original position is IMO more correct.\n> \n> Do we even need them?\n\nThat's the intention to clean up things in a consistent way.\n--\nMichael",
"msg_date": "Mon, 7 Dec 2020 11:03:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 09:30:03AM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch!\n> \n> I think we need explicit_bzero() also in freeing the keywrap context.\n\npg_cryptohash_free() already has this:\n\n explicit_bzero(state, sizeof(pg_cryptohash_state));\n explicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n\nDo we need more?\n\n> BTW, when we need -R option pg_ctl command to start the server, how\n> can we start it in the single-user mode?\n\nI added code for that, but I hadn't tested it yet. Now that I tried it,\nI realized that it is awkward to supply a file descriptor number (that\nwill be closed) from the command-line, so I added code and docs to allow\n-1 to duplicate standard error, and it worked:\n\n\t$ postgres --single -R -1 -D /u/pg/data\n\t\n\tEnter password:\n\tPostgreSQL stand-alone backend 14devel\n\tbackend> select 100;\n\t 1: ?column? (typeid = 23, len = 4, typmod = -1, byval = t)\n\t ----\n\t 1: ?column? = \"100\" (typeid = 23, len = 4, typmod = -1, byval = t)\n\t ----\n\nUpdated patch at the same URL:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sun, 6 Dec 2020 23:42:23 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 11:03:30AM +0900, Michael Paquier wrote:\n> On Sat, Dec 05, 2020 at 10:42:05AM -0500, Bruce Momjian wrote:\n> > On Sat, Dec 5, 2020 at 09:54:33PM +0900, Michael Paquier wrote:\n> >> On Fri, Dec 04, 2020 at 10:52:29PM -0500, Bruce Momjian wrote:\n> >>> \tif (state->evpctx == NULL)\n> >>> \t{\n> >>> -\t\texplicit_bzero(state, sizeof(pg_cryptohash_state));\n> >>> -\t\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> >>> #ifndef FRONTEND\n> >>> \t\tereport(ERROR,\n> >>> \t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> >> \n> >> And this original position is IMO more correct.\n> > \n> > Do we even need them?\n> \n> That's the intention to clean up things in a consistent way.\n\nAh, I see your point. Added:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sun, 6 Dec 2020 23:46:12 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "> On 2 Dec 2020, at 22:38, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> Attached is a patch for key management, which will eventually be part of\n> cluster file encryption (CFE), called TDE (Transparent Data Encryption)\n> by Oracle.\n\nThe attackvector protected against seems to be operating systems users\n(*without* any legitimate database access) gaining access to the data files.\nSuch an attacker would also gain access to postgresql.conf and therefore may\nhave the cluster passphrase command in case it's stored there. That would\nimply the attacker is likely (or perhaps better phrased \"not entirely\nunlikely\") to be able to run that command and decrypt the data *iff* there is\nno interactive/2fa aspect to the passphrase command. Is that an accurate\ninterpretation? Do Oracle TDE et.al use a similar setup where an database\nrestart require manual intervention?\n\nAdmittedly I haven't read the other threads on the topic so if it's discussed\nsomewhere else then I'd appreciate a whacking with a cluestick.\n\nA few other comments from skimming the patch:\n\n+ is data encryption keys, specifically keys zero and one. Key zero is\n+ the key uses to encrypt database heap and index files which are stored in\n\".. is the key used to ..\"?\n\n+ matches the initdb-supplied passphrase. If this check fails, and the\n+ the server will refuse to start.\nSeems like something is missing, perhaps just s/and the//?\n\n+ <ulink url=\"https://tools.ietf.org/html/rfc2315\">RFC2315</ulink>.</simpara>\nTiny nitpick: turns out that RFC 2315 (with a space) is the correct spelling.\n\n+ * are read-only. All wrapping and unwrapping key routines depends on\n+ * the openssl library for now.\nOpenSSL is a name so it should be cased as OpenSSL in text like this.\n\n #include <openssl/ssl.h>\n+#include <openssl/conf.h>\nWhy do we need this header in be-secure-openssl.c? There are no other changes\nto that file?\n\n+/* Define to 1 if you have the `OPENSSL_init_crypto' function. */\n+#undef HAVE_OPENSSL_INIT_CRYPTO\nThis seems to be unused?\n\nKmgrSaveCryptoKeys doesn't seem to be explicitly clamping down permissions on\nthe generated file, is that something we want for belt-and-suspender level\nparanoia around keys? The same goes for read_one_keyfile etc.\n\nAlso, KmgrSaveCryptoKeys assumes that keys are stored in plain files which is\ntrue for OpenSSL but won't necessarily be true for other crypto libraries.\nWould it make sense to have an API in be-kmgr.c with the OpenSSL specifics in\nbe-kmgr-openssl.c like how the TLS backend support is written? We have spent a\nlot effort in making the backend not assume a particular TLS library, it seems\na shame to step away from that here with a tight coupling. (yes, I am biased)\n\nThe same goes for src/common/cipher.c which wraps calls in ifdefs, why not just\nhave an thin wrapper API which cipher-openssl.c implements?\n\n+ case 'K':\n+ file_encryption_keylen = atoi(optarg);\n+ break;\natoi() will return zero on failing to parse the keylen, where 0 implies\n\"disabled\". Wouldn't it make sense to parse this with code which can't\nsilently fall back on disabling in case of users mistyping?\n\n+ * Skip cryptographic keys. It's generally not good idea to copy the\n\".. not *a* good idea ..\"\n\n+ pg_log_fatal(\"cluser passphrase referenced %%R, but -R not specified\");\ns/cluser/cluster/ (there are few copy-pasteos of that elsewhere too)\n\n+ elog(ERROR, \"too many cryptographic kes\");\ns/kes/keys/\n\n+#ifndef CIPHER_H\n+#define CIPHER_H\nThe risk for headerguard collision with non-PG headers seem quite high, maybe\nPG_CIPHER_H would be better?\n\nI will try to dive in a bit deeper over the holidays.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 9 Dec 2020 22:34:59 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 2 Dec 2020, at 22:38, Bruce Momjian <bruce@momjian.us> wrote:\n> > Attached is a patch for key management, which will eventually be part of\n> > cluster file encryption (CFE), called TDE (Transparent Data Encryption)\n> > by Oracle.\n> \n> The attackvector protected against seems to be operating systems users\n> (*without* any legitimate database access) gaining access to the data files.\n\nThat isn't the only attack vector that this addresses (though it is one\nthat this is envisioned to help with- to wit, someone rsync'ing the DB\ndirectory). TDE also helps with traditional data at rest requirements,\nin environments where you don't trust the storage layer to handle that\n(for whatever reason), and it's at least imagined that backups with\npg_basebackup would also be encrypted, helping protect against backup\ntheft.\n\nThere's, further, certainly no shortage of folks asking for this.\n\n> Such an attacker would also gain access to postgresql.conf and therefore may\n> have the cluster passphrase command in case it's stored there. That would\n> imply the attacker is likely (or perhaps better phrased \"not entirely\n> unlikely\") to be able to run that command and decrypt the data *iff* there is\n> no interactive/2fa aspect to the passphrase command. Is that an accurate\n> interpretation? Do Oracle TDE et.al use a similar setup where an database\n> restart require manual intervention?\n\nThis is very much an 'it depends' as other products have different\ncapabilities in this area. The most similar implementation to what is\nbeing contemplated for PG today would be the big O's \"tablespace\" TDE,\nwhich offers options of either \"Password-based software keystores\" (you\nhave to provide a password when you want to bring that tablespace\nonline), or \"Auto-login software keystores\" (there's a \"system generated\npassword\" that's used to encrypt the keystore, which seems to be\nmore-or-less the username and hostname of the system), or HSM based\noptions. As such, a DB restart might require manual intervention (if\nthe keystore is password based, or an HSM that requires manual\ninvolvement) or it might not (auto-login keystore of some kind).\n\n> Admittedly I haven't read the other threads on the topic so if it's discussed\n> somewhere else then I'd appreciate a whacking with a cluestick.\n\nI'm sure it was, but hopefully the above helps you avoid having to dig\nthrough the very (very (very)) long threads on this topic.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 9 Dec 2020 17:18:37 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 10:32:45PM -0500, Bruce Momjian wrote:\n> I can break out the -R/file descriptor passing part as a separate patch,\n> and have the ssl_passphrase_command use that, but that's the only part I\n> know can be useful on its own.\n> \n> Since the patch is large, I found a way to push the branch to git and\n> how to make a download link that tracks whatever I push to the 'key'\n> branch on my github account. Here is the updated patch link:\n> \n> \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\nI have made some good progress on the patch. I realized that pg_upgrade\ncan't just copy the keys from the old cluster --- they encrypt the user\nheap/index files that are copied/linked by pg_upgrade, but also encrypt\nthe system tables, which initdb creates, so the keys have to be copied\nat initdb bootstrap time --- I have added an option to do that. I also\nrealized that pg_upgrade will be starting/stopping the server, so I need\nto add an option to pg_upgrade to allow that prompting. I can now\nsuccessfully pg_upgrade a cluster that uses cluster file encryption, and\nkeep the same keys. All at the same URL.\n\nIn addition I have completed the command-line tool to allow changing of\nthe cluster passphrase, which applies over the first diff; diff at:\n\n\thttps://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n\nMy next task is to write a script for Yubikey authentication.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 9 Dec 2020 20:40:50 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi, everyone\n>\n> I have read the patch and did some simple tests. I'm not entirely sure\n> about some code segments; e.g.:\n>\n> In the BootStrapKmgr() we generate a data encryption key by:\n> key = generate_crypto_key(file_encryption_keylen);\n>\n> However, I found that the file_encryption_keylen is always 0 in bootstrap\n> mode because there exitst another variable bootstrap_file_encryption_keylen\n> in xlog.c and bootstrap.c.\n>\n> We get the REL/WAL key by KmgrGetKey() call and it works like:\n> return (const CryptoKey *) &(KmgrShmem->intlKeys[id]);\n>\n> But in bootstrap mode, the KmgrShmem are not assigned. So, if we want to\n> use it to encrypt something in bootstrap mode, I suggest we make the\n> following changes:\n> if ( in bootstrap mode)\n> return intlKeys[id]; // a static variable which contains key\n> else\n> reutrn (const CryptoKey *) &(KmgrShmem->intlKeys[id]);\n>\n>\n\n-- \nThere is no royal road to learning.\nHighgo Software Co.\n\nHi, everyoneI have read the patch and did some simple tests. I'm not entirely sure about some code segments; e.g.:In the BootStrapKmgr() we generate a data encryption key by:\tkey = generate_crypto_key(file_encryption_keylen);However, I found that the file_encryption_keylen is always 0 in bootstrap mode because there exitst another variable bootstrap_file_encryption_keylen in xlog.c and bootstrap.c.We get the REL/WAL key by KmgrGetKey() call and it works like:\treturn (const CryptoKey *) &(KmgrShmem->intlKeys[id]);But in bootstrap mode, the KmgrShmem are not assigned. So, if we want to use it to encrypt something in bootstrap mode, I suggest we make the following changes:\tif ( in bootstrap mode)\t\treturn intlKeys[id];\t// a static variable which contains key\telse\t\treutrn (const CryptoKey *) &(KmgrShmem->intlKeys[id]);\n-- There is no royal road to learning.Highgo Software Co.",
"msg_date": "Thu, 10 Dec 2020 19:26:48 +0800",
"msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 10:34:59PM +0100, Daniel Gustafsson wrote:\n> > On 2 Dec 2020, at 22:38, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > Attached is a patch for key management, which will eventually be part of\n> > cluster file encryption (CFE), called TDE (Transparent Data Encryption)\n> > by Oracle.\n> \n> The attackvector protected against seems to be operating systems users\n> (*without* any legitimate database access) gaining access to the data files.\n> Such an attacker would also gain access to postgresql.conf and therefore may\n> have the cluster passphrase command in case it's stored there. That would\n> imply the attacker is likely (or perhaps better phrased \"not entirely\n> unlikely\") to be able to run that command and decrypt the data *iff* there is\n> no interactive/2fa aspect to the passphrase command. Is that an accurate\n> interpretation? Do Oracle TDE et.al use a similar setup where an database\n> restart require manual intervention?\n\nI have updated the docs to say \"read\" access more clearly:\n\n The purpose of cluster file encryption is to prevent users with read\n access to the directories used to store database files from being able to\n access the data stored in those files. For example, when using cluster\n file encryption, users who have read access to the cluster directories\n for backup purposes will not be able to read the data stored in the\n these files.\n\nI already said \"read access\" in the later part of the paragraph, but\nobviously it needs to be mentioned early too. If we need more text\nhere, please let me know.\n\nAs far as someone hijacking the passphrase command, that is a serious\nrisk. No matter how you do the authentication, even TFA, you are going\nto have someone able to grab that passphrase as it comes out of the\nscript and passes to the server. It might help to store the script and\npostgres binaries in a more secure location than the data, but I am not\nsure how realistic that is. I can if you are using a NAS for data store\nand have local binaries and passphrase script, that would be more secure\nthan putting the script on the NAS. Is that something we should explain?\nShould we explicity say that there is no protection against write\naccess? What can someone with write access to PGDATA do if\npostgresql.conf is not stored there? I don't know.\n\n> Admittedly I haven't read the other threads on the topic so if it's discussed\n> somewhere else then I'd appreciate a whacking with a cluestick.\n> \n> A few other comments from skimming the patch:\n> \n> + is data encryption keys, specifically keys zero and one. Key zero is\n> + the key uses to encrypt database heap and index files which are stored in\n> \".. is the key used to ..\"?\n\nFixed, thanks.\n\n> + matches the initdb-supplied passphrase. If this check fails, and the\n> + the server will refuse to start.\n> Seems like something is missing, perhaps just s/and the//?\n\nFixed.\n\n> + <ulink url=\"https://tools.ietf.org/html/rfc2315\">RFC2315</ulink>.</simpara>\n> Tiny nitpick: turns out that RFC 2315 (with a space) is the correct spelling.\n\nFixed.\n\n> + * are read-only. All wrapping and unwrapping key routines depends on\n> + * the openssl library for now.\n> OpenSSL is a name so it should be cased as OpenSSL in text like this.\n\nFixed, and fixed \"depends\".\n\n> #include <openssl/ssl.h>\n> +#include <openssl/conf.h>\n> Why do we need this header in be-secure-openssl.c? There are no other changes\n> to that file?\n\nNot sure, removed, since it compiles fine without it.\n> \n> +/* Define to 1 if you have the `OPENSSL_init_crypto' function. */\n> +#undef HAVE_OPENSSL_INIT_CRYPTO\n> This seems to be unused?\n\nAgreed, removed.\n\n> KmgrSaveCryptoKeys doesn't seem to be explicitly clamping down permissions on\n> the generated file, is that something we want for belt-and-suspender level\n> paranoia around keys? The same goes for read_one_keyfile etc.\n\nWell, KmgrSaveCryptoKeys is calling BasicOpenFile, which is calling\nBasicOpenFilePerm(fileName, fileFlags, pg_file_create_mode). The\nquestion is whether we should honor the pg_file_create_mode specified on\nthe data directory, or make it tighter for these keys. The file is\nalready owner-rw-only by default:\n\n\t$ ls -l\n\tdrwx------ 2 postgres postgres 4096 Dec 10 13:13 live/\n\t$ cd live/\n\t$ ls -l\n\ttotal 8\n\t-rw------- 1 postgres postgres 356 Dec 10 13:13 0\n\t-rw------- 1 postgres postgres 356 Dec 10 13:13 1\n\nIf the key files were mixed in with a bunch of other files of lesser\nvalue, like with Apache, I could see not honoring it, but for this case,\nsince all the files are of equal value, I think just honoring it makes\nsense.\n\n> Also, KmgrSaveCryptoKeys assumes that keys are stored in plain files which is\n> true for OpenSSL but won't necessarily be true for other crypto libraries.\n> Would it make sense to have an API in be-kmgr.c with the OpenSSL specifics in\n> be-kmgr-openssl.c like how the TLS backend support is written? We have spent a\n> lot effort in making the backend not assume a particular TLS library, it seems\n> a shame to step away from that here with a tight coupling. (yes, I am biased)\n\nI have no idea, sorry. I am mostly using the excellent routines\nSawada-san used in the original patch.\n\n> The same goes for src/common/cipher.c which wraps calls in ifdefs, why not just\n> have an thin wrapper API which cipher-openssl.c implements?\n\nSame. I am open to it, sure, but I would need a patch.\n\n> + case 'K':\n> + file_encryption_keylen = atoi(optarg);\n> + break;\n> atoi() will return zero on failing to parse the keylen, where 0 implies\n> \"disabled\". Wouldn't it make sense to parse this with code which can't\n> silently fall back on disabling in case of users mistyping?\n\nGood question, I found 43 references to atoi(optarg) in our code, so it\nseems they all should be fixed, no? Not sure I want to be different here.\n\n> + * Skip cryptographic keys. It's generally not good idea to copy the\n> \".. not *a* good idea ..\"\n\nThank, fixed\n\n> + pg_log_fatal(\"cluser passphrase referenced %%R, but -R not specified\");\n> s/cluser/cluster/ (there are few copy-pasteos of that elsewhere too)\n\nAll fixed, thanks.\n\n> + elog(ERROR, \"too many cryptographic kes\");\n> s/kes/keys/\n\nFixed.\n> \n> +#ifndef CIPHER_H\n> +#define CIPHER_H\n> The risk for headerguard collision with non-PG headers seem quite high, maybe\n> PG_CIPHER_H would be better?\n\nAgreed, fixed.\n\n> I will try to dive in a bit deeper over the holidays.\n\nThanks. The diff URL has all the updates.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 10 Dec 2020 13:42:36 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 05:18:37PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Daniel Gustafsson (daniel@yesql.se) wrote:\n> > > On 2 Dec 2020, at 22:38, Bruce Momjian <bruce@momjian.us> wrote:\n> > > Attached is a patch for key management, which will eventually be part of\n> > > cluster file encryption (CFE), called TDE (Transparent Data Encryption)\n> > > by Oracle.\n> > \n> > The attackvector protected against seems to be operating systems users\n> > (*without* any legitimate database access) gaining access to the data files.\n> \n> That isn't the only attack vector that this addresses (though it is one\n> that this is envisioned to help with- to wit, someone rsync'ing the DB\n> directory). TDE also helps with traditional data at rest requirements,\n> in environments where you don't trust the storage layer to handle that\n> (for whatever reason), and it's at least imagined that backups with\n> pg_basebackup would also be encrypted, helping protect against backup\n> theft.\n> \n> There's, further, certainly no shortage of folks asking for this.\n\nCan we flesh this out more in the docs? Any idea on wording compared to\nwhat I have?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 10 Dec 2020 13:49:39 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 07:26:48PM +0800, Neil Chen wrote:\n> \n> \n> Hi, everyone\n> \n> I have read the patch and did some simple tests. I'm not entirely sure\n> about some code segments; e.g.:\n> \n> In the BootStrapKmgr() we generate a data encryption key by:\n> key = generate_crypto_key(file_encryption_keylen);\n> \n> However, I found that the file_encryption_keylen is always 0 in bootstrap\n> mode because there exitst another variable bootstrap_file_encryption_keylen\n> in xlog.c and bootstrap.c.\n\nOh, good point; that is very helpful. I was relying on SetConfigOption\nto set file_encryption_keylen, but that happens _after_ we create the\nkeys, so they were zero length. I have fixed this by passing\nbootstrap_file_encryption_keylen to the boot routines. The diff URL has\nthe fix:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\n> We get the REL/WAL key by KmgrGetKey() call and it works like:\n> return (const CryptoKey *) &(KmgrShmem->intlKeys[id]);\n> \n> But in bootstrap mode, the KmgrShmem are not assigned. So, if we want to\n> use it to encrypt something in bootstrap mode, I suggest we make the\n> following changes:\n> if ( in bootstrap mode)\n> return intlKeys[id]; // a static variable which contains key\n> else\n> reutrn (const CryptoKey *) &(KmgrShmem->intlKeys[id]);\n\nYes, you are also correct here. I had not gotten to using KmgrGetKey\nyet, but it clearly needs your suggestion, so have done that.\n\nThanks for your help.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 10 Dec 2020 20:32:39 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 08:40:50PM -0500, Bruce Momjian wrote:\n> My next task is to write a script for Yubikey authentication.\n\nI know Craig Ringer wanted Yubikey support, which allows two-factor\nauthentication, so I have added it to the most recent patch by adding a\ncluster_passphrase_command %d/directory parameter:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\nYou can also store the PIN in a file, so you don't need a PIN to be\nentered by the user for each server start. Attached is the script I\nwith a PIN required. Here is a session:\n\n\t$ initdb -K 256 -R -c '/u/postgres/tmp/pass_yubi.sh %R \"%d\"'\n\tThe files belonging to this database system will be owned by user \"postgres\".\n\tThis user must also own the server process.\n\t\n\tThe database cluster will be initialized with locale \"en_US.UTF-8\".\n\tThe default database encoding has accordingly been set to \"UTF8\".\n\tThe default text search configuration will be set to \"english\".\n\t\n\tData page checksums are disabled.\n\tCluster file encryption is enabled.\n\t\n\tfixing permissions on existing directory /u/pgsql/data ... ok\n\tcreating subdirectories ... ok\n\tselecting dynamic shared memory implementation ... posix\n\tselecting default max_connections ... 100\n\tselecting default shared_buffers ... 128MB\n\tselecting default time zone ... America/New_York\n\tcreating configuration files ... ok\n\trunning bootstrap script ...\n\tEnter Yubikey PIN:\n\t\n\tWARNING: The Yubikey can be locked and require a reset if too many pin\n\tattempts fail. It is recommended to run this command manually and save\n\tthe passphrase in a secure location for possible recovery.\n\t\n-->\tEnter Yubikey PIN:\n\tok\n\tperforming post-bootstrap initialization ...\n-->\tEnter Yubikey PIN:\n\tok\n\tsyncing data to disk ... ok\n\t\n\tinitdb: warning: enabling \"trust\" authentication for local connections\n\tYou can change this by editing pg_hba.conf or using the option -A, or\n\t--auth-local and --auth-host, the next time you run initdb.\n\t\n\tSuccess. You can now start the database server using:\n\t\n\t pg_ctl -D /u/pgsql/data -l logfile start\n\n\n\t$ pg_ctl -R -l /u/pg/server.log start\n\twaiting for server to start...\n-->\tEnter Yubikey PIN:\n\t done\n\tserver started\n\nIt even allows for passphrase rotation using my pg_altercpass tool with\nthis patch:\n\n https://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n\nThe Yubikey-encrypted passphrase is stored in the key directory, so the\nencrypted passphrase stays with the data/WAL keys during passphrase\nrotation:\n\n\t$ pg_altercpass -R '/u/postgres/tmp/pass_yubi.sh %R \"%d\"' '/u/postgres/tmp/pass_yubi.sh %R \"%d\"'\n\t\n-->\tEnter Yubikey PIN:\n\t\n-->\tEnter Yubikey PIN:\n\t\n\tWARNING: The Yubikey can be locked and require a reset if too many pin\n\tattempts fail. It is recommended to run this command manually and save\n\tthe passphrase in a secure location for possible recovery.\n\t\n-->\tEnter Yubikey PIN:\n\nYubikey PIN rotation has to be done using the Yubikey tool, and data/WAL\nkey rotation has to be done via switching to a standby, which hasn't\nbeen implemented yet.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Fri, 11 Dec 2020 13:01:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 01:01:14PM -0500, Bruce Momjian wrote:\n> On Wed, Dec 9, 2020 at 08:40:50PM -0500, Bruce Momjian wrote:\n> > My next task is to write a script for Yubikey authentication.\n> \n> I know Craig Ringer wanted Yubikey support, which allows two-factor\n> authentication, so I have added it to the most recent patch by adding a\n> cluster_passphrase_command %d/directory parameter:\n> \n> \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n> \n> You can also store the PIN in a file, so you don't need a PIN to be\n> entered by the user for each server start.\n\nHere is the output without requiring a PIN; attached is the script used:\n\n\t++ initdb -K 256 -R -c '/u/postgres/tmp/pass_yubi_nopin.sh \"%d\"'\n\tThe files belonging to this database system will be owned by user \"postgres\".\n\tThis user must also own the server process.\n\t\n\tThe database cluster will be initialized with locale \"en_US.UTF-8\".\n\tThe default database encoding has accordingly been set to \"UTF8\".\n\tThe default text search configuration will be set to \"english\".\n\t\n\tData page checksums are disabled.\n\tCluster file encryption is enabled.\n\t\n\tfixing permissions on existing directory /u/pgsql/data ... ok\n\tcreating subdirectories ... ok\n\tselecting dynamic shared memory implementation ... posix\n\tselecting default max_connections ... 100\n\tselecting default shared_buffers ... 128MB\n\tselecting default time zone ... America/New_York\n\tcreating configuration files ... ok\n\trunning bootstrap script ... engine \"pkcs11\" set.\n\t\n\tWARNING: The Yubikey can be locked and require a reset if too many pin\n\tattempts fail. It is recommended to run this command manually and save\n\tthe passphrase in a secure location for possible recovery.\n\tengine \"pkcs11\" set.\n\tok\n\tperforming post-bootstrap initialization ... engine \"pkcs11\" set.\n\tok\n\tsyncing data to disk ... ok\n\t\n\tinitdb: warning: enabling \"trust\" authentication for local connections\n\tYou can change this by editing pg_hba.conf or using the option -A, or\n\t--auth-local and --auth-host, the next time you run initdb.\n\t\n\tSuccess. You can now start the database server using:\n\t\n\t pg_ctl -D /u/pgsql/data -l logfile start\n\t\n\t++ pg_ctl -R -l /u/pg/server.log start\n\twaiting for server to start... done\n\tserver started\n\t++ pg_altercpass -R '/u/postgres/tmp/pass_yubi_nopin.sh \"%d\"' '/u/postgres/tmp/pass_yubi_nopin.sh \"%d\"'\n\tengine \"pkcs11\" set.\n\tengine \"pkcs11\" set.\n\t\n\tWARNING: The Yubikey can be locked and require a reset if too many pin\n\tattempts fail. It is recommended to run this command manually and save\n\tthe passphrase in a secure location for possible recovery.\n\tengine \"pkcs11\" set.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Fri, 11 Dec 2020 13:21:21 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 04:38:14PM -0500, Bruce Momjian wrote:\n> Attached is a patch for key management, which will eventually be part of\n> cluster file encryption (CFE), called TDE (Transparent Data Encryption)\n> by Oracle. It is an update of Masahiko Sawada's patch from July 31:\n> \n> \thttps://www.postgresql.org/message-id/CA+fd4k6RJwNvZTro3q2f5HSDd8HgyUc4CuY9U3e6Ran4C6TO4g@mail.gmail.com\n> \n> Sawada-san did all the hard work, and I just redirected the patch. The\n> general outline of this CFE feature can be seen here:\n> \n> \thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n> \n> The currently planned progression for this feature is to allow secure\n> retrieval of key encryption keys (KEK) outside of the database, then use\n> those to encrypt data keys that encrypt heap/index/tmpfile files.\n...\n> If most people approve of this general approach, and the design\n> decisions made, I would like to apply this in the next few weeks, but\n> this brings complications. The syntax added by this commit might not\n> provide a useful feature until PG 15, so how do we hide it from users. \n> I was thinking of not applying the doc changes (or commenting them out)\n> and commenting out the --help output.\n\nI am getting close to applying these patches, probably this week. The\npatches are cumulative:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\thttps://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n\nI do have a few questions:\n\n\tWhy is KmgrShmemData a struct, when it only has a single member? Are\n\tall shared memory areas structs?\n\t\n\tShould pg_altercpass be using fsync's for directory renames?\n\t\n\tCan anyone test this on Windows, particularly -R handling?\n\t\n\tWhat testing infrastructure should this have?\n\t\n\tThere are a few shell script I should include to show how to create\n\tcommands. Where should they be stored? /contrib module?\n\t\n\tAre people okay with having the feature enabled, but invisible\n\tsince the docs and --help output are missing? When we enable\n\tssl_passphrase_command to prompt from the terminal, some of the\n\tcommand-line options will be useful.\n\n\tDo people like the command-letter choices?\n\n\tI called the alter passphrase utility pg_altercpass. I could\n\thave called it pg_clusterpass, but I wanted to highlight it is\n\tonly for changing the passphrase, not for creating them.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n \n\n\n",
"msg_date": "Mon, 14 Dec 2020 18:06:15 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 06:06:15PM -0500, Bruce Momjian wrote:\n> I am getting close to applying these patches, probably this week. The\n> patches are cumulative:\n> \n> \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n> \thttps://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n\nI strongly object to a commit based on the current state of the patch.\nBased on my lookup of the patches you are referring to, I see a couple\nof things that should be splitted up and refactored properly before\nthinking about the core part of the patch (FWIW, I don't have much\nthoughts to offer about the core part because I haven't really thought\nabout it, but it does not prevent to do a correct refactoring). Here\nare some notes:\n- The code lacks a lot of comments IMO. Why is retrieve_passphrase()\ndoing what it does? It seems to me that pg_altercpass needs a large\nbrushup.\n- There are no changes to src/tools/msvc/. Seeing the diffs in\nsrc/common/, this stuff breaks Windows builds.\n- The HMAC split is terrible, as mentioned upthread. I think that you\nwould need to extract this stuff as a separate patch, and not add more\nmess to the existing mess (SCRAM uses its own HMAC with SHA256, which\nis already wrong). We can and should have a fallback implementation,\nbecause that's easy to do. And we need to have the fallback\nimplementation depend on the contents of cryptohash.c (in a\nsrc/common/hmac.c), while the OpenSSL portion requires a\nhmac_openssl.c where you can choose the hash type based on\npg_cryptohash_type. So ossl_HMAC_SHA512() does not do the right\nthing. APIs flexible enough to allow a new SSL library to plug into\nthis stuff are required.\n- Not much a fan of the changes done in cryptohash.c for the resource\nowners as well. It also feels like this could be thought as something\ndirectly for resowner.c.\n- The cipher split also should be done in its own patch, and reviewed\non its own. 
There is a mix of dependencies between non-OpenSSL and\nOpenSSL code paths, making the plugging of a new SSL library harder to\ndo. In short, this requires proper API design, which is not something\nwe have here. There are also no changes in pgcrypto for that stuff.\n\n> I do have a few questions:\n\nThat looks like a lot of things to sort out as well.\n\n> \tCan anyone test this on Windows, particularly -R handling?\n> \t\n> \tWhat testing infrastructure should this have?\n\nUsing TAP tests would be well suited for those two points.\n\n> \tThere are a few shell script I should include to show how to create\n> \tcommands. Where should they be stored? /contrib module?\n\nWhy are these needed? Is it a matter of documentation?\n\n> \tAre people okay with having the feature enabled, but invisible\n> \tsince the docs and --help output are missing? When we enable\n> \tssl_passphrase_command to prompt from the terminal, some of the\n> \tcommand-line options will be useful.\n\nYou are suggesting to enable the feature by default, make it invisible\nto the users and leave it undocumented? Is there something I am missing\nhere?\n\n> \tDo people like the command-letter choices?\n> \n> \tI called the alter passphrase utility pg_altercpass. I could\n> \thave called it pg_clusterpass, but I wanted to highlight it is\n> \tonly for changing the passphrase, not for creating them.\n\nI think that you should attach such patches directly to the emails\nsent to pgsql-hackers; if those github links disappear for some\nreason, it would become impossible to see what has been\ndiscussed here.\n\n+/*\n+ * We have to use postgres.h not postgres_fe.h here, because there's so much\n+ * backend-only stuff in the kmgr include files we need. But we need a\n+ * frontend-ish environment otherwise. Hence this ugly hack.\n+ */\n+#define FRONTEND 1\n+\n+#include \"postgres.h\"\nI would advise to really understand why this happens and split up the\nbackend and frontend parts cleanly. 
This style ought to be avoided as\nmuch as possible.\n\n@@ -95,9 +101,9 @@ pg_cryptohash_create(pg_cryptohash_type type)\n\n if (state->evpctx == NULL)\n {\n+#ifndef FRONTEND\n explicit_bzero(state, sizeof(pg_cryptohash_state));\n\texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n-#ifndef FRONTEND\nI think that this change is incorrect. Any clean up of memory should\nbe done for the frontend *and* the backend.\n\n+#ifdef USE_OPENSSL\n+ ctx = (PgCipherCtx *) palloc0(sizeof(PgCipherCtx));\n+\n+ ctx->encctx = ossl_cipher_ctx_create(cipher, key, klen, true);\n+ ctx->decctx = ossl_cipher_ctx_create(cipher, key, klen, false);\n+#endif\nThere is a cipher_openssl.c and a cipher.c that includes USE_OPENSSL.\nThis requires a cleaner split IMO. We should avoid as much as\npossible OpenSSL-specific code paths in files part of src/common/ when\nnot building with OpenSSL. So this is now a mixed bag of\ndependencies.\n--\nMichael",
"msg_date": "Tue, 15 Dec 2020 10:05:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi, Bruce\n\nI read your question and here are some of my thoughts.\n\n Why is KmgrShmemData a struct, when it only has a single member?\n> Are\n> all shared memory areas structs?\n>\n\nYes, all areas created by ShmemInitStruct() are structs. I think the\nsignificance of using struct is that it delimits an \"area\" to store the\nKMS related things, which makes our memory management more clear and\nunified.\n\n What testing infrastructure should this have?\n>\n> There are a few shell script I should include to show how to create\n> commands. Where should they be stored? /contrib module?\n\n\n\n Are people okay with having the feature enabled, but invisible\n> since the docs and --help output are missing? When we enable\n> ssl_passphrase_command to prompt from the terminal, some of the\n> command-line options will be useful.\n\n\nSince our implementation is not in contrib, I don't think we should put the\nscript there. Maybe we can refer to postgresql.conf.sample?\nActually, I wonder whether we can add the UDK(user data encryption key) to\nthe first version of KMS, which can be provided to plug-ins such as\npgsodium. In this way, users can at least use it to a certain extent.\n\nAlso, I have tested some KMS functionalities by adding very simple TDE\nlogic. In the meantime, I found some confusion in the following places:\n\n1. Previously, we added a variable bootstrap_keys_wrap that is used for\nencryption during initdb. However, since we save the \"wrapped\" key, we need\nto use a global KEK that can be accessed in boot mode to unwrap it before\nuse... I don't know if that's good. To make it simple, I modified the\nbootstrap_keys_wrap to store the \"unwrapped\" key so that the encryption\nfunction can get it correctly. (The variable name should be changed\naccordingly).\n\n2. I tried to add support for AES_CTR mode, and the code for encrypting\nbuffer blocks. 
In the process I found that in pg_cipher_ctx_create, the key\nlength is declared as \"byte\". However, in the CryptoKey structure, the\nlength is stored as \"bit\", which leads me to use a form similar to\nKey->klen / 8 when I call this function. Maybe we should unify the two to\navoid unnecessary confusion.\n\n3. This one is not a problem/bug. I have some doubts about the length of\ndata encryption blocks. For the page, we do not encrypt the PageHeaderData,\nwhich is 192 bits long. By default, the page size is 8K (65536 bits), so the\nlength of the data we want to encrypt is 65344 bits. This number is not\ndivisible by the 128-, 192-, or 256-bit key lengths. Will this cause a\nproblem?\n\nHere is a simple patch that I wrote with reference to Sawada's previous\nTDE work; I hope it can help you.\n\nThanks.\n-- \nThere is no royal road to learning.\nHighGo Software Co.",
"msg_date": "Tue, 15 Dec 2020 10:36:56 +0800",
"msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 10:05:49AM +0900, Michael Paquier wrote:\n> On Mon, Dec 14, 2020 at 06:06:15PM -0500, Bruce Momjian wrote:\n> > I am getting close to applying these patches, probably this week. The\n> > patches are cumulative:\n> > \n> > \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n> > \thttps://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n> \n> I strongly object to a commit based on the current state of the patch.\n> Based on my lookup of the patches you are referring to, I see a couple\n> of things that should be splitted up and refactored properly before\n> thinking about the core part of the patch (FWIW, I don't have much\n> thoughts to offer about the core part because I haven't really thought\n> about it, but it does not prevent to do a correct refactoring). Here\n> are some notes:\n> - The code lacks a lot of comments IMO. Why is retrieve_passphrase()\n> doing what it does? It seems to me that pg_altercpass needs a large\n> brushup.\n\nUh, pg_altercpass is a new file I wrote and almost every block has a\ncomment. I added a few more, but can you be more specific?\n\n> - There are no changes to src/tools/msvc/. Seeing the diffs in\n> src/common/, this stuff breaks Windows builds.\n\nOK, done in patch at URL.\n\n> - The HMAC split is terrible, as mentioned upthread. I think that you\n> would need to extract this stuff as a separate patch, and not add more\n> mess to the existing mess (SCRAM uses its own HMAC with SHA256, which\n> is already wrong). We can and should have a fallback implementation,\n> because that's easy to do. And we need to have the fallback\n> implementation depend on the contents of cryptohash.c (in a\n> src/common/hmac.c), while the OpenSSL portion requires a\n> hmac_openssl.c where you can choose the hash type based on\n> pg_cryptohash_type. So ossl_HMAC_SHA512() does not do the right\n> thing. 
APIs flexible enough to allow a new SSL library to plug into\n> this stuff are required.\n> - Not much a fan of the changes done in cryptohash.c for the resource\n> owners as well. It also feels like this could be thought as something\n> directly for resowner.c.\n> - The cipher split also should be done in its own patch, and reviewed\n> on its own. There is a mix of dependencies between non-OpenSSL and\n> OpenSSL code paths, making the pluggin of a new SSL library harder to\n> do. In short, this requires proper API design, which is not something\n> we have here. There are also no changes in pgcrypto for that stuff.\n\nI am going to need someone to help me make these changes. I don't feel\nI know enough about the crypto API to do it, and it will take me 1+ week\nto learn it.\n\n> > I do have a few questions:\n> \n> That looks like a lot of things to sort out as well.\n> \n> > \tCan anyone test this on Windows, particularly -R handling?\n> > \t\n> > \tWhat testing infrastructure should this have?\n> \n> Using TAP tests would be adapted for those two points.\n\nOK, I will try that.\n\n> > \tThere are a few shell script I should include to show how to create\n> > \tcommands. Where should they be stored? /contrib module?\n> \n> Why are these needed. Is it a matter of documentation?\n\nI have posted some of the scripts. They involve complex shell\nscripting that I doubt the average user can do. The scripts are for\nprompting for a passphrase from the user's terminal, or using the\nYubikey, with or without typing a PIN. I can put them in the docs and\nthen users can copy them into a file. Is that the preferred method?\n\n> > \tAre people okay with having the feature enabled, but invisible\n> > \tsince the docs and --help output are missing? 
When we enable\n> > \tssl_passphrase_command to prompt from the terminal, some of the\n> > \tcommand-line options will be useful.\n> \n> You are suggesting to enable the feature by default, make it invisible\n> to the users and leave it undocumented? Is there something I missing\n> here?\n\nThe point is that the command-line options will be active, but will not\nbe documented. It will not do anything on its own unless you specify\nthat command-line option. I can comment out the command-line options\nfrom being active but the code is going to look very messy.\n\n> > \tDo people like the command-letter choices?\n> > \n> > \tI called the alter passphrase utility pg_altercpass. I could\n> > \thave called it pg_clusterpass, but I wanted to highlight it is\n> > \tonly for changing the passphrase, not for creating them.\n> \n> I think that you should attach such patches directly to the emails\n> sent to pgsql-hackers, if those github links disappear for some\n> reason, it would become impossible to refer to see what has been\n> discussed here.\n\nWell, the patches are changing frequently. I am attaching a version to\nthis email.\n\n> +/*\n> + * We have to use postgres.h not postgres_fe.h here, because there's so much\n> + * backend-only stuff in the kmgr include files we need. But we need a\n> + * frontend-ish environment otherwise. Hence this ugly hack.\n> + */\n> +#define FRONTEND 1\n> +\n> +#include \"postgres.h\"\n> I would advise to really understand why this happens and split up the\n> backend and frontend parts cleanly. This style ought to be avoided as\n> much as possible.\n\nUh, I got this code from pg_resetwal.c, which does something similar to\npg_altercpass.\n\n> @@ -95,9 +101,9 @@ pg_cryptohash_create(pg_cryptohash_type type)\n> \n> if (state->evpctx == NULL)\n> {\n> +#ifndef FRONTEND\n> explicit_bzero(state, sizeof(pg_cryptohash_state));\n> \texplicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n> -#ifndef FRONTEND\n> I think that this change is incorrect. 
Any clean up of memory should\n> be done for the frontend *and* the backend.\n\nOh, good point. Fixed.\n\n> +#ifdef USE_OPENSSL\n> + ctx = (PgCipherCtx *) palloc0(sizeof(PgCipherCtx));\n> +\n> + ctx->encctx = ossl_cipher_ctx_create(cipher, key, klen, true);\n> + ctx->decctx = ossl_cipher_ctx_create(cipher, key, klen, false);\n> +#endif\n> There is a cipher_openssl.c and a cipher.c that includes USE_OPENSSL.\n> This requires a cleaner split IMO. We should avoid as much as\n> possible OpenSSL-specific code paths in files part of src/common/ when\n> not building with OpenSSL. So this is now a mixed bag of\n> dependencies.\n\nAgain, I need help here.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Mon, 14 Dec 2020 22:19:02 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 10:36:56AM +0800, Neil Chen wrote:\n> Hi, Bruce,\n> \n> I read your question and here are some of my thoughts.\n> \n> \n>         Why is KmgrShmemData a struct, when it only has a single member?\n> Are\n>         all shared memory areas structs?\n> \n> \n> Yes, all areas created by ShmemInitStruct() are structs. I think the\n> significance of using struct is that it delimits an \"area\" to store the KMS\n> related things, which makes our memory management more clear and unified.\n\nOK, thanks, that helps.\n\n>         What testing infrastructure should this have?\n> \n>         There are a few shell script I should include to show how to create\n>         commands. Where should they be stored? /contrib module?\n> \n> \n> \n>         Are people okay with having the feature enabled, but invisible\n>         since the docs and --help output are missing? When we enable\n>         ssl_passphrase_command to prompt from the terminal, some of the\n>         command-line options will be useful.\n> \n> \n> Since our implementation is not in contrib, I don't think we should put the\n> script there. Maybe we can refer to postgresql.conf.sample?\n\nUh, the scripts are 20-60 lines long --- I am attaching them to this\nemail. Plus, when we allow user prompting for the SSL passphrase, we\nwill have another script, or maybe three more if people want to use a\nYubikey to unlock the SSL passphrase.\n\n> Actually, I wonder whether we can add the UDK(user data encryption key) to the\n> first version of KMS, which can be provided to plug-ins such as pgsodium. In\n> this way, users can at least use it to a certain extent.\n\nI don't think I want to get into that at this point. It can be done\nlater.\n\n> Also, I have tested some KMS functionalities by adding very simple TDE logic.\n> In the meantime, I found some confusion in the following places:\n\nOK, I know Cybertec has a TDE patch too.\n\n> 1. 
Previously, we added a variable bootstrap_keys_wrap that is used for\n> encryption during initdb. However, since we save the \"wrapped\" key, we need to\n> use a global KEK that can be accessed in boot mode to unwrap it before use... I\n> don't know if that's good. To make it simple, I modified the\n> bootstrap_keys_wrap to store the \"unwrapped\" key so that the encryption\n> function can get it correctly. (The variable name should be changed\n> accordingly).\n\nI see what you are saying. We store the wrapped in bootstrap mode, but\nthe unwrapped in normal mode. There is also the case of when we copy\nthe keys from an old cluster. I will work on a patch tomorrow and\nreport back here.\n\n> 2. I tried to add support for AES_CTR mode, and the code for encrypting buffer\n> blocks. In the process I found that in pg_cipher_ctx_create, the key length is\n> declared as \"byte\". However, in the CryptoKey structure, the length is stored\n> as \"bit\", which leads me to use a form similar to Key->klen / 8 when I call\n> this function. Maybe we should unify the two to avoid unnecessary confusion.\n\nAgreed. I will look at that too tomorrow.\n\n> 3. This one is not a problem/bug. I have some doubts about the length of data\n> encryption blocks. For the page, we do not encrypt the PageHeaderData, which is\n> 192 bit long. By default, the page size is 8K(65536 bits), so the length of the\n> data we want to encrypt is 65344 bits. This number can't be divisible by 128\n> and 192 keys and 256 bits long keys. Will this cause a problem?\n\nSince we are using CTR mode for everything, it should be fine. We\nencrypt WAL as it is written.\n\n> Here is a simple patch that I wrote with references to Sawada's previous TDE\n> works, hope it can help you.\n\nThanks. I will work on the items you mentioned. 
Can you look at the\nCybertec patch and then our wiki to see what needs to be done next?\n\n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n\nThanks for getting a proof-of-concept patch out there. :-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Mon, 14 Dec 2020 23:16:18 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 10:19:02PM -0500, Bruce Momjian wrote:\n> On Tue, Dec 15, 2020 at 10:05:49AM +0900, Michael Paquier wrote:\n>> - The HMAC split is terrible, as mentioned upthread. I think that you\n>> would need to extract this stuff as a separate patch, and not add more\n>> mess to the existing mess (SCRAM uses its own HMAC with SHA256, which\n>> is already wrong). We can and should have a fallback implementation,\n>> because that's easy to do. And we need to have the fallback\n>> implementation depend on the contents of cryptohash.c (in a\n>> src/common/hmac.c), while the OpenSSL portion requires a\n>> hmac_openssl.c where you can choose the hash type based on\n>> pg_cryptohash_type. So ossl_HMAC_SHA512() does not do the right\n>> thing. APIs flexible enough to allow a new SSL library to plug into\n>> this stuff are required.\n>> - Not much a fan of the changes done in cryptohash.c for the resource\n>> owners as well. It also feels like this could be thought as something\n>> directly for resowner.c.\n>> - The cipher split also should be done in its own patch, and reviewed\n>> on its own. There is a mix of dependencies between non-OpenSSL and\n>> OpenSSL code paths, making the pluggin of a new SSL library harder to\n>> do. In short, this requires proper API design, which is not something\n>> we have here. There are also no changes in pgcrypto for that stuff.\n> \n> I am going to need someone to help me make these changes. I don't feel\n> I know enough about the crypto API to do it, and it will take me 1+ week\n> to learn it.\n\nI think that designing a correct set of APIs that can be plugged with\nany SSL library is the correct move in the long term. I have on my\nagenda to clean up HMAC as SCRAM uses that with SHA256 and you would\nuse that with SHA512. Daniel has mentioned that he has been touching\nthis area, and I also got a patch half done though pgcrypto needs\nsome extra thoughts. 
So this is still WIP but you could reuse that\nhere.\n\n> Uh, I got this code from pg_resetwal.c, which does something similar to\n> pg_altercpass.\n\nYeah, that's actually the part where it is a bad idea to just copy\nthis pattern. pg_resetwal should not do that in the long term in my\nopinion, but nobody has come to clean up this stuff. (- -;)\n\n>> +#ifdef USE_OPENSSL\n>> + ctx = (PgCipherCtx *) palloc0(sizeof(PgCipherCtx));\n>> +\n>> + ctx->encctx = ossl_cipher_ctx_create(cipher, key, klen, true);\n>> + ctx->decctx = ossl_cipher_ctx_create(cipher, key, klen, false);\n>> +#endif\n>> There is a cipher_openssl.c and a cipher.c that includes USE_OPENSSL.\n>> This requires a cleaner split IMO. We should avoid as much as\n>> possible OpenSSL-specific code paths in files part of src/common/ when\n>> not building with OpenSSL. So this is now a mixed bag of\n>> dependencies.\n> \n> Again, I need help here.\n\nMy take would be to try to sort out the HMAC part first, then look at\nthe ciphers. One step at a time.\n--\nMichael",
"msg_date": "Tue, 15 Dec 2020 14:20:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 02:20:33PM +0900, Michael Paquier wrote:\n> > Uh, I got this code from pg_resetwal.c, which does something similar to\n> > pg_altercpass.\n> \n> Yeah, that's actually the part where it is a bad idea to just copy\n> this pattern. pg_resetwal should not do that in the long term in my\n> opinion, but nobody has come to clean up this stuff. (- -;)\n\nGlad you asked about this. It turns out pg_altercpass works fine with\npostgres_fe.h, so I will now use that. pg_resetwal still can't use it\nthough.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 15 Dec 2020 10:17:28 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 11:16:18PM -0500, Bruce Momjian wrote:\n> > 1. Previously, we added a variable bootstrap_keys_wrap that is used for\n> > encryption during initdb. However, since we save the \"wrapped\" key, we need to\n> > use a global KEK that can be accessed in boot mode to unwrap it before use... I\n> > don't know if that's good. To make it simple, I modified the\n> > bootstrap_keys_wrap to store the \"unwrapped\" key so that the encryption\n> > function can get it correctly. (The variable name should be changed\n> > accordingly).\n> \n> I see what you are saying. We store the wrapped in bootstrap mode, but\n> the unwrapped in normal mode. There is also the case of when we copy\n> the keys from an old cluster. I will work on a patch tomorrow and\n> report back here.\n\nI had not considered that we need the data keys available in bootstrap\nmode, even if we copied them from another cluster during pg_upgrade. I\nhave updated the diff URLs and am attaching a patch showing the changes I\nmade. Basically, I had to separate BootStrapKmgr() into sections:\n\n1. copy or create an empty live key directory\n2. get the pass phrase\n3. populate the live key directory if we didn't copy it\n4. decrypt the keys into a file-scoped variable\n\nThanks for showing me this missing feature.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 15 Dec 2020 11:34:41 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 10:36:56AM +0800, Neil Chen wrote:\n> 2. I tried to add support for AES_CTR mode, and the code for encrypting buffer\n> blocks. In the process I found that in pg_cipher_ctx_create, the key length is\n> declared as \"byte\". However, in the CryptoKey structure, the length is stored\n> as \"bit\", which leads me to use a form similar to Key->klen / 8 when I call\n> this function. Maybe we should unify the two to avoid unnecessary confusion.\n\nYes, I would also like to get opinions on this. We certainly have to\nhave the key length be in _bit_ units when visible to users, but I see a\nlot of cases where we allocate arrays based on bytes. I am unclear\nwhere the proper units should be. At a minimum, we should specify the\nunits in the function parameter names.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 15 Dec 2020 12:00:08 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Tue, Dec 15, 2020 at 10:05:49AM +0900, Michael Paquier wrote:\n> > On Mon, Dec 14, 2020 at 06:06:15PM -0500, Bruce Momjian wrote:\n> > > I am getting close to applying these patches, probably this week. The\n> > > patches are cumulative:\n> > > \n> > > \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n> > > \thttps://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n> > \n> > I strongly object to a commit based on the current state of the patch.\n\nYeah, I agree that this could use some work though I don't think it's\nall that far away from being something we can get in, to allow us to\nmove forward with the other work around supporting TDE.\n\n> > > \tThere are a few shell script I should include to show how to create\n> > > \tcommands. Where should they be stored? /contrib module?\n> > \n> > Why are these needed. Is it a matter of documentation?\n> \n> I have posted some of the scripts. They involved complex shell\n> scripting that I doubt the average user can do. The scripts are for\n> prompting for a passphrase from the user's terminal, or using the\n> Yubikey, with our without typing a pin. I can put them in the docs and\n> then users can copy them into a file. Is that the preferred method?\n\nIf we are going to include these in core anywhere, I would think a new\ndirectory under contrib would make the most sense. I'd hope that we\ncould find a way to automate the testing of them though, so that we have\nsome idea when they break because otherwise I'd be concerned that\nthey'll break somewhere down the line and we won't notice for quite a\nwhile.\n\nThis doesn't seem like something that would make sense to only have in\nthe documentation, which isn't a terribly convenient way to make use of\nthem.\n\n> > > \tAre people okay with having the feature enabled, but invisible\n> > > \tsince the docs and --help output are missing? 
When we enable\n> > > \tssl_passphrase_command to prompt from the terminal, some of the\n> > > \tcommand-line options will be useful.\n> > \n> > You are suggesting to enable the feature by default, make it invisible\n> > to the users and leave it undocumented? Is there something I missing\n> > here?\n> \n> The point is that the command-line options will be active, but will not\n> be documented. It will not do anything on its own unless you specify\n> that command-line option. I can comment-out the command-line options\n> from being active but the code it going to look very messy.\n\nI'm a bit concerned with what's being contemplated here.. Ideally,\nwe'll actually get everything committed for v14 but if we don't and this\ndoesn't serve any use-case then I'm not sure that it should be\nincluded in the release. I also don't like committing and reverting\nthings either, but having command line options that aren't documented\nbut which exist in the code isn't great either, nor is having random\ncode commented out..\n\n> > > \tDo people like the command-letter choices?\n> > > \n> > > \tI called the alter passphrase utility pg_altercpass. I could\n> > > \thave called it pg_clusterpass, but I wanted to highlight it is\n> > > \tonly for changing the passphrase, not for creating them.\n> > \n> > I think that you should attach such patches directly to the emails\n> > sent to pgsql-hackers, if those github links disappear for some\n> > reason, it would become impossible to refer to see what has been\n> > discussed here.\n> \n> Well, the patches are changing frequently. I am attaching a version to\n> this email.\n\nI agree with having the patches posted to the list. 
I don't really\nagree with the argument of \"they change frequently\".\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, Dec 14, 2020 at 10:19:02PM -0500, Bruce Momjian wrote:\n> > On Tue, Dec 15, 2020 at 10:05:49AM +0900, Michael Paquier wrote:\n> >> - The cipher split also should be done in its own patch, and reviewed\n> >> on its own. There is a mix of dependencies between non-OpenSSL and\n> >> OpenSSL code paths, making the pluggin of a new SSL library harder to\n> >> do. In short, this requires proper API design, which is not something\n> >> we have here. There are also no changes in pgcrypto for that stuff.\n> > \n> > I am going to need someone to help me make these changes. I don't feel\n> > I know enough about the crypto API to do it, and it will take me 1+ week\n> > to learn it.\n> \n> I think that designing a correct set of APIs that can be plugged with\n> any SSL library is the correct move in the long term. I have on my\n> agenda to clean up HMAC as SCRAM uses that with SHA256 and you would\n> use that with SHA512. Daniel has mentioned that he has been touching\n> this area, and I also got a patch halfly done though pgcrypto needs\n> some extra thoughts. So this is still WIP but you could reuse that\n> here.\n\nYeah, looking at what's been done there seems like the right approach to\nme as well, ideally we'd have one set of APIs that'll support all these\nuse cases (not unlike what curl does..).\n\nThe other thought I had here off-hand was that we might want to randomly\ngenerate a key during startup and have that available and use it for\nanything that's temporary and which is going to get blown away on a\nrestart, I've been thinking that doing that might allow us to have a\nrelatively simpler API for transient/temporary files too.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 15 Dec 2020 14:09:09 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 02:20:33PM +0900, Michael Paquier wrote:\n> On Mon, Dec 14, 2020 at 10:19:02PM -0500, Bruce Momjian wrote:\n> > I am going to need someone to help me make these changes. I don't feel\n> > I know enough about the crypto API to do it, and it will take me 1+ week\n> > to learn it.\n> \n> I think that designing a correct set of APIs that can be plugged with\n> any SSL library is the correct move in the long term. I have on my\n> agenda to clean up HMAC as SCRAM uses that with SHA256 and you would\n> use that with SHA512. Daniel has mentioned that he has been touching\n> this area, and I also got a patch halfly done though pgcrypto needs\n> some extra thoughts. So this is still WIP but you could reuse that\n> here.\n\nI thought this was going to be a huge job, but once I looked at it, it\nwas clear exactly what you were saying. Comparing cryptohash.c and\ncryptohash_openssl.c I saw exactly what you wanted, and I think I have\ncompleted it in the attached patch. cryptohash.c implemented the hash\nin C code if OpenSSL is not present --- I assume you didn't want me to\ndo that, but rather to split the API so it was easy to add another\nimplementation.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 15 Dec 2020 16:02:12 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 02:09:09PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Tue, Dec 15, 2020 at 10:05:49AM +0900, Michael Paquier wrote:\n> > > On Mon, Dec 14, 2020 at 06:06:15PM -0500, Bruce Momjian wrote:\n> > > > I am getting close to applying these patches, probably this week. The\n> > > > patches are cumulative:\n> > > > \n> > > > \thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n> > > > \thttps://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n> > > \n> > > I strongly object to a commit based on the current state of the patch.\n> \n> Yeah, I agree that this could use some work though I don't think it's\n> all that far away from being something we can get in, to allow us to\n> move forward with the other work around supporting TDE.\n\nGlad to hear that. Michael Paquier was right that the common/cipher*.c\nAPI was wrong and should not be committed until fixed, which I think it\nhas been. If there are other messy parts of this patch, I would like to\nfix those too.\n\n> > I have posted some of the scripts. They involved complex shell\n> > scripting that I doubt the average user can do. The scripts are for\n> > prompting for a passphrase from the user's terminal, or using the\n> > Yubikey, with our without typing a pin. I can put them in the docs and\n> > then users can copy them into a file. Is that the preferred method?\n> \n> If we are going to include these in core anywhere, I would think a new\n> directory under contrib would make the most sense. I'd hope that we\n> could find a way to automate the testing of them though, so that we have\n> some idea when they break because otherwise I'd be concerned that\n> they'll break somewhere down the line and we won't notice for quite a\n> while.\n\nI am doing automated testing here, but I have a Yubikey. 
Michael\nPaquier recommended TAP tests, I guess like we do for pg_upgrade, and I\nwill look into that.\n\n> This doesn't seem like something that would make sense to only have in\n> the documentation, which isn't a terribly convenient way to make use of\n> them.\n\nYeah, it is hard to cut-paste 60 lines. The /contrib directory would be\nfor SSL and cluster file encryption passphrase control, so it should\nhave more general usefulness. Would those files be copied to the install\ndirectory somewhere if you install /contrib? Do we have any cases of\nthis?\n\n> > The point is that the command-line options will be active, but will not\n> > be documented. It will not do anything on its own unless you specify\n> > that command-line option. I can comment-out the command-line options\n> > from being active but the code it going to look very messy.\n> \n> I'm a bit concerned with what's being contemplated here.. Ideally,\n> we'll actually get everything committed for v14 but if we don't and this\n> doesn't serve any use-case then I'm not sure that it should be\n> included in the release. I also don't like committing and reverting\n> things either, but having command line options that aren't documented\n> but which exist in the code isn't great either, nor is having random\n> code commented out..\n\nAgreed. Once we use this code for SSL passphrase prompting, many of the\noptions will actually have usefulness. What we could do is to not hide\nanything, including the docs, and then hide them once we are ready to\nstart beta. At that point, maybe putting in the #ifdefs will be\nacceptable, since we would not be working on the feature until we\nbranch, and then we can just remove them again. 
We had similar issues\nwith the Win32 port, but that had fewer visible knobs.\n\n> > > I think that you should attach such patches directly to the emails\n> > > sent to pgsql-hackers, if those github links disappear for some\n> > > reason, it would become impossible to refer to see what has been\n> > > discussed here.\n> > \n> > Well, the patches are changing frequently. I am attaching a version to\n> > this email.\n> \n> I agree with having the patches posted to the list. I don't really\n> agree with the argument of \"they change frequently\".\n\nSure, they are only 35k compressed. My point is that I am modifying my\ngit tree all day, and with github, I can easily push the changes and\nwhen someone looks at the diff, they see the most recent version.\n\n> > I think that designing a correct set of APIs that can be plugged with\n> > any SSL library is the correct move in the long term. I have on my\n> > agenda to clean up HMAC as SCRAM uses that with SHA256 and you would\n> > use that with SHA512. Daniel has mentioned that he has been touching\n> > this area, and I also got a patch halfly done though pgcrypto needs\n> > some extra thoughts. So this is still WIP but you could reuse that\n> > here.\n> \n> Yeah, looking at what's been done there seems like the right approach to\n> me as well, ideally we'd have one set of APIs that'll support all these\n> use cases (not unlike what curl does..).\n\nI think I accomplished this in a patch I just posted.\n\n> The other thought I had here off-hand was that we might want to randomly\n> generate a key during startup and have that available and use it for\n> anything that's temporary and which is going to get blown away on a\n> restart, I've been thinking that doing that might allow us to have a\n> relatively simpler API for transient/temporary files too.\n\nThat is easy to do because of the design. 
If you want it done now, I\ncan do it --- just let me know.\n\nI want to thank everyone for the very helpful feedback I have received\nsince posting the first version of this patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 15 Dec 2020 16:39:47 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi Bruce et al\n\nFirstly, thanks for shaping the patch, getting it down to a manageable\nscope of cluster file encryption. I think this is a great feature and it\nmatters to a lot of the customers I talk to at VMware about\nadopting Postgres.\n\nSince it's exciting stuff, I've been trying to lash together a PoC\nintegration with the crypto infrastructure we see at these customers.\nUnfortunately, in short, the API doesn't seem to suit integration with HSM\nbig iron, like Thales, Utimaco, (other options are available), or their\ncloudy lookalikes.\n\nI understand the model of wrapping the Data Encryption Key and storing the\nwrapped key with the encrypted data. The thing I can't find support for in\nyour patch is passing a wrapped DEK to an external system for decryption. A\nkey feature of these key management approaches is that the\napplication handling the encrypted data doesn't get the KEK, the HSM/KSM\npasses the DEK back after unwrapping it. Google have a neat illustration of\ntheir approach, which is very similar to others, at\nhttps://cloud.google.com/kms/docs/envelope-encryption#how_to_decrypt_data_using_envelope_encryption\n\nSo instead of calling out to a passphrase command which returns input for a\nPBKDF, Postgres (in the form of initdb or postmaster) should be passing the\nwrapped DEK and getting back the DEK in plain. The value passed from\nPostgres to the external process doesn't even have to be a wrapped DEK, it\ncould be a key in the key->value sense, for use in a lookup against Vault,\nCredHub or the Kubernetes secret store. Making the stored keys opaque to\nthe Postgres processes and pushing the task of turning them into\ncryptographic material to an external helper process probably wouldn't\nshrink the patch overall, but it would move a lot of code from core\nprocesses into the helper. 
Maybe that helper should live in contrib, as\ndiscussed below, where it would hopefully be joined by a bridge for KMIP\nand others.\n\nOn Tue, 15 Dec 2020 at 19:09, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Tue, Dec 15, 2020 at 10:05:49AM +0900, Michael Paquier wrote:\n> > > On Mon, Dec 14, 2020 at 06:06:15PM -0500, Bruce Momjian wrote:\n\n <snip>\n\n> > > > There are a few shell script I should include to show how to create\n> > > > commands. Where should they be stored? /contrib module?\n> > >\n> > > Why are these needed. Is it a matter of documentation?\n> >\n> > I have posted some of the scripts. They involved complex shell\n> > scripting that I doubt the average user can do. The scripts are for\n> > prompting for a passphrase from the user's terminal, or using the\n> > Yubikey, with our without typing a pin. I can put them in the docs and\n> > then users can copy them into a file. Is that the preferred method?\n>\n> If we are going to include these in core anywhere, I would think a new\n> directory under contrib would make the most sense. I'd hope that we\n> could find a way to automate the testing of them though, so that we have\n> some idea when they break because otherwise I'd be concerned that\n> they'll break somewhere down the line and we won't notice for quite a\n> while.\n>\n> Regards\n\nAlastair",
"msg_date": "Tue, 15 Dec 2020 23:13:12 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
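The envelope-encryption flow described in the message above (the application stores only a wrapped DEK and asks the KMS/HSM to unwrap it, never seeing the KEK) can be sketched as follows. This is an illustrative toy, not anything from the patch: `ToyKMS` is a hypothetical in-process stand-in for an external KMS, and a SHA-256 counter-mode XOR stream stands in for a real wrapping cipher such as AES key wrap or AES-GCM:

```python
import hashlib
import secrets

def _stream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream from SHA-256 in counter mode -- a stand-in for a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def _xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

class ToyKMS:
    """KMS side: holds the KEK, which never leaves this object."""
    def __init__(self):
        self._kek = secrets.token_bytes(32)

    def wrap(self, dek: bytes) -> bytes:
        nonce = secrets.token_bytes(16)
        return nonce + _xor(dek, _stream(self._kek, nonce, len(dek)))

    def unwrap(self, wrapped: bytes) -> bytes:
        nonce, body = wrapped[:16], wrapped[16:]
        return _xor(body, _stream(self._kek, nonce, len(body)))

# Application side: stores only the wrapped DEK alongside the encrypted data.
kms = ToyKMS()
dek = secrets.token_bytes(32)          # data encryption key
wrapped_dek = kms.wrap(dek)            # what ends up on disk with the data
assert kms.unwrap(wrapped_dek) == dek  # round trip via the KMS, KEK unseen
```

The point of the pattern is visible in the last lines: the application handles `wrapped_dek` and the plaintext `dek` it gets back, while `_kek` never leaves the KMS side.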
{
"msg_contents": "On Tue, Dec 15, 2020 at 11:13:12PM +0000, Alastair Turner wrote:\n> Since it's exciting stuff, I've been trying to lash together a PoC integration\n> with the crypto infrastructure we see at these customers. Unfortunately, in\n> short, the API doesn't seem to suit integration with HSM big iron, like Thales,\n> Utimaco, (other options are available), or their cloudy lookalikes.\n> \n> I understand the model of wrapping the Data Encryption Key and storing the\n> wrapped key with the encrypted data. The thing I can't find support for in your\n> patch is passing a wrapped DEK to an external system for decryption. A key\n> feature of these key management approaches is that the application handling the\n> encrypted data doesn't get the KEK, the HSM/KSM passes the DEK back after\n> unwrapping it. Google have a neat illustration of their approach, which is very\n> similar to others, at https://cloud.google.com/kms/docs/envelope-encryption#\n> how_to_decrypt_data_using_envelope_encryption\n> \n> So instead of calling out to a passphrase command which returns input for a\n> PBKDF, Postgres (in the form of initdb or postmaster) should be passing the\n> wrapped DEK and getting back the DEK in plain. The value passed from Postgres\n> to the external process doesn't even have to be a wrapped DEK, it could be a\n> key in the key->value sense, for use in a lookup against Vault, CredHub or the\n> Kubernetes secret store. Making the stored keys opaque to the Postgres\n> processes and pushing the task of turning them into cryptographic material to\n> an external hepler process probably wouldn't shrink the patch overall, but it\n> would move a lot of code from core processes into the helper. Maybe that helper\n> should live in contrib, as discussed below, where it would hopefully be joined\n> by a bridge for KMIP and others.\n\nWell, I think we can go two ways with this. 
First, if you look at my\nattached Yubikey script, you will see that, at boot time, since\n$DIR/yubipass.key does not exist, the script generates a passphrase,\nwraps it with the Yubikey's PIV #3 public key, and stores it in the live\nkey directory. The next time it is called, it decrypts the wrapped\npassphrase using the Yubikey, and uses that as the KEK to decrypt the\ntwo DEKs. Now, you could modify the script to call the HSM at boot time\nto retrieve a DEK wrapped in a KEK, and store it in the key directory. \nOn all future calls, you can pass the DEK wrapped by KEK to the HSM, and\nuse the returned DEK as the KEK/passphrase to decrypt the real DEK. The\nbig value of this is that the keys are validated, and it uses the\nexisting API. Ideally we write a sample script of how to do this and\ninclude it in /contrib.\n\nThe second approach is to make a new API for what you want. It would be\nsimilar to the cluster_passphrase_command, but it would be simple in\nsome ways, but complex in others since we need at least two DEKs, and\nkey rotation might be very risky since there isn't validation in the\nserver. I don't think this can be accomplished by a contrib module, but\nwould actually be easy to implement, but perhaps hard to document\nbecause the user API might be tricky.\n\nI think my big question is about your sentence, \"A key feature of these\nkey management approaches is that the application handling the\nencrypted data doesn't get the KEK, the HSM/KSM passes the DEK back\nafter unwrapping it.\" Is it less secure for the HSM to return a KEK\nrather than a DEK? I can't see why it would be. The application never\nsees the KEK used to wrap the DEK it has stored in the file system,\nthough that DEK is actually used as a passphrase by Postgres. 
This is\nthe same with the Yubikey --- Postgres never sees the private key used\nto unlock what it locked with the Yubikey public key.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 15 Dec 2020 19:12:28 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi Stephen,\n\nOn Tue, Dec 15, 2020 at 02:09:09PM -0500, Stephen Frost wrote:\n> Yeah, looking at what's been done there seems like the right approach to\n> me as well, ideally we'd have one set of APIs that'll support all these\n> use cases (not unlike what curl does..).\n\nOoh... This is interesting. What did curl do wrong here? It is\nalways good to hear from such past experiences.\n--\nMichael",
"msg_date": "Wed, 16 Dec 2020 09:36:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 04:02:12PM -0500, Bruce Momjian wrote:\n> I thought this was going to be a huge job, but once I looked at it, it\n> was clear exactly what you were saying. Comparing cryptohash.c and\n> cryptohash_openssl.c I saw exactly what you wanted, and I think I have\n> completed it in the attached patch. cryptohash.c implemented the hash\n> in C code if OpenSSL is not present --- I assume you didn't want me to\n> do that, but rather to split the API so it was easy to add another\n> implementation.\n\nHmm. I don't think that this is right. First, this still mixes\nciphers and HMAC. Second, it is only possible here to use HMAC with\nSHA512 but we want to be able to use SHA256 as well with SCRAM (per se\nthe scram_HMAC_* business in scram-common.c). Third, this lacks a\nfallback implementation. Finally, pgcrypto is not touched, but we\nshould be able to simplify the area around int_digest_list in\npx-hmac.c which lists for HMAC as available options MD5, SHA1 and all\nfour SHA2.\n\nPlease note that we have MD5 and SHA2 as choices in cryptohash.h, and\nI have already a patch to add SHA1 to the cryptohash set pending for\nreview: https://commitfest.postgresql.org/31/2868/. If all that is\ndone correctly, a lot of code can be cleaned up while making things\nstill pluggable with various SSL libraries.\n--\nMichael",
"msg_date": "Wed, 16 Dec 2020 10:23:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
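Michael's point in the message above is that HMAC should be exposed as a digest-agnostic primitive rather than hard-wired to SHA512, since SCRAM needs HMAC-SHA256 while the key manager uses HMAC-SHA512. A rough sketch of that API shape, using Python's standard library rather than the C code under discussion (`compute_hmac` is an invented name, not anything in the patch):

```python
import hashlib
import hmac

def compute_hmac(key: bytes, message: bytes, digest: str) -> bytes:
    # One entry point, parameterized by digest -- analogous to an HMAC API
    # that can serve both SCRAM (SHA-256) and key wrapping (SHA-512).
    return hmac.new(key, message, getattr(hashlib, digest)).digest()

key, msg = b"secret-key", b"client-nonce"
tag256 = compute_hmac(key, msg, "sha256")   # 32-byte tag, the SCRAM case
tag512 = compute_hmac(key, msg, "sha512")   # 64-byte tag, the key-manager case
assert len(tag256) == 32 and len(tag512) == 64

# Tags should be verified with a constant-time comparison.
assert hmac.compare_digest(tag256, compute_hmac(key, msg, "sha256"))
```

The same shape is what a pluggable C implementation would offer: callers pick the digest, and the backend (OpenSSL or a fallback) is chosen behind the interface.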
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Tue, Dec 15, 2020 at 02:09:09PM -0500, Stephen Frost wrote:\n> > Yeah, looking at what's been done there seems like the right approach to\n> > me as well, ideally we'd have one set of APIs that'll support all these\n> > use cases (not unlike what curl does..).\n> \n> Ooh... This is interesting. What did curl do wrong here? It is\n> always good to hear from such past experiences.\n\nNot sure that came across very well- curl did something right in terms\nof their vTLS layer which allows for building curl with a number of\ndifferent SSL/TLS libraries without the core code having to know about\nall the differences. I was suggesting that we might want to look at how\nthey did that, and naturally discuss with Daniel and ask him what\nthoughts he has from having worked with curl and the vTLS layer.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 16 Dec 2020 11:26:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi Bruce\n\nOn Wed, 16 Dec 2020 at 00:12, Bruce Momjian <bruce@momjian.us> wrote:\n>\n...\n>\n> The second approach is to make a new API for what you want....\n\nI am trying to motivate for an alternate API. Specifically, an API\nwhich allows any potential adopter of Postgres and Cluster File\nEncryption to adopt them without having to accept any particular\napproach to key management, key derivation, wrapping, validation, etc.\nA passphrase key-wrapper with validation will probably be very useful\nto a lot of people, but making it mandatory and requiring twists and\nturns to integrate with already-established security infrastructure\nsounds like a barrier to adoption.\n\n> ...It would be\n> similar to the cluster_passphrase_command, but it would be simple in\n> some ways, but complex in others since we need at least two DEKs, and\n> key rotation might be very risky since there isn't validation in the\n> server.\n>\n\nI guess that the risk you're talking about here is encryption with a\nbogus key and bricking the data? In a world where the wrapped keys are\nopaque to the application, a key would be validated through a\nroundtrip: request the unwrapping of the key, encrypt a known string,\nrequest the unwrapping again, decrypt the known string, compare. If\nany of those steps fail, don't use the wrapped key provided.\nValidating that the stored keys have not been fiddled/damaged is the\nKMS/HSM's responsibility.\n\n>\n>... 
I don't think this can be accomplished by a contrib module, but\n> would actually be easy to implement, but perhaps hard to document\n> because the user API might be tricky.\n>\n\nIf integration through a pipeline isn't good enough (it would be a\npain for the passphrase, with multiple prompts), what else do you see\nan API having to provide?\n\n>\n> I think my big question is about your sentence, \"A key feature of these\n> key management approaches is that the application handling the\n> encrypted data doesn't get the KEK, the HSM/KSM passes the DEK back\n> after unwrapping it.\" It is less secure for the HSM to return a KEK\n> rather than a DEK? I can't see why it would be. The application never\n> sees the KEK used to wrap the DEK it has stored in the file system,\n> though that DEK is actually used as a passphrase by Postgres. This is\n> the same with the Yubikey --- Postgres never sees the private key used\n> to unlock what it locked with the Yubikey public key.\n>\n\nThe protocols and practices are designed for a lot of DEKs and small\nnumber of KEK's. So the compromise of a KEK would, potentially, lead\nto compromise of thousands of DEKs. In this particular case with 2\nDEKs wrapped by one KEK, it doesn't sound like much of a difference, I\nagree. From an acceptance and adoption point of view, I'm just\ninclined to use the security ecosystem in an established and\nunderstood way.\n\nRegards\nAlastair\n\n\n",
"msg_date": "Wed, 16 Dec 2020 18:07:26 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
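The validation both sides keep referring to (detecting a wrong passphrase or KEK before it is ever used to encrypt data) is commonly implemented by storing an HMAC tag, computed under the KEK, next to the wrapped key. A hypothetical standard-library sketch of that idea, not the patch's actual on-disk format (`derive_kek` and `make_validation_tag` are invented names, and the PBKDF2 parameters are arbitrary):

```python
import hashlib
import hmac
import secrets

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # Passphrase -> KEK via PBKDF2, as with a cluster_passphrase_command
    # whose output feeds a key derivation function.
    return hashlib.pbkdf2_hmac("sha512", passphrase, salt, 100_000, dklen=32)

def make_validation_tag(kek: bytes, dek: bytes) -> bytes:
    # Tag stored alongside the wrapped DEK; only the right KEK reproduces it.
    return hmac.new(kek, dek, hashlib.sha512).digest()

salt = secrets.token_bytes(16)
dek = secrets.token_bytes(32)
kek = derive_kek(b"correct horse", salt)
tag = make_validation_tag(kek, dek)

# The correct passphrase validates; a wrong one is rejected before use,
# so nothing ever gets encrypted with a bogus key.
assert hmac.compare_digest(tag, make_validation_tag(derive_kek(b"correct horse", salt), dek))
assert not hmac.compare_digest(tag, make_validation_tag(derive_kek(b"wrong", salt), dek))
```

This is the property Bruce is calling "validation in the server": the check fails closed, without any round trip to an external system.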
{
"msg_contents": "On Wed, Dec 16, 2020 at 06:07:26PM +0000, Alastair Turner wrote:\n> Hi Bruce\n> \n> On Wed, 16 Dec 2020 at 00:12, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> ...\n> >\n> > The second approach is to make a new API for what you want....\n> \n> I am trying to motivate for an alternate API. Specifically, an API\n> which allows any potential adopter of Postgres and Cluster File\n> Encryption to adopt them without having to accept any particular\n> approach to key management, key derivation, wrapping, validation, etc.\n> A passphrase key-wrapper with validation will probably be very useful\n> to a lot of people, but making it mandatory and requiring twists and\n> turns to integrate with already-established security infrastructure\n> sounds like a barrier to adoption.\n\nAttached is a script that uses the AWS Secrets Manager, and it does key\nrotation with the new pg_altercpass tool too, just like all the other\nmethods.\n\nI am not inclined to add any more APIs unless there is clear value for\nit, and I am not seeing that yet.\n\n> > ...It would be\n> > similar to the cluster_passphrase_command, but it would be simple in\n> > some ways, but complex in others since we need at least two DEKs, and\n> > key rotation might be very risky since there isn't validation in the\n> > server.\n> >\n> \n> I guess that the risk you're talking about here is encryption with a\n> bogus key and bricking the data? In a world where the wrapped keys are\n> opaque to the application, a key would be validated through a\n> roundtrip: request the unwrapping of the key, encrypt a known string,\n> request the unwrapping again, decrypt the known string, compare. If\n> any of those steps fail, don't use the wrapped key provided.\n> Validating that the stored keys have not been fiddled/damaged is the\n> KMS/HSM's responsibility.\n\nOK, but we already have validation.\n\n> >... 
I don't think this can be accomplished by a contrib module, but\n> > would actually be easy to implement, but perhaps hard to document\n> > because the user API might be tricky.\n> >\n> \n> If integration through a pipeline isn't good enough (it would be a\n> pain for the passphrase, with multiple prompts), what else do you see\n> an API having to provide?\n\nThe big problem is that at bootstrap time you have to call the command\nat a specific time, and I don't see how that could be done via contrib.\nAlso, I am trying to see value in offering another API. We don't need\nto serve every need.\n\n> > I think my big question is about your sentence, \"A key feature of these\n> > key management approaches is that the application handling the\n> > encrypted data doesn't get the KEK, the HSM/KSM passes the DEK back\n> > after unwrapping it.\" It is less secure for the HSM to return a KEK\n> > rather than a DEK? I can't see why it would be. The application never\n> > sees the KEK used to wrap the DEK it has stored in the file system,\n> > though that DEK is actually used as a passphrase by Postgres. This is\n> > the same with the Yubikey --- Postgres never sees the private key used\n> > to unlock what it locked with the Yubikey public key.\n> \n> The protocols and practices are designed for a lot of DEKs and small\n> number of KEK's. So the compromise of a KEK would, potentially, lead\n> to compromise of thousands of DEKs. In this particular case with 2\n> DEKs wrapped by one KEK, it doesn't sound like much of a difference, I\n> agree. From an acceptance and adoption point of view, I'm just\n> inclined to use the security ecosystem in an established and\n> understood way.\n\nRight, if there were many DEKs I can see your point. Using many DEKs in\na database is a big problem since so many parts are interrelated. We\nlooked at per-user or per-tablespace DEKs, but found it unworkable. 
We\nuse two DEKs so we can fail over to a standby for DEK rotation purposes.\n\nI think for your purposes, your KMS DEK ends up being a KEK for\nPostgres. I am guessing most applications don't have the validation and\nkey rotation needs Postgres has, so a DEK is fine, but for Postgres,\nbecause we already support four different authentication methods\nvia a single command, we have those features already, so getting a DEK\nfrom a KMS that we treat as a KEK seems natural to me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Wed, 16 Dec 2020 13:42:57 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Alastair Turner (minion@decodable.me) wrote:\n> On Wed, 16 Dec 2020 at 00:12, Bruce Momjian <bruce@momjian.us> wrote:\n> > The second approach is to make a new API for what you want....\n> \n> I am trying to motivate for an alternate API. Specifically, an API\n> which allows any potential adopter of Postgres and Cluster File\n> Encryption to adopt them without having to accept any particular\n> approach to key management, key derivation, wrapping, validation, etc.\n> A passphrase key-wrapper with validation will probably be very useful\n> to a lot of people, but making it mandatory and requiring twists and\n> turns to integrate with already-established security infrastructure\n> sounds like a barrier to adoption.\n\nYeah, I mentioned this in one of the other threads where we were\ndiscussing KEKs and DEKs and such. Forcing one solution for working\nwith the KEK/DEK isn't really ideal.\n\nThat said, maybe there's an option to have the user indicate to PG if\nthey're passing in a passphrase or a DEK, but we otherwise more-or-less\nkeep things as they are where the DEK that the user provides actually\ngoes to encrypting the sub-keys that we use for tables vs. WAL..? That\navoids the complication of having to have an API that somehow provides\nmore than one key, while also using the primary DEK key as-is from the\nkey management service and the KEK never being seen on the system where\nPG is running.\n\n> > ...It would be\n> > similar to the cluster_passphrase_command, but it would be simple in\n> > some ways, but complex in others since we need at least two DEKs, and\n> > key rotation might be very risky since there isn't validation in the\n> > server.\n> \n> I guess that the risk you're talking about here is encryption with a\n> bogus key and bricking the data? 
In a world where the wrapped keys are\n> opaque to the application, a key would be validated through a\n> roundtrip: request the unwrapping of the key, encrypt a known string,\n> request the unwrapping again, decrypt the known string, compare. If\n> any of those steps fail, don't use the wrapped key provided.\n> Validating that the stored keys have not been fiddled/damaged is the\n> KMS/HSM's responsibility.\n\nI'm not really sure what the validation concern being raised here is,\nbut I can understand Bruce's point about having to set up an API that\nallows us to ask for two distinct DEK's- but I'd think that keeping the\ntwo keys we have today as sub-keys addresses that as I outline above.\n\n> >... I don't think this can be accomplished by a contrib module, but\n> > would actually be easy to implement, but perhaps hard to document\n> > because the user API might be tricky.\n> \n> If integration through a pipeline isn't good enough (it would be a\n> pain for the passphrase, with multiple prompts), what else do you see\n> an API having to provide?\n\nI certainly agree that it'd be good to have a way to get an encrypted\ncluster set up and running which doesn't involve actual prompts.\n\n> > I think my big question is about your sentence, \"A key feature of these\n> > key management approaches is that the application handling the\n> > encrypted data doesn't get the KEK, the HSM/KSM passes the DEK back\n> > after unwrapping it.\" It is less secure for the HSM to return a KEK\n> > rather than a DEK? I can't see why it would be. The application never\n> > sees the KEK used to wrap the DEK it has stored in the file system,\n> > though that DEK is actually used as a passphrase by Postgres. This is\n> > the same with the Yubikey --- Postgres never sees the private key used\n> > to unlock what it locked with the Yubikey public key.\n> \n> The protocols and practices are designed for a lot of DEKs and small\n> number of KEK's. 
So the compromise of a KEK would, potentially, lead\n> to compromise of thousands of DEKs. In this particular case with 2\n> DEKs wrapped by one KEK, it doesn't sound like much of a difference, I\n> agree. From an acceptance and adoption point of view, I'm just\n> inclined to use the security ecosystem in an established and\n> understood way.\n\nI'm not quite following this- I agree that we don't want PG, or anything\nreally, to see the private key that's on the Yubikey, as that wouldn't\nbe good- instead, we should be able to ask for the Yubikey to decrypt\nthe DEK for us and then use that, which I had thought was happening but\nit's not entirely clear from the above discussion (and, admittedly, I've\nnot tried using the patch with my yubikey yet, but I do plan to...).\n\nAre we sure we're talking about the same thing here..? That's really\nthe question I'm asking myself.\n\nThere's an entirely independent discussion to be had about figuring out\na way to actually off-load *entirely* the encryption/decryption of data\nto the linux crypto system or hardware devices, but unless someone can\nstep up and write code for those today, I'd suggest that we table that\neffort until after we get this initial capability of having TDE with PG\ndoing all of the encryption/decryption.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 16 Dec 2020 16:32:00 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 10:23:23AM +0900, Michael Paquier wrote:\n> On Tue, Dec 15, 2020 at 04:02:12PM -0500, Bruce Momjian wrote:\n> > I thought this was going to be a huge job, but once I looked at it, it\n> > was clear exactly what you were saying. Comparing cryptohash.c and\n> > cryptohash_openssl.c I saw exactly what you wanted, and I think I have\n> > completed it in the attached patch. cryptohash.c implemented the hash\n> > in C code if OpenSSL is not present --- I assume you didn't want me to\n> > do that, but rather to split the API so it was easy to add another\n> > implementation.\n> \n> Hmm. I don't think that this is right. First, this still mixes\n> ciphers and HMAC.\n\nI looked at the code. The HMAC function body is just one function call\nto OpenSSL. Do you want it moved to cryptohash.c and\ncryptohash_openssl.c? To a new C file?\n\n> Second, it is only possible here to use HMAC with\n> SHA512 but we want to be able to use SHA256 as well with SCRAM (per se\n> the scram_HMAC_* business in scram-common.c). Third, this lacks a\n\nI looked at the SCRAM HMAC code. It looks complex, but I assume ideally\nthat would be moved into a separate C file and used by cluster file\nencryption and SCRAM, right? I need help to do that because I am\nunclear how much of the SCRAM hash code is specific to SCRAM.\n\n> fallback implementation. Finally, pgcrypto is not touched, but we\n\nI have a fallback implementation --- it fails? ;-) Did you want me to\ninclude an AES implementation?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 16 Dec 2020 17:04:12 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
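Bruce's point above that "the HMAC function body is just one function call to OpenSSL", together with Michael's requirement that a non-OpenSSL fallback return identical results, follows from HMAC being fully specified by RFC 2104: any correct implementation built from the bare hash primitive must agree byte-for-byte with the library routine. A minimal Python sketch of that property (illustrative only; the actual PostgreSQL work discussed here is in C):

```python
import hashlib
import hmac

def hmac_sha512_fallback(key: bytes, msg: bytes) -> bytes:
    # Textbook RFC 2104 construction built only from the bare hash,
    # standing in for a non-OpenSSL fallback implementation.
    block_size = 128  # SHA-512 processes 128-byte blocks
    if len(key) > block_size:
        key = hashlib.sha512(key).digest()
    key = key.ljust(block_size, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha512(ipad + msg).digest()
    return hashlib.sha512(opad + inner).digest()

if __name__ == "__main__":
    key, msg = b"cluster-kek", b"wrapped-data-key"
    lib = hmac.new(key, msg, hashlib.sha512).digest()
    # The library HMAC (backed by OpenSSL where available) and the
    # fallback must agree bit-for-bit, or wrapped keys become unreadable.
    assert lib == hmac_sha512_fallback(key, msg)
    print("ok")
```

This is exactly why a fallback is safe to add here but risky for AES: HMAC's output is checkable against the standard construction, whereas a hand-rolled cipher has far more ways to diverge silently.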
{
"msg_contents": "On Wed, Dec 16, 2020 at 04:32:00PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Alastair Turner (minion@decodable.me) wrote:\n> > On Wed, 16 Dec 2020 at 00:12, Bruce Momjian <bruce@momjian.us> wrote:\n> > > The second approach is to make a new API for what you want....\n> > \n> > I am trying to motivate for an alternate API. Specifically, an API\n> > which allows any potential adopter of Postgres and Cluster File\n> > Encryption to adopt them without having to accept any particular\n> > approach to key management, key derivation, wrapping, validation, etc.\n> > A passphrase key-wrapper with validation will probably be very useful\n> > to a lot of people, but making it mandatory and requiring twists and\n> > turns to integrate with already-established security infrastructure\n> > sounds like a barrier to adoption.\n> \n> Yeah, I mentioned this in one of the other threads where we were\n> discussing KEKs and DEKs and such. Forcing one solution for working\n> with the KEK/DEK isn't really ideal.\n> \n> That said, maybe there's an option to have the user indicate to PG if\n> they're passing in a passphrase or a DEK, but we otherwise more-or-less\n> keep things as they are where the DEK that the user provides actually\n> goes to encrypting the sub-keys that we use for tables vs. WAL..? That\n\nYes, that is what I suggested in an earlier email.\n\n> avoids the complication of having to have an API that somehow provides\n> more than one key, while also using the primary DEK key as-is from the\n> key management service and the KEK never being seen on the system where\n> PG is running.\n\nHow is that different from what we have now? 
Did you read my reply today\nto Alastair with the AWS script?\n\n> > > ...It would be\n> > > similar to the cluster_passphrase_command, but it would be simple in\n> > > some ways, but complex in others since we need at least two DEKs, and\n> > > key rotation might be very risky since there isn't validation in the\n> > > server.\n> > \n> > I guess that the risk you're talking about here is encryption with a\n> > bogus key and bricking the data? In a world where the wrapped keys are\n> > opaque to the application, a key would be validated through a\n> > roundtrip: request the unwrapping of the key, encrypt a known string,\n> > request the unwrapping again, decrypt the known string, compare. If\n> > any of those steps fail, don't use the wrapped key provided.\n> > Validating that the stored keys have not been fiddled/damaged is the\n> > KMS/HSM's responsibility.\n> \n> I'm not really sure what the validation concern being raised here is,\n> but I can understand Bruce's point about having to set up an API that\n> allows us to ask for two distinct DEK's- but I'd think that keeping the\n> two keys we have today as sub-keys addresses that as I outline above.\n\nRight, how is a passphrase different than subkeys?\n\n> > >... 
I don't think this can be accomplished by a contrib module, but\n> > > would actually be easy to implement, but perhaps hard to document\n> > > because the user API might be tricky.\n> > \n> > If integration through a pipeline isn't good enough (it would be a\n> > pain for the passphrase, with multiple prompts), what else do you see\n> > an API having to provide?\n> \n> I certainly agree that it'd be good to have a way to get an encrypted\n> cluster set up and running which doesn't involve actual prompts.\n\nUh, two of my four scripts do not require prompts.\n\n> > > I think my big question is about your sentence, \"A key feature of these\n> > > key management approaches is that the application handling the\n> > > encrypted data doesn't get the KEK, the HSM/KSM passes the DEK back\n> > > after unwrapping it.\" It is less secure for the HSM to return a KEK\n> > > rather than a DEK? I can't see why it would be. The application never\n> > > sees the KEK used to wrap the DEK it has stored in the file system,\n> > > though that DEK is actually used as a passphrase by Postgres. This is\n> > > the same with the Yubikey --- Postgres never sees the private key used\n> > > to unlock what it locked with the Yubikey public key.\n> > \n> > The protocols and practices are designed for a lot of DEKs and small\n> > number of KEK's. So the compromise of a KEK would, potentially, lead\n> > to compromise of thousands of DEKs. In this particular case with 2\n> > DEKs wrapped by one KEK, it doesn't sound like much of a difference, I\n> > agree. 
From an acceptance and adoption point of view, I'm just\n> > inclined to use the security ecosystem in an established and\n> > understood way.\n> \n> I'm not quite following this- I agree that we don't want PG, or anything\n> really, to see the private key that's on the Yubikey, as that wouldn't\n> be good- instead, we should be able to ask for the Yubikey to decrypt\n> the DEK for us and then use that, which I had thought was happening but\n> it's not entirely clear from the above discussion (and, admittedly, I've\n> not tried using the patch with my yubikey yet, but I do plan to...).\n\nWhat it does is to encrypt the passphrase on boot, store it on disk, and\nthen pass the file to the Yubikey to be decrypted and the passphrase\npassed to the server.\n\n> Are we sure we're talking about the same thing here..? That's really\n> the question I'm asking myself.\n\nSeems so.\n> \n> There's an entirely independent discussion to be had about figuring out\n> a way to actually off-load *entirely* the encryption/decryption of data\n> to the linux crypto system or hardware devices, but unless someone can\n> step up and write code for those today, I'd suggest that we table that\n> effort until after we get this initial capability of having TDE with PG\n> doing all of the encryption/decryption.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 16 Dec 2020 17:09:45 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 01:42:57PM -0500, Bruce Momjian wrote:\n> On Wed, Dec 16, 2020 at 06:07:26PM +0000, Alastair Turner wrote:\n> > Hi Bruce\n> > \n> > On Wed, 16 Dec 2020 at 00:12, Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > ...\n> > >\n> > > The second approach is to make a new API for what you want....\n> > \n> > I am trying to motivate for an alternate API. Specifically, an API\n> > which allows any potential adopter of Postgres and Cluster File\n> > Encryption to adopt them without having to accept any particular\n> > approach to key management, key derivation, wrapping, validation, etc.\n> > A passphrase key-wrapper with validation will probably be very useful\n> > to a lot of people, but making it mandatory and requiring twists and\n> > turns to integrate with already-established security infrastructure\n> > sounds like a barrier to adoption.\n> \n> Attached is a script that uses the AWS Secrets Manager, and it does key\n> rotation with the new pg_altercpass tool too, just like all the other\n> methods.\n\nAttached is an improved script that does not pass the secret on the\ncommand line.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Wed, 16 Dec 2020 17:18:12 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
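The improvement Bruce describes above, a script "that does not pass the secret on the command line", matters because argv is visible to every local user via ps(1), while a pipe is not. A hedged sketch of the general technique in Python for brevity (the actual attached scripts are shell; `cat` below stands in for a hypothetical unwrap tool that reads a secret on stdin and writes a result on stdout):

```python
import subprocess

def run_with_secret(cmd, secret: bytes) -> bytes:
    # Feed the secret on stdin instead of argv, so it never appears in
    # the process table or in shell history.
    proc = subprocess.run(cmd, input=secret, stdout=subprocess.PIPE, check=True)
    return proc.stdout

if __name__ == "__main__":
    # `cat` is a placeholder for the real tool; it simply echoes stdin.
    print(run_with_secret(["cat"], b"top-secret-passphrase").decode())
```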
{
"msg_contents": "On Wed, 16 Dec 2020 at 21:32, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Alastair Turner (minion@decodable.me) wrote:\n> > On Wed, 16 Dec 2020 at 00:12, Bruce Momjian <bruce@momjian.us> wrote:\n> > > The second approach is to make a new API for what you want....\n> >\n> > I am trying to motivate for an alternate API. Specifically, an API\n> > which allows any potential adopter of Postgres and Cluster File\n> > Encryption to adopt them without having to accept any particular\n> > approach to key management, key derivation, wrapping, validation, etc.\n> > A passphrase key-wrapper with validation will probably be very useful\n> > to a lot of people, but making it mandatory and requiring twists and\n> > turns to integrate with already-established security infrastructure\n> > sounds like a barrier to adoption.\n>\n> Yeah, I mentioned this in one of the other threads where we were\n> discussing KEKs and DEKs and such. Forcing one solution for working\n> with the KEK/DEK isn't really ideal.\n>\n> That said, maybe there's an option to have the user indicate to PG if\n> they're passing in a passphrase or a DEK, ...\n\nAn option very much like the current cluster_passphrase_command,\nbut one that takes an input, sounds to me like it would meet that\nrequirement. The challenge sent to the command could be just a text\nprompt, or it could be the wrapped DEK. The selection of the command\nwould indicate what the user was expected to pass. The return would be\na DEK.\n\n> ...but we otherwise more-or-less\n> keep things as they are where the DEK that the user provides actually\n> goes to encrypting the sub-keys that we use for tables vs. WAL..? ...\n\nThe idea of subkeys sounds great, if it can merge the two potential\ninteractions into one and not complicate rotating the two parts\nseparately. But having Postgres encrypting them leaves us open to\nmany of the same issues. 
That still feels like managing the keys, so a\ncorporate who have strong opinions on how that should be done will\nstill ask all sorts of pointy questions, complicating adoption.\n\n> ...That\n> avoids the complication of having to have an API that somehow provides\n> more than one key, while also using the primary DEK key as-is from the\n> key management service and the KEK never being seen on the system where\n> PG is running.\n>\n\nOther than calling out (and therefore potentially prompting) twice,\nwhat do you see as the complications of having more than one key?\n\n> > > ...It would be\n> > > similar to the cluster_passphrase_command, but it would be simple in\n> > > some ways, but complex in others since we need at least two DEKs, and\n> > > key rotation might be very risky since there isn't validation in the\n> > > server.\n> >\n...\n>\n> I'm not quite following this- I agree that we don't want PG, or anything\n> really, to see the private key that's on the Yubikey, as that wouldn't\n> be good- instead, we should be able to ask for the Yubikey to decrypt\n> the DEK for us and then use that, which I had thought was happening but\n> it's not entirely clear from the above discussion (and, admittedly, I've\n> not tried using the patch with my yubikey yet, but I do plan to...).\n>\n> Are we sure we're talking about the same thing here..? 
That's really\n> the question I'm asking myself.\n>\n\nI'd describe what the current patch does as using YubiKey to encrypt\nand then decrypt an intermediate secret, which is then used to\ngenerate/derive a KEK, which is then used to unwrap the stored,\nwrapped DEK.\n\n> There's an entirely independent discussion to be had about figuring out\n> a way to actually off-load *entirely* the encryption/decryption of data\n> to the linux crypto system or hardware devices, but unless someone can\n> step up and write code for those today, I'd suggest that we table that\n> effort until after we get this initial capability of having TDE with PG\n> doing all of the encryption/decryption.\n\nI'm hopeful that the work on abstracting OpenSSL, nsstls, etc is going\nto help in this direction.\n\nRegards\n\nAlastair\n\n\n",
"msg_date": "Wed, 16 Dec 2020 22:24:31 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Alastair Turner (minion@decodable.me) wrote:\n> On Wed, 16 Dec 2020 at 21:32, Stephen Frost <sfrost@snowman.net> wrote:\n> > * Alastair Turner (minion@decodable.me) wrote:\n> > > On Wed, 16 Dec 2020 at 00:12, Bruce Momjian <bruce@momjian.us> wrote:\n> > > > The second approach is to make a new API for what you want....\n> > >\n> > > I am trying to motivate for an alternate API. Specifically, an API\n> > > which allows any potential adopter of Postgres and Cluster File\n> > > Encryption to adopt them without having to accept any particular\n> > > approach to key management, key derivation, wrapping, validation, etc.\n> > > A passphrase key-wrapper with validation will probably be very useful\n> > > to a lot of people, but making it mandatory and requiring twists and\n> > > turns to integrate with already-established security infrastructure\n> > > sounds like a barrier to adoption.\n> >\n> > Yeah, I mentioned this in one of the other threads where we were\n> > discussing KEKs and DEKs and such. Forcing one solution for working\n> > with the KEK/DEK isn't really ideal.\n> >\n> > That said, maybe there's an option to have the user indicate to PG if\n> > they're passing in a passphrase or a DEK, ...\n> \n> Some options very much like the current cluster_passphrase_command,\n> but that takes an input sounds to me like it would meet that\n> requirement. The challenge sent to the command could be just a text\n> prompt, or it could be the wrapped DEK. The selection of the command\n> would indicate what the user was expected to pass. The return would be\n> a DEK.\n\nIf I'm following, you're suggesting something like:\n\ncluster_passphrase_command = 'aws get %q'\n\nand then '%q' gets replaced with \"Please provide the WAL DEK: \", or\nsomething like that? Prompting the user for each key? 
Not sure how\nwell that'd work if we want to automate everything though.\n\nIf you have specific ideas, it'd really be helpful to give examples of\nwhat you're thinking.\n\n> > ...but we otherwise more-or-less\n> > keep things as they are where the DEK that the user provides actually\n> > goes to encrypting the sub-keys that we use for tables vs. WAL..? ...\n> \n> The idea of subkeys sounds great, if it can merge the two potential\n> interactions into one and not complicate rotating the two parts\n> separately. But having Postgres encrypting them, leaves us open to\n> many of the same issues. That still feels like managing the keys, so a\n> corporate who have strong opinions on how that should be done will\n> still ask all sorts of pointy questions, complicating adoption.\n\nThis really needs more detail- what exactly is the concern? What are\nthe 'pointy questions'? What's complicated about using sub-keys? One\nof the great advantages of using sub-keys is that you can rotate the KEK\n(or whatever is passed to PG) without having to actually go through the\nentire system and decrypt/re-encrypt everything. There's, of course,\nthe downside that then you don't get the keys rotated as often as you\nmight like to, but I am, at least, hopeful that we might be able to find\na way to do that in the future.\n\n> > ...That\n> > avoids the complication of having to have an API that somehow provides\n> > more than one key, while also using the primary DEK key as-is from the\n> > key management service and the KEK never being seen on the system where\n> > PG is running.\n> \n> Other than calling out (and therefore potentially prompting) twice,\n> what do you see as the complications of having more than one key?\n\nMainly just a concern about the API and about what happens if, say, we\ndecide that we want to have another sub-key, for example. 
If we're\nhandling them then there's really no issue- we just add another key in,\nbut if that's not happening then it's going to mean changes for\nadministrators. If there's a good justification for it, then perhaps\nthat's alright, but hand waving at what the issue is doesn't really\nhelp.\n\n> > > > ...It would be\n> > > > similar to the cluster_passphrase_command, but it would be simple in\n> > > > some ways, but complex in others since we need at least two DEKs, and\n> > > > key rotation might be very risky since there isn't validation in the\n> > > > server.\n> > >\n> ...\n> >\n> > I'm not quite following this- I agree that we don't want PG, or anything\n> > really, to see the private key that's on the Yubikey, as that wouldn't\n> > be good- instead, we should be able to ask for the Yubikey to decrypt\n> > the DEK for us and then use that, which I had thought was happening but\n> > it's not entirely clear from the above discussion (and, admittedly, I've\n> > not tried using the patch with my yubikey yet, but I do plan to...).\n> >\n> > Are we sure we're talking about the same thing here..? That's really\n> > the question I'm asking myself.\n> \n> I'd describe what the current patch does as using YubiKey to encrypt\n> and then decrypt an intermediate secret, which is then used to\n> generate/derive a KEK, which is then used to unwrap the stored,\n> wrapped DEK.\n\nThis seems like a crux of at least one concern- that the current patch\nis deriving the actual KEK from the passphrase and not just taking the\nprovided value (at least, that's what it looks like from a *very* quick\nlook into it), and that's the part that I was suggesting that we might\nadd an option for- to indicate if the cluster passphrase command is\nactually returning a passphrase which should be used to derive a key, or\nif it's returning a key directly to be used. 
That does seem to be a\nmaterial difference to me and one which we should care about.\n\n> > There's an entirely independent discussion to be had about figuring out\n> > a way to actually off-load *entirely* the encryption/decryption of data\n> > to the linux crypto system or hardware devices, but unless someone can\n> > step up and write code for those today, I'd suggest that we table that\n> > effort until after we get this initial capability of having TDE with PG\n> > doing all of the encryption/decryption.\n> \n> I'm hopeful that the work on abstracting OpenSSL, nsstls, etc is going\n> to help in this direct.\n\nYes, I agree with that general idea but it's a 'next step' kind of\nthing, not something we need to try and bake in today.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 16 Dec 2020 17:43:31 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
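The distinction Stephen draws above — a command returning a passphrase from which a key should be derived, versus returning a key to be used directly — can be sketched as follows. This is a Python illustration only: the function name, salt handling, and KDF parameters are invented for the demo and are not taken from the patch, which does this work in C.

```python
import hashlib

KEK_LEN = 32  # a 256-bit key-encryption key

def kek_from_command_output(output: bytes, is_passphrase: bool,
                            salt: bytes = b"demo-salt") -> bytes:
    # Two interpretations of the command's output:
    # a passphrase (stretch it through a KDF) or a ready-to-use key.
    if is_passphrase:
        # Demo KDF parameters; a real system would persist the salt and
        # iteration count alongside the wrapped keys.
        return hashlib.pbkdf2_hmac("sha512", output, salt, 100_000,
                                   dklen=KEK_LEN)
    if len(output) != KEK_LEN:
        raise ValueError("direct key must be %d bytes" % KEK_LEN)
    return output  # trust the KMS: use the key verbatim, no derivation

if __name__ == "__main__":
    derived = kek_from_command_output(b"correct horse battery staple", True)
    direct = kek_from_command_output(b"\x01" * KEK_LEN, False)
    assert len(derived) == KEK_LEN and direct == b"\x01" * KEK_LEN
    print("ok")
```

The flag corresponds to the configuration option Stephen suggests: with it, a KMS that already produces high-entropy keys is not forced through a passphrase-derivation step it does not need.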
{
"msg_contents": "On Wed, Dec 16, 2020 at 05:04:12PM -0500, Bruce Momjian wrote:\n> On Wed, Dec 16, 2020 at 10:23:23AM +0900, Michael Paquier wrote:\n>> Hmm. I don't think that this is right. First, this still mixes\n>> ciphers and HMAC.\n> \n> I looked at the code. The HMAC function body is just one function call\n> to OpenSSL. Do you want it moved to cryptohash.c and\n> cryptohash_openssl.c? To a new C file?\n> \n>> Second, it is only possible here to use HMAC with\n>> SHA512 but we want to be able to use SHA256 as well with SCRAM (per se\n>> the scram_HMAC_* business in scram-common.c). Third, this lacks a\n> \n> I looked at the SCRAM HMAC code. It looks complex, but I assume ideally\n> that would be moved into a separate C file and used by cluster file\n> encryption and SCRAM, right? I need help to do that because I am\n> unclear how much of the SCRAM hash code is specific to SCRAM.\n\nI have gone though the same exercise for HMAC, with more success it\nseems:\nhttps://www.postgresql.org/message-id/X9m0nkEJEzIPXjeZ@paquier.xyz\n\nThis includes a fallback implementation that returns the same results\nas OpenSSL, and the same results as samples in wikipedia or such\nsources. The set APIs in this refactoring could be reused here and\nplugged into any SSL libraries, and my patch has changed SCRAM to\nreuse that. With this, it is also possible to remove all the HMAC\ncode from pgcrypto but this cannot happen without SHA1 support in\ncryptohash.c, another patch I have submitted -hackers recently. So I\nthink that it is a win as a whole.\n\n>> fallback implementation. Finally, pgcrypto is not touched, but we\n> \n> I have a fallback implemention --- it fails? ;-) Did you want me to\n> include an AES implementation?\n\nNo idea about this one yet. There are no direct users of AES except\npgcrypto in core. 
One thing that would be good IMO is to properly\nsplit the patch of this thread into individual parts that could be\nreviewed separately using for example \"git format-patch\" to generate\npatch series. What's presented is a mixed bag, so that's harder to\nlook at it and consider how this stuff should work, and if there are\npieces that should be designed better or not.\n--\nMichael",
"msg_date": "Thu, 17 Dec 2020 08:26:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Wed, 16 Dec 2020 at 22:43, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n...\n>\n> If I'm following, you're suggesting something like:\n>\n> cluster_passphrase_command = 'aws get %q'\n>\n> and then '%q' gets replaced with \"Please provide the WAL DEK: \", or\n> something like that? Prompting the user for each key? Not sure how\n> well that's work if want to automate everything though.\n>\n> If you have specific ideas, it'd really be helpful to give examples of\n> what you're thinking.\n\nI can think of three specific ideas off the top of my head: the\npassphrase key wrapper, the secret store and the cloud/HW KMS.\n\nSince the examples expand the purpose of cluster_passphrase_command,\nlet's call it cluster_key_challenge_command in the examples.\n\nStarting with the passphrase key wrapper, since it's what's in place now.\n\n - cluster_key_challenge_command = 'password_key_wrapper %q'\n - Supplied on stdin to the process is the wrapped DEK (either a\ncombined key for db files and WAL or one for each, on separate calls)\n - %q is \"Please provide WAL key wrapper password\" or just \"...provide\nkey wrapper password\"\n - Returned on stdout is the unwrapped DEK\n\nFor a secret store\n\n - cluster_key_challenge_command = 'vault_key_fetch'\n - Supplied on stdin to the process is the secret's identifier (pg_dek_xxUUIDxx)\n - Returned on stdout is the DEK, which may never have been wrapped,\ndepending on implementation\n - Access control to the secret store is managed through the challenge\ncommand's own config, certs, HBA, ...\n\nFor an HSM or cloud KMS\n\n - cluster_key_challenge_command = 'gcp_kms_key_fetch'\n - Supplied on stdin to the process is the wrapped DEK (individual\nor combined)\n - Returned on stdout is the DEK (individual or combined)\n - Access control to the kms is managed through the challenge\ncommand's own config, certs, HBA, ...\n\nThe secret store and HSM/KMS options may be promptless, and therefore\namenable to 
automation, depending on the setup of those clients.\n\n>\n...\n>\n> > > ...That\n> > > avoids the complication of having to have an API that somehow provides\n> > > more than one key, while also using the primary DEK key as-is from the\n> > > key management service and the KEK never being seen on the system where\n> > > PG is running.\n> >\n> > Other than calling out (and therefore potentially prompting) twice,\n> > what do you see as the complications of having more than one key?\n>\n> Mainly just a concern about the API and about what happens if, say, we\n> decide that we want to have another sub-key, for example. If we're\n> handling them then there's really no issue- we just add another key in,\n> but if that's not happening then it's going to mean changes for\n> administrators. If there's a good justification for it, then perhaps\n> that's alright, but hand waving at what the issue is doesn't really\n> help.\n>\n\nSorry, I wasn't trying to hand wave it away. For automated\ninteractions, like big iron HSMs or cloud KMSs, the difference between\n2 operations and 10 operations to start a DB server won't matter. For\nan admin/operator having to type 10 passwords or get 10 clean\nthumbprint scans, it would be horrible. 
My underlying question was, is\nthat toil the only problem to be solved, or is there another reason to\nget into key combination, key splitting and the related issues which\nare less documented and less well understood than key wrapping.\n\n>\n...\n> >\n> > I'd describe what the current patch does as using YubiKey to encrypt\n> > and then decrypt an intermediate secret, which is then used to\n> > generate/derive a KEK, which is then used to unwrap the stored,\n> > wrapped DEK.\n>\n> This seems like a crux of at least one concern- that the current patch\n> is deriving the actual KEK from the passphrase and not just taking the\n> provided value (at least, that's what it looks like from a *very* quick\n> look into it), and that's the part that I was suggesting that we might\n> add an option for- to indicate if the cluster passphrase command is\n> actually returning a passphrase which should be used to derive a key, or\n> if it's returning a key directly to be used. That does seem to be a\n> material difference to me and one which we should care about.\n>\n\nYes. Caring about that is the reason I've been making a nuisance of myself.\n\n> > > There's an entirely independent discussion to be had about figuring out\n> > > a way to actually off-load *entirely* the encryption/decryption of data\n> > > to the linux crypto system or hardware devices, but unless someone can\n> > > step up and write code for those today, I'd suggest that we table that\n> > > effort until after we get this initial capability of having TDE with PG\n> > > doing all of the encryption/decryption.\n> >\n> > I'm hopeful that the work on abstracting OpenSSL, nsstls, etc is going\n> > to help in this direct.\n>\n> Yes, I agree with that general idea but it's a 'next step' kind of\n> thing, not something we need to try and bake in today.\n>\n\nAgreed.\n\nThanks\nAlastair\n\n\n",
"msg_date": "Thu, 17 Dec 2020 00:12:46 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
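The contract shared by all three of Alastair's `cluster_key_challenge_command` examples — the challenge arrives on the command's stdin, the plaintext DEK comes back on stdout — can be sketched as plumbing only. Python for illustration; `toy_unwrap` is a deliberately fake cipher (XOR is not encryption, it just makes the round trip visible), and `challenge_command` is a hypothetical name: a real implementation would call out to Vault, a cloud KMS, or an HSM here.

```python
KEK = bytes(range(32))  # toy key; a real tool would keep this in an HSM/KMS
                        # and never expose it to the calling process

def toy_unwrap(wrapped: bytes) -> bytes:
    # Placeholder for the KMS unwrap operation. XOR with a repeating key
    # is its own inverse, which makes round-trip testing trivial.
    return bytes(b ^ KEK[i % len(KEK)] for i, b in enumerate(wrapped))

def challenge_command(challenge: bytes) -> bytes:
    # Contract from the proposal: the server writes the challenge (here,
    # a wrapped DEK) to the command's stdin and reads the DEK from its
    # stdout; validation failures would be signalled via exit status.
    return toy_unwrap(challenge)

if __name__ == "__main__":
    wrapped = toy_unwrap(b"example-DEK")  # "wrap" with the same toy XOR
    assert challenge_command(wrapped) == b"example-DEK"
    print("ok")
```

The point of the sketch is that the server never learns how the DEK was produced: a passphrase wrapper, a secret store, and a cloud KMS all fit behind the same two-pipe interface.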
{
"msg_contents": "> On 16 Dec 2020, at 17:26, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Michael Paquier (michael@paquier.xyz) wrote:\n>> On Tue, Dec 15, 2020 at 02:09:09PM -0500, Stephen Frost wrote:\n>>> Yeah, looking at what's been done there seems like the right approach to\n>>> me as well, ideally we'd have one set of APIs that'll support all these\n>>> use cases (not unlike what curl does..).\n>> \n>> Ooh... This is interesting. What did curl do wrong here? It is\n>> always good to hear from such past experiences.\n> \n> Not sure that came across very well- curl did something right in terms\n> of their vTLS layer which allows for building curl with a number of\n> different SSL/TLS libraries without the core code having to know about\n> all the differences.\n\nThe vtls layer in libcurl is actually quite similar to what we have in libpq\nwith be-secure.c and be-secure-*.c for TLS and with what Michael has been doing\nlately with cryptohash.\n\nIn vtls library contexts are abstracted to the core code, with implementations\nsupplying a struct with a set of function pointers implementing functionality\n(this difference is due to libcurl supporting multiple TLS libraries compiled\nat the same time, something postgres IMO shouldn't do). We do give\nimplementations a bit more leeway with how feature complete they must be,\nmainly due to the wide variety of libraries supported (from OpenSSL to IBM\nGSKit and most ones in between). While basic it has served us quite well and\nwe have had first time contributors successfully come with a new TLS library as\na patch.\n\nWhat we have so far in the tree (an abstract *reasonably generic* API hiding\nall library context interactions) makes implementing support for a new TLS\nlibrary less daunting as no core code needs to be touched really. 
Having\nkicked the tyres on this a fair bit I really think we should maintain that\nseparation, it's complicated enough as it is given just how much code needs to\nbe written.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 17 Dec 2020 01:15:37 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 01:15:37AM +0100, Daniel Gustafsson wrote:\n> In vtls library contexts are abstracted to the core code, with implementations\n> supplying a struct with a set of function pointers implementing functionality\n> (this difference is due to libcurl supporting multiple TLS libraries compiled\n> at the same time, something postgres IMO shouldn't do). We do give\n> implementations a bit more leeway with how feature complete they must be,\n> mainly due to the wide variety of libraries supported (from OpenSSL to IBM\n> GSKit and most ones in between). While basic it has served us quite well and\n> we have had first time contributors successfully come with a new TLS library as\n> a patch.\n\nThis infrastructure has been chosen because curl requires to be able\nto use multiple types of libraries at run-time, right? I don't think\nwe need to get down to that for Postgres and keep things so as we are\nonly able to use one TLS library at the same time, the one compiled\nwith. This makes the protocol simpler. But perhaps I just lack\nambition and vision.\n--\nMichael",
"msg_date": "Thu, 17 Dec 2020 10:24:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
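Daniel's description of curl's vtls layer — each TLS implementation supplies a struct of function pointers, and core code only ever calls through that struct — together with Michael's preference for a single library chosen at compile time, can be sketched as follows. Python stands in for the C function-pointer struct; the names and the `sha256`-only interface are invented for the demo.

```python
import hashlib
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class CryptoBackend:
    # Analogue of a vtls-style struct of function pointers: core code
    # sees only this interface, never the library underneath.
    name: str
    sha256: Callable[[bytes], bytes]

# Two stand-in "libraries" exposing identical operations.
openssl_like = CryptoBackend("openssl", lambda d: hashlib.sha256(d).digest())
fallback = CryptoBackend("fallback", lambda d: hashlib.new("sha256", d).digest())

# Postgres-style: exactly one backend is selected at build time,
# modelled here as a single assignment, unlike curl's run-time choice.
ACTIVE = openssl_like

def digest(data: bytes) -> bytes:
    return ACTIVE.sha256(data)

if __name__ == "__main__":
    # Interchangeability requires every backend to produce identical
    # results for the same operation.
    assert openssl_like.sha256(b"abc") == fallback.sha256(b"abc")
    print(digest(b"abc").hex())
```

Fixing the backend at compile time keeps the dispatch trivial and the protocol simple, which is the trade-off Michael describes against curl's multi-library flexibility.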
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Wed, Dec 16, 2020 at 05:04:12PM -0500, Bruce Momjian wrote:\n> >> fallback implementation. Finally, pgcrypto is not touched, but we\n> > \n> > I have a fallback implemention --- it fails? ;-) Did you want me to\n> > include an AES implementation?\n> \n> No idea about this one yet. There are no direct users of AES except\n> pgcrypto in core. One thing that would be good IMO is to properly\n> split the patch of this thread into individual parts that could be\n> reviewed separately using for example \"git format-patch\" to generate\n> patch series. What's presented is a mixed bag, so that's harder to\n> look at it and consider how this stuff should work, and if there are\n> pieces that should be designed better or not.\n\nI don't think there's any need for us to implement a fallback\nimplementation of AES. I'm not entirely sure we need it for hashes\nbut since we've already got it...\n\nThanks,\n\nStephen",
"msg_date": "Thu, 17 Dec 2020 11:39:55 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 11:39:55AM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Michael Paquier (michael@paquier.xyz) wrote:\n> > On Wed, Dec 16, 2020 at 05:04:12PM -0500, Bruce Momjian wrote:\n> > >> fallback implementation. Finally, pgcrypto is not touched, but we\n> > > \n> > > I have a fallback implemention --- it fails? ;-) Did you want me to\n> > > include an AES implementation?\n> > \n> > No idea about this one yet. There are no direct users of AES except\n> > pgcrypto in core. One thing that would be good IMO is to properly\n> > split the patch of this thread into individual parts that could be\n> > reviewed separately using for example \"git format-patch\" to generate\n> > patch series. What's presented is a mixed bag, so that's harder to\n> > look at it and consider how this stuff should work, and if there are\n> > pieces that should be designed better or not.\n> \n> I don't think there's any need for us to implement a fallback\n> implementation of AES. I'm not entirely sure we need it for hashes\n> but since we've already got it...\n\nAgreed. I think there is serious risk we would do AES in a different\nway than OpenSSL, especially if I did it. ;-) We can add a native AES\none day if we want, but as stated by Michael Paquier, it has to be\ntested so we are sure it returns exactly the same values as OpenSSL.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 12:10:22 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 11:16:18PM -0500, Bruce Momjian wrote:\n> On Tue, Dec 15, 2020 at 10:36:56AM +0800, Neil Chen wrote:\n> > Since our implementation is not in contrib, I don't think we should put the\n> > script there. Maybe we can refer to postgresql.conf.sample?\n> \n> Uh, the script are 20-60 lines long --- I am attaching them to this\n> email. Plus, when we allow user prompting for the SSL passphrase, we\n> will have another script, or maybe three mor if people want to use a\n> Yubikey to unlock the SSL passphrase.\n\nHere is a run of all four authentication methods, and updated scripts. \nI have renamed Yubikey to PIV since the script should work with any\nPIV-enabled device, like a CAC.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Thu, 17 Dec 2020 14:02:37 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 12:10:22PM -0500, Bruce Momjian wrote:\n> On Thu, Dec 17, 2020 at 11:39:55AM -0500, Stephen Frost wrote:\n>> I don't think there's any need for us to implement a fallback\n>> implementation of AES. I'm not entirely sure we need it for hashes\n>> but since we've already got it...\n\nWe need fallback implementations for cryptohashes (MD5, SHA1/2) and\nHMAC because we have SCRAM authentication, pgcrypto and SQL functions\nthat should be able to work even when not building with any SSL\nlibraries. So that's here to stay to ensure compatibility with what\nwe do. All this stuff is already in the tree for ages, it was just\nnot shaped for a more centralized reuse.\n\n> Agreed. I think there is serious risk we would do AES in a different\n> way than OpenSSL, especially if I did it. ;-) We can add a native AES\n> one day if we want, but as stated by Michael Paquier, it has to be\n> tested so we are sure it returns exactly the same values as OpenSSL.\n\nI think that it would be good to put some generalization here, and\nlook at options that are offered by other SSL libraries, like libnss\nso as we don't finish with a design that restricts the use of a given\nfeature only to OpenSSL.\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 10:01:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 10:01:22AM +0900, Michael Paquier wrote:\n> On Thu, Dec 17, 2020 at 12:10:22PM -0500, Bruce Momjian wrote:\n> > On Thu, Dec 17, 2020 at 11:39:55AM -0500, Stephen Frost wrote:\n> >> I don't think there's any need for us to implement a fallback\n> >> implementation of AES. I'm not entirely sure we need it for hashes\n> >> but since we've already got it...\n> \n> We need fallback implementations for cryptohashes (MD5, SHA1/2) and\n> HMAC because we have SCRAM authentication, pgcrypto and SQL functions\n> that should be able to work even when not building with any SSL\n> libraries. So that's here to stay to ensure compatibility with what\n> we do. All this stuff is already in the tree for ages, it was just\n> not shaped for a more centralized reuse.\n\nOne question is whether we want to support cluster file encryption\nwithout OpenSSL --- right now we can't. I am wondering if we really\nneed the hardware acceleration of OpenSSL for AES so doing our own AES\nimplementation might not even make sense, performance-wise.\n\n> > Agreed. I think there is serious risk we would do AES in a different\n> > way than OpenSSL, especially if I did it. ;-) We can add a native AES\n> > one day if we want, but as stated by Michael Paquier, it has to be\n> > tested so we are sure it returns exactly the same values as OpenSSL.\n> \n> I think that it would be good to put some generalization here, and\n> look at options that are offered by other SSL libraries, like libnss\n> so as we don't finish with a design that restricts the use of a given\n> feature only to OpenSSL.\n\nUh, you mean the C API or the user API? 
I don't think we have any\nOpenSSL requirement at the user level, except that my examples use\ncommand-line openssl to generate the passphrase.\n\nI split apart my patch to create cipher{_openssl}.c and hmac{_openssl}.c\nso the single HMAC function is not in the cipher file anymore, attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Thu, 17 Dec 2020 22:14:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 3:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> Here is a run of all four authentication methods, and updated scripts.\n> I have renamed Yubiki to PIV since the script should work with anY\n> PIV-enabled deviced, like a CAC.\n>\n>\nThanks for attaching these patches.\nThe unfortunate thing is that I am not very familiar with yubikey, so I\nwill try to read it but may not be able to give useful advice.\nRegarding the location of script storage, why don't we name them like\n\"pass_fd.sh.sample\" and store them in the $DATA/share/postgresql directory\nafter installation, where other .sample files are also stored here. In the\nsource code directory, just put them in a directory related to KMGR.\n\nThrough your suggestions, I am learning about Cybertec's TDE which is a\nrelatively \"complete\" implementation. I will continue to rely on these TDE\npatches and the goals listed in the Wiki to verify whether the KMS system\ncan support our future feature.\n\nThanks.\n-- \nThere is no royal road to learning.\nHighGo Software Co.",
"msg_date": "Fri, 18 Dec 2020 11:19:02 +0800",
"msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:19:02AM +0800, Neil Chen wrote:\n> \n> \n> On Fri, Dec 18, 2020 at 3:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> \n> Here is a run of all four authentication methods, and updated scripts.\n> I have renamed Yubiki to PIV since the script should work with anY\n> PIV-enabled deviced, like a CAC.\n> \n> \n> \n> Thanks for attaching these patches.\n> The unfortunate thing is that I am not very familiar with yubikey, so I will\n> try to read it but may not be able to give useful advice.\n> Regarding the location of script storage, why don't we name them like\n> \"pass_fd.sh.sample\" and store them in the $DATA/share/postgresql directory\n> after installation, where other .sample files are also stored here. In the\n> source code directory, just put them in a directory related to KMGR.\n\nYeah, that makes sense. They are small.\n\n> Through your suggestions, I am learning about Cybertec's TDE which is a\n> relatively \"complete\" implementation. I will continue to rely on these TDE\n> patches and the goals listed in the Wiki to verify whether the KMS system can\n> support our future feature.\n\nGreat to hear, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 22:21:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Thu, Dec 17, 2020 at 12:10:22PM -0500, Bruce Momjian wrote:\n> > Agreed. I think there is serious risk we would do AES in a different\n> > way than OpenSSL, especially if I did it. ;-) We can add a native AES\n> > one day if we want, but as stated by Michael Paquier, it has to be\n> > tested so we are sure it returns exactly the same values as OpenSSL.\n> \n> I think that it would be good to put some generalization here, and\n> look at options that are offered by other SSL libraries, like libnss\n> so as we don't finish with a design that restricts the use of a given\n> feature only to OpenSSL.\n\nWhile I agree with the general idea proposed here, I don't know that we\nneed to push super hard on it to be somehow perfect right now because it\nsimply won't be until we actually add support for another library, and I\ndon't think that's really this patch's responsibility.\n\nSo, yes, let's lay the groundwork and structure and perhaps spend a bit\nof time looking at other libraries, but not demand this patch also add\nsupport for a second library today, and accept that that means that the\nstructure we put in place may not end up being exactly perfect.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 18 Dec 2020 08:12:21 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Fri, Dec 18, 2020 at 10:01:22AM +0900, Michael Paquier wrote:\n> > On Thu, Dec 17, 2020 at 12:10:22PM -0500, Bruce Momjian wrote:\n> > > On Thu, Dec 17, 2020 at 11:39:55AM -0500, Stephen Frost wrote:\n> > >> I don't think there's any need for us to implement a fallback\n> > >> implementation of AES. I'm not entirely sure we need it for hashes\n> > >> but since we've already got it...\n> > \n> > We need fallback implementations for cryptohashes (MD5, SHA1/2) and\n> > HMAC because we have SCRAM authentication, pgcrypto and SQL functions\n> > that should be able to work even when not building with any SSL\n> > libraries. So that's here to stay to ensure compatibility with what\n> > we do. All this stuff is already in the tree for ages, it was just\n> > not shaped for a more centralized reuse.\n> \n> One question is whether we want to support cluster file encryption\n> without OpenSSL --- right now we can't. I am wondering if we really\n> need the hardware acceleration of OpenSSL for AES so doing our own AES\n> implementation might not even make sense, performance-wise.\n\nNo, I don't think we need to support file encryption independently of\nany library being available. Maybe some day we will, but that should\nreally just be another option to build with.\n\nGuess it wasn't clear, but I was being a bit tongue-in-cheek regarding\nthe idea of dropping SCRAM support if we aren't built with OpenSSL.\n\n> > > Agreed. I think there is serious risk we would do AES in a different\n> > > way than OpenSSL, especially if I did it. 
;-) We can add a native AES\n> > > one day if we want, but as stated by Michael Paquier, it has to be\n> > > tested so we are sure it returns exactly the same values as OpenSSL.\n> > \n> > I think that it would be good to put some generalization here, and\n> > look at options that are offered by other SSL libraries, like libnss\n> > so as we don't finish with a design that restricts the use of a given\n> > feature only to OpenSSL.\n> \n> Uh, you mean the C API or the user API? I don't think we have any\n> OpenSSL requirement at the user level, except that my examples use\n> command-line openssl to generate the passphrase.\n\nWhat I would be thinking about with this are really three pieces-\n\n- C API for libpq (not relevant for this, but we have had issues in the\n past with exposing OpenSSL-specific things there)\n\n- C API in backend - we should try to at least set up the structure to\n allow multiple encryption implementations, either via different\n libraries or if someone feels it'd be useful to write a built-in\n implementation, but as I mentioned just a moment ago, we aren't going\n to get this perfect and we should accept that.\n\n- User API - we should avoid having OpenSSL-specific bits exposed to\n users, and I don't think we do today, so I don't think this is an\n issue at the moment.\n\n> I split apart my patch to create cipher{_openssl}.c and hmac{_openssl}.c\n> so the single HMAC function is not in the cipher file anymore, attached.\n\nWill try to look at this soon, but in general the idea seems right.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 18 Dec 2020 08:18:53 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 08:18:53AM -0500, Stephen Frost wrote:\n> What I would be thinking about with this are really three pieces-\n> \n> - C API for libpq (not relevant for this, but we have had issues in the\n> past with exposing OpenSSL-specific things there)\n\nRight.\n\n> - C API in backend - we should try to at least set up the structure to\n> allow multiple encryption implementations, either via different\n> libraries or if someone feels it'd be useful to write a built-in\n> implementation, but as I mentioned just a moment ago, we aren't going\n> to get this perfect and we should accept that.\n\nAll my OpenSSL-specific stuff is isolated in src/common.\n\n> - User API - we should avoid having OpenSSL-specific bits exposed to\n> users, and I don't think we do today, so I don't think this is an\n> issue at the moment.\n\nRight, there is nothing OpenSSL-specific on the user side, but some of\nthe scripts assume you can pass an open file descriptor to a\nPKCS11-enabled tool, and I don't know if non-OpenSSL libraries support\nthat.\n\nIdeally, we would have scripts for each library to use their\ncommand-line API for this. I am hesitant to rename the scripts to\ncontain the name openssl because I am unclear if we will ever have\nnon-OpenSSL implementations of these. I updated the script comments at\nthe top to indicate \"It uses OpenSSL with PKCS11 enabled via OpenSC.\".\n\nOne point is that the passphrase scripts are useful for cluster file\nencryption _and_ unlocking TLS certificates.\n\n> > I split apart my patch to create cipher{_openssl}.c and hmac{_openssl}.c\n> > so the single HMAC function is not in the cipher file anymore, attached.\n> \n> Will try to look at this soon, but in general the idea seems right.\n\nShould I be using the init/update/final HMAC API that SCRAM uses so it\nis easier to integrate into Michael's patch? 
I can do that, but didn't\nbecause that is going to require me to create a much larger footprint in\nthe main code, and that might be harder to integrate once Michael's API\nis final. It is easier for me to change one line to five lines than to\nchange five lines to five different lines.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 10:44:46 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Fri, Dec 18, 2020 at 08:18:53AM -0500, Stephen Frost wrote:\n> > - C API in backend - we should try to at least set up the structure to\n> > allow multiple encryption implementations, either via different\n> > libraries or if someone feels it'd be useful to write a built-in\n> > implementation, but as I mentioned just a moment ago, we aren't going\n> > to get this perfect and we should accept that.\n> \n> All my OpenSSL-specific stuff is isolated in src/common.\n\n... and in 'openssl' files, with 'generic' functions on top, right?\nThat's what I recall from before and seems like the right approach to at\nleast start with, and then we likely make adjustments as needed to the\nAPI when we add another encryption library.\n\n> > - User API - we should avoid having OpenSSL-specific bits exposed to\n> > users, and I don't think we do today, so I don't think this is an\n> > issue at the moment.\n> \n> Right, there is nothing OpenSSL-specific on the user side, but some of\n> the scripts assume you can pass an open file descriptor to a\n> PKCS11-enabled tool, and I don't know if non-OpenSSL libraries support\n> that.\n\nThat generally seems alright to me.\n\n> Ideally, we would have scripts for each library to use their\n> command-line API for this. I am hestitant to rename the scripts to\n> contain the name openssl because I am unclear if we will ever have\n> non-OpenSSL implementations of these. 
I updated the script comments at\n> the top to indicate \"It uses OpenSSL with PKCS11 enabled via OpenSC.\".\n\nI'm not too worried about the specific names of those scripts.\nDefinitely having comments that indicate what the dependencies are is\ncertainly good.\n\n> One point is that the passphrase scripts are useful for cluster file\n> encryption _and_ unlocking TLS certificates.\n\nThat's certainly good.\n\n> > > I split apart my patch to create cipher{_openssl}.c and hmac{_openssl}.c\n> > > so the single HMAC function is not in the cipher file anymore, attached.\n> > \n> > Will try to look at this soon, but in general the idea seems right.\n> \n> Should I be using the init/update/final HMAC API that SCRAM uses so it\n> is easier to integrate into Michael's patch? I can do that, but didn't\n> because that is going to require me to create a much larger footprint in\n> the main code, and that might be harder to integrate once Michael's API\n> is final. It is easier for me to change one line to five lines than to\n> change five lines to five different lines.\n\nFor my 2c, ideally we'd get a review done of Michael's changes and just\nget that committed and have your work here rebased over that. I don't\nsee any reason why we can't get that done in relatively short order.\nJust because it's registered in the January CF doesn't mean we actually\nhave to wait for that to get done, it just needs a review from another\ncommitter (or self certification from Michael if he's happy with it).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 18 Dec 2020 11:13:44 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:13:44AM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Fri, Dec 18, 2020 at 08:18:53AM -0500, Stephen Frost wrote:\n> > > - C API in backend - we should try to at least set up the structure to\n> > > allow multiple encryption implementations, either via different\n> > > libraries or if someone feels it'd be useful to write a built-in\n> > > implementation, but as I mentioned just a moment ago, we aren't going\n> > > to get this perfect and we should accept that.\n> > \n> > All my OpenSSL-specific stuff is isolated in src/common.\n> \n> ... and in 'openssl' files, with 'generic' functions on top, right?\n> That's what I recall from before and seems like the right approach to at\n> least start with, and then we likely make adjustments as needed to the\n> API when we add another encryption library.\n\nUh, I did it the way cryptohash_openssl.c does it. Sawada-san's patch did\nit kind of like that, but had #ifdef calls to OpenSSL versions, while\nthe cryptohash_openssl.c method has two files with exactly the same\nfunction names, and they are conditionally compiled in the makefile\nbased on makefile defines, and that is how I did it. Michael Paquier\npointed this out, and the msvc changes needed to handle it.\n\n> > > > I split apart my patch to create cipher{_openssl}.c and hmac{_openssl}.c\n> > > > so the single HMAC function is not in the cipher file anymore, attached.\n> > > \n> > > Will try to look at this soon, but in general the idea seems right.\n> > \n> > Should I be using the init/update/final HMAC API that SCRAM uses so it\n> > is easier to integrate into Michael's patch? I can do that, but didn't\n> > because that is going to require me to create a much larger footprint in\n> > the main code, and that might be harder to integrate once Michael's API\n> > is final. 
It is easier for me to change one line to five lines than to\n> > change five lines to five different lines.\n> \n> For my 2c, ideally we'd get a review done of Michael's changes and just\n> get that committed and have your work here rebased over that. I don't\n\nThat would be nice.\n\n> see any reason why we can't get that done in relatively short order.\n> Just because it's registered in the January CF doesn't mean we actually\n> have to wait for that to get done, it just needs a review from another\n> committer (or self certification from Michael if he's happy with it).\n\nAgreed. I just don't want to wait until mid-January to get this into\nthe tree, and I am going to defer to Michael's timeline on this, which\nis why I figured I might need to go first.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 12:02:50 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings Alastair,\n\n* Alastair Turner (minion@decodable.me) wrote:\n> On Wed, 16 Dec 2020 at 22:43, Stephen Frost <sfrost@snowman.net> wrote:\n> > If I'm following, you're suggesting something like:\n> >\n> > cluster_passphrase_command = 'aws get %q'\n> >\n> > and then '%q' gets replaced with \"Please provide the WAL DEK: \", or\n> > something like that? Prompting the user for each key? Not sure how\n> > well that's work if want to automate everything though.\n> >\n> > If you have specific ideas, it'd really be helpful to give examples of\n> > what you're thinking.\n> \n> I can think of three specific ideas off the top of my head: the\n> passphrase key wrapper, the secret store and the cloud/HW KMS.\n> \n> Since the examples expand the purpose of cluster_passphrase_command,\n> let's call it cluster_key_challenge_command in the examples.\n\nThese ideas don't seem too bad but I'd think we'd pass which key we want\non the command-line using a %i or something like that, rather than using\nstdin, unless there's some reason that would be an issue..? 
Certainly\nthe CLI utilities I've seen tend to expect the name of the secret that\nyou're asking for to be passed on the command line.\n\n> Starting with the passphrase key wrapper, since it's what's in place now.\n> \n> - cluster_key_challenge_command = 'password_key_wrapper %q'\n\nI do tend to like this idea of having what\ncluster_key_challenge_command, or whatever we call it, expects is an\nactual key and have the command that is run be a separate command that\ntakes the passphrase and runs the KDF (key derivation function) on it,\nwhen a passphrase is what the user wishes to use.\n\nThat generally makes more sense to me than having the key derivation\neffort built into the backend which I have a hard time seeing any\nparticular reason for, as long we're already calling out to some\nexternal utility of some kind to get the key.\n\n> - Supplied on stdin to the process is the wrapped DEK (either a\n> combined key for db files and WAL or one for each, on separate calls)\n> - %q is \"Please provide WAL key wrapper password\" or just \"...provide\n> key wrapper password\"\n> - Returned on stdout is the unwrapped DEK\n\n> For a secret store\n> \n> - cluster_key_challenge_command = 'vault_key_fetch'\n> - Supplied on stdin to the process is the secret's identifier (pg_dek_xxUUIDxx)\n> - Returned on stdout is the DEK, which may never have been wrapped,\n> depending on implementation\n> - Access control to the secret store is managed through the challenge\n> command's own config, certs, HBA, ...\n\n> For an HSM or cloud KMS\n> \n> - cluster_key_challenge_command = 'gcp_kms_key_fetch'\n> - Supplied on stdin to the process is the the wrapped DEK (individual\n> or combined)\n> - Returned on stdout is the DEK (individual or combined)\n> - Access control to the kms is managed through the challenge\n> command's own config, certs, HBA, ...\n> \n> The secret store and HSM/KMS options may be promptless, and therefore\n> amenable to automation, depending on the setup of those 
clients.\n\nWe can't really have what is passed on stdin, or not, be different\nwithout having some other option say which it is and that seems like\nit'd be making it overly complicated, and I get why Bruce would rather\nnot make this too complicated.\n\nWith the thought of trying to keep it a reasonably simple interface, I\nhad a thought along these lines:\n\n- Separate command that runs the KDF, this is simple enough as it's\n really just a hash, and it returns the key on stdout.\n- initdb time option that says if we're going to have PG manage the\n sub-keys, or not.\n- cluster_key_command defined as \"a command that is passed the ID of\n the key, or keys, required for the cluster to start\"\n\ninitdb --pg-subkeys\n - Calls cluster_key_command once with \"$PG_SYSTEM_ID-main\" or similar\n and expects the main key to be provided on stdout. Everything else\n is then more-or-less as is today: PG generates DEK sub-keys for data\n and WAL and then encrypts them with the main key and stores them.\n\nAs long as the enveloped keys file exists on the filesystem, when PG\nstarts, it'll call the cluster_key_command and will expect the 'main'\nkey to be provided and it'll then decrypt and verify the DEK sub-keys,\nvery similar to today.\n\nIn this scenario, cluster_key_command might be set to call a command\nwhich accepts a passphrase and runs a KDF on it, or it might be set to\ncall out to an external vaulting system or to a Yubikey, etc.\n\ninitdb --no-pg-subkeys\n - Calls cluster_key_command for each of the sub-keys that \"pg-subkeys\"\n would normally generate itself, passing \"$PG_SYSTEM_ID-keyid\" for\n each (eg: $PG_SYSTEM_ID-data, $PG_SYSTEM_ID-wal), and getting back\n the keys on stdout to use.\n\nWhen PG starts, it sees that the enveloped keys file doesn't exist and\ndoes the same as initdb did- calls cluster_key_command multiple times to\nget the keys which are needed. 
We'd want to make sure to have a way\nearly on that checks that the data DEK provided actually decrypts the\ncluster, and bail out otherwise, before actually writing any data with\nthat key. I'll note though that this approach would actually allow you\nto change the WAL key, if you wanted to, though, which could certainly\nbe nice (naturally there would be various caveats about doing so and\nthat replicas would have to also be switched to the new key, etc, but\nthat all seems reasonably solvable). Having a stand-alone utility that\ncould do that for the --pg-subkeys case would be useful too (and just\ngenerally decrypt it for viewing/backup/replacement/etc).\n\nDown the line this might even allow us to do an online re-keying, at\nleast once the challenges around enabling online data checksums are\nsorted out, but I don't think that's something to worry about today.\nStill, it seems like this potentially provides a pretty clean interface\nfor that to happen eventually.\n\n> > > > ...That\n> > > > avoids the complication of having to have an API that somehow provides\n> > > > more than one key, while also using the primary DEK key as-is from the\n> > > > key management service and the KEK never being seen on the system where\n> > > > PG is running.\n> > >\n> > > Other than calling out (and therefore potentially prompting) twice,\n> > > what do you see as the complications of having more than one key?\n> >\n> > Mainly just a concern about the API and about what happens if, say, we\n> > decide that we want to have another sub-key, for example. If we're\n> > handling them then there's really no issue- we just add another key in,\n> > but if that's not happening then it's going to mean changes for\n> > administrators. If there's a good justification for it, then perhaps\n> > that's alright, but hand waving at what the issue is doesn't really\n> > help.\n> \n> Sorry, I wasn't trying to hand wave it away. 
For automated\n> interactions, like big iron HSMs or cloud KSMs, the difference between\n> 2 operations and 10 operations to start a DB server won't matter. For\n> an admin/operator having to type 10 passwords or get 10 clean\n> thumbprint scans, it would be horrible. My underlying question was, is\n> that toil the only problem to be solved, or is there another reason to\n> get into key combination, key splitting and the related issues which\n> are less documented and less well understood than key wrapping.\n\nI appreciate that you're not trying to hand wave it away but this also\ndidn't really answer the actual question- what's the advantage to having\nall of the keys provided by something external rather than having that\nexternal thing provide just one 'main' key, which we then use to decrypt\nour enveloped keys that we actually use? I can think of some possible\nadvantages but I'd really like to know what advantages you're seeing in\ndoing that.\n\n> > > I'd describe what the current patch does as using YubiKey to encrypt\n> > > and then decrypt an intermediate secret, which is then used to\n> > > generate/derive a KEK, which is then used to unwrap the stored,\n> > > wrapped DEK.\n> >\n> > This seems like a crux of at least one concern- that the current patch\n> > is deriving the actual KEK from the passphrase and not just taking the\n> > provided value (at least, that's what it looks like from a *very* quick\n> > look into it), and that's the part that I was suggesting that we might\n> > add an option for- to indicate if the cluster passphrase command is\n> > actually returning a passphrase which should be used to derive a key, or\n> > if it's returning a key directly to be used. That does seem to be a\n> > material difference to me and one which we should care about.\n> \n> Yes. Caring about that is the reason I've been making a nuisance of myself.\n\nRight, I think we can agree on this aspect and I've chatted with Bruce\nabout it some also. 
When a direct key can be provided, we should use\nthat, and not run it through a KDF. This seems like a particularly\nimportant case that we should care about even at this early stage.\n\n> > > > There's an entirely independent discussion to be had about figuring out\n> > > > a way to actually off-load *entirely* the encryption/decryption of data\n> > > > to the linux crypto system or hardware devices, but unless someone can\n> > > > step up and write code for those today, I'd suggest that we table that\n> > > > effort until after we get this initial capability of having TDE with PG\n> > > > doing all of the encryption/decryption.\n> > >\n> > > I'm hopeful that the work on abstracting OpenSSL, nsstls, etc is going\n> > > to help in this direct.\n> >\n> > Yes, I agree with that general idea but it's a 'next step' kind of\n> > thing, not something we need to try and bake in today.\n> \n> Agreed.\n\nGreat, glad we can agree to hold off on this until a future point- being\nable to agree on how we box in this particular capability is important\nor we may never end up releasing it. I know that's one of the concerns\nthat others on this thread have and it's an important one.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 18 Dec 2020 16:36:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 04:36:24PM -0500, Stephen Frost wrote:\n> > Starting with the passphrase key wrapper, since it's what's in place now.\n> > \n> > - cluster_key_challenge_command = 'password_key_wrapper %q'\n> \n> I do tend to like this idea of having what\n> cluster_key_challenge_command, or whatever we call it, expects is an\n> actual key and have the command that is run be a separate command that\n> takes the passphrase and runs the KDF (key derivation function) on it,\n> when a passphrase is what the user wishes to use.\n> \n> That generally makes more sense to me than having the key derivation\n> effort built into the backend which I have a hard time seeing any\n> particular reason for, as long we're already calling out to some\n> external utility of some kind to get the key.\n\nI have modified the patch to do as you suggested, and as you explained\nto me on the phone. :-) Instead of having the server hash the pass\nphrase provided by the cluster_passphrase_command, have the\ncluster_passphrase_command generate a sha256 hash if the user-provided\ninput is not already 64 hex bytes. If it is already 64 hex bytes, e.g,\nvia openssl rand -hex 32, no need to have the server hash that again. \nDouble-hashing is less secure.\n\nI have modified the attached patch to do that, and the scripts --- all\nmy tests pass. FYI, I moved hex_decode to src/common so that front-end\ncould use it, and removed it from ecpg since ecpg can use the common one\nnow too.\n\n> > Sorry, I wasn't trying to hand wave it away. For automated\n> > interactions, like big iron HSMs or cloud KSMs, the difference between\n> > 2 operations and 10 operations to start a DB server won't matter. For\n> > an admin/operator having to type 10 passwords or get 10 clean\n> > thumbprint scans, it would be horrible. 
My underlying question was, is\n> > that toil the only problem to be solved, or is there another reason to\n> > get into key combination, key splitting and the related issues which\n> > are less documented and less well understood than key wrapping.\n> \n> I appreciate that you're not trying to hand wave it away but this also\n> didn't really answer the actual question- what's the advantage to having\n> all of the keys provided by something external rather than having that\n> external thing provide just one 'main' key, which we then use to decrypt\n> our enveloped keys that we actually use? I can think of some possible\n> advantages but I'd really like to know what advantages you're seeing in\n> doing that.\n\nI am not going to be as kind. Our workflow is:\n\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\thttps://wiki.postgresql.org/wiki/Todo#Development_Process\n\nI have already asked about the first item, and received a reply talking\nabout the design --- that is not helpful. I only have so much patience\nfor the \"I want my secret decoder ring to glow in the dark\" type of\nfeature additions to this already complex feature. Unless we stay on\nDesirability, I will not be replying. If you can't answer the\nDesirability question, well, talking about items farther right on the\nlist is not very helpful.\n\nNow, I will say that your question about how a KMS will use this got me\nthinking about how to test this, and that got me to implement the AWS\nSecret Manager script, so that was definitely helpful. My point is that\nI don't think it is helpful to get into long discussions unless the\nDesirability is clearly answered. 
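On the concrete mechanics, the hex-detection rule described at the top of this message — use the command's output directly if it is already 64 hex bytes (e.g. from openssl rand -hex 32), otherwise hash it once with SHA-256 — could be sketched as follows. This is a Python illustration of the behavior only; the patch itself is C, and the function name is mine:

```python
import hashlib

def derive_kek(command_output: str) -> bytes:
    """Turn cluster_passphrase_command output into a 32-byte KEK.

    If the output is already 64 hex characters, decode it and use it
    directly -- a randomly generated key gains nothing from an extra
    hash pass. Otherwise treat it as a passphrase and hash it once.
    """
    s = command_output.strip()
    if len(s) == 64:
        try:
            # Already a strong key: use as-is, no double-hashing.
            return bytes.fromhex(s)
        except ValueError:
            pass  # 64 characters but not hex; fall through to hashing
    return hashlib.sha256(s.encode()).digest()
```

Either way the server ends up holding a fixed-length 32-byte key.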
This is not just related to this\nthread --- this kind of jump-over-Desirability has happened a lot in\nrelation to this feature, so I thought it would be good to clearly state\nit now.\n\n> > > This seems like a crux of at least one concern- that the current patch\n> > > is deriving the actual KEK from the passphrase and not just taking the\n> > > provided value (at least, that's what it looks like from a *very* quick\n> > > look into it), and that's the part that I was suggesting that we might\n> > > add an option for- to indicate if the cluster passphrase command is\n> > > actually returning a passphrase which should be used to derive a key, or\n> > > if it's returning a key directly to be used. That does seem to be a\n> > > material difference to me and one which we should care about.\n> > \n> > Yes. Caring about that is the reason I've been making a nuisance of myself.\n> \n> Right, I think we can agree on this aspect and I've chatted with Bruce\n> about it some also. When a direct key can be provided, we should use\n> that, and not run it through a KDF. This seems like a particularly\n> important case that we should care about even at this early stage.\n\nAgreed, and done in the attached patch. :-) I am thankful for all the\nideas that have helped move this feature forward.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Fri, 18 Dec 2020 21:38:44 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi Stephen\n\nOn Fri, 18 Dec 2020 at 21:36, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings Alastair,\n>\n> * Alastair Turner (minion@decodable.me) wrote:\n> > On Wed, 16 Dec 2020 at 22:43, Stephen Frost <sfrost@snowman.net> wrote:\n...\n> > passphrase key wrapper, the secret store and the cloud/HW KMS.\n> >\n> > Since the examples expand the purpose of cluster_passphrase_command,\n> > let's call it cluster_key_challenge_command in the examples.\n>\n> These ideas don't seem too bad but I'd think we'd pass which key we want\n> on the command-line using a %i or something like that, rather than using\n> stdin, unless there's some reason that would be an issue..? Certainly\n> the CLI utilities I've seen tend to expect the name of the secret that\n> you're asking for to be passed on the command line.\n\nAgreed. I was working on the assumption that the calling process\n(initdb or pg_ctl) would have access to the challenge material and be\npassing it to the utility, so putting it in a command line would allow\nit to leak. 
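To sketch the shape being discussed — only a key identifier is substituted into the command template (the %k idea), and the key itself comes back on stdout, so nothing secret ever appears on a command line. The helper name and the 64-hex-character convention are placeholders here, not settled interface:

```python
import subprocess

def run_cluster_key_command(template: str, key_id: str) -> bytes:
    """Substitute the key identifier into the command template, run it,
    and read a 64-hex-character key back from the command's stdout."""
    cmd = template.replace("%k", key_id)
    result = subprocess.run(cmd, shell=True, check=True,
                            capture_output=True, text=True)
    hexkey = result.stdout.strip()
    if len(hexkey) != 64:
        raise ValueError("expected 64 hex characters from key command")
    return bytes.fromhex(hexkey)
```

The identifier leaks (harmlessly) via the process list; the key material only crosses the pipe.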
If the utility is accessing the challenge material\ndirectly, then just an indicator of which key to work with would be a\nlot simpler, true.\n\n>\n> > Starting with the passphrase key wrapper, since it's what's in place now.\n> >\n> > - cluster_key_challenge_command = 'password_key_wrapper %q'\n>\n> I do tend to like this idea of having what\n> cluster_key_challenge_command, or whatever we call it, expects is an\n> actual key and have the command that is run be a separate command that\n> takes the passphrase and runs the KDF (key derivation function) on it,\n> when a passphrase is what the user wishes to use.\n>\n> That generally makes more sense to me than having the key derivation\n> effort built into the backend which I have a hard time seeing any\n> particular reason for, as long we're already calling out to some\n> external utility of some kind to get the key.\n>\n...\n>\n> With the thought of trying to keep it a reasonably simple interface, I\n> had a thought along these lines:\n>\n> - Separate command that runs the KDF, this is simple enough as it's\n> really just a hash, and it returns the key on stdout.\n> - initdb time option that says if we're going to have PG manage the\n> sub-keys, or not.\n> - cluster_key_command defined as \"a command that is passed the ID of\n> the key, or keys, required for the cluster to start\"\n>\n> initdb --pg-subkeys\n> - Calls cluster_key_command once with \"$PG_SYSTEM_ID-main\" or similar\n> and expects the main key to be provided on stdout. 
Everything else\n> is then more-or-less as is today: PG generates DEK sub-keys for data\n> and WAL and then encrypts them with the main key and stores them.\n>\n> As long as the enveloped keys file exists on the filesystem, when PG\n> starts, it'll call the cluster_key_command and will expect the 'main'\n> key to be provided and it'll then decrypt and verify the DEK sub-keys,\n> very similar to today.\n>\n> In this scenario, cluster_key_command might be set to call a command\n> which accepts a passphrase and runs a KDF on it, or it might be set to\n> call out to an external vaulting system or to a Yubikey, etc.\n>\n> initdb --no-pg-subkeys\n> - Calls cluster_key_command for each of the sub-keys that \"pg-subkeys\"\n> would normally generate itself, passing \"$PG_SYSTEM_ID-keyid\" for\n> each (eg: $PG_SYSTEM_ID-data, $PG_SYSTEM_ID-wal), and getting back\n> the keys on stdout to use.\n>\n> When PG starts, it sees that the enveloped keys file doesn't exist and\n> does the same as initdb did- calls cluster_key_command multiple times to\n> get the keys which are needed. We'd want to make sure to have a way\n> early on that checks that the data DEK provided actually decrypts the\n> cluster, and bail out otherwise, before actually writing any data with\n> that key. I'll note though that this approach would actually allow you\n> to change the WAL key, if you wanted to, though, which could certainly\n> be nice (naturally there would be various caveats about doing so and\n> that replicas would have to also be switched to the new key, etc, but\n> that all seems reasonably solvable). 
Having a stand-alone utility that\n> could do that for the --pg-subkeys case would be useful too (and just\n> generally decrypt it for viewing/backup/replacement/etc).\n>\n> Down the line this might even allow us to do an online re-keying, at\n> least once the challenges around enabling online data checksums are\n> sorted out, but I don't think that's something to worry about today.\n> Still, it seems like this potentially provides a pretty clean interface\n> for that to happen eventually.\n>\n\nChanging the WAL key independently of the local storage key is the big\noperational advantage I see of managing the two keys separately. It\nlays the basis for a full key rotation story.\n\nRotating WAL keys during switchover between a pair with the new key\napplicable from a certain timeline number may be close enough to\nonline to satisfy most requirements. Not something to worry about\ntoday, as you say.\n\n>\n> Right, I think we can agree on this aspect and I've chatted with Bruce\n> about it some also. When a direct key can be provided, we should use\n> that, and not run it through a KDF. This seems like a particularly\n> important case that we should care about even at this early stage.\n>\n\nThanks for following it up.\n\nThanks,\nAlastair\n\n\n",
"msg_date": "Sat, 19 Dec 2020 11:45:08 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi Bruce\n\nOn Sat, 19 Dec 2020 at 02:38, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I am not going to be as kind. Our workflow is:\n>\n> Desirability -> Design -> Implement -> Test -> Review -> Commit\n> https://wiki.postgresql.org/wiki/Todo#Development_Process\n>\n> I have already asked about the first item, and received a reply talking\n> about the design --- that is not helpful. I only have so much patience\n> for the \"I want my secret decoder ring to glow in the dark\" type of\n> feature additions to this already complex feature. Unless we stay on\n> Desirability, I will not be replying. If you can't answer the\n> Desirability question, well, talking about items farther right on the\n> list is not very helpful.\n>\n> Now, I will say that your question about how a KMS will use this got me\n> thinking about how to test this, and that got me to implement the AWS\n> Secret Manager script, so that was definitely helpful. My point is that\n> I don't think it is helpful to get into long discussions unless the\n> Desirability is clearly answered. This is not just related to this\n> thread --- this kind of jump-over-Desirability has happened a lot in\n> relation to this feature, so I thought it would be good to clearly state\n> it now.\n>\n\nSorry, I have waved Desirability through under the headings of ease of\nadoption or not raising barriers to adoption, without detailing what\nbarriers I see or how to avoid them. I also realise that \"don't scare\nthe users\" is so open-ended as to be actively unhelpful and very\nquickly starts to sound like \"I want my secret decoder ring to glow\npink, or blue, or green when anyone asks\". Given the complexity of the\nfeature and pixels spilled in discussing it, I understand that it gets\nfrustrating. 
That said, I believe there is an important case to be\nmade here.\n\nIn summary, I believe that forcing an approach for key generation and\nwrapping onto users of Cluster File Encryption limits the Desirability\nof the feature.\n\nCluster File Encryption for Postgres would be Desirable to many users\nI meet if, and only if, the generation and handling of keys fits with\ntheir corporate policies. Therefore, giving a user the option to pass\nan encryption key to Postgres for CFE is Desirable. I realise that the\noption for feeding the keys in directly is an additional feature, and\nthat it has a user experience impact if a passphrase is used. It is,\nhowever, a feature which opens up near-infinite flexibility. To\nstretch the analogy, it is the clip for attaching coloured or opaque\ncoverings to the glowy bit on the secret decoder ring.\n\nThe generation of keys and the wrapping of keys are contentious issues\nand are frequently addressed/specified in corporate security policies,\nstandards and technical papers (NIST 800-38F is often mentioned in\nproduct docs). There are, for instance, policies in the wild which\nrequire that keys for long term use are generated from the output of a\nTrue Random Number Generator - the practical implication being that\nthe key used to encrypt data at rest should be created by an HSM. When\nand why to use symmetric or asymmetric cyphers for key wrapping is\nanother item which different organisations have different policies on\n- the hardware and cloud service vendors all offer both options, even\nif they recommend one and make it easier to use.\n\nThanks\n\nAlastair\n\n\n",
"msg_date": "Sat, 19 Dec 2020 11:45:15 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 11:45:15AM +0000, Alastair Turner wrote:\n> Sorry, I have waved Desirability through under the headings of ease of\n> adoption or not raising barriers to adoption, without detailing what\n> barriers I see or how to avoid them. I also realise that \"don't scare\n> the users\" is so open-ended so as to be actively unhelpful and very\n> quickly starts to sound like \"I want my secret decoder ring to glow\n> pink, or blue, or green when anyone asks\". Given the complexity of the\n> feature and pixels spilled in discussing it, I understand that it gets\n> frustrating. That said, I believe there is an important case to be\n> made here.\n\nI am pleased you understood my feelings on this. Our last big\ndiscussion on this topic is here:\n\n\thttps://www.postgresql.org/message-id/flat/CA%2Bfd4k7q5o6Nc_AaX6BcYM9yqTbC6_pnH-6nSD%3D54Zp6NBQTCQ%40mail.gmail.com\n\nand it was so unproductive that we started having closed voice calls\nevery other Friday so we could discuss this without lots of \"decoder\nring\" ideas that had to be explained. The result is our wiki page:\n\n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n\nIt has taken me a while to understand why this topic seems to almost\nuniquely gravitate toward \"decoder ring\" discussion.\n\n> In summary, I believe that forcing an approach for key generation and\n> wrapping onto users of Cluster File Encryption limits the Desirability\n> of the feature.\n> \n> Cluster File Encryption for Postgres would be Desirable to many users\n> I meet if, and only if, the generation and handling of keys fits with\n> their corporate policies. Therefore, giving a user the option to pass\n> an encryption key to Postgres for CFE is Desirable. I realise that the\n> option for feeding the keys in directly is an additional feature, and\n> that it has a user experience impact if a passphrase is used. It is,\n> however, a feature which opens up near-infinite flexibility. 
To\n> stretch the analogy, it is the clip for attaching coloured or opaque\n> coverings to the glowy bit on the secret decoder ring.\n> \n> The generation of keys and the wrapping of keys are contentious issues\n> and are frequently addressed/specified in corporate security policies,\n> standards and technical papers (NIST 800-38F is often mentioned in\n> product docs). There are, for instance, policies in the wild which\n> require that keys for long term use are generated from the output of a\n> True Random Number Generator - the practical implication being that\n> the key used to encrypt data at rest should be created by an HSM. When\n> and why to use symmetric or asymmetric cyphers for key wrapping is\n> another item which different organisations have different policies on\n> - the hardware and cloud service vendors all offer both options, even\n> if they recommend one and make it easier to use.\n\nTo enable the direct injection of keys into the server, we would need a\nnew command for this, since trying to make the passphrase command do\nthis will lead to unnecessary complexity. The passphrase command should\ndo one thing, and you can't have it changing its behavior based on\nparameters, since the parameters are for the script to process, not to\nchange behavior of the server.\n\nIf we wanted to do this, we would need a new command, and one of the\nparameters would be %k or something that would identify the key number\nwe want to retrieve from the KMS. 
Stephen pointed out that we could\nstill validate that key; the key would not be stored wrapped in the\nfile system, but we could store an HMAC in the file system, and use that\nfor validation.\n\nOn other interesting approach would be to allow the server to call out\nfor a KMS when it needs to generate the initial data keys that are\nwrapped by the passphrase; this is in contrast to calling the KMS\neverytime it needs the data keys.\n\nWe should create code for the general use-case, which is currently\npassphrase-wrapped system-generated data keys, and then get feedback on\nwhat else we need. However, I should point out that the community is\nsomewhat immune to the \"my company needs this\" kind of argument without\nlogic to back it up, though sometimes I think this entire feature is in\nthat category. The data keys are generated using this random value\ncode:\n\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/port/pg_strong_random.c;hb=HEAD\n\nso someone would have to explain why generating this remotely in a KMS\nis superior, not just based on company policy.\n\nMy final point is that we can find ways to do what you are suggesting as\nan addition to what we are adding now. What we need is clear\njustification of why these additional features are needed. Requiring\nthe use of a true random number generator is a valid argument, but we\nneed to figure out, once this is implemented, who really wants that, and\nhow to implement it cleanly.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sat, 19 Dec 2020 11:58:37 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 11:58:37AM -0500, Bruce Momjian wrote:\n> My final point is that we can find ways to do what you are suggesting as\n> an addition to what we are adding now. What we need is clear\n> justification of why these additional features are needed. Requiring\n> the use of a true random number generator is a valid argument, but we\n> need to figure out, once this is implemented, who really wants that, and\n> how to implement it cleanly.\n\nI added a comment to this script to explain how to generate a true\nrandom passphrase, attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Sat, 19 Dec 2020 15:22:34 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Thanks, Bruce\n\nOn Sat, 19 Dec 2020 at 16:58, Bruce Momjian <bruce@momjian.us> wrote:\n>\n...\n>\n> To enable the direct injection of keys into the server, we would need a\n> new command for this, since trying to make the passphrase command do\n> this will lead to unnecessary complexity. The passphrase command should\n> do one thing, and you can't have it changing its behavior based on\n> parameters, since the parameters are for the script to process, not to\n> change behavior of the server.\n>\n> If we wanted to do this, we would need a new command, and one of the\n> parameters would be %k or something that would identify the key number\n> we want to retrieve from the KMS. Stephen pointed out that we could\n> still validate that key; the key would not be stored wrapped in the\n> file system, but we could store an HMAC in the file system, and use that\n> for validation.\n>\n\nI think that this is where we have ended up talking at cross-purposes.\nMy idea (after some refining based on Stephen's feedback) is that\nthere should be only this new command (the cluster key command) and\nthat the program for unwrapping stored keys should be called this way.\nAs could a program to contact an HSM, etc. This moves the generating\nand wrapping functionality out of the core Postgres processes, making\nit far easier to add alternatives. 
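For what it's worth, the HMAC validation quoted above could be as simple as the following sketch (Python stdlib, names and context string mine): at key-setup time the server stores an HMAC of a fixed context string computed under the key, and at startup it recomputes the HMAC over the same string with the supplied key and compares, so the key itself never touches disk:

```python
import hashlib
import hmac

# Arbitrary fixed message; only the HMAC of it is ever written to disk.
CONTEXT = b"postgres-key-validation"

def make_validator(key: bytes) -> bytes:
    """Computed once at init time and stored in the data directory."""
    return hmac.new(key, CONTEXT, hashlib.sha256).digest()

def key_matches(key: bytes, validator: bytes) -> bool:
    """True if the externally supplied key is the one the cluster was
    initialized with; constant-time compare avoids timing leaks."""
    candidate = hmac.new(key, CONTEXT, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, validator)
```

A wrong key fails the comparison before any data is written with it.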
I see this approach was discussed\non the email thread you linked to, but I can't see where or how a\nconclusion was reached about it...\n\n>\n> On other interesting approach would be to allow the server to call out\n> for a KMS when it needs to generate the initial data keys that are\n> wrapped by the passphrase; this is in contrast to calling the KMS\n> everytime it needs the data keys.\n>\n...\n> ...The data keys are generated using this random value\n> code:\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/port/pg_strong_random.c;hb=HEAD\n>\n> so someone would have to explain why generating this remotely in a KMS\n> is superior, not just based on company policy.\n>\n\nThe pg_strong_random() function uses OpesnSSL's RAND_bytes() where\navailable. With appropriate configuration of an OpenSSL engine\nsupplying an alternative RAND, this could be handed off to a TRNG.\n\nThis is an example of the other option for providing flexibility to\nsupport specific key generation, wrapping, ... requirements - handing\nthe operations off to a library like OpenSSL or nss tls which, in\nturn, can use pluggable providers for these functions. FWIW, the\nproprietary DBMSs use a pluggable engine approach to meet requirements\nin this category.\n\n>\n> We should create code for the general use-case, which is currently\n> passphrase-wrapped system-generated data keys, and then get feedback on\n> what else we need. However, I should point out that the community is\n> somewhat immune to the \"my company needs this\" kind of argument without\n> logic to back it up, though sometimes I think this entire feature is in\n> that category. ...\n\nYeah. Security in general, and crypto in particular, are often\n\"because someone told me to\" requirements.\n\n>\n> My final point is that we can find ways to do what you are suggesting as\n> an addition to what we are adding now. What we need is clear\n> justification of why these additional features are needed. 
Requiring\n> the use of a true random number generator is a valid argument, but we\n> need to figure out, once this is implemented, who really wants that, and\n> how to implement it cleanly.\n>\n\nClean interfaces would be either \"above\" the database calling out to\ncommands in user-land or \"below\" the database in the abstractions\nwhich OpenSSL, nss tls and friends provide. Since the conclusion seems\nalready to have been reached that the keyring should be inside the\ncore process and only the passphrase should pop out above, I'll review\nthe patch in this vein and see if I can suggest any routes to carving\nout more manageable subsets of the patch.\n\n\"...once this is implemented...\" changes become a lot more difficult.\nParticularly when the changes would affect what are regarded as the\ndatabase's on-disk files. Which I suspect is a contributing factor to\nthe level of engagement surrounding this feature.\n\nRegards\nAlastair\n\n\n",
"msg_date": "Sun, 20 Dec 2020 22:45:06 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings Alastair,\n\n* Alastair Turner (minion@decodable.me) wrote:\n> On Sat, 19 Dec 2020 at 16:58, Bruce Momjian <bruce@momjian.us> wrote:\n> > To enable the direct injection of keys into the server, we would need a\n> > new command for this, since trying to make the passphrase command do\n> > this will lead to unnecessary complexity. The passphrase command should\n> > do one thing, and you can't have it changing its behavior based on\n> > parameters, since the parameters are for the script to process, not to\n> > change behavior of the server.\n> >\n> > If we wanted to do this, we would need a new command, and one of the\n> > parameters would be %k or something that would identify the key number\n> > we want to retrieve from the KMS. Stephen pointed out that we could\n> > still validate that key; the key would not be stored wrapped in the\n> > file system, but we could store an HMAC in the file system, and use that\n> > for validation.\n> \n> I think that this is where we have ended up talking at cross-purposes.\n> My idea (after some refining based on Stephen's feedback) is that\n> there should be only this new command (the cluster key command) and\n> that the program for unwrapping stored keys should be called this way.\n> As could a program to contact an HSM, etc. This moves the generating\n> and wrapping functionality out of the core Postgres processes, making\n> it far easier to add alternatives. I see this approach was discussed\n> on the email thread you linked to, but I can't see where or how a\n> conclusion was reached about it...\n\nI do generally like the idea of having the single command able to be\nused to either provide the KEK (where PG manages the keyring\ninternally), or called multiple times to retrieve the DEKs (in which\ncase PG wouldn't be managing the keyring).\n\nHowever, after chatting with Bruce about it for a bit this weekend, I'm\nnot sure that we need to tackle the second case today. 
I don't think\nthere's any doubt that there will be users who will want PG to manage\nthe keyring and to be able to work with just a passphrase, as Bruce has\nworked to make happen here and which we have a patch for. I'm hoping\nBruce will post a new patch soon based on the work that he and I\ndiscussed today (mostly just clarifications around keys vs. passphrases\nand having the PG side accept a key which the command that's run will\nproduce). With that, I'm feeling pretty comfortable that we can move\nforward and at least get this initial work committed.\n\n> The pg_strong_random() function uses OpesnSSL's RAND_bytes() where\n> available. With appropriate configuration of an OpenSSL engine\n> supplying an alternative RAND, this could be handed off to a TRNG.\n\nSure.\n\n> This is an example of the other option for providing flexibility to\n> support specific key generation, wrapping, ... requirements - handing\n> the operations off to a library like OpenSSL or nss tls which, in\n> turn, can use pluggable providers for these functions. FWIW, the\n> proprietary DBMSs use a pluggable engine approach to meet requirements\n> in this category.\n\nOpenSSL provides for this configuration and gives us a pluggable\narchitecture to make this happen, so I'm not sure what the concern here\nis..? I agree that eventually it'd be nice to allow the administrator\nto, more directly, control the DEKs but we're still going to need to\nhave a solution for the user who wishes to just provide a passphrase or\na KEK and I don't see any particular reason to hold off on the\nimplementation of that.\n\n> > My final point is that we can find ways to do what you are suggesting as\n> > an addition to what we are adding now. What we need is clear\n> > justification of why these additional features are needed. 
Requiring\n> > the use of a true random number generator is a valid argument, but we\n> > need to figure out, once this is implemented, who really wants that, and\n> > how to implement it cleanly.\n> \n> Clean interfaces would be either \"above\" the database calling out to\n> commands in user-land or \"below\" the database in the abstractions\n> which OpenSSL, nss tls and friends provide. Since the conclusion seems\n> already to have been reached that the keyring should be inside the\n> core process and only the passphrase should pop out above, I'll review\n> the patch in this vein and see if I can suggest any routes to carving\n> out more manageable subsets of the patch.\n\nThere's no doubt that there needs to be a solution where the keyring is\nmanaged by PG. Perhaps there are options to allow an external keyring\nas well, but we can add that in the future.\n\n> \"...once this is implemented...\" changes become a lot more difficult.\n> Particularly when the changes would affect what are regarded as the\n> database's on-disk files. Which I suspect is a contributing factor to\n> the level of engagement surrounding this feature.\n\nYes, it's true that after things are implemented it can be more\ndifficult to change them- but if you're concerned about the specific\non-disk format of the keyring then please help us understand what your\nconcern is there. The concern you've raised so far is just that you'd\nlike an option to have an external keyring, and that's fine, but we are\nalso going to need to have an internal one and which comes first doesn't\nseem particularly material to me. I don't think having one or the other\nin place first will really detract or make more difficult the adding of\nthe other later.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 20 Dec 2020 17:59:09 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi Stephen\n\nOn Sun, 20 Dec 2020 at 22:59, Stephen Frost <sfrost@snowman.net> wrote:\n>\n...\n> However, after chatting with Bruce about it for a bit this weekend, I'm\n> not sure that we need to tackle the second case today. I don't think\n> there's any doubt that there will be users who will want PG to manage\n> the keyring and to be able to work with just a passphrase, as Bruce has\n> worked to make happen here and which we have a patch for. I'm hoping\n> Bruce will post a new patch soon based on the work that he and I\n> discussed today (mostly just clarifications around keys vs. passphrases\n> and having the PG side accept a key which the command that's run will\n> produce)...\n>\n\nHaving Postgres accept a key which the command will produce sounds great\n\n>\n> Yes, it's true that after things are implemented it can be more\n> difficult to change them- but if you're concerned about the specific\n> on-disk format of the keyring then please help us understand what your\n> concern is there. The concern you've raised so far is just that you'd\n> like an option to have an external keyring, and that's fine, but we are\n> also going to need to have an internal one and which comes first doesn't\n> seem particularly material to me. I don't think having one or the other\n> in place first will really detract or make more difficult the adding of\n> the other later.\n>\nWhat I'd like specifically is to have the option of an external\nkeyring as a first class key store, where the keys stored in the\nexternal keyring are used directly to encrypt the database contents.\nThe examples in this discussion so far have all put the internal\nkeyring between any other crypto infrastructure and the file\nencryption process. Your description above of changes to pass keys out\nof the command sound like they may have addressed this.\n\nRegarding the on-disk format, separate storage of the key HMACs and\nthe wrapped keys sounds like a requirement for implementing a fully\nexternal keyring or doing key wrapping externally via an OpenSSL or\nnss tls pluggable engine. I'm still looking through the code.\n\nThanks\nAlastair\n\n\n",
"msg_date": "Mon, 21 Dec 2020 00:22:44 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Alastair Turner (minion@decodable.me) wrote:\n> On Sun, 20 Dec 2020 at 22:59, Stephen Frost <sfrost@snowman.net> wrote:\n> > Yes, it's true that after things are implemented it can be more\n> > difficult to change them- but if you're concerned about the specific\n> > on-disk format of the keyring then please help us understand what your\n> > concern is there. The concern you've raised so far is just that you'd\n> > like an option to have an external keyring, and that's fine, but we are\n> > also going to need to have an internal one and which comes first doesn't\n> > seem particularly material to me. I don't think having one or the other\n> > in place first will really detract or make more difficult the adding of\n> > the other later.\n>\n> What I'd like specifically is to have the option of an external\n> keyring as a first class key store, where the keys stored in the\n> external keyring are used directly to encrypt the database contents.\n\nRight- understood, but that's not what the patch does today and there\nisn't anyone who has proposed such a patch, nor explained why we\ncouldn't add that capability later.\n\n> The examples in this discussion so far have all put the internal\n> keyring between any other crypto infrastructure and the file\n> encryption process. Your description above of changes to pass keys out\n> of the command sound like they may have addressed this.\n\nThe updates are intended to make it so that the KEK which wraps the\ninternal keyring will be obtained directly from the cluster key command,\npushing the transformation of a passphrase (should one be provided) into\na proper key to the script, but otherwise allowing the result of things\nlike 'openssl rand -hex 32' to be used directly as the KEK.\n\n> Regarding the on-disk format, separate storage of the key HMACs and\n> the wrapped keys sounds like a requirement for implementing a fully\n> external keyring or doing key wrapping externally via an OpenSSL or\n> nss tls pluggable engine. I'm still looking through the code.\n\nThis seems a bit confusing as the question at hand is the on-disk format\nof the internal keyring, not anything to do with an external keyring\n(which we wouldn't have any storage of and so I don't understand how it\nmakes sense to even discuss the idea of storage for the external\nkeyring..?).\n\nAgain, we will need the ability to perform the encryption using a simple\npassphrase provided by the user, while using multiple randomly generated\ndata encryption keys, and that's what the latest set of patches are\ngeared towards. I'm generally in support of adding additional\ncapabilities in this area in the future, but I don't think we need to\nbog down the current effort by demanding that be implemented today.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 20 Dec 2020 19:33:21 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Thanks Stephen,\n\nOn Mon, 21 Dec 2020 at 00:33, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Alastair Turner (minion@decodable.me) wrote:\n...\n> >\n> > What I'd like specifically is to have the option of an external\n> > keyring as a first class key store, where the keys stored in the\n> > external keyring are used directly to encrypt the database contents.\n>\n> Right- understood, but that's not what the patch does today and there\n> isn't anyone who has proposed such a patch, nor explained why we\n> couldn't add that capability later.\n>\n\nI'm keen to propose a patch along those lines, even if just as a\nsample. Minimising the amount of code which would have to be unpicked\nin that effort would be great. Particularly avoiding any changes to\ndata structures, since that may effectively block adding new\ncapabilities.\n\n>\n> >...Your description above of changes to pass keys out\n> > of the command sound like they may have addressed this.\n>\n> The updates are intended to make it so that the KEK which wraps the\n> internal keyring will be obtained directly from the cluster key command,\n> pushing the transformation of a passphrase (should one be provided) into\n> a proper key to the script, but otherwise allowing the result of things\n> like 'openssl rand -hex 32' to be used directly as the KEK.\n>\n\nSounds good.\n\n>\n> > Regarding the on-disk format, separate storage of the key HMACs and\n> > the wrapped keys sounds like a requirement for implementing a fully\n> > external keyring or doing key wrapping externally via an OpenSSL or\n> > nss tls pluggable engine. I'm still looking through the code.\n>\n> This seems a bit confusing as the question at hand is the on-disk format\n> of the internal keyring, not anything to do with an external keyring\n> (which we wouldn't have any storage of and so I don't understand how it\n> makes sense to even discuss the idea of storage for the external\n> keyring..?).\n>\n\nA requirement for validating the keys, no matter which keyring they\ncame from, was mentioned along the way. Since there would be no point\nin validating keys from the internal ring twice, storing the\nvalidation data (HMACs in the current design) separately from the\ninternal keyring's keys seems like it would avoid changing the data\nstructures for the internal keyring when adding an external option.\n\n>\n> Again, we will need the ability to perform the encryption using a simple\n> passphrase provided by the user, while using multiple randomly generated\n> data encryption keys, and that's what the latest set of patches are\n> geared towards. I'm generally in support of adding additional\n> capabilities in this area in the future, but I don't think we need to\n> bog down the current effort by demanding that be implemented today.\n>\n\nI'm really not looking to bog down the current effort.\n\nI'll have a go at adding another keyring and/or abstracting the key\nwrap and see how well a true peer to the passphrase approach fits in.\n\nRegards\nAlastair\n\n\n",
"msg_date": "Mon, 21 Dec 2020 02:31:57 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Alastair Turner (minion@decodable.me) wrote:\n> On Mon, 21 Dec 2020 at 00:33, Stephen Frost <sfrost@snowman.net> wrote:\n> > * Alastair Turner (minion@decodable.me) wrote:\n> > > What I'd like specifically is to have the option of an external\n> > > keyring as a first class key store, where the keys stored in the\n> > > external keyring are used directly to encrypt the database contents.\n> >\n> > Right- understood, but that's not what the patch does today and there\n> > isn't anyone who has proposed such a patch, nor explained why we\n> > couldn't add that capability later.\n> \n> I'm keen to propose a patch along those lines, even if just as a\n> sample. Minimising the amount of code which would have to be unpicked\n> in that effort would be great. Particularly avoiding any changes to\n> data structures, since that may effectively block adding new\n> capabilities.\n\nOk, sure, but are there such changes that would need to be made for this\ncase...? Seems to only be speculation at this point.\n\n> > > Regarding the on-disk format, separate storage of the key HMACs and\n> > > the wrapped keys sounds like a requirement for implementing a fully\n> > > external keyring or doing key wrapping externally via an OpenSSL or\n> > > nss tls pluggable engine. I'm still looking through the code.\n> >\n> > This seems a bit confusing as the question at hand is the on-disk format\n> > of the internal keyring, not anything to do with an external keyring\n> > (which we wouldn't have any storage of and so I don't understand how it\n> > makes sense to even discuss the idea of storage for the external\n> > keyring..?).\n> \n> A requirement for validating the keys, no matter which keyring they\n> came from, was mentioned along the way. Since there would be no point\n> in validating keys from the internal ring twice, storing the\n> validation data (HMACs in the current design) separately from the\n> internal keyring's keys seems like it would avoid changing the data\n> structures for the internal keyring when adding an external option.\n\nThis doesn't strike me as very clear- specifically which HMACs are you\nreferring to which would need to be stored separately from the internal\nkeyring to make it possible for an external keyring to be used?\n\n> > Again, we will need the ability to perform the encryption using a simple\n> > passphrase provided by the user, while using multiple randomly generated\n> > data encryption keys, and that's what the latest set of patches are\n> > geared towards. I'm generally in support of adding additional\n> > capabilities in this area in the future, but I don't think we need to\n> > bog down the current effort by demanding that be implemented today.\n> \n> I'm really not looking to bog down the current effort.\n\nGreat, glad to hear that.\n\n> I'll have a go at adding another keyring and/or abstracting the key\n> wrap and see how well a true peer to the passphrase approach fits in.\n\nHaving patches to review and consider definitely helps, imv.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 20 Dec 2020 21:38:55 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 05:59:09PM -0500, Stephen Frost wrote:\n> However, after chatting with Bruce about it for a bit this weekend, I'm\n> not sure that we need to tackle the second case today. I don't think\n> there's any doubt that there will be users who will want PG to manage\n> the keyring and to be able to work with just a passphrase, as Bruce has\n> worked to make happen here and which we have a patch for. I'm hoping\n> Bruce will post a new patch soon based on the work that he and I\n> discussed today (mostly just clarifications around keys vs. passphrases\n> and having the PG side accept a key which the command that's run will\n> produce). With that, I'm feeling pretty comfortable that we can move\n> forward and at least get this initial work committed.\n\nOK, here is the updated patch. The major change, as Stephen pointed\nout, is that the cluster_key_command (was cluster_passphrase_command) now\nreturns a hex-string representing the 32-byte KEK, rather than having\nthe server hash whatever the command used to return. This avoids\ndouble-hashing in cases where you are _not_ using a passphrase, but are\ncomputing a random 32-byte value in the script. This does mean the\nscript has to run sha256 for passphrases, but it seems like a win. \nUpdated scripts are attached too.\n\nThe URLs are still accurate:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:key.diff\n\thttps://github.com/bmomjian/postgres/compare/key...bmomjian:key-alter.diff\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Mon, 21 Dec 2020 16:35:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
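The split of responsibilities Bruce describes in the message above — the command emits 64 hex digits for the 32-byte KEK, and any passphrase hashing happens in the script rather than the server — can be sketched roughly as follows. This is only an illustrative sketch, not the sample scripts attached to the patch; the function names are invented here.

```shell
#!/bin/sh
# Illustrative sketch (not the patch's sample scripts) of the two ways a
# cluster key command can produce the hex-encoded 32-byte KEK on stdout.

# Passphrase case: the script, not the server, runs SHA-256, so scripts
# that already produce a random key are not forced into a double hash.
passphrase_to_kek() {
    printf '%s' "$1" | sha256sum | cut -d' ' -f1
}

# Random-key case: emit 32 random bytes as 64 hex digits directly.
random_kek() {
    openssl rand -hex 32
}

passphrase_to_kek "example passphrase"
```

Either variant writes exactly 64 hex digits, which is the contract the server side would consume under this design.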
{
"msg_contents": "On Sun, Dec 20, 2020 at 09:38:55PM -0500, Stephen Frost wrote:\n> > I'll have a go at adding another keyring and/or abstracting the key\n> > wrap and see how well a true peer to the passphrase approach fits in.\n> \n> Having patches to review and consider definitely helps, imv.\n\nI disagree. Our order is:\n\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\thttps://wiki.postgresql.org/wiki/Todo#Development_Process\n\nso, by posting a patch, you are decided to skip the first _two_\nrequirements. I think it might be time for me to create a new thread\nso it is not confused by whatever is posted here.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 21 Dec 2020 16:44:34 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 05:59:09PM -0500, Stephen Frost wrote:\n> I do generally like the idea of having the single command able to be\n> used to either provide the KEK (where PG manages the keyring\n> internally), or called multiple times to retrieve the DEKs (in which\n> case PG wouldn't be managing the keyring).\n> \n> However, after chatting with Bruce about it for a bit this weekend, I'm\n> not sure that we need to tackle the second case today. I don't think\n> there's any doubt that there will be users who will want PG to manage\n> the keyring and to be able to work with just a passphrase, as Bruce has\n> worked to make happen here and which we have a patch for. I'm hoping\n> Bruce will post a new patch soon based on the work that he and I\n> discussed today (mostly just clarifications around keys vs. passphrases\n> and having the PG side accept a key which the command that's run will\n> produce). With that, I'm feeling pretty comfortable that we can move\n> forward and at least get this initial work committed.\n\nI agree with very little of this sub-discussion, but I am still reading\nit to see if I can get useful ideas from it. Looking at what we used to\ndo with 'passphrase', we did the prompting in the script, and did the\nhash, HMAC validation, data key generation and its wrap in the server. \nWith the 'cluster key' patch I just posted, the hashing of the\npassphrase is removed from the server and happens only in the script,\nbecause in many cases the hashing is not needed, and double-hashing is\nless secure, so that seems like a win.\n\nTo go any further, you are starting to look at possible data key\ngeneration in the script, either at boot time, and then wrapped with a\npassphrase, or just retrieved on every boot. That would create a larger\nburden for any script, meaning a passphrase usage would have to not only\nhash, which it does now, but also verify its MAC, then decrypt keys\nstored in the file system, then echo those to its output, to be read by\nthe server. I think this is the big point --- I have five scripts, and\nonly one needs to hash its input (passphrase). If you go any farther in\npushing code into the scripts, these scripts become much harder to\nwrite, and much harder to do error checks.\n\nI think the common case is passphrase, so I want to optimize for that. \nPushing anymore features into the scripts is going to make that common\ncase less reliable, which I am opposed to.\n\nAlso, as far as Desirability, we only have _one_ person saying people\nwill want more options. I need to hear from a lot more people before\nthis gets added to Postgres.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 21 Dec 2020 16:56:48 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 2:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Sun, Dec 20, 2020 at 09:38:55PM -0500, Stephen Frost wrote:\n> > > I'll have a go at adding another keyring and/or abstracting the key\n> > > wrap and see how well a true peer to the passphrase approach fits in.\n> >\n> > Having patches to review and consider definitely helps, imv.\n>\n> I disagree. Our order is:\n>\n> Desirability -> Design -> Implement -> Test -> Review -> Commit\n> https://wiki.postgresql.org/wiki/Todo#Development_Process\n>\n> so, by posting a patch, you are decided to skip the first _two_\n> requirements. I think it might be time for me to create a new thread\n> so it is not confused by whatever is posted here.\n>\n\nI agree with Stephen; so maybe that part of the Wiki needs to be updated\ninstead of having it sit there for use as a hammer when someone disagrees\nwith a proffered patch.\n\nOr maybe consider that \"a patch\" doesn't only mean \"implement\" - it is\nsimply another language through which desirability and design can also be\ncommunicated. The author gets to decide how wordy their prose is and\nchoosing a wordy but concise language that is understandable by the vast\nmajority of the target audience seems like it should be considered\nacceptable.\n\nDavid J.\n\nOn Mon, Dec 21, 2020 at 2:44 PM Bruce Momjian <bruce@momjian.us> wrote:On Sun, Dec 20, 2020 at 09:38:55PM -0500, Stephen Frost wrote:\n> > I'll have a go at adding another keyring and/or abstracting the key\n> > wrap and see how well a true peer to the passphrase approach fits in.\n> \n> Having patches to review and consider definitely helps, imv.\n\nI disagree. Our order is:\n\n Desirability -> Design -> Implement -> Test -> Review -> Commit\n https://wiki.postgresql.org/wiki/Todo#Development_Process\n\nso, by posting a patch, you are decided to skip the first _two_\nrequirements. I think it might be time for me to create a new thread\nso it is not confused by whatever is posted here.I agree with Stephen; so maybe that part of the Wiki needs to be updated instead of having it sit there for use as a hammer when someone disagrees with a proffered patch.Or maybe consider that \"a patch\" doesn't only mean \"implement\" - it is simply another language through which desirability and design can also be communicated. The author gets to decide how wordy their prose is and choosing a wordy but concise language that is understandable by the vast majority of the target audience seems like it should be considered acceptable.David J.",
"msg_date": "Mon, 21 Dec 2020 15:00:32 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 03:00:32PM -0700, David G. Johnston wrote:\n> I agree with Stephen; so maybe that part of the Wiki needs to be updated\n> instead of having it sit there for use as a hammer when someone disagrees with\n> a proffered patch.\n> \n> Or maybe consider that \"a patch\" doesn't only mean \"implement\" - it is simply\n> another language through which desirability and design can also be\n> communicated. The author gets to decide how wordy their prose is and choosing\n> a wordy but concise language that is understandable by the vast majority of the\n> target audience seems like it should be considered acceptable.\n\nIf you want to debate that TODO section, you will need to start a new\nthread. I suggest Alastair's patch also appear in a new thread.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 21 Dec 2020 17:17:52 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 04:35:14PM -0500, Bruce Momjian wrote:\n> On Sun, Dec 20, 2020 at 05:59:09PM -0500, Stephen Frost wrote:\n> OK, here it the updated patch. The major change, as Stephen pointed\n> out, is that the cluser_key_command (was cluster_passphrase_command) now\n> returns a hex-string representing the 32-byte KEK, rather than having\n> the server hash whatever the command used to return. This avoids\n> double-hashing in cases where you are _not_ using a passphrase, but are\n> computing a random 32-byte value in the script. This does mean the\n> script has to run sha256 for passphrases, but it seems like a win. \n> Updated script are attached too.\n\nAttached is the script patch. It is also at:\n\n\thttps://github.com/postgres/postgres/compare/master...bmomjian:cfe-sh.diff\n\nI think it still needs docs but those will have to be done after the\ncode doc patch is added.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Mon, 21 Dec 2020 22:07:48 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 10:07:48PM -0500, Bruce Momjian wrote:\n> Attached is the script patch. It is also at:\n> \n> \thttps://github.com/postgres/postgres/compare/master...bmomjian:cfe-sh.diff\n> \n> I think it still needs docs but those will have to be done after the\n> code doc patch is added.\n\nHere is an updated patch. Are people happy with the Makefile, its\nlocation in the source tree, and the install directory name? I used the\ndirectory name 'auth_commands' because I thought 'auth' was too easily\nmisinterpreted. I put the scripts in /src/backend/utils/auth_commands. \nIt also contains a script that can be used for SSL passphrase prompting,\nbut I haven't written the C code for that yet.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 22 Dec 2020 10:40:17 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hi Bruce\n\nIn ckey_passphrase.sh.sample\n\n+\n+echo \"$PASS\" | sha256sum | cut -d' ' -f1\n+\n\nUnder the threat model discussed, a copy of the keyfile could be\nattacked offline. So getting from passphrase to DEKs should be as\nresource intensive as possible to slow down brute-force attempts.\nInstead of just a SHA hash, this should be at least a PBKDF2 (PKCS#5)\noperation or preferably scrypt (which is memory intensive as well as\ncomputationally intensive). While OpenSSL provides these key\nderivation options for it's encoding operations, it does not offer a\ncommand line option for key derivation using them. What's your view on\nincluding third party utilities like nettlepbkdf in the sample or\nproviding this utility as a compiled program?\n\nIn the same theme - in ossl_cipher_ctx_create and the functions using\nthat context should rather be using the key-wrapping variants of the\ncipher. As well as lower throughput, with the resulting effects on\nbrute force attempts, the key wrap variants provide more robust\nciphertext output\n\"\nKW, KWP, and TKW are each robust in the sense that each bit of output\ncan be expected to depend in a nontrivial fashion on each bit of\ninput, even when the length of the input data is greater than one\nblock. This property is achieved at the cost of a considerably lower\nthroughput rate, compared to other authenticated-encryption modes, but\nthe tradeoff may be appropriate for some key management applications.\nFor example, a robust method may be desired when the length of the\nkeys to be protected is greater than the block size of the underlying\nblock cipher, or when the value of the protected data is very high.\n\" - NIST SP 800-38F\n\nand aim to manage the risk of short or predictable IVs ([1] final para\nof page 1)\n\nOn Tue, 22 Dec 2020 at 15:40, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Here is an updated patch. Are people happy with the Makefile, its\n> location in the source tree, and the install directory name? I used the\n> directory name 'auth_commands' because I thought 'auth' was too easily\n> misinterpreted. I put the scripts in /src/backend/utils/auth_commands.\n>\n\nWhat's implemented in these patches is an internal keystore, wrapped\nwith a key derived from a passphrase. I'd think that the scripts\ndirectory should reflect what they interact with, so\n'keystore_commands' or 'local_keystore_command' sounds more specific\nand therefore better than 'auth_commands'.\n\nRegards\nAlastair\n\n[1] https://web.cs.ucdavis.edu/~rogaway/papers/keywrap.pdf\n\n\n",
"msg_date": "Tue, 22 Dec 2020 20:15:27 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
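Alastair's argument above for a tunable work factor (PBKDF2 or scrypt rather than a single SHA-256 pass) can be made concrete with a toy stretching loop. To be clear, iterated SHA-256 is NOT PBKDF2 or scrypt and is shown only to illustrate the cost-per-guess idea; the `stretch` helper and its arguments are invented for this sketch.

```shell
# Toy illustration of key stretching: derive the key by hashing the
# salted passphrase repeatedly, so every offline brute-force guess
# costs the attacker "iters" hash invocations instead of one. Real
# deployments should use PBKDF2 or scrypt, as the message argues.
stretch() {
    pass=$1
    salt=$2
    iters=$3
    h=$(printf '%s%s' "$salt" "$pass" | sha256sum | cut -d' ' -f1)
    i=1
    while [ "$i" -lt "$iters" ]; do
        h=$(printf '%s' "$h" | sha256sum | cut -d' ' -f1)
        i=$((i + 1))
    done
    printf '%s\n' "$h"
}

stretch "my passphrase" "per-cluster-salt" 100
```

The output is still 64 hex digits, so it could slot into the same KEK contract; raising the iteration count raises the attacker's cost linearly, which is the whole point of the PBKDF2/scrypt suggestion.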
{
"msg_contents": "On Tue, Dec 22, 2020 at 08:15:27PM +0000, Alastair Turner wrote:\n> Hi Bruce\n> \n> In ckey_passphrase.sh.sample\n> \n> +\n> +echo \"$PASS\" | sha256sum | cut -d' ' -f1\n> +\n> \n> Under the threat model discussed, a copy of the keyfile could be\n> attacked offline. So getting from passphrase to DEKs should be as\n> resource intensive as possible to slow down brute-force attempts.\n> Instead of just a SHA hash, this should be at least a PBKDF2 (PKCS#5)\n\nI am satisfied with the security of SHA256.\n\n> On Tue, 22 Dec 2020 at 15:40, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Here is an updated patch. Are people happy with the Makefile, its\n> > location in the source tree, and the install directory name? I used the\n> > directory name 'auth_commands' because I thought 'auth' was too easily\n> > misinterpreted. I put the scripts in /src/backend/utils/auth_commands.\n> >\n> \n> What's implemented in these patches is an internal keystore, wrapped\n> with a key derived from a passphrase. I'd think that the scripts\n> directory should reflect what they interact with, so\n> 'keystore_commands' or 'local_keystore_command' sounds more specific\n> and therefore better than 'auth_commands'.\n\nThe point is that some commands are used for keystore and some for SSL\ncertificate passphrase entry, hence \"auth\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 22 Dec 2020 16:13:06 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 04:13:06PM -0500, Bruce Momjian wrote:\n> On Tue, Dec 22, 2020 at 08:15:27PM +0000, Alastair Turner wrote:\n> > Hi Bruce\n> > \n> > In ckey_passphrase.sh.sample\n> > \n> > +\n> > +echo \"$PASS\" | sha256sum | cut -d' ' -f1\n> > +\n> > \n> > Under the threat model discussed, a copy of the keyfile could be\n> > attacked offline. So getting from passphrase to DEKs should be as\n> > resource intensive as possible to slow down brute-force attempts.\n> > Instead of just a SHA hash, this should be at least a PBKDF2 (PKCS#5)\n> \n> I am satisfied with the security of SHA256.\n\nSorry, I should have said I am happy with a SHA512 HMAC in a 256-bit\nkeyspace.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 22 Dec 2020 16:34:09 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 10:40:17AM -0500, Bruce Momjian wrote:\n> On Mon, Dec 21, 2020 at 10:07:48PM -0500, Bruce Momjian wrote:\n> > Attached is the script patch. It is also at:\n> > \n> > \thttps://github.com/postgres/postgres/compare/master...bmomjian:cfe-sh.diff\n> > \n> > I think it still needs docs but those will have to be done after the\n> > code doc patch is added.\n> \n> Here is an updated patch. Are people happy with the Makefile, its\n> location in the source tree, and the install directory name? I used the\n> directory name 'auth_commands' because I thought 'auth' was too easily\n> misinterpreted. I put the scripts in /src/backend/utils/auth_commands. \n> It also contains a script that can be used for SSL passphrase prompting,\n> but I haven't written the C code for that yet.\n\nHere is a new patch, build on previous patches, which allows for the SSL\npassphrase to be prompted from the terminal.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 22 Dec 2020 21:58:18 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 11:45:08AM +0000, Alastair Turner wrote:\n> On Fri, 18 Dec 2020 at 21:36, Stephen Frost <sfrost@snowman.net> wrote:\n> > These ideas don't seem too bad but I'd think we'd pass which key we want\n> > on the command-line using a %i or something like that, rather than using\n> > stdin, unless there's some reason that would be an issue..? Certainly\n> > the CLI utilities I've seen tend to expect the name of the secret that\n> > you're asking for to be passed on the command line.\n> \n> Agreed. I was working on the assumption that the calling process\n> (initdb or pg_ctl) would have access to the challenge material and be\n> passing it to the utility, so putting it in a command line would allow\n> it to leak. If the utility is accessing the challenge material\n> directly, then just an indicator of which key to work with would be a\n> lot simpler, true.\n\nI want to repeat here what I said in another thread:\n\n> I think ultimately we will need three commands to control the keys.\n> First, there is the cluster_key_command, which we have now. Second, I\n> think we will need an optional command which returns random bytes ---\n> this would allow users to get random bytes from a different source than\n> that used by the server code.\n> \n> Third, we will probably need a command that returns the data encryption\n> keys directly, either heap/index or WAL keys, probably based on key\n> number --- you pass the key number you want, and the command returns the\n> data key. There would not be a cluster key in this case, but the\n> command could still prompt the user for perhaps a password to the KMS\n> server. It could not be used if any of the previous two commands are\n> used. I assume an HMAC would still be stored in the pg_cryptokeys\n> directory to check that the right key has been returned.\n> \n> I thought we should implement the first command, because it will\n> probably be the most common and easiest to use, and then see what people\n> want added.\n\nThere is also a fourth option where the command returns multiple keys,\none per line of hex digits. That could be written in shell script, but\nit would be fragile and complex. It could be written in Perl, but that\nwould add a new language requirement for this feature. It could be\nwritten in C, but that would limits its flexibility because changes\nwould require a recompile, and you would probably need that C file to\ncall external scripts to tailor input like we do now from the server.\n\nYou could actually write a full implemention of what we do on the server\nside in client code, but pg_alterckey would not work, since it would not\nknow the data format, so we would need another cluster key alter for that.\n\nIt could be written as a C extension, but that would be also have C's\nlimitations. In summary, having the server do most of the complex work\nfor the default case seems best, and eventually allowing the ability for\nthe client to do everything seems ideal. I think we need more input\nbefore we go beyond what we do now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sun, 27 Dec 2020 12:25:26 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
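Editor's note: Bruce's third option above keeps an HMAC in the pg_cryptokeys directory so the server can check that an externally returned data key is the right one. As a rough stdlib sketch of that verification step only (the function names, the separate 32-byte check key, and the key sizes are my own assumptions for illustration, not the patch's actual on-disk format):

```python
import hashlib
import hmac
import secrets

def store_key_check(dek: bytes, check_key: bytes) -> bytes:
    # HMAC of the data key, kept on disk (e.g. under pg_cryptokeys)
    # so the server never has to store the key itself to verify it.
    return hmac.new(check_key, dek, hashlib.sha256).digest()

def verify_returned_key(candidate: bytes, check_key: bytes, stored_mac: bytes) -> bool:
    # Recompute the HMAC over the key handed back by the external
    # command and compare in constant time against the stored value.
    expected = hmac.new(check_key, candidate, hashlib.sha256).digest()
    return hmac.compare_digest(expected, stored_mac)

dek = secrets.token_bytes(32)        # a data encryption key
check_key = secrets.token_bytes(32)  # hypothetical per-cluster check key
stored_mac = store_key_check(dek, check_key)

assert verify_returned_key(dek, check_key, stored_mac)
```

The point is only that a wrong key can be detected before it is used to decrypt anything; the real patch's MAC construction and file layout may differ.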
{
"msg_contents": "> I want to repeat here what I said in another thread:\n>\n>> I think ultimately we will need three commands to control the keys.\n>> First, there is the cluster_key_command, which we have now. Second, I\n>> think we will need an optional command which returns random bytes ---\n>> this would allow users to get random bytes from a different source than\n>> that used by the server code.\n>>\n>> Third, we will probably need a command that returns the data encryption\n>> keys directly, either heap/index or WAL keys, probably based on key\n>> number --- you pass the key number you want, and the command returns the\n>> data key. There would not be a cluster key in this case, but the\n>> command could still prompt the user for perhaps a password to the KMS\n>> server. It could not be used if any of the previous two commands are\n>> used. I assume an HMAC would still be stored in the pg_cryptokeys\n>> directory to check that the right key has been returned.\n>>\n>> I thought we should implement the first command, because it will\n>> probably be the most common and easiest to use, and then see what people\n>> want added.\n>\n> There is also a fourth option where the command returns multiple keys,\n> one per line of hex digits. That could be written in shell script, but\n> it would be fragile and complex. It could be written in Perl, but that\n> would add a new language requirement for this feature. It could be\n> written in C, but that would limits its flexibility because changes\n> would require a recompile, and you would probably need that C file to\n> call external scripts to tailor input like we do now from the server.\n>\n> You could actually write a full implemention of what we do on the server\n> side in client code, but pg_alterckey would not work, since it would not\n> know the data format, so we would need another cluster key alter for that.\n>\n> It could be written as a C extension, but that would be also have C's\n> limitations. 
In summary, having the server do most of the complex work\n> for the default case seems best, and eventually allowing the ability for\n> the client to do everything seems ideal. I think we need more input\n> before we go beyond what we do now.\n\nAs I said in the commit thread, I disagree with this approach because it \npushes for no or partial or maybe bad design.\n\nI think that an API should be carefully thought about, without assumption \nabout the underlying cryptography (algorithm, key lengths, modes, how keys \nare derived and stored, and so on), and its usefulness be demonstrated by \nactually being used for one implementation which would be what is \ncurrently being proposed in the patch, and possibly others thrown in for \nfree.\n\nThe implementations should not have to be in any particular language: \nShell, Perl, Python, C should be possible.\n\nAfter giving it more thought during the day, I think that only one\ncommand and a basic protocol is needed. Maybe something as simple as\n\n /path/to/command --options arguments…\n\nWith a basic (text? binary?) protocol on stdin/stdout (?) for the \ndifferent functions. What the command actually does (connect to a remote \nserver, ask for a master password, open some other database, whatever) \nshould be irrelevant to pg, which would just get and pass bunch of bytes \nto functions, which could use them for keys, secrets, whatever, and be \neasily replaceable.\n\nThe API should NOT make assumptions about the cryptographic design, what \ndepends about what, where things are stored… ISTM that Pg should only care \nabout naming keys, holding them when created/retrieved (but not create \nthem), basically interacting with the key manager, passing the stuff to \nfunctions for encryption/decryption seen as black boxes.\n\nI may have suggested something along these lines at the beginning of the \nkey management thread, probably. 
Not going this way implicitely implies \nmaking some assumptions which may or may not suit other use cases, so \nmakes them specific not generic. I do not think pg should do that.\n\n-- \nFabien.",
"msg_date": "Mon, 28 Dec 2020 18:15:44 -0400 (AST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
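Editor's note: Fabien's single-command idea above could take many shapes. One minimal sketch of a stdin/stdout exchange with an external key manager (the `GET <name>` request line, the hex reply, and the toy manager below are entirely invented for illustration; nothing here is from the patch):

```python
import subprocess
import sys

def request_key(command, key_name: str) -> bytes:
    # Send one request line to the external key-manager process and read
    # one hex-encoded key line back.  How the process produces the bytes
    # (KMS call, password prompt, ...) is invisible to the caller.
    proc = subprocess.run(
        command,
        input=f"GET {key_name}\n",
        capture_output=True,
        text=True,
        check=True,
    )
    return bytes.fromhex(proc.stdout.strip())

# A trivial stand-in for /path/to/command that answers every request
# with a fixed 32-byte key.
toy_manager = [
    sys.executable, "-c",
    "import sys; sys.stdin.readline(); print('ab' * 32)",
]

key = request_key(toy_manager, "heap")
assert key == b"\xab" * 32
```

The design question in the thread is exactly whether such an external process should hold the master secret and serve per-file keys on demand, or whether the server should unwrap keys itself.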
{
"msg_contents": "On Mon, Dec 28, 2020 at 06:15:44PM -0400, Fabien COELHO wrote:\n> The API should NOT make assumptions about the cryptographic design, what\n> depends about what, where things are stored… ISTM that Pg should only care\n> about naming keys, holding them when created/retrieved (but not create\n> them), basically interacting with the key manager, passing the stuff to\n> functions for encryption/decryption seen as black boxes.\n> \n> I may have suggested something along these lines at the beginning of the key\n> management thread, probably. Not going this way implicitely implies making\n> some assumptions which may or may not suit other use cases, so makes them\n> specific not generic. I do not think pg should do that.\n\nI am not sure what else I can add to this discussion. Having something\nthat is completely external might be a nice option, but I don't think it\nis the common use-case, and will make the most common use cases more\ncomplex. Adding a pre-defined system will not prevent future work on a\ncompletely external option. I know archive_command is completely\nexternal, but that is much simpler than what would need to be done for\nkey management, key rotation, and key verification.\n\nI will say that if the community feels external-only should be the only\noption, I will stop working on this feature because I feel the result\nwould be too fragile to be reliable, and I would not want to be\nassociated with it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 30 Dec 2020 18:47:22 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Greetings,\n\n* Fabien COELHO (coelho@cri.ensmp.fr) wrote:\n> I think that an API should be carefully thought about, without assumption\n> about the underlying cryptography (algorithm, key lengths, modes, how keys\n> are derived and stored, and so on), and its usefulness be demonstrated by\n> actually being used for one implementation which would be what is currently\n> being proposed in the patch, and possibly others thrown in for free.\n\nPerhaps I'm misunderstanding, but this is basically what we have in the\ncurrently proposed patch as 'cipher.h', with cipher.c as a stub that\nfails if called directly, and cipher_openssl.c with the actual OpenSSL\nbased implementation.\n\n> The implementations should not have to be in any particular language: Shell,\n> Perl, Python, C should be possible.\n\nI disagree that it makes any sense to pass actual encryption out to a\nshell script.\n\n> After giving it more thought during the day, I think that only one\n> command and a basic protocol is needed. Maybe something as simple as\n> \n> /path/to/command --options arguments…\n\nThis is what's proposed- a command is run to acquire the KEK (as a hex\nencoded set of bytes, making it possible to use a shell script, which is\nwhat the patch does too).\n\n> With a basic (text? binary?) protocol on stdin/stdout (?) for the different\n> functions. What the command actually does (connect to a remote server, ask\n> for a master password, open some other database, whatever) should be\n> irrelevant to pg, which would just get and pass bunch of bytes to functions,\n> which could use them for keys, secrets, whatever, and be easily replaceable.\n\nRight- we just get bytes back from the command and then we use that as\nthe KEK. 
How that scripts gets those bytes is entirely up to the script\nand is not an issue for PG to worry about.\n\n> The API should NOT make assumptions about the cryptographic design, what\n> depends about what, where things are stored… ISTM that Pg should only care\n> about naming keys, holding them when created/retrieved (but not create\n> them), basically interacting with the key manager, passing the stuff to\n> functions for encryption/decryption seen as black boxes.\n\nThe API to fetch the KEK doesn't care at all about where it's stored or\nhow it's derived or anything like that. There's a relatively small\nchange which could be made to have PG request all of the keys that it'll\nneed on startup, if we want to go there as has been suggested elsewhere,\nbut even if we do that, PG needs to be able to do that itself too,\notherwise it's not a complete capability and there seems little point in\ndoing something that's just a pass-thru to something else and isn't able\nto really be used.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 30 Dec 2020 18:49:34 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
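Editor's note: for the approach Stephen describes above — running a configured command and reading back the KEK as hex-encoded bytes — a minimal reader could look like the sketch below. The helper name and the fixed 32-byte length check are my assumptions; the real server-side code is C and also handles terminal prompts, `%d`-style placeholders, etc.

```python
import subprocess

def read_cluster_key(cluster_key_command: str, key_len: int = 32) -> bytes:
    # Run the configured command and decode its hex stdout into raw KEK bytes.
    out = subprocess.run(
        cluster_key_command,
        shell=True,
        capture_output=True,
        text=True,
        check=True,
    ).stdout.strip()
    key = bytes.fromhex(out)
    if len(key) != key_len:
        raise ValueError(f"expected a {key_len}-byte key, got {len(key)} bytes")
    return key

# Any script that prints hex on stdout works; a trivial stand-in (POSIX shell):
kek = read_cluster_key(
    "echo 00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff"
)
assert len(kek) == 32
```

Because the contract is just "print hex on stdout", the command can be a shell one-liner, a prompt for a passphrase, or a call out to a KMS, which is exactly the flexibility being argued for in the thread.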
{
"msg_contents": "On Wed, Dec 30, 2020 at 06:49:34PM -0500, Stephen Frost wrote:\n> The API to fetch the KEK doesn't care at all about where it's stored or\n> how it's derived or anything like that. There's a relatively small\n> change which could be made to have PG request all of the keys that it'll\n> need on startup, if we want to go there as has been suggested elsewhere,\n> but even if we do that, PG needs to be able to do that itself too,\n> otherwise it's not a complete capability and there seems little point in\n> doing something that's just a pass-thru to something else and isn't able\n> to really be used.\n\nRight now, the script returns a cluster key (KEK), and during initdb the\nserver generates data encryption keys and wraps them with the KEK. \nDuring server start, the server validates the KEK and decrypts the data\nkeys. pg_alterckey allows changing the KEK.\n\nI think Fabien is saying this all should _only_ be done using external\ntools --- that's what I don't agree with.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 30 Dec 2020 20:35:36 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Hello Bruce,\n\n>> The API should NOT make assumptions about the cryptographic design, what\n>> depends about what, where things are stored… ISTM that Pg should only care\n>> about naming keys, holding them when created/retrieved (but not create\n>> them), basically interacting with the key manager, passing the stuff to\n>> functions for encryption/decryption seen as black boxes.\n>>\n>> I may have suggested something along these lines at the beginning of the key\n>> management thread, probably. Not going this way implicitely implies making\n>> some assumptions which may or may not suit other use cases, so makes them\n>> specific not generic. I do not think pg should do that.\n>\n> I am not sure what else I can add to this discussion. Having something \n> that is completely external might be a nice option, but I don't think it \n> is the common use-case, and will make the most common use cases more \n> complex.\n\nI'm unsure about what is the \"common use case\". ISTM that having the \npostgres process holding the master key looks like a \"no\" for me in any \nuse case with actual security concern: as the database must be remotely \naccessible, a reasonable security assumption is that a pg account could be \ncompromised, and the \"master key\" (if any, that is just one particular \ncryptographic design) should not be accessible in that case. The first \nbarrier would be pg admin account. 
With external, the options are that the \nkey is hold by another id, so the host must be compromised as well, and \ncould even be managed remotely on another host (I agree that this later \noption would be fragile because of the implied network connection, but it \nwould be a tradeoff depending on the security concerns, but it pretty much \ncorrespond to the kerberos model).\n\n> Adding a pre-defined system will not prevent future work on a\n> completely external option.\n\nYes and no.\n\nHaving two side-by-side cluster-encryption scheme in core, the first that \ncould be implemented on top of the second without much effort, would look \npretty weird. Moreover, required changes in core to allow an API are the \nvery same as the one in this patch. The other concern I have with doing \nthe cleaning work afterwards is that the API would be somehow in the \nmiddle of the existing functions if it has not been thought of before.\n\n> I know archive_command is completely external, but that is much simpler \n> than what would need to be done for key management, key rotation, and \n> key verification.\n\nI'm not sure of the design of the key rotation stuff as I recall it from \nthe patches I looked at, but the API design looks like the more pressing \nissue. First, the mere existence of an \"master key\" is a cryptographic \nchoice which is debatable, because it creates issues if one must be able \nto change it, hence the whole \"key rotation\" stuff. ISTM that what core \nneeds to know is that one particular key is changed by a new one and the \nunderlying file is rewritten. It should not need to know anything about \nmaster keys and key derivation at all. It should need to know that the \nuser wants to change file keys, though.\n\n> I will say that if the community feels external-only should be the only\n> option, I will stop working on this feature because I feel the result\n> would be too fragile to be reliable,\n\nI'm do not see why it would be the case. 
I'm just arguing to have key \nmanagement in a separate, possibly suid something-else, process, which \ngiven the security concerns which dictates the feature looks like a must \nhave, or at least must be possible. From a line count point of view, it \nshould be a small addition to the current code.\n\n> and I would not want to be associated with it.\n\nYou do as you want. I'm no one, you can ignore me and proceed to commit \nwhatever you want. I'm only advising to the best of my ability, what I \nthink should be the level of API pursued for such a feature in pg, at the \ndesign level.\n\nI feel that the current proposal is built around one particular use case \nwith many implicit and unnecessary assumptions without giving much \nthoughts to the generic API which should really be pg concern, and I do \nnot think that it can really be corrected once the ship has sailed, hence \nmy attempt at having some thoughts given to the matter before that.\n\nIMHO, the *minimum* should be to have the API clearly designed, and the \nimplementation compatible with it at some level, including not having \nassumptions about underlying cryptographic functions and key derivations \n(I mean at the API level, the code itself can do it obviously).\n\nIf you want a special \"key_mgt_command = internal\" because you feel that \nprocesses are fragile and unreliable and do not bring security, so be it, \nbut I think that the API should be designed beforehand nevertheless.\n\nNote that if people think that I'm wrong, then I'm wrong by definition.\n\n-- \nFabien.",
"msg_date": "Thu, 31 Dec 2020 11:11:11 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Hello Stephen,\n\n>> The implementations should not have to be in any particular language: Shell,\n>> Perl, Python, C should be possible.\n>\n> I disagree that it makes any sense to pass actual encryption out to a\n> shell script.\n\nYes, sure. I'm talking about key management. For actual crypto functions, \nISTM that they should be \"replaceable\", i.e. if some institution does not \nbelieve in AES then they could switch to something else.\n\n>> After giving it more thought during the day, I think that only one\n>> command and a basic protocol is needed. Maybe something as simple as\n>>\n>> /path/to/command --options arguments…\n>\n> This is what's proposed- a command is run to acquire the KEK (as a hex\n> encoded set of bytes, making it possible to use a shell script, which is\n> what the patch does too).\n\nYep, but that is not what I'm trying to advocate. The \"KEK\" (if any), \nwould stay in the process, not be given back to the database or command \nusing it. Then the database/tool would interact with the command to get \nthe actual per-file keys when needed.\n\n> [...] The API to fetch the KEK doesn't care at all about where it's \n> stored or how it's derived or anything like that.\n\n> There's a relatively small change which could be made to have PG request \n> all of the keys that it'll need on startup, if we want to go there as \n> has been suggested elsewhere, but even if we do that, PG needs to be \n> able to do that itself too, otherwise it's not a complete capability and \n> there seems little point in doing something that's just a pass-thru to \n> something else and isn't able to really be used.\n\nI do not understand. Postgres should provide a working implementation, \nwhether the functions are directly inside pg or though an external \nprocess, they need to be there anyway. I'm trying to fix a clean, possibly \nexternal API so that it is possible to have something different from the \nchoices made in the patch.\n\n-- \nFabien.",
"msg_date": "Thu, 31 Dec 2020 11:21:12 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "\nHello,\n\n>> The API to fetch the KEK doesn't care at all about where it's stored or\n>> how it's derived or anything like that. There's a relatively small\n>> change which could be made to have PG request all of the keys that it'll\n>> need on startup, if we want to go there as has been suggested elsewhere,\n>> but even if we do that, PG needs to be able to do that itself too,\n>> otherwise it's not a complete capability and there seems little point in\n>> doing something that's just a pass-thru to something else and isn't able\n>> to really be used.\n>\n> Right now, the script returns a cluster key (KEK), and during initdb the\n> server generates data encryption keys and wraps them with the KEK.\n> During server start, the server validates the KEK and decrypts the data\n> keys. pg_alterckey allows changing the KEK.\n>\n> I think Fabien is saying this all should _only_ be done using external\n> tools --- that's what I don't agree with.\n\nYep.\n\nI could compromise on \"could be done using an external tool\", but that \nrequires designing the API and thinking about where and how things are \ndone before everything is hardwired. Designing afterwards is too late. \nISTM that the current patch does not separate API design and cryptographic \ndesign, so both are deeply entwined, and would be difficult to \ndisentangle.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 31 Dec 2020 11:31:48 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Wed, Dec 30, 2020 at 06:49:34PM -0500, Stephen Frost wrote:\n> > The API to fetch the KEK doesn't care at all about where it's stored or\n> > how it's derived or anything like that. There's a relatively small\n> > change which could be made to have PG request all of the keys that it'll\n> > need on startup, if we want to go there as has been suggested elsewhere,\n> > but even if we do that, PG needs to be able to do that itself too,\n> > otherwise it's not a complete capability and there seems little point in\n> > doing something that's just a pass-thru to something else and isn't able\n> > to really be used.\n> \n> Right now, the script returns a cluster key (KEK), and during initdb the\n> server generates data encryption keys and wraps them with the KEK. \n> During server start, the server validates the KEK and decrypts the data\n> keys. pg_alterckey allows changing the KEK.\n\nRight- all of those are really necessary things for PG to have a useful\nimplementation.\n\n> I think Fabien is saying this all should _only_ be done using external\n> tools --- that's what I don't agree with.\n\nI also don't agree that everything should be external. We effectively\nhave that today and it certainly doesn't get us any closer to having a\nsolution in PG, a point which we are frequently pointed to as a reason\nwhy certain, entirely reasonable, use-cases can't be satisfied with PG.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 31 Dec 2020 09:18:05 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "Greetings,\n\n* Fabien COELHO (coelho@cri.ensmp.fr) wrote:\n> >>The API should NOT make assumptions about the cryptographic design, what\n> >>depends about what, where things are stored… ISTM that Pg should only care\n> >>about naming keys, holding them when created/retrieved (but not create\n> >>them), basically interacting with the key manager, passing the stuff to\n> >>functions for encryption/decryption seen as black boxes.\n> >>\n> >>I may have suggested something along these lines at the beginning of the key\n> >>management thread, probably. Not going this way implicitely implies making\n> >>some assumptions which may or may not suit other use cases, so makes them\n> >>specific not generic. I do not think pg should do that.\n> >\n> >I am not sure what else I can add to this discussion. Having something\n> >that is completely external might be a nice option, but I don't think it\n> >is the common use-case, and will make the most common use cases more\n> >complex.\n> \n> I'm unsure about what is the \"common use case\". ISTM that having the\n> postgres process holding the master key looks like a \"no\" for me in any use\n> case with actual security concern: as the database must be remotely\n> accessible, a reasonable security assumption is that a pg account could be\n> compromised, and the \"master key\" (if any, that is just one particular\n> cryptographic design) should not be accessible in that case. The first\n> barrier would be pg admin account. 
With external, the options are that the\n> key is hold by another id, so the host must be compromised as well, and\n> could even be managed remotely on another host (I agree that this later\n> option would be fragile because of the implied network connection, but it\n> would be a tradeoff depending on the security concerns, but it pretty much\n> correspond to the kerberos model).\n\nNo, this isn't anything like the Kerberos model and there's no case\nwhere the PG account won't have access to the DEK (data encryption key)\nduring normal operation (except with the possibility of off-loading to a\nhardware crypto device which, while interesting, is definitely something\nthat's of very limited utility and which could be added on as a\ncapability later anyway. This is also well understood by those who are\ninterested in this capability and it's perfectly acceptable that PG will\nhave, in memory, the DEK.\n\nThe keys which are stored on disk with the proposed patch are encrypted\nusing a KEK which is external to PG- that's all part of the existing\npatch and design.\n\n> >Adding a pre-defined system will not prevent future work on a\n> >completely external option.\n> \n> Yes and no.\n> \n> Having two side-by-side cluster-encryption scheme in core, the first that\n> could be implemented on top of the second without much effort, would look\n> pretty weird. Moreover, required changes in core to allow an API are the\n> very same as the one in this patch. The other concern I have with doing the\n> cleaning work afterwards is that the API would be somehow in the middle of\n> the existing functions if it has not been thought of before.\n\nThis has been considered and the functions which are proposed are\nintentionally structured to allow for multiple implementations already,\nso it's unclear to me what the argument here is. 
Further, we haven't\neven gotten to actual cluster encryption yet- this is all just the key\nmanagement side of things, which is absolutely a necessary and important\npart but argueing that it doesn't address the cluster encryption\napproach in core simply doesn't make any sense as that's not a part of\nthis patch.\n\nLet's have that discussion when we actually get to the point of talking\nabout cluster encryption.\n\n> >I know archive_command is completely external, but that is much simpler\n> >than what would need to be done for key management, key rotation, and key\n> >verification.\n> \n> I'm not sure of the design of the key rotation stuff as I recall it from the\n> patches I looked at, but the API design looks like the more pressing issue.\n> First, the mere existence of an \"master key\" is a cryptographic choice which\n> is debatable, because it creates issues if one must be able to change it,\n> hence the whole \"key rotation\" stuff. ISTM that what core needs to know is\n> that one particular key is changed by a new one and the underlying file is\n> rewritten. It should not need to know anything about master keys and key\n> derivation at all. 
It should need to know that the user wants to change file\n> keys, though.\n\nNo, it's not debatable that a KEK is needed, I disagree entirely.\nPerhaps we can have support for an external key store in the future for\nthe DEKs, but we really need to have our own key management and the\nability to reasonably rotate keys (which is what the KEK provides).\nIdeally, we'll get to a point where we can even have multiple DEKs and\ndeal with rotating them too, but that, if anything, point even stronger\nto the need to have a key management system and KEK as we'll be dealing\nwith that many more keys.\n\n> >I will say that if the community feels external-only should be the only\n> >option, I will stop working on this feature because I feel the result\n> >would be too fragile to be reliable,\n> \n> I'm do not see why it would be the case. I'm just arguing to have key\n> management in a separate, possibly suid something-else, process, which given\n> the security concerns which dictates the feature looks like a must have, or\n> at least must be possible. From a line count point of view, it should be a\n> small addition to the current code.\n\nAll of this hand-waving really isn't helping.\n\nIf it's a small addition to the current code then it'd be fantastic if\nyou'd propose a specific patch which adds what you're suggesting. I\ndon't think either Bruce or I would have any issue with others helping\nout on this effort, but let's be clear- we need something that *is* part\nof core PG, even if we have an ability to have other parts exist outside\nof PG.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 31 Dec 2020 09:36:51 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Greetings,\n\n* Fabien COELHO (coelho@cri.ensmp.fr) wrote:\n> >>The implementations should not have to be in any particular language: Shell,\n> >>Perl, Python, C should be possible.\n> >\n> >I disagree that it makes any sense to pass actual encryption out to a\n> >shell script.\n> \n> Yes, sure. I'm talking about key management. For actual crypto functions,\n> ISTM that they should be \"replaceable\", i.e. if some institution does not\n> believe in AES then they could switch to something else.\n\nThe proposed implementation makes it pretty straight-forward to add in\nother implementations and other crypto algorithms if people wish to do\nso.\n\n> >>After giving it more thought during the day, I think that only one\n> >>command and a basic protocol is needed. Maybe something as simple as\n> >>\n> >> /path/to/command --options arguments…\n> >\n> >This is what's proposed- a command is run to acquire the KEK (as a hex\n> >encoded set of bytes, making it possible to use a shell script, which is\n> >what the patch does too).\n> \n> Yep, but that is not what I'm trying to advocate. The \"KEK\" (if any), would\n> stay in the process, not be given back to the database or command using it.\n> Then the database/tool would interact with the command to get the actual\n> per-file keys when needed.\n\n\"When needed\" is every single write that we do of any file in the entire\nbackend. Reaching out to something external every time we go to use the\nkey really isn't sensible- except, perhaps, in the case where we are\ndoing full crypto off-loading and we don't actually have to deal with\nthe KEK or the DEK at all, but that's all on the cluster encryption\nside, really, and isn't the focus of this patch and what it's bringing-\nall of which is needed anyway.\n\n> >[...] 
The API to fetch the KEK doesn't care at all about where it's stored\n> >or how it's derived or anything like that.\n> \n> >There's a relatively small change which could be made to have PG request\n> >all of the keys that it'll need on startup, if we want to go there as has\n> >been suggested elsewhere, but even if we do that, PG needs to be able to\n> >do that itself too, otherwise it's not a complete capability and there\n> >seems little point in doing something that's just a pass-thru to something\n> >else and isn't able to really be used.\n> \n> I do not understand. Postgres should provide a working implementation,\n> whether the functions are directly inside pg or though an external process,\n> they need to be there anyway. I'm trying to fix a clean, possibly external\n> API so that it is possible to have something different from the choices made\n> in the patch.\n\nRight, we need a working implementation that's part of core, that's what\nthis effort is trying to bring.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 31 Dec 2020 09:40:49 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key managment"
},
{
"msg_contents": "On Thu, Dec 31, 2020 at 11:11:11AM +0100, Fabien COELHO wrote:\n> > I am not sure what else I can add to this discussion. Having something\n> > that is completely external might be a nice option, but I don't think it\n> > is the common use-case, and will make the most common use cases more\n> > complex.\n> \n> I'm unsure about what is the \"common use case\". ISTM that having the\n> postgres process holding the master key looks like a \"no\" for me in any use\n> case with actual security concern: as the database must be remotely\n> accessible, a reasonable security assumption is that a pg account could be\n> compromised, and the \"master key\" (if any, that is just one particular\n> cryptographic design) should not be accessible in that case. The first\n> barrier would be pg admin account.\n\nLet's unpack this logic, since I know others, like Alastair Turner (CC\nadded), had similar concerns. Frankly, I feel this feature has limited\nsecurity usefulness, so if we don't feel it has sufficient value, let's\nmark it as not wanted and move on.\n\nI think there are two basic key configurations:\n\n1. pass data encryption keys in from an external source\n2. store the data encryption keys wrapped by a key encryption key (KEK)\n passed in from an external source\n\nThe current code implements #2 because it simplifies administration,\nchecking, external key (KEK) rotation, and provides good error reporting\nwhen something goes wrong. For example, with #1, there is no way to\nrotate the externally-stored key except reencrypting the entire cluster.\n\nWhen using AES256 with GCM (for verification) is #1 more secure than #2?\nI don't think so. If the Postgres account is compromised, they can grab\nthe data encryption keys as the come in from the external script (#1),\nor when they are decrypted by the KEK (#2), or while they are in shared\nmemory while the server is running. 
If the postgres account is not\ncompromised, I don't think it is any easier to get the data key wrapped\nby a KEK than it is to try decrypting the actual heap/index/WAL blocks.\n(Once you find the key for one, they would all decrypt.)\n\n> With external, the options are that the\n> key is hold by another id, so the host must be compromised as well, and\n> could even be managed remotely on another host (I agree that this later\n> option would be fragile because of the implied network connection, but it\n> would be a tradeoff depending on the security concerns, but it pretty much\n> correspond to the kerberos model).\n\nWe don't store the data encryption keys raw on the server, but rather\nwrap them with a KEK. If the external keystore is compromised using #1,\nyou can't rotate those keys without reencrypting the entire cluster.\n\n> > Adding a pre-defined system will not prevent future work on a\n> > completely external option.\n> \n> Yes and no.\n> \n> Having two side-by-side cluster-encryption scheme in core, the first that\n> could be implemented on top of the second without much effort, would look\n> pretty weird. Moreover, required changes in core to allow an API are the\n> very same as the one in this patch. The other concern I have with doing the\n> cleaning work afterwards is that the API would be somehow in the middle of\n> the existing functions if it has not been thought of before.\n\nThe question is whether there is a common use-case, and if so, do we\nimplement that first, with all the safeguards, and then allow others to\ncustomize? 
I don't want to optimize for the rare case if that makes the\ncommon case more error-prone.\n\n> > I know archive_command is completely external, but that is much simpler\n> > than what would need to be done for key management, key rotation, and\n> > key verification.\n> \n> I'm not sure of the design of the key rotation stuff as I recall it from the\n> patches I looked at, but the API design looks like the more pressing issue.\n> First, the mere existence of an \"master key\" is a cryptographic choice which\n> is debatable, because it creates issues if one must be able to change it,\n> hence the whole \"key rotation\" stuff. ISTM that what core needs to know is\n\nWell, you would want to change the external-stored key independent of\nthe data encryption keys, which are harder to change.\n\n> > I will say that if the community feels external-only should be the only\n> > option, I will stop working on this feature because I feel the result\n> > would be too fragile to be reliable,\n> \n> I'm do not see why it would be the case. I'm just arguing to have key\n> management in a separate, possibly suid something-else, process, which given\n> the security concerns which dictates the feature looks like a must have, or\n> at least must be possible. From a line count point of view, it should be a\n> small addition to the current code.\n\nSee above. Please explain the security value of this complexity.\n\n> > and I would not want to be associated with it.\n> \n> You do as you want. I'm no one, you can ignore me and proceed to commit\n> whatever you want. 
I'm only advising to the best of my ability, what I think\n> should be the level of API pursued for such a feature in pg, at the design\n> level.\n\nYou are not the only one who is suggesting this, so we should decide as\na group on an acceptable direction.\n\n> I feel that the current proposal is built around one particular use case\n> with many implicit and unnecessary assumptions without giving much thoughts\n> to the generic API which should really be pg concern, and I do not think\n> that it can really be corrected once the ship has sailed, hence my attempt\n> at having some thoughts given to the matter before that.\n\nI have already explained why I am optimizing for the common use case,\nand you are not addressing my reasons here.\n\n> IMHO, the *minimum* should be to have the API clearly designed, and the\n> implementation compatible with it at some level, including not having\n> assumptions about underlying cryptographic functions and key derivations (I\n> mean at the API level, the code itself can do it obviously).\n> \n> If you want a special \"key_mgt_command = internal\" because you feel that\n> processes are fragile and unreliable and do not bring security, so be it,\n> but I think that the API should be designed beforehand nevertheless.\n\nI think we need a separate command for the fully external case, rather\nthan trying to lay it on top of the command for the internal one.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
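The wrap-the-DEK-with-a-KEK scheme (#2) discussed in this exchange, and why it makes KEK rotation cheap, can be sketched in a few lines. This is a toy illustration only: the SHA-256 counter-mode keystream and HMAC tag are stand-ins for the AES wrapping the patch actually performs, and none of the names below come from the patch.

```python
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy stand-in for a real cipher (the patch uses AES); never use a
    # hand-rolled construction like this in production.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def wrap(kek: bytes, dek: bytes) -> bytes:
    """Configuration #2: store the DEK only in wrapped (encrypted) form."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(dek, _keystream(kek, nonce, len(dek))))
    # The MAC plays the role of GCM's authentication tag: a wrong KEK is
    # detected instead of silently yielding a garbage DEK.
    tag = hmac.new(kek, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap(kek: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(kek, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("wrong KEK or corrupted key file")
    return bytes(a ^ b for a, b in zip(ct, _keystream(kek, nonce, len(ct))))

def rotate_kek(old_kek: bytes, new_kek: bytes, blob: bytes) -> bytes:
    # Rotation touches only this small blob; the DEK -- and therefore every
    # encrypted heap/index/WAL block -- stays exactly the same.
    return wrap(new_kek, unwrap(old_kek, blob))
```

Under configuration #1 there is no `rotate_kek` equivalent: changing the externally supplied key means re-encrypting every block it ever protected, which is the rotation argument made above.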
"msg_date": "Thu, 31 Dec 2020 11:44:45 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, Dec 31, 2020 at 11:11:11AM +0100, Fabien COELHO wrote:\n> > > I am not sure what else I can add to this discussion. Having something\n> > > that is completely external might be a nice option, but I don't think it\n> > > is the common use-case, and will make the most common use cases more\n> > > complex.\n> > \n> > I'm unsure about what is the \"common use case\". ISTM that having the\n> > postgres process holding the master key looks like a \"no\" for me in any use\n> > case with actual security concern: as the database must be remotely\n> > accessible, a reasonable security assumption is that a pg account could be\n> > compromised, and the \"master key\" (if any, that is just one particular\n> > cryptographic design) should not be accessible in that case. The first\n> > barrier would be pg admin account.\n> \n> Let's unpack this logic, since I know others, like Alastair Turner (CC\n> added), had similar concerns. Frankly, I feel this feature has limited\n> security usefulness, so if we don't feel it has sufficient value, let's\n> mark it as not wanted and move on.\n\nConsidering the amount of requests for this, I don't feel it's at all\nreasonable to suggest that it's 'not wanted'.\n\n> I think there are two basic key configurations:\n> \n> 1. pass data encryption keys in from an external source\n> 2. store the data encryption keys wrapped by a key encryption key (KEK)\n> passed in from an external source\n\nI agree those are two basic configurations (though they aren't the only\npossible ones).\n\n> The current code implements #2 because it simplifies administration,\n> checking, external key (KEK) rotation, and provides good error reporting\n> when something goes wrong. 
For example, with #1, there is no way to\n> rotate the externally-stored key except reencrypting the entire cluster.\n\nRight, and there also wouldn't be a very simple way to actually get PG\ngoing with a single key or a passphrase.\n\n> When using AES256 with GCM (for verification) is #1 more secure than #2?\n> I don't think so. If the Postgres account is compromised, they can grab\n> the data encryption keys as the come in from the external script (#1),\n> or when they are decrypted by the KEK (#2), or while they are in shared\n> memory while the server is running. If the postgres account is not\n> compromised, I don't think it is any easier to get the data key wrapped\n> by a KEK than it is to try decrypting the actual heap/index/WAL blocks.\n> (Once you find the key for one, they would all decrypt.)\n\nI can see some usefulness for supporting #1, but that has got next to\nnothing to do with what this patch is about and is all about the *next*\nstep, which is to actually look at supporting encryption of the rest of\nthe cluster, beyond just the keys. We need to get past this first step\nthough, and it seems to be nearly impossible to do so, which has\nfrustrated multiple attempts to actually accomplish anything here.\n\nI agree that none of these approaches address an online compromise of\nthe postgres account. Thankfully, that isn't actually what this is\nintended to address and to talk about that case is just a distraction\nthat isn't solvable and wastes time.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 31 Dec 2020 12:05:53 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "On Thu, Dec 31, 2020 at 12:05:53PM -0500, Stephen Frost wrote:\n> > Let's unpack this logic, since I know others, like Alastair Turner (CC\n> > added), had similar concerns. Frankly, I feel this feature has limited\n> > security usefulness, so if we don't feel it has sufficient value, let's\n> > mark it as not wanted and move on.\n> \n> Considering the amount of requests for this, I don't feel it's at all\n> reasonable to suggest that it's 'not wanted'.\n\nWell, people ask for (or used to ask for) query hints, so request count\nand usefulness aren't always correlated. I brought up this point\nbecause people sometimes want to add complexity to try and guard against\nsome exploit that is impossible to guard against, so I thought we should\nbe clear what we can't protect, no matter how we configure it.\n\n> > When using AES256 with GCM (for verification) is #1 more secure than #2?\n> > I don't think so. If the Postgres account is compromised, they can grab\n> > the data encryption keys as the come in from the external script (#1),\n> > or when they are decrypted by the KEK (#2), or while they are in shared\n> > memory while the server is running. If the postgres account is not\n> > compromised, I don't think it is any easier to get the data key wrapped\n> > by a KEK than it is to try decrypting the actual heap/index/WAL blocks.\n> > (Once you find the key for one, they would all decrypt.)\n> \n> I can see some usefulness for supporting #1, but that has got next to\n> nothing to do with what this patch is about and is all about the *next*\n> step, which is to actually look at supporting encryption of the rest of\n> the cluster, beyond just the keys. We need to get past this first step\n> though, and it seems to be nearly impossible to do so, which has\n> frustrated multiple attempts to actually accomplish anything here.\n> \n> I agree that none of these approaches address an online compromise of\n> the postgres account. 
Thankfully, that isn't actually what this is\n> intended to address and to talk about that case is just a distraction\n> that isn't solvable and wastes time.\n\nI assume people don't want the data keys stored in the file system, even\nin encrypted form, and they believe this makes the system more secure. I\ndon't think it does, and I think it makes external key rotation very\ndifficult, among other negatives. Also, keep in mind any failure of the\nkey management system could make the cluster unreadable, so making it as\nsimple/error-free as possible is a big requirement for me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 31 Dec 2020 12:21:27 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 3:47 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> I will say that if the community feels external-only should be the only\n> option, I will stop working on this feature because I feel the result\n> would be too fragile to be reliable, and I would not want to be\n> associated with it.\n>\n>\nI can say that the people that I work with would prefer an \"officially\"\nsupported mechanism from Postgresql.org. The use of a generic API would be\nuseful for the ecosystem who wants to build on core functionality but\nPostgresql should have competent built-in support as well.\n\nJD",
"msg_date": "Thu, 31 Dec 2020 10:41:01 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": ">\n>\n> > >I will say that if the community feels external-only should be the only\n> > >option, I will stop working on this feature because I feel the result\n> > >would be too fragile to be reliable,\n> >\n> > I'm do not see why it would be the case. I'm just arguing to have key\n> > management in a separate, possibly suid something-else, process, which\n> given\n> > the security concerns which dictates the feature looks like a must have,\n> or\n> > at least must be possible. From a line count point of view, it should be\n> a\n> > small addition to the current code.\n>\n> All of this hand-waving really isn't helping.\n>\n> If it's a small addition to the current code then it'd be fantastic if\n> you'd propose a specific patch which adds what you're suggesting.  I\n> don't think either Bruce or I would have any issue with others helping\n> out on this effort, but let's be clear- we need something that *is* part\n> of core PG, even if we have an ability to have other parts exist outside\n> of PG.\n>\n\n+1\n\nJD",
"msg_date": "Thu, 31 Dec 2020 10:46:57 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Hi Stephen, Bruce, Fabien\n\nOn Thu, 31 Dec 2020 at 17:05, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Thu, Dec 31, 2020 at 11:11:11AM +0100, Fabien COELHO wrote:\n> > > > I am not sure what else I can add to this discussion. Having something\n> > > > that is completely external might be a nice option, but I don't think it\n> > > > is the common use-case, and will make the most common use cases more\n> > > > complex.\n> > >\n> > > I'm unsure about what is the \"common use case\". ISTM that having the\n> > > postgres process holding the master key looks like a \"no\" for me in any use\n> > > case with actual security concern: as the database must be remotely\n> > > accessible, a reasonable security assumption is that a pg account could be\n> > > compromised, and the \"master key\" (if any, that is just one particular\n> > > cryptographic design) should not be accessible in that case. The first\n> > > barrier would be pg admin account.\n> >\n> > Let's unpack this logic, since I know others, like Alastair Turner (CC\n> > added), had similar concerns. Frankly, I feel this feature has limited\n> > security usefulness, so if we don't feel it has sufficient value, let's\n> > mark it as not wanted and move on.\n>\n> Considering the amount of requests for this, I don't feel it's at all\n> reasonable to suggest that it's 'not wanted'.\n>\n\nFWIW, I think that cluster file encryption has great value, even if we\nonly consider its incremental value on top of disk or volume\nencryption. Particularly in big IT estates where monitoring tools,\nbackup managers, etc have read access to a lot of mounted volumes.\nLimiting the impact of these systems (or their credentials) being\ncompromised is definitely useful.\n\n>\n> > I think there are two basic key configurations:\n> >\n> > 1. pass data encryption keys in from an external source\n> > 2. 
store the data encryption keys wrapped by a key encryption key (KEK)\n> > passed in from an external source\n>\n> I agree those are two basic configurations (though they aren't the only\n> possible ones).\n>\n\nIf we rephrase these two in terms of what the server process does to\nget a key, we could say\n\n 1. Get the keys from another process at startup (and maybe reload) time\n 2. Get the wrapped keys from a local file...\n 2.a. directly and unwrap them using the server's own logic and user prompt\n 2.b. via a library which manages wrapping and user prompting\n\nWith the general feeling in the mailing list discussions favouring an\napproach in the #2 family, I have been looking at options around 2b. I\nwas hoping to provide some flexibility for users without adding\ncomplexity to the pg code base and use some long standing,\nstandardised, features rather than reinventing or recoding the wheel.\nIt has not been a complete success, or failure.\n\nThe nearest thing to a standard for wrapped secret store files is\nPKCS#12. I searched and cannot find it discussed on-list in this\ncontext before. It is the format used for Java local keystores in\nrecent versions. The format is supported, up to a point, by OpenSSL,\nnss and gnuTLS. OpenSSL also has command line support for many\noperations on the files, including updating the wrapping password.\n\nContent in the PKCS12 files is stored in typed \"bags\". The challenge\nis that the OpenSSL command line tools currently support only the\ncontent types related to TLS operations - certificates, public or\nprivate keys, CRLs, ... and not the untyped secret bag which is\nappropriate for storing AES keys. 
The c APIs will work with this\ncontent but there are not as many helper functions as for other\ncontent types.\n\nFor future information - the nss command line tools understand the\nsecret bags, but don't offer nearly so many options for configuration.\n\nFor building something now - the OpenSSL pkcs12 functions provide\nfeatures like tags and friendly names for keys, and configurable key\nderivation from passwords, but the code to interact with the\nsecretBags is accumulating quickly in my prototype.\n\nAfter the long intro, my question - If using a standard format,\nmanaged by a library, for the internal keystore does not result in a\nsmaller or simpler patch, is there enough other value for this to be\nworth considering? For security features, standards and building on\nwell exercised code bases do really have value, but is it enough...\n\nRegards\nAlastair\n\n\n",
"msg_date": "Fri, 1 Jan 2021 18:26:36 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Hello Stephen,\n\n>> I'm unsure about what is the \"common use case\". ISTM that having the\n>> postgres process holding the master key looks like a \"no\" for me in any use\n>> case with actual security concern: as the database must be remotely\n>> accessible, a reasonable security assumption is that a pg account could be\n>> compromised, and the \"master key\" (if any, that is just one particular\n>> cryptographic design) should not be accessible in that case. The first\n>> barrier would be pg admin account. With external, the options are that the\n>> key is hold by another id, so the host must be compromised as well, and\n>> could even be managed remotely on another host (I agree that this later\n>> option would be fragile because of the implied network connection, but it\n>> would be a tradeoff depending on the security concerns, but it pretty much\n>> correspond to the kerberos model).\n>\n> No, this isn't anything like the Kerberos model and there's no case\n> where the PG account won't have access to the DEK (data encryption key)\n> during normal operation (except with the possibility of off-loading to a\n> hardware crypto device which, while interesting, is definitely something\n> that's of very limited utility and which could be added on as a\n> capability later anyway.  This is also well understood by those who are\n> interested in this capability and it's perfectly acceptable that PG will\n> have, in memory, the DEK.\n\nI'm sorry that I'm not very clear. My ranting is about having a KEK \"master key\" \nin pg memory. I think I'm fine at this point with having DEKs available on \nthe host to code/decode files. What I meant about kerberos is that the \nsetup I was describing was making the database dependent on a remote host, \nwhich is somehow what kerberos does.\n\n> The keys which are stored on disk with the proposed patch are encrypted\n> using a KEK which is external to PG- that's all part of the existing\n> patch and design.\n\nYep. 
My point is that the patch hardwires a cryptographic design with many \nassumptions, whereas I think it should design an API compatible with a \nlarger class of designs, and possibly implement one with KEK and so on, \nI'm fine with that.\n\n>>> Adding a pre-defined system will not prevent future work on a\n>>> completely external option.\n>>\n>> Yes and no.\n>>\n>> Having two side-by-side cluster-encryption scheme in core, the first that\n>> could be implemented on top of the second without much effort, would look\n>> pretty weird. Moreover, required changes in core to allow an API are the\n>> very same as the one in this patch. The other concern I have with doing the\n>> cleaning work afterwards is that the API would be somehow in the middle of\n>> the existing functions if it has not been thought of before.\n>\n> This has been considered and the functions which are proposed are\n> intentionally structured to allow for multiple implementations already,\n\nDoes it? This is unclear to me from looking at the code, and the limited\ndocumentation does not point to that. I can see that the \"kek provider\" \nand the \"random provider\" can be changed, but the overall cryptographic \ndesign seems hardwired.\n\n> so it's unclear to me what the argument here is.\n\nThe argument is that professional cryptographers do wrong designs, and I do \nnot see why you would do different. 
So instead of hardwiring choices that \nyou think are THE good ones, ISTM that pg should design a reasonably \nflexible API, and also provide an implementation, obviously.\n\n> Further, we haven't even gotten to actual cluster encryption yet- this \n> is all just the key management side of things,\n\nWith hardwired choices about 1 KEK and 2 DEK that are debatable, see \nbelow.\n\n> which is absolutely a necessary and important part but argueing that it \n> doesn't address the cluster encryption approach in core simply doesn't \n> make any sense as that's not a part of this patch.\n>\n> Let's have that discussion when we actually get to the point of talking\n> about cluster encryption.\n>\n>>> I know archive_command is completely external, but that is much simpler\n>>> than what would need to be done for key management, key rotation, and key\n>>> verification.\n>>\n>> I'm not sure of the design of the key rotation stuff as I recall it from the\n>> patches I looked at, but the API design looks like the more pressing issue.\n>> First, the mere existence of an \"master key\" is a cryptographic choice which\n>> is debatable, because it creates issues if one must be able to change it,\n>> hence the whole \"key rotation\" stuff. ISTM that what core needs to know is\n>> that one particular key is changed by a new one and the underlying file is\n>> rewritten. It should not need to know anything about master keys and key\n>> derivation at all. It should need to know that the user wants to change file\n>> keys, though.\n>\n> No, it's not debatable that a KEK is needed, I disagree entirely.\n\nISTM that there is a cryptographic design which suits your needs and you \nseem to decide that it is the only possible way to do it. It seems that you \nknow. 
As a researcher, I know that I do not know:-) As a software \nengineer, I'm trying to suggest enabling more with the patch, including \nnot hardwiring a three-key management scheme.\n\nI'm advocating that pg not decide for others what is best for them, \nbut rather allow multiple choices and implementations through an API. Note \nthat I'm not claiming that the 1 KEK + 2 DEK + AES… design is bad. I'm \nclaiming that it is just one possible design, quite peculiar though, and \nthat pg should not hardwire it, and it does not need to anyway.\n\nThe documentation included in the key management patch states: \"Key zero \nis the key used to encrypt database heap and index files which are stored \nin the file system, plus temporary files created during database \noperation.\" Read as is, it suggests that the same key is used to encrypt \nmany files. From a cryptographic point of view this looks like a bad idea. \nThe mode used is unclear. If this is a stream cypher generated in counter \nmode, it could be a very bad idea. Hopefully this is not what is intended, \nbut the included documentation is frankly misleading in that case. IMHO \nthere should be one key per file. Also, the CTR mode should be avoided if \npossible because it has quite special properties, unless these properties \nare absolutely necessary.\n\nFrom what I see in the patch, the DEKs cannot be changed, only the KEK \nwhich encodes the DEK. I cannot say I'm thrilled by such a property. If the \nKEK is lost, then probably the DEKs are lost as well. Changing the KEK \ndoes not change anything about it, so that the attacker would be able to \nretrieve the contents of later/previous state of the database if they can \nretrieve a previous/later encoded version, even if the KEK is changed \nevery day. This is a bad property which somehow provides a false sense of \nsecurity. 
I'm not sure that it makes much sense to change a KEK without \nchanging the associated DEK, security-wise.\n\nAnyway, I'd suggest to provide a script to re-encrypt a cluster, possibly \nwhile offline, with different DEKs. From an audit perspective, this could \nbe a requirement.\n\nI'd expect a very detailed README or documentation somewhere, but there \nare none in the committed/uncommitted patch.\n\nIn https://wiki.postgresql.org/wiki/Transparent_Data_Encryption, I see a \nlot of minute cryptographic design decisions which are debatable, and a few \nwhich seem to be still open, whereas other things are already in the \nprocess of being committed. I'm annoyed by the decision to reuse the same \nkey for a lot of encryption, even if there are discussions about changing \nthe iv in some way. If you have some clever way to build an iv, then \nplease please change the key as well?\n\nISTM that pg at the core level should (only) directly provide:\n\n(1) a per-file encryption scheme, with loadable (hook-replaceable \nfunctions??) to manage pages, maybe:\n\n  encrypt(page_id, *key, *clear_page, *encrypted_page);\n  decrypt(page_id, *key, *encrypted_page, *clear_page);\n\nWhat encrypt/decrypt does is beyond pg core stuff. Ok, a reasonable \nimplementation should be provided, obviously, possibly in core. There may \nbe some block-size issues if not full pages are encrypted, so maybe the \ninterface needs to be a little more subtle.\n\n(2) offer a key management scheme interface, to manage *per-file* keys, \npossibly loadable (hook replaceable?). If all files have the same key, \nwhich is stored in a directory and encoded with a KEK, this is just one \n(debatable) implementation choice. For that, ISTM that what is needed at \nthis level is:\n\n  get_key(file_id (relative name? oid? 8 or 16 bytes something?));\n\nMaybe that should be enough to create a new key implicitly, or maybe it \nshould be a different interface, eg get_new_key(file_id), not sure. 
There \nis the question of the key length, which may for instance include an iv. \nMaybe an information seeking function should be provided, to know about \nsizes or whatever, not sure. Maybe there should/could be a parameter to \nmanage key versions, not sure.\n\nThe internal key infrastructure would interact with this/these functions \nwhen needed, and store the key in the file management structure. All this \nshould be a black box to core pg, that is the point of an API. This is \nwhat I'd have expected to see in the \"postgres core key management\" stuff, \nnot a hardwired 2-key store with a KEK.\n\n(3) ISTM that the key management interface should be external, or at least \nit should be possible to make it external easily. I do not think that \nthere is a significant performance issue because keys are needed once, and \nonce loaded they are there. A simple way to do that is a separate process \nwith a basic protocol on stdin/stdout to implement \"get_key\", which is \nbasically already half implemented in the patch for retrieving the KEK.\n\n> Perhaps we can have support for an external key store in the future for\n> the DEKs, but we really need to have our own key management and the\n> ability to reasonably rotate keys (which is what the KEK provides).\n\nI think that the KEK/2-DEK store is a peculiar design which does not \n*need* to be hardwired deeply in core.\n\n> Ideally, we'll get to a point where we can even have multiple DEKs and\n> deal with rotating them too, but that, if anything, point even stronger\n> to the need to have a key management system and KEK as we'll be dealing\n> with that many more keys.\n>\n>>> I will say that if the community feels external-only should be the only\n>>> option, I will stop working on this feature because I feel the result\n>>> would be too fragile to be reliable,\n>>\n>> I'm do not see why it would be the case. 
I'm just arguing to have key\n>> management in a separate, possibly suid something-else, process, which given\n>> the security concerns which dictates the feature looks like a must have, or\n>> at least must be possible. From a line count point of view, it should be a\n>> small addition to the current code.\n>\n> All of this hand-waving really isn't helping.\n\nI'm sorry that you feel that. I'm really trying to help improve the \napproach and design with my expertise. As I said in another mail, I'm no \none, you can always choose to ignore me.\n\n> If it's a small addition to the current code then it'd be fantastic if\n> you'd propose a specific patch which adds what you're suggesting.\n\nMy point is to discuss an API.\n\nOnce there is some agreement on the possible API outline, then \nimplementations can be provided, but if people do not want an API in the \nfirst place, I'm not sure I can see the point of sending patches.\n\n> I don't think either Bruce or I would have any issue with others helping \n> out on this effort, but let's be clear- we need something that *is* part \n> of core PG, even if we have an ability to have other parts exist outside \n> of PG.\n\nYep. I have not suggested that PG should not implement the API, on the \ncontrary it definitely should.\n\n-- \nFabien.",
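Points (1) and (2) of this message can be made concrete with a short sketch. Everything here is illustrative: the HMAC-based per-file key derivation and the SHA-256 counter-mode keystream are toy stand-ins (a real implementation would use AES), and the function names echo Fabien's proposed API rather than any actual PostgreSQL code. Deriving the IV from the page id ensures no two pages reuse a keystream, which matters for counter mode because keystream reuse leaks the XOR of the plaintexts.

```python
import hashlib, hmac

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream (SHA-256 as the block function); a
    # stand-in for AES-CTR, for illustration only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

class KeyStore:
    """Sketch of interface (2): one key per file behind get_key()."""
    def __init__(self, master_secret: bytes):
        self._master = master_secret

    def get_key(self, file_id: str) -> bytes:
        # Hypothetical derivation: HMAC(master, file_id); callers only
        # ever see per-file keys, never the master secret itself.
        return hmac.new(self._master, file_id.encode(), hashlib.sha256).digest()

def encrypt_page(key: bytes, page_id: int, clear_page: bytes) -> bytes:
    # Sketch of interface (1): the IV is derived from page_id, so pages
    # of one file never share a keystream.
    ks = _keystream(key, page_id.to_bytes(8, "big"), len(clear_page))
    return bytes(a ^ b for a, b in zip(clear_page, ks))

def decrypt_page(key: bytes, page_id: int, encrypted_page: bytes) -> bytes:
    return encrypt_page(key, page_id, encrypted_page)  # XOR is symmetric
```

A store holding per-file keys like this would make re-keying a single file possible without touching the rest of the cluster, which is the property argued for above.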
"msg_date": "Sat, 2 Jan 2021 10:50:15 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Hi Fabien\n\nOn Sat, 2 Jan 2021 at 09:50, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n...\n> ISTM that pg at the core level should (only) directly provide:\n>\n> (1) a per-file encryption scheme, with loadable (hook-replaceable\n> functions??) to manage pages, maybe:\n>\n> encrypt(page_id, *key, *clear_page, *encrypted_page);\n> decrypt(page_id, *key, *encrypted_page, *clear_page);\n>\n> What encrypt/decrypt does is beyond pg core stuff. Ok, a reasonable\n> implementation should be provided, obviously, possibly in core. There may\n> be some block-size issues if not full pages are encrypted, so maybe the\n> interface needs to be a little more subtle.\n>\n\nThere are a lot of specifics of the encryption implementation which\nneed to be addressed in future patches. This patch focuses on making\nkeys available to the encryption processes at run-time, so ...\n\n>\n> (2) offer a key management scheme interface, to manage *per-file* keys,\n> possibly loadable (hook replaceable?). If all files have the same key,\n> which is stored in a directory and encoded with a KEK, this is just one\n> (debatable) implementation choice. For that, ISTM that what is needed at\n> this level is:\n>\n> get_key(file_id (relative name? oid? 8 or 16 bytes something?));\n>\n\nPer-cluster keys for permanent data and WAL allow a useful level of\nprotection, even if it could be improved upon. It's also going to be\nquicker/simpler to implement, so any API should allow for it. If\nthere's an arbitrary number of DEK's, using a scope label for\naccessing them sounds right, so \"WAL\", \"local_data\",\n\"local_data/tablespaceiod\" or \"local_data/dboid/tableoid\".\n\n>\n...\n> (3) ISTM that the key management interface should be external, or at least\n> it should be possible to make it external easily. I do not think that\n> there is a significant performance issue because keys are needed once, and\n> once loaded they are there. 
A simple way to do that is a separate process\n> with a basic protocol on stdin/stdout to implement \"get_key\", which is\n> basically already half implemented in the patch for retrieving the KEK.\n>\n\nIf keys can have arbitrary scope, then the pg backend won't know what\nto ask for. So the API becomes even simpler with no specific request\non stdin and all the relevant keys on stdout. I generally like this\napproach as well, and it will be the only option for some\nintegrations. On the other hand, there is an advantage to having the\nkey retrieval piece of key management in-process - the keys are not\nbeing passed around in plain.\n\nThere is also a further validation task - probably beyond the scope of\nthe key management patch and into the encryption patch[es] territory -\nchecking that the keys supplied are the same keys in use for the data\ncurrently on disk. It feels to me like this should be done at startup,\nrather than as each file is accessed, which could make startup quite\nslow if there are a lot of keys with narrow scope.\n\nRegards\nAlastair\n\n\n",
"msg_date": "Sat, 2 Jan 2021 12:47:19 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "On Sat, Jan 2, 2021 at 10:50:15AM +0100, Fabien COELHO wrote:\n> > No, this isn't anything like the Kerberos model and there's no case\n> > where the PG account won't have access to the DEK (data encryption key)\n> > during normal operation (except with the possibility of off-loading to a\n> > hardware crypto device which, while interesting, is definitely something\n> > that's of very limited utility and which could be added on as a\n> > capability later anyway. This is also well understood by those who are\n> > interested in this capability and it's perfectly acceptable that PG will\n> > have, in memory, the DEK.\n> \n> I'm sorry that I'm not very clear. My ranting is having a KEK \"master key\"\n> in pg memory. I think I'm fine at this point with having DEKs available on\n> the host to code/decode files. What I meant about kerberos is that the setup\n> I was describing was making the database dependent on a remote host, which\n> is somehow what keroberos does.\n\nWe bzero the master key once we are done with it in the server. Why is\nhaving the master key in memory for a short time while a problem while\nhaving the data keys always in memory acceptable? Aren't the data keys\nmore valuable, and hence they fact the master key is in memory for a\nshort time no additional risk?\n\n> > The keys which are stored on disk with the proposed patch are encrypted\n> > using a KEK which is external to PG- that's all part of the existing\n> > patch and design.\n> \n> Yep. My point is that the patch hardwires a cryptographic design with many\n> assumptions, whereas I think it should design an API compatible with a\n> larger class of designs, and possibly implement one with KEK and so on, I'm\n> fine with that.\n\nYou are not addressing my complexity/fragility reasons for having a\nsingle method. 
As stated above, this feature has limited usefulness,\nand if it breaks or you lose the KEK, your database server and backups\nmight be gone.\n\n> > so it's unclear to me what the argument here is.\n> \n> The argument is that professional cryptophers do wrong designs, and I do not\n> see why you would do different. So instead of hardwiring choice that you\n> think are THE good ones, ISTM that pg should design a reasonably flexible\n> API, and also provide an implementation, obviously.\n\nSee above.\n\n> > Further, we haven't even gotten to actual cluster encryption yet- this\n> > is all just the key management side of things,\n> \n> With hardwired choices about 1 KEK and 2 DEK that are debatable, see below.\n\nThen let's debate them, since this patch isn't moving forward as long as\npeople are asking questions about the implementation.\n\n> > > I'm not sure of the design of the key rotation stuff as I recall it from the\n> > > patches I looked at, but the API design looks like the more pressing issue.\n> > > First, the mere existence of an \"master key\" is a cryptographic choice which\n> > > is debatable, because it creates issues if one must be able to change it,\n> > > hence the whole \"key rotation\" stuff. ISTM that what core needs to know is\n> > > that one particular key is changed by a new one and the underlying file is\n> > > rewritten. It should not need to know anything about master keys and key\n> > > derivation at all. 
It should need to know that the user wants to change file\n> > > keys, though.\n> > \n> > No, it's not debatable that a KEK is needed, I disagree entirely.\n> \n> ISTM that there is cryptographic design which suits your needs and you seem\n> to decide that it is the only possible way to do it It seems that you know.\n> As a researcher, I know that I do not know:-) As a software engineer, I'm\n> trying to suggest enabling more with the patch, including not hardwiring a\n> three-key management scheme.\n\nWell, your open API goal might work, but I am not comfortable\nimplementing that for reasons already explained, so what do we do?\n\n> I'm advocating that you pg not decide for others what is best for them, but\n> rather allow multiple choices and implementations through an API. Note that\n> I'm not claiming that the 1 KEK + 2 DEK + AES… design is bad. I'm claiming\n> that it is just one possible design, quite peculiar though, and that pg\n> should not hardwire it, and it does not need to anyway.\n> \n> The documentation included in the key management patch states: \"Key zero is\n> the key used to encrypt database heap and index files which are stored in\n> the file system, plus temporary files created during database operation.\"\n> Read as is, it suggests that the same key is used to encrypt many files.\n> From a cryptographic point of view this looks like a bad idea. The mode used\n> is unclear. If this is a stream cypher generated in counter mode, it could\n> be a very bad idea. Hopefully this is not what is intended, but the included\n> documentation is frankly misleading in that case. IMHO there should be one\n> key per file. Also, the CTR mode should be avoided if possible because it\n> has quite special properties, unless these properties are absolutely\n> necessary.\n\nWe need CTR since the WAL and even data blocks are not encrypted in full\nblocks. We are using GCM for key validation and tamper detection.\n\nWhat is the value of one key per file? 
It means every CREATE TABLE\nneeds a new data key, and if we lose any key, we lose some data. I\nwould rather walk away from this feature rather than implement this.\n\nThis gets to a general thread in this discussion of adding complexity\nand fragility without a clear explanation of the security value of\nadding it. This is not new --- we created private voice meetings to\navoid the distraction of such suggestions, but now that the patch is\nbeing considered, that isn't possible anymore.\n\n> > From what I see in the patch, the DEKs cannot be changed, only the KEK\n> which encodes de DEK. I cannot say I'm thrilled by such a property. If the\n> KEK is lost, then probably the DEKs are lost as well. Changing the KEK does\n\nYes.\n\n> not change anything about it, so that the attacker would be able to retrieve\n> the contents of later/previous state of the database if they can retrieve a\n> previous/later encoded version, even if the KEK is changed every day. This\n> is a bad property which somehow provides a false sense of security. I'm not\n> sure that it makes much sense to change a KEK without changing the\n> associated DEK, security-wise.\n\nWell, this is a common security feature. You might have your KMS\ncompromised, but if you change your KEK before you are attacked, or if\nsomeone leaves the organization, you get some security value. However,\nyou are right that if you can access a DEK wrapped with an old KEK, you\ncan read the data keys, and the DEK will work with the new cluster.\n\n> Anyway, I'd suggest to provide a script to re-encrypt a cluster, possibly\n> while offline, with different DEKs. 
From an audit perspective, this could be\n> a requirement.\n\nWe already have that documented in the wiki that it will use a standby\nfor key rotation, though I have heard someone suggest having LSN-based\nkeys that can change as transactions increment.\n\n> I'd expect a very detailed README or documentation somewhere, but there are\n> none in the committed/uncommitted patch.\n\nFor what? The key management? We don't have the data encryption part\ndone yet.\n\n> In https://wiki.postgresql.org/wiki/Transparent_Data_Encryption, I see a lot\n> of minute cryptographic design decision which are debatable, and a few which\n> seem to be still opened, whereas other things are already in the process of\n> being committed. I'm annoyed by the decision to reuse the same key for a lot\n> of encryption, even if there are discussions about changing the iv in some\n> way. If you have some clever way to build an iv, then please please change\n> the key as well?\n\nWhy? See above. Adding complexity for unclear security value isn't\nhelpful.\n\n> > All of this hand-waving really isn't helping.\n> \n> I'm sorry that you feel that. I'm really trying to help improve the approach\n> and design with my expertise. As I said in another mail, I'm no one, you can\n> always chose to ignore me.\n\nNo, we can't. People read this thread, conclude the implementation path\nis unclear, and assume these issues should be resolved before moving\nforward. I am frankly leaning in the TDE-not-wanted direction based on\ndiscussions here. 
Frankly, I am happier with that than to try to push\nthis rock up hill for several months.\n\n> > If it's a small addition to the current code then it'd be fantastic if\n> > you'd propose a specific patch which adds what you're suggesting.\n> \n> My point is to discuss an API.\n> \n> Once there is some agreement the possible API outline, then implementations\n> can be provided, but if people do not want an API in the first place, I'm\n> not sure I can see the point of sending patches.\n\nI think you are accurate here. You can certainly propose an external\nAPI, but I think any external API would have more negatives (complexity,\nfragility) than security positives. Any discussion that ignores the\nnegatives is unlikely to be adopted.\n\nPerhaps someone proposes an external API, and I, and perhaps others,\ncomplain about its negatives, so it isn't adopted. So the internal key\nmanagement is rejected, the external API is rejected, so we do nothing.\nI feel that is where we are now. Marking this feature as not wanted\nwould at least eliminate discussions that will never lead to any value.\nAnother option would be to move forward without complete agreement.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 4 Jan 2021 12:54:04 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "On Sat, Jan 2, 2021 at 12:47:19PM +0000, Alastair Turner wrote:\n> If keys can have arbitrary scope, then the pg backend won't know what\n> to ask for. So the API becomes even simpler with no specific request\n> on stdin and all the relevant keys on stdout. I generally like this\n> approach as well, and it will be the only option for some\n> integrations. On the other hand, there is an advantage to having the\n> key retrieval piece of key management in-process - the keys are not\n> being passed around in plain.\n> \n> There is also a further validation task - probably beyond the scope of\n> the key management patch and into the encryption patch[es] territory -\n> checking that the keys supplied are the same keys in use for the data\n> currently on disk. It feels to me like this should be done at startup,\n> rather than as each file is accessed, which could make startup quite\n> slow if there are a lot of keys with narrow scope.\n\nWe do that already on startup by using GCM to validate the KEK when\nencrypting each DEK.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 4 Jan 2021 12:56:18 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "On Fri, Jan 1, 2021 at 06:26:36PM +0000, Alastair Turner wrote:\n> After the long intro, my question - If using a standard format,\n> managed by a library, for the internal keystore does not result in a\n> smaller or simpler patch, is there enough other value for this to be\n> worth considering? For security features, standards and building on\n> well exercised code bases do really have value, but is it enough...\n\nI know Stephen Frost looked at PKCS12 but didn't see much value in it\nsince the features provided by PKCS12 didn't seem to add much for\nPostgres, and added complexity. Using GCM for key wrapping, heap/index\nfiles, and WAL seemed simple.\n\nFYI, the current patch has an initdb option --copy-encryption-key, which\ncopies the key from an old cluster, for use by pg_upgrade. Technically\nyou could create a directory called pg_cryptokey/live, put wrapped files\n0/1 in there, and use that when initializing a new cluster. That would\nallow you to generate the data keys using your own random source. I am\nnot sure how useful that would be, but we could document this ability.\n\nThere is all sorts of flexibility being proposed:\n\n* scope of keys\n* encryption method\n* encryption mode\n* internal/external\n\nSome of this is related to wrapping the data keys, and some of it gets\nto the encryption of cluster files. I just don't see how anything with\nthe level of flexibility being asked for could ever be reliable or\nsimple to administer, and I don't see the security value of that\nflexibility. I feel at a certain point, we just need to choose the best\noptions and move forward.\n\nI thought this had all been debated, but people continue to show up and\nask for it to be debated again. At one level, these discussions are\nvaluable, but at a certain point you end up re-litigate this that you\nget nothing else done. 
I know some people who have given up working on\nthis feature for this reason.\n\nI am not asking we stop discussing it, but I do ask we address the\nnegatives of suggestions, especially when the negatives have already\nbeen clearly stated.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 4 Jan 2021 13:23:29 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 10:18 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Jan 1, 2021 at 06:26:36PM +0000, Alastair Turner wrote:\n>\n> There is all sorts of flexibility being proposed:\n>\n> * scope of keys\n> * encryption method\n> * encryption mode\n> * internal/external\n>\n> Some of this is related to wrapping the data keys, and some of it gets\n> to the encryption of cluster files. I just don't see how anything with\n> the level of flexibility being asked for could ever be reliable or\n> simple to administer, and I don't see the security value of that\n> flexibility. I feel at a certain point, we just need to choose the best\n> options and move forward.\n>\n> +1.\n\nI thought this had all been debated, but people continue to show up and\n> ask for it to be debated again. At one level, these discussions are\n> valuable, but at a certain point you end up re-litigate this that you\n> get nothing else done. I know some people who have given up working on\n> this feature for this reason.\n>\n> I am not asking we stop discussing it, but I do ask we address the\n> negatives of suggestions, especially when the negatives have already\n> been clearly stated.\n>\n> Well, in fact, because the discussion about TDE/KMS has several very long\nthreads, maybe not everyone can read them completely first, and inevitably,\nthere will be topics that have already been discussed.\n\nAt least for the initial version, I think we should pay more attention to\nwhether the currently provided functions are acceptable, instead of always\nstaring at what it lacks, and how it should be more flexible.\n\nFor basic KMS functions, we need to ensure its stability and safety. Use a\nmore flexible API to support different encryption algorithms. Yes, it does\nlook more powerful, but it does not see much value for stability and\nsecurity. Regardless of whether we want to do this or not, but I think this\ncan at least not be discussed in the first version. 
We will not discuss\nwhether it is safer to use more keys, or even introduce HSM management. We\nonly evaluate whether this design can resist attacks under our Threat_models\n<https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#Threat_models>\nas the initial version.\n\nOf course, the submission of a feature needs to accept the opinions of many\npeople, but for a large and complex feature, I think that dividing the\nstage to limit the scope of the discussion will help us avoid getting into\nendless disputes.\n\n-- \nThere is no royal road to learning.\nHighGo Software Co.",
"msg_date": "Tue, 5 Jan 2021 11:08:31 +0800",
"msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "Hi Bruce\n\nOn Mon, 4 Jan 2021 at 18:23, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Jan 1, 2021 at 06:26:36PM +0000, Alastair Turner wrote:\n> > After the long intro, my question - If using a standard format,\n> > managed by a library, for the internal keystore does not result in a\n> > smaller or simpler patch, is there enough other value for this to be\n> > worth considering? For security features, standards and building on\n> > well exercised code bases do really have value, but is it enough...\n>\n> I know Stephen Frost looked at PKCS12 but didn't see much value in it\n> since the features provided by PKCS12 didn't seem to add much for\n> Postgres, and added complexity...\n\nYeah, OpenSSL 1.1's PKCS12 implementation doesn't offer much for\nmanaging arbitrary secrets. Building a reasonable abstraction over the\nprimitives it does provide requires a non-trivial amount of code.\n\n> ... Using GCM for key wrapping, heap/index\n> files, and WAL seemed simple.\n\nHaving Postgres take on the task of wrapping the keys and deriving the\nwrapping key requires less code than getting a library to do it, sure.\nIt also expands the functional scope of Postgres, what Postgres is\nresponsible for. This isn't what libraries should ideally do for\nsoftware development, but it's not uncommon and it's very fertile\nground for some of the discussion which has dogged this feature's\ndevelopment.\n\nThere's a question of whether the new scope should be taken on at all.\nFabien expressed a strong opinion that it should not be earlier in the\nthread. Even if you don't take an absolute view on the additional\nscope, lines of code do not all create equal risk. Ensuring the\nconfidentiality of an encryption key is far higher stakes code than\nserialising and deserialising the key which is being secured and\nstored by a library or external provider. 
Which is why I like the idea\nof having OpenSSL (and, at some point, nss) do the KDF, wrapping and\nstorage bit, and have been hacking on some code in that direction.\n\nIf all that Postgres is doing is the serde, that's also something\nwhich can objectively be said to be right or wrong. Key derivation and\nwrapping are only ever \"strong enough for the moment\" or \"in line with\npolicy/best practice\". Which brings us to flexibility.\n\n> There is all sorts of flexibility being proposed:\n>\n> * scope of keys\n> * encryption method\n> * encryption mode\n> * internal/external\n>\n> Some of this is related to wrapping the data keys, and some of it gets\n> to the encryption of cluster files. I just don't see how anything with\n> the level of flexibility being asked for could ever be reliable or\n> simple to administer,...\n\nExternalising the key wrap and its KDF (either downward to OpenSSL or\nto a separate process) makes the related points of flexibility\nsomething else's problem. OpenSSL and the like have put a lot of\neffort into making these things configurable and building the APIs\n(mainly PCKS11 and KMIP) for handing the functions off to dedicated\nhardware. Many or most organisations requiring that sort of\nintegration will have those complexities budgeted in already.\n\n>... and I don't see the security value of that\n> flexibility. I feel at a certain point, we just need to choose the best\n> options and move forward.\n>\n> I thought this had all been debated, but people continue to show up and\n> ask for it to be debated again. At one level, these discussions are\n> valuable, but at a certain point you end up re-litigate this that you\n> get nothing else done. 
I know some people who have given up working on\n> this feature for this reason.\n\nApologies for covering the PKCS12 ground again, I really did look for\nany on-list and wiki references to the topic.\n\n> I am not asking we stop discussing it, but I do ask we address the\n> negatives of suggestions, especially when the negatives have already\n> been clearly stated.\n\nThe negatives are increasing the amount of code involved in delivering\nTDE and, implicitly, pushing back the delivery of TDE. I'm not\nclaiming that they aren't valid. I was hoping that PKCS12 would\ndeliver some benefits on the code side, but a bit of investigation\nshowed that wasn't the case.\n\nTo offset these, I believe that taking a slightly longer route now\n(and I am keen to do some of that slog) has significant benefits in\ngetting Postgres (or at least the core server processes) out of the\nbusiness of key derivation and wrapping. Not just the implementation\ndetail, but also decisions about what is \"good enough\" - the KDF\nworries me particularly in this regard. With my presales techie hat\non, I'm very aware that everyone's good enough is slightly different,\nand not just on one axis. Using library functions (let's say OpenSSL)\nwill provide a range of flexibility to fit these varied boundaries, at\nno incremental cost beyond the library integration, and avoid any\nimplementation decisions becoming a lightning rod for objections to\nPostgres.\n\nRegards\nAlastair\n\n\n",
"msg_date": "Tue, 5 Jan 2021 21:16:11 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
},
{
"msg_contents": "On Mon, 4 Jan 2021 at 17:56, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, Jan 2, 2021 at 12:47:19PM +0000, Alastair Turner wrote:\n> >\n> > There is also a further validation task - probably beyond the scope of\n> > the key management patch and into the encryption patch[es] territory -\n> > checking that the keys supplied are the same keys in use for the data\n> > currently on disk. It feels to me like this should be done at startup,\n> > rather than as each file is accessed, which could make startup quite\n> > slow if there are a lot of keys with narrow scope.\n>\n> We do that already on startup by using GCM to validate the KEK when\n> encrypting each DEK.\n>\nWhich validates two things - that the KEK is the same one which was\nused to encrypt the DEKs (instead of returning garbage plaintext when\ngiven a garbage key), and that the DEKs have not been tampered with at\nrest. What it does not check is that the DEKs are the keys used to\nencrypt the data, that one has not been copied or restored independent\nof the other.\n\n\n",
"msg_date": "Tue, 5 Jan 2021 21:28:04 +0000",
"msg_from": "Alastair Turner <minion@decodable.me>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for key management"
}
] |
[
{
"msg_contents": "Hi all,\n\nNow that the tree has a new set of files for cryptohash functions as\nof src/common/cryptohash*.c, it is a bit weird to have another file\ncalled cryptohashes.c for the set of SQL functions.\n\nAny objections to rename that to cryptohashfuncs.c? That would be\nmuch more consistent with the surroundings (cleaning up anything\nrelated to MD5 is on my TODO list). Patch is attached.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 3 Dec 2020 11:03:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Renaming cryptohashes.c to cryptohashfuncs.c"
},
{
"msg_contents": "> On 3 Dec 2020, at 03:03, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Any objections to rename that to cryptohashfuncs.c? That would be\n> much more consistent with the surroundings (cleaning up anythig\n> related to MD5 is on my TODO list). Patch is attached.\n\n+1 on this proposed rename.\n\ncheers ./daniel\n\n\n",
"msg_date": "Thu, 3 Dec 2020 22:10:44 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Renaming cryptohashes.c to cryptohashfuncs.c"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 10:10:44PM +0100, Daniel Gustafsson wrote:\n> +1 on this proposed rename.\n\nThanks. I have been able to get that done as of bd94a9c.\n--\nMichael",
"msg_date": "Fri, 4 Dec 2020 14:37:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Renaming cryptohashes.c to cryptohashfuncs.c"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are using Windows 2019 server.\n\nSometimes we see the pg_ctl.exe getting automatically deleted.\nDue to this, while starting up Postgres windows service we are getting the error.\n\"Error 2: The system cannot find the file specified\"\n\nCan you let us know the potential causes for this pg_ctl.exe file deletion?\n\nRegards,\nJoel",
"msg_date": "Thu, 3 Dec 2020 05:05:49 +0000",
"msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>",
"msg_from_op": true,
"msg_subject": "pg_ctl.exe file deleted automatically"
},
{
"msg_contents": "On Thu, 3 Dec 2020, 13:06 Joel Mariadasan (jomariad), <jomariad@cisco.com>\nwrote:\n\n> Hi,\n>\n>\n>\n> We are using Windows 2019 server.\n>\n>\n>\n> Sometimes we see the *pg_ctl.exe* getting automatically deleted.\n>\n> Due to this, while starting up Postgres windows service we are getting the\n> error.\n>\n> “Error 2: The system cannot find the file specified”\n>\n>\n>\n> Can you let us know the potential causes for this pg_ctl.exe file deletion?\n>\n>\n>\n\nPostgreSQL will never delete pg_ctl.exe\n\nCheck your antivirus software logs for false positives.\n\nThis mailing list is for software development on postgres so I suggest\nfollowing up on pgsql-general.\n\n> Joel\n>",
"msg_date": "Thu, 3 Dec 2020 21:43:57 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl.exe file deleted automatically"
},
{
"msg_contents": "We can confirm that the file is not deleted by antivirus.\r\n\r\nWhen windows is shutdown abruptly, we are noticing this issue.\r\nWe will post this query in pgsql-general and check if someone else has encountered the same issue.\r\n\r\nRegards,\r\nJoel\r\nFrom: Craig Ringer <craig.ringer@enterprisedb.com>\r\nSent: 03 December 2020 19:14\r\nTo: Joel Mariadasan (jomariad) <jomariad@cisco.com>\r\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; Tejaswini L (tlingach) <tlingach@cisco.com>; Balakumaran Swaminathan (balsamin) <balsamin@cisco.com>\r\nSubject: Re: pg_ctl.exe file deleted automatically\r\n\r\n\r\nOn Thu, 3 Dec 2020, 13:06 Joel Mariadasan (jomariad), <jomariad@cisco.com<mailto:jomariad@cisco.com>> wrote:\r\nHi,\r\n\r\nWe are using Windows 2019 server.\r\n\r\nSometimes we see the pg_ctl.exe getting automatically deleted.\r\nDue to this, while starting up Postgres windows service we are getting the error.\r\n“Error 2: The system cannot find the file specified”\r\n\r\nCan you let us know the potential causes for this pg_ctl.exe file deletion?\r\n\r\n\r\nPostgreSQL will never delete pg_ctl.exe\r\n\r\nCheck your antivirus software logs for false positives.\r\n\r\nThis mailing list is for software development on postgres so I suggest following up on pgsql-general.\r\nJoel",
"msg_date": "Mon, 14 Dec 2020 06:58:04 +0000",
"msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl.exe file deleted automatically"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI'm looking for more horsepower for testing commitfest entries\nautomatically, and today I tried out $SUBJECT. The attached is a\nrudimentary first attempt, for show-and-tell. If you have a Github\naccount, you just have to push it to a branch there and look at the\nActions tab on the web page for the results. Does anyone else have\n.github files and want to share, to see if we can combine efforts\nhere?\n\nThe reason for creating three separate \"workflows\" for Linux, Windows\nand macOS rather than three separate \"jobs\" inside one workflow is so\nthat cfbot.cputube.org could potentially get separate pass/fail\nresults for each OS out of the API rather than one combined result. I\nrather like that feature of cfbot's results. (I could be wrong about\nneeding to do that, this is the first time I've ever looked at this\nstuff.)\n\nThe Windows test actually fails right now, exactly as reported by\nRanier[1]. It is a release build on a recent MSVC, so I guess that is\nexpected and off-topic for this thread. But generally,\n.github/workflows/ci-windows.yml is the weakest part of this. It'd be\ngreat to get a debug/assertion build, show backtraces when it crashes,\nrun more of the tests, etc etc, but I don't know nearly enough about\nWindows to do that myself. Another thing is that it uses Choco for\nflex and bison; it'd be better to find those on the image, if\npossible. Also, for all 3 OSes, it's not currently attempting to\ncache build results or anything like that.\n\nI'm a bit sad that GH doesn't have FreeBSD build runners. Those are\nnow popping up on other CIs, but I'm not sure if their free/open\nsource tiers have enough resources for cfbot.\n\n[1] https://www.postgresql.org/message-id/flat/CAEudQArhn8bH836OB%2B3SboiaeEcgOtrJS58Bki4%3D5yeVqToxgw%40mail.gmail.com",
"msg_date": "Thu, 3 Dec 2020 19:33:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Github Actions (CI)"
},
{
"msg_contents": "čt 3. 12. 2020 v 7:34 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:\n>\n> Hi hackers,\n>\n> I'm looking for more horsepower for testing commitfest entries\n> automatically, and today I tried out $SUBJECT. The attached is a\n> rudimentary first attempt, for show-and-tell. If you have a Github\n> account, you just have to push it to a branch there and look at the\n> Actions tab on the web page for the results. Does anyone else have\n> .github files and want to share, to see if we can combine efforts\n> here?\n>\n> The reason for creating three separate \"workflows\" for Linux, Windows\n> and macOS rather than three separate \"jobs\" inside one workflow is so\n> that cfbot.cputube.org could potentially get separate pass/fail\n> results for each OS out of the API rather than one combined result. I\n> rather like that feature of cfbot's results. (I could be wrong about\n> needing to do that, this is the first time I've ever looked at this\n> stuff.)\n>\n> The Windows test actually fails right now, exactly as reported by\n> Ranier[1]. It is a release build on a recent MSVC, so I guess that is\n> expected and off-topic for this thread. But generally,\n> .github/workflows/ci-windows.yml is the weakest part of this. It'd be\n> great to get a debug/assertion build, show backtraces when it crashes,\n> run more of the tests, etc etc, but I don't know nearly enough about\n> Windows to do that myself. Another thing is that it uses Choco for\n> flex and bison; it'd be better to find those on the image, if\n> possible. Also, for all 3 OSes, it's not currently attempting to\n> cache build results or anything like that.\n\nAny chance to also share links to failing/passing testing builds?\n\n> I'm a bit sad that GH doesn't have FreeBSD build runners. 
Those are\n> now popping up on other CIs, but I'm not sure if their free/open\n> source tiers have enough resources for cfbot.\n>\n> [1] https://www.postgresql.org/message-id/flat/CAEudQArhn8bH836OB%2B3SboiaeEcgOtrJS58Bki4%3D5yeVqToxgw%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 14:54:58 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Github Actions (CI)"
},
{
"msg_contents": "\nOn 12/3/20 1:33 AM, Thomas Munro wrote:\n> Hi hackers,\n>\n> I'm looking for more horsepower for testing commitfest entries\n> automatically, and today I tried out $SUBJECT. The attached is a\n> rudimentary first attempt, for show-and-tell. If you have a Github\n> account, you just have to push it to a branch there and look at the\n> Actions tab on the web page for the results. \n>\n\nAwesome. That's a pretty big bang for the buck.\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Thu, 3 Dec 2020 09:57:31 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Github Actions (CI)"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 2:55 AM Josef Šimánek <josef.simanek@gmail.com> wrote:\n> Any chance to also share links to failing/passing testing builds?\n\nhttps://github.com/macdice/postgres/runs/1490727809\nhttps://github.com/macdice/postgres/runs/1490727838\n\nHowever, looking these in a clean/cookieless browser, I see that it\nwon't show you anything useful unless you're currently logged in to\nGithub, so that's a major point against using it for cfbot (it's\ncertainly not my intention to make cfbot only useful to people who\nhave Github accounts).\n\n\n",
"msg_date": "Fri, 4 Dec 2020 09:32:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Github Actions (CI)"
}
] |
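The per-OS split Thomas describes in the thread above (three separate workflow files rather than three jobs in one workflow, so each OS reports its own pass/fail result through the API) can be sketched roughly as follows. This is a hypothetical minimal `ci-linux.yml`, not the actual file attached to the thread; the workflow name, dependency list, and configure flags here are assumptions for illustration:

```yaml
# Hypothetical sketch of one of three per-OS workflow files,
# e.g. .github/workflows/ci-linux.yml. A matching ci-windows.yml and
# ci-macos.yml would each be a separate workflow, giving separate
# check-run results per OS in the GitHub API.
name: ci-linux
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install build dependencies
        run: sudo apt-get -y install bison flex libreadline-dev zlib1g-dev
      - name: Configure
        run: ./configure --enable-cassert --enable-debug
      - name: Build
        run: make -s -j2
      - name: Run regression tests
        run: make -s check
```

A consumer like cfbot could then poll each workflow's latest run independently, preserving the per-OS result granularity mentioned in the thread.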