[
{
"msg_contents": "After my recent commit d207038053837ae9365df2776371632387f6f655,\nsidewinder is failing with error \"insufficient file descriptors ..\" in\ntest 006_logical_decoding.pl [1]. The detailed failure displays\nmessages as below:\n\n006_logical_decoding_master.log\n2020-01-02 19:51:05.567 CET [26174:3] 006_logical_decoding.pl LOG:\nstatement: ALTER SYSTEM SET max_files_per_process = 26;\n2020-01-02 19:51:05.570 CET [2777:4] LOG: received fast shutdown request\n2020-01-02 19:51:05.570 CET [26174:4] 006_logical_decoding.pl LOG:\ndisconnection: session time: 0:00:00.005 user=pgbf database=postgres\nhost=[local]\n2020-01-02 19:51:05.571 CET [2777:5] LOG: aborting any active transactions\n2020-01-02 19:51:05.572 CET [2777:6] LOG: background worker \"logical\nreplication launcher\" (PID 23736) exited with exit code 1\n2020-01-02 19:51:05.572 CET [15764:1] LOG: shutting down\n2020-01-02 19:51:05.575 CET [2777:7] LOG: database system is shut down\n2020-01-02 19:51:05.685 CET [24138:1] LOG: starting PostgreSQL 12.1\non x86_64-unknown-netbsd7.0, compiled by gcc (nb2 20150115) 4.8.4,\n64-bit\n2020-01-02 19:51:05.686 CET [24138:2] LOG: listening on Unix socket\n\"/tmp/sxAcn7SAzt/.s.PGSQL.56110\"\n2020-01-02 19:51:05.687 CET [24138:3] FATAL: insufficient file\ndescriptors available to start server process\n2020-01-02 19:51:05.687 CET [24138:4] DETAIL: System allows 19, we\nneed at least 20.\n2020-01-02 19:51:05.687 CET [24138:5] LOG: database system is shut down\n\nHere, I think it is clear that the failure happens because we are\nsetting the value of max_files_per_process as 26 which is low for this\nmachine. It seems to me that the reason it is failing is that before\nreaching set_max_safe_fds, it has already seven open files. Now, I\nsee on my CentOS system, the value of already_open files is 3, 6 and 6\nrespectively for versions HEAD, 12 and 10. 
We can easily see the\nnumber of already opened files by changing the error level from DEBUG2\nto LOG for elog message in set_max_safe_fds. It is not very clear to\nme how many files we can expect to be kept open during startup? Can\nthe number vary on different setups?\n\nOne possible way to fix is that we change the test to set\nmax_files_per_process to a slightly higher number say 35, but I am not\nsure what will be the safe value for the same. Alternatively, we can\nthink of removing the test entirely, but it seems like a useful case\nto test corner cases, so we have added it in the first place.\n\nI am planning to investigate this further by seeing which all files\nare kept open and why. I will share my findings on this further, but\nin the meantime, if anyone has any thoughts on this matter, please\nfeel free to share the same.\n\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-01-02%2018%3A45%3A25\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 17:31:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "sidewinder has one failure"
},
{
"msg_contents": "\nOn 2020-01-03 13:01, Amit Kapila wrote:\n\n> 2020-01-02 19:51:05.687 CET [24138:3] FATAL: insufficient file\n> descriptors available to start server process\n> 2020-01-02 19:51:05.687 CET [24138:4] DETAIL: System allows 19, we\n> need at least 20.\n> 2020-01-02 19:51:05.687 CET [24138:5] LOG: database system is shut down\n> \n> Here, I think it is clear that the failure happens because we are\n> setting the value of max_files_per_process as 26 which is low for this\n> machine. It seems to me that the reason it is failing is that before\n> reaching set_max_safe_fds, it has already seven open files. Now, I\n> see on my CentOS system, the value of already_open files is 3, 6 and 6\n> respectively for versions HEAD, 12 and 10. We can easily see the\n> number of already opened files by changing the error level from DEBUG2\n> to LOG for elog message in set_max_safe_fds. It is not very clear to\n> me how many files we can expect to be kept open during startup? Can\n> the number vary on different setups?\n\nHm, where does it get the limit from? Is it something we set?\n\nWhy is this machine different from everybody else when it comes to this \nlimit?\n\nulimit -a says:\n\n$ ulimit -a\ntime(cpu-seconds) unlimited\nfile(blocks) unlimited\ncoredump(blocks) unlimited\ndata(kbytes) 262144\nstack(kbytes) 4096\nlockedmem(kbytes) 672036\nmemory(kbytes) 2016108\nnofiles(descriptors) 1024\nprocesses 1024\nthreads 1024\nvmemory(kbytes) unlimited\nsbsize(bytes) unlimited\n\nIs there any configuration setting I could do on the machine to increase \nthis limit?\n\n/Mikael\n\n\n",
"msg_date": "Fri, 3 Jan 2020 14:04:10 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 6:34 PM Mikael Kjellström\n<mikael.kjellstrom@mksoft.nu> wrote:\n>\n>\n> On 2020-01-03 13:01, Amit Kapila wrote:\n>\n> > 2020-01-02 19:51:05.687 CET [24138:3] FATAL: insufficient file\n> > descriptors available to start server process\n> > 2020-01-02 19:51:05.687 CET [24138:4] DETAIL: System allows 19, we\n> > need at least 20.\n> > 2020-01-02 19:51:05.687 CET [24138:5] LOG: database system is shut down\n> >\n> > Here, I think it is clear that the failure happens because we are\n> > setting the value of max_files_per_process as 26 which is low for this\n> > machine. It seems to me that the reason it is failing is that before\n> > reaching set_max_safe_fds, it has already seven open files. Now, I\n> > see on my CentOS system, the value of already_open files is 3, 6 and 6\n> > respectively for versions HEAD, 12 and 10. We can easily see the\n> > number of already opened files by changing the error level from DEBUG2\n> > to LOG for elog message in set_max_safe_fds. It is not very clear to\n> > me how many files we can expect to be kept open during startup? Can\n> > the number vary on different setups?\n>\n> Hm, where does it get the limit from? Is it something we set?\n>\n> Why is this machine different from everybody else when it comes to this\n> limit?\n>\n\nThe problem we are seeing on this machine is that I think we have\nseven files opened before we reach function set_max_safe_fds during\nstartup. Now, it is not clear to me why it is opening extra file(s)\nduring start-up as compare to other machines. I think this kind of\nproblem could occur if one has set shared_preload_libraries and via\nthat, some file is getting opened which is not closed or there is some\nother configuration due to which this extra file is getting opened.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 19:03:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 7:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 3, 2020 at 6:34 PM Mikael Kjellström\n> <mikael.kjellstrom@mksoft.nu> wrote:\n> >\n> >\n> > On 2020-01-03 13:01, Amit Kapila wrote:\n> >\n> > > 2020-01-02 19:51:05.687 CET [24138:3] FATAL: insufficient file\n> > > descriptors available to start server process\n> > > 2020-01-02 19:51:05.687 CET [24138:4] DETAIL: System allows 19, we\n> > > need at least 20.\n> > > 2020-01-02 19:51:05.687 CET [24138:5] LOG: database system is shut down\n> > >\n> > > Here, I think it is clear that the failure happens because we are\n> > > setting the value of max_files_per_process as 26 which is low for this\n> > > machine. It seems to me that the reason it is failing is that before\n> > > reaching set_max_safe_fds, it has already seven open files. Now, I\n> > > see on my CentOS system, the value of already_open files is 3, 6 and 6\n> > > respectively for versions HEAD, 12 and 10.\n\nI debugged on HEAD and found that we are closing all the files (like\npostgresql.conf, postgresql.auto.conf, etc.) that got opened before\nset_max_safe_fds. I think on HEAD the 3 already opened files are\nbasically stdin, stdout, stderr. It is still not clear why on some\nother versions it shows different number of already opened files.\n\n> > > We can easily see the\n> > > number of already opened files by changing the error level from DEBUG2\n> > > to LOG for elog message in set_max_safe_fds. It is not very clear to\n> > > me how many files we can expect to be kept open during startup? Can\n> > > the number vary on different setups?\n> >\n> > Hm, where does it get the limit from? Is it something we set?\n> >\n> > Why is this machine different from everybody else when it comes to this\n> > limit?\n> >\n\nMikael, is it possible for you to set log_min_messages to DEBUG2 on\nyour machine and start the server. You must see a line like:\n\"max_safe_fds = 984, usable_fds = 1000, already_open = 6\". 
Is it\npossible to share that information? This is just to confirm if the\nalready_open number is 7 on your machine.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 20:18:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Fri, Jan 3, 2020 at 6:34 PM Mikael Kjellström\n> <mikael.kjellstrom@mksoft.nu> wrote:\n>> Why is this machine different from everybody else when it comes to this\n>> limit?\n\n> The problem we are seeing on this machine is that I think we have\n> seven files opened before we reach function set_max_safe_fds during\n> startup. Now, it is not clear to me why it is opening extra file(s)\n> during start-up as compare to other machines.\n\nMaybe it uses one of the semaphore implementations that consume a\nfile descriptor per semaphore?\n\nI think that d20703805 was insanely optimistic to think that a\ntiny value of max_files_per_process would work the same everywhere.\nI'd actually recommend just dropping that test, as I do not think\nit's possible to make it portable and reliable. Even if it could\nbe fixed, I doubt it would ever find any actual bug to justify\nthe sweat it would take to maintain it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 10:05:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "I wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>> The problem we are seeing on this machine is that I think we have\n>> seven files opened before we reach function set_max_safe_fds during\n>> startup. Now, it is not clear to me why it is opening extra file(s)\n>> during start-up as compare to other machines.\n\n> Maybe it uses one of the semaphore implementations that consume a\n> file descriptor per semaphore?\n\nHm, no, sidewinder reports that it's using SysV semaphores:\n\nchecking which semaphore API to use... System V\n\nHowever, I tried building an installation that uses named POSIX\nsemaphores, by applying the attached hack on a macOS system.\nAnd sure enough, this test crashes and burns:\n\n2020-01-03 11:36:21.571 EST [91597] FATAL: insufficient file descriptors available to start server process\n2020-01-03 11:36:21.571 EST [91597] DETAIL: System allows -8, we need at least 20.\n2020-01-03 11:36:21.571 EST [91597] LOG: database system is shut down\n\nLooking at \"lsof\" output for a postmaster with max_connections=10,\nmax_wal_senders=5 (the parameters set up by PostgresNode.pm), I see\nthat it's got 31 \"PSXSEM\" file descriptors, so the number shown here\nis about what you'd expect. We might be able to constrain that down\na little further, but still, this test has no chance of working in\nanything like its present form on a machine that needs file\ndescriptors for semaphores. That's a supported configuration, even\nif not a recommended one, so I don't think it's okay for the test\nto fall over.\n\n(Hmm ... apparently, we have no buildfarm members that use such\nsemaphores and are running the TAP tests, else we'd have additional\ncomplaints. Perhaps that's a bad omission.)\n\nAnyway, it remains unclear exactly why sidewinder is failing, but\nI'm guessing it has a few more open files than you expected. My\nmacOS build has a few more than I can account for in my caffeine-\ndeprived state, too. 
One of them might be for bonjour ... not sure\nabout some of the rest. Bottom line here is that it's hard to\npredict with any accuracy how many pre-opened files there will be.\n\n\t\t\tregards, tom lane\n\n\ndiff --git a/src/template/darwin b/src/template/darwin\nindex f4d4e9d7cf..98331be22d 100644\n--- a/src/template/darwin\n+++ b/src/template/darwin\n@@ -23,11 +23,4 @@ CFLAGS_SL=\"\"\n # support System V semaphores; before that we have to use named POSIX\n # semaphores, which are less good for our purposes because they eat a\n # file descriptor per backend per max_connection slot.\n-case $host_os in\n- darwin[015].*)\n USE_NAMED_POSIX_SEMAPHORES=1\n- ;;\n- *)\n- USE_SYSV_SEMAPHORES=1\n- ;;\n-esac\n\n\n",
"msg_date": "Fri, 03 Jan 2020 11:56:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On 2020-01-03 15:48, Amit Kapila wrote:\n> On Fri, Jan 3, 2020 at 7:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> I debugged on HEAD and found that we are closing all the files (like\n> postgresql.conf, postgresql.auto.conf, etc.) that got opened before\n> set_max_safe_fds. I think on HEAD the 3 already opened files are\n> basically stdin, stdout, stderr. It is still not clear why on some\n> other versions it shows different number of already opened files.\n\nI think Tom Lane found the \"problem\". It has to do with the semaphores \ntaking up FD's.\n\n\n>>>> We can easily see the\n>>>> number of already opened files by changing the error level from DEBUG2\n>>>> to LOG for elog message in set_max_safe_fds. It is not very clear to\n>>>> me how many files we can expect to be kept open during startup? Can\n>>>> the number vary on different setups?\n>>>\n>>> Hm, where does it get the limit from? Is it something we set?\n>>>\n>>> Why is this machine different from everybody else when it comes to this\n>>> limit?\n>>>\n> \n> Mikael, is it possible for you to set log_min_messages to DEBUG2 on\n> your machine and start the server. You must see a line like:\n> \"max_safe_fds = 984, usable_fds = 1000, already_open = 6\". Is it\n> possible to share that information? This is just to confirm if the\n> already_open number is 7 on your machine.\n\nSure. 
I compiled pgsql 12 and this is the complete logfile after \nstarting up the server the first time with log_min_messages=debug2:\n\n\n2020-01-04 01:03:14.484 CET [14906] DEBUG: registering background \nworker \"logical replication launcher\"\n2020-01-04 01:03:14.484 CET [14906] LOG: starting PostgreSQL 12.1 on \nx86_64-unknown-netbsd7.0, compiled by gcc (nb2 20150115) 4.8.4, 64-bit\n2020-01-04 01:03:14.484 CET [14906] LOG: listening on IPv6 address \n\"::1\", port 5432\n2020-01-04 01:03:14.484 CET [14906] LOG: listening on IPv4 address \n\"127.0.0.1\", port 5432\n2020-01-04 01:03:14.485 CET [14906] LOG: listening on Unix socket \n\"/tmp/.s.PGSQL.5432\"\n2020-01-04 01:03:14.491 CET [14906] DEBUG: SlruScanDirectory invoking \ncallback on pg_notify/0000\n2020-01-04 01:03:14.491 CET [14906] DEBUG: removing file \"pg_notify/0000\"\n2020-01-04 01:03:14.491 CET [14906] DEBUG: dynamic shared memory system \nwill support 308 segments\n2020-01-04 01:03:14.491 CET [14906] DEBUG: created dynamic shared \nmemory control segment 2134641633 (7408 bytes)\n2020-01-04 01:03:14.492 CET [14906] DEBUG: max_safe_fds = 984, \nusable_fds = 1000, already_open = 6\n2020-01-04 01:03:14.493 CET [426] LOG: database system was shut down at \n2020-01-04 01:00:15 CET\n2020-01-04 01:03:14.493 CET [426] DEBUG: checkpoint record is at 0/15F15B8\n2020-01-04 01:03:14.493 CET [426] DEBUG: redo record is at 0/15F15B8; \nshutdown true\n2020-01-04 01:03:14.493 CET [426] DEBUG: next transaction ID: 486; next \nOID: 12974\n2020-01-04 01:03:14.493 CET [426] DEBUG: next MultiXactId: 1; next \nMultiXactOffset: 0\n2020-01-04 01:03:14.493 CET [426] DEBUG: oldest unfrozen transaction \nID: 479, in database 1\n2020-01-04 01:03:14.493 CET [426] DEBUG: oldest MultiXactId: 1, in \ndatabase 1\n2020-01-04 01:03:14.493 CET [426] DEBUG: commit timestamp Xid \noldest/newest: 0/0\n2020-01-04 01:03:14.493 CET [426] DEBUG: transaction ID wrap limit is \n2147484126, limited by database with OID 1\n2020-01-04 01:03:14.493 CET 
[426] DEBUG: MultiXactId wrap limit is \n2147483648, limited by database with OID 1\n2020-01-04 01:03:14.493 CET [426] DEBUG: starting up replication slots\n2020-01-04 01:03:14.493 CET [426] DEBUG: starting up replication origin \nprogress state\n2020-01-04 01:03:14.493 CET [426] DEBUG: MultiXactId wrap limit is \n2147483648, limited by database with OID 1\n2020-01-04 01:03:14.493 CET [426] DEBUG: MultiXact member stop limit is \nnow 4294914944 based on MultiXact 1\n2020-01-04 01:03:14.494 CET [14906] DEBUG: starting background worker \nprocess \"logical replication launcher\"\n2020-01-04 01:03:14.494 CET [14906] LOG: database system is ready to \naccept connections\n2020-01-04 01:03:14.495 CET [9809] DEBUG: autovacuum launcher started\n2020-01-04 01:03:14.496 CET [11463] DEBUG: received inquiry for database 0\n2020-01-04 01:03:14.496 CET [11463] DEBUG: writing stats file \n\"pg_stat_tmp/global.stat\"\n2020-01-04 01:03:14.497 CET [7890] DEBUG: logical replication launcher \nstarted\n2020-01-04 01:03:14.498 CET [28096] DEBUG: checkpointer updated shared \nmemory configuration values\n\n/Mikael\n\n\n",
"msg_date": "Sat, 4 Jan 2020 01:05:56 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> I think Tom Lane found the \"problem\". It has to do with the semaphores \n> taking up FD's.\n\nHm, no, because:\n\n> Sure. I compiled pgsql 12 and this is the complete logfile after \n> starting up the server the first time with log_min_messages=debug2:\n> 2020-01-04 01:03:14.492 CET [14906] DEBUG: max_safe_fds = 984, \n> usable_fds = 1000, already_open = 6\n\nThat's pretty much the same thing we see on most other platforms.\nPlus your configure log shows that SysV semaphores were selected,\nand those don't eat FDs.\n\nApparently, in the environment of that TAP test, the server has more\nopen FDs at this point than it does when running \"normally\". I have\nno idea what the additional FDs might be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 19:15:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On 2020-01-04 01:15, Tom Lane wrote:\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> I think Tom Lane found the \"problem\". It has to do with the semaphores\n>> taking up FD's.\n> \n> Hm, no, because:\n\nYes, saw that after I posted my answer.\n\n\n>> Sure. I compiled pgsql 12 and this is the complete logfile after\n>> starting up the server the first time with log_min_messages=debug2:\n>> 2020-01-04 01:03:14.492 CET [14906] DEBUG: max_safe_fds = 984,\n>> usable_fds = 1000, already_open = 6\n> \n> That's pretty much the same thing we see on most other platforms.\n> Plus your configure log shows that SysV semaphores were selected,\n> and those don't eat FDs.\n\nYes, it looks \"normal\".\n\n\n> Apparently, in the environment of that TAP test, the server has more\n> open FDs at this point than it does when running \"normally\". I have\n> no idea what the additional FDs might be.\n\nWell it's running under cron if that makes a difference and what is the \nTAP-test using? perl?\n\n/Mikael\n\n\n",
"msg_date": "Sat, 4 Jan 2020 01:21:13 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On 2020-01-04 01:21, Mikael Kjellström wrote:\n\n>> Apparently, in the environment of that TAP test, the server has more\n>> open FDs at this point than it does when running \"normally\". I have\n>> no idea what the additional FDs might be.\n> \n> Well it's running under cron if that makes a difference and what is the \n> TAP-test using? perl?\n\nI tried starting it from cron and then I got:\n\n max_safe_fds = 981, usable_fds = 1000, already_open = 9\n\n/Mikael\n\n\n",
"msg_date": "Sat, 4 Jan 2020 01:41:20 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> On 2020-01-04 01:15, Tom Lane wrote:\n>> Apparently, in the environment of that TAP test, the server has more\n>> open FDs at this point than it does when running \"normally\". I have\n>> no idea what the additional FDs might be.\n\n> Well it's running under cron if that makes a difference and what is the \n> TAP-test using? perl?\n\nNot sure. There's a few things you could do to investigate:\n\n* Run the recovery TAP tests. Do you reproduce the buildfarm failure\nin your hand build? If not, we need to ask what's different.\n\n* If you do reproduce it, run those tests at debug2, just to confirm\nthe theory that already_open is higher than normal. (The easy way\nto make that happen is to add another line to what PostgresNode.pm's\ninit function is adding to postgresql.conf.)\n\n* Also, try putting a pg_usleep call just before the error in fd.c,\nto give yourself enough time to manually point \"lsof\" at the\npostmaster and see what all its FDs are.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 19:47:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> I tried starting it from cron and then I got:\n> max_safe_fds = 981, usable_fds = 1000, already_open = 9\n\nOh! There we have it then. I wonder if that's a cron bug (neglecting\nto close its own FDs before forking children) or intentional (maybe\nit uses those FDs to keep tabs on the children?).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 19:49:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 6:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> > I tried starting it from cron and then I got:\n> > max_safe_fds = 981, usable_fds = 1000, already_open = 9\n>\n> Oh! There we have it then.\n>\n\nRight.\n\n> I wonder if that's a cron bug (neglecting\n> to close its own FDs before forking children) or intentional (maybe\n> it uses those FDs to keep tabs on the children?).\n>\n\nSo, where do we go from here? Shall we try to identify why cron is\nkeeping extra FDs or we assume that we can't predict how many\npre-opened files there will be? In the latter case, we either want to\n(a) tweak the test to raise the value of max_files_per_process, (b)\nremove the test entirely. You seem to incline towards (b), but I have\na few things to say about that. We have another strange failure due\nto this test on one of Noah's machine, see my email [1]. I have\nrequested Noah for the stack trace [2]. It is not clear to me whether\nthe committed code has any problem or the test has discovered a\ndifferent problem in v10 specific to that platform. The same test has\npassed for v11, v12, and HEAD on the same platform.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LMDx6vK8Kdw8WUeW1MjToN2xVffL2kvtHvZg17%3DY6QQg%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1LJqMuXoCLuxkTr1HidbR8DkgRrVC7jHWDyXT%3DFD2gt6Q%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 4 Jan 2020 06:56:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On Sat, Jan 04, 2020 at 06:56:48AM +0530, Amit Kapila wrote:\n> On Sat, Jan 4, 2020 at 6:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> > > I tried starting it from cron and then I got:\n> > > max_safe_fds = 981, usable_fds = 1000, already_open = 9\n> >\n> > Oh! There we have it then.\n> >\n> \n> Right.\n> \n> > I wonder if that's a cron bug (neglecting\n> > to close its own FDs before forking children) or intentional (maybe\n> > it uses those FDs to keep tabs on the children?).\n> >\n> \n> So, where do we go from here? Shall we try to identify why cron is\n> keeping extra FDs or we assume that we can't predict how many\n> pre-opened files there will be?\n\nThe latter. If it helps, you could add a regress.c function\nleak_fd_until_max_fd_is(integer) so the main part of the test starts from a\nknown FD consumption state.\n\n> In the latter case, we either want to\n> (a) tweak the test to raise the value of max_files_per_process, (b)\n> remove the test entirely.\n\nI generally favor keeping the test, but feel free to decide it's too hard.\n\n\n",
"msg_date": "Sun, 5 Jan 2020 02:30:05 +0000",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Jan 04, 2020 at 06:56:48AM +0530, Amit Kapila wrote:\n>> So, where do we go from here? Shall we try to identify why cron is\n>> keeping extra FDs or we assume that we can't predict how many\n>> pre-opened files there will be?\n\n> The latter. If it helps, you could add a regress.c function\n> leak_fd_until_max_fd_is(integer) so the main part of the test starts from a\n> known FD consumption state.\n\nHmm ... that's an idea, but I'm not sure that even that would get the\njob done. By the time we reach any code in regress.c, there would\nhave been a bunch of catalog accesses, and so a bunch of the open FDs\nwould be from VFDs that fd.c could close on demand. So you still\nwouldn't have a clear idea of how much stress would be needed to get\nto an out-of-FDs situation.\n\nPerhaps, on top of this hypothetical regress.c function, you could\nadd some function in fd.c to force all VFDs closed, and then have\nregress.c call that before it leaks a pile of FDs. But now we're\ngetting mighty far into the weeds, and away from testing anything\nthat remotely resembles actual production behavior.\n\n> I generally favor keeping the test, but feel free to decide it's too hard.\n\nI remain dubious that it's worth the trouble, or indeed that the test\nwould prove anything of interest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 22:00:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 8:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Noah Misch <noah@leadboat.com> writes:\n>\n> > I generally favor keeping the test, but feel free to decide it's too hard.\n>\n> I remain dubious that it's worth the trouble, or indeed that the test\n> would prove anything of interest.\n>\n\nI think we don't have any tests which test operating on many spill\nfiles which this test does leaving aside the part of the test which\ntests max open descriptors. We do have some tests related to spill\nfiles in contrib/spill/test_decoding/spill.sql, but I don't see any\nwhich tests with this many open spill files. Now, maybe it is not\nimportant to test that, but I think we should wait till we find out\nwhy this test failed on 'tern' and that too only in v10. It might\nturn out that it has revealed some actual code issue(either in what\ngot committed or some base code). In either case, it might turn out\nto be useful. So, we might decide to remove setting\nmax_files_per_process, but leave the test as it is. I am also not\nsure what is the right thing to do here, but it is clear that if we\nremove this test we won't be able to figure what went wrong on 'tern'.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 Jan 2020 09:28:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 8:00 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Sat, Jan 04, 2020 at 06:56:48AM +0530, Amit Kapila wrote:\n> > In the latter case, we either want to\n> > (a) tweak the test to raise the value of max_files_per_process, (b)\n> > remove the test entirely.\n>\n> I generally favor keeping the test, but feel free to decide it's too hard.\n>\n\nI am thinking that for now, we should raise the limit of\nmax_files_per_process in the test to something like 35 or 40, so that\nsidewinder passes and unblocks other people who might get blocked due\nto this, for example, I think one case is reported here\n(https://www.postgresql.org/message-id/20200106105608.GB18560%40msg.df7cb.de,\nsee Ubuntu bionic ..). I feel with this still we shall be able to\ncatch the problem we are facing on 'tern' and 'mandrill'.\n\nDo you have any opinion on this?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Jan 2020 08:59:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sidewinder has one failure"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I am thinking that for now, we should raise the limit of\n> max_files_per_process in the test to something like 35 or 40, so that\n> sidewinder passes and unblocks other people who might get blocked due\n> to this\n\nThat will not fix the problem for FD-per-semaphore platforms.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 08:17:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sidewinder has one failure"
}
]
[
{
"msg_contents": "Is it possible to tell what component of the cost estimate of an index scan is\nfrom the index reads vs heap ?\n\nIt would help to be able to set enable_bitmapscan=FORCE (to make all index\nscans go through a bitmap). Adding OR conditions can sometimes do this. That\nincludes cost of bitmap manipulation, but it's good enough for me.\n\nOr maybe explain should report it.\n\n\n",
"msg_date": "Fri, 3 Jan 2020 08:14:27 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "distinguish index cost component from table component"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 9:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> Is it possible to tell what component of the cost estimate of an index\n> scan is\n> from the index reads vs heap ?\n>\n\nNot that I have found, other than through sprinkling elog statements\nthroughout the costing code. Which is horrible, because then you get\nestimates for all the considered but rejected index scans as well, but\nwithout the context to know what they are for. So it only works for toy\nqueries where there are few possible indexes to consider.\n\nIt would help to be able to set enable_bitmapscan=FORCE (to make all index\n> scans go through a bitmap).\n\n\nDoesn't enable_indexscan=off accomplish this already? It is possible but\nnot terribly likely to switch from index to seq, rather than from index to\nbitmap. (Unless the index scan was being used to obtain an ordered result,\nbut a hypothetical enable_bitmapscan=FORCE can't fix that).\n\nOf course this doesn't really answer your question, as the\nseparately-reported costs of a bitmap heap and bitmap index scan are\nunlikely to match what the costs would be of a regular index scan, if they\nwere reported separately.\n\nOr maybe explain should report it.\n>\n\nI wouldn't be optimistic about getting such a backwards-incompatible change\naccepted (plus it would surely add some small accounting overhead, which\nagain would probably not be acceptable). But if you do enough tuning work,\nperhaps it would be worth carrying an out-of-tree patch to implement that.\nI wouldn't be so interested in writing such a patch, but would be\ninterested in using one were it available somewhere.\n\nCheers,\n\nJeff\n\nOn Fri, Jan 3, 2020 at 9:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:Is it possible to tell what component of the cost estimate of an index scan is\nfrom the index reads vs heap ?Not that I have found, other than through sprinkling elog statements throughout the costing code. Which is horrible, because then you get estimates for all the considered but rejected index scans as well, but without the context to know what they are for. So it only works for toy queries where there are few possible indexes to consider.\nIt would help to be able to set enable_bitmapscan=FORCE (to make all index\nscans go through a bitmap). Doesn't enable_indexscan=off accomplish this already? It is possible but not terribly likely to switch from index to seq, rather than from index to bitmap. (Unless the index scan was being used to obtain an ordered result, but a hypothetical enable_bitmapscan=FORCE can't fix that).Of course this doesn't really answer your question, as the separately-reported costs of a bitmap heap and bitmap index scan are unlikely to match what the costs would be of a regular index scan, if they were reported separately.\nOr maybe explain should report it.I wouldn't be optimistic about getting such a backwards-incompatible change accepted (plus it would surely add some small accounting overhead, which again would probably not be acceptable). But if you do enough tuning work, perhaps it would be worth carrying an out-of-tree patch to implement that. I wouldn't be so interested in writing such a patch, but would be interested in using one were it available somewhere.Cheers,Jeff",
"msg_date": "Fri, 3 Jan 2020 09:33:35 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: distinguish index cost component from table component"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 09:33:35AM -0500, Jeff Janes wrote:\n> Of course this doesn't really answer your question, as the\n> separately-reported costs of a bitmap heap and bitmap index scan are\n> unlikely to match what the costs would be of a regular index scan, if they\n> were reported separately.\n\nI think the cost of the index component of a bitmap scan would be exactly the same\nas the cost of the original indexscan.\n\n>> Or maybe explain should report it.\n> \n> I wouldn't be optimistic about getting such a backwards-incompatible change\n> accepted (plus it would surely add some small accounting overhead, which\n> again would probably not be acceptable). But if you do enough tuning work,\n> perhaps it would be worth carrying an out-of-tree patch to implement that.\n> I wouldn't be so interested in writing such a patch, but would be\n> interested in using one were it available somewhere.\n\nI did the attached in the simplest possible way. If it's somehow possible to get\nthe path's index_total_cost from the plan, then there'd be no additional\noverhead.\n\nJustin",
"msg_date": "Fri, 3 Jan 2020 10:03:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: distinguish index cost component from table component"
},
{
"msg_contents": "Moving to -hackers\n\nI was asking about how to distinguish the index cost component of an indexscan\nfrom the cost of heap.\nhttps://www.postgresql.org/message-id/20200103141427.GK12066%40telsasoft.com\n\nOn Fri, Jan 03, 2020 at 09:33:35AM -0500, Jeff Janes wrote:\n> > It would help to be able to set enable_bitmapscan=FORCE (to make all index\n> > scans go through a bitmap).\n> \n> Doesn't enable_indexscan=off accomplish this already? It is possible but\n> not terribly likely to switch from index to seq, rather than from index to\n> bitmap. (Unless the index scan was being used to obtain an ordered result,\n> but a hypothetical enable_bitmapscan=FORCE can't fix that).\n\nNo, enable_indexscan=off implicitly disables bitmap index scans, since it does:\n\ncost_bitmap_heap_scan():\n|startup_cost += indexTotalCost;\n\nBut maybe it shouldn't (?) Or maybe it should take a third value, like\nenable_indexscan=bitmaponly, which means what it says. Actually the name is\nconfusable with indexonly, so maybe enable_indexscan=bitmap.\n\nA third value isn't really needed anyway; its only utility is that someone\nupgrading from v12 who uses enable_indexscan=off (presumably in limited scope)\nwouldn't have to also set enable_bitmapscan=off - not a big benefit.\n\nThat doesn't affect regress tests at all.\n\nNote, when I tested it, the cost of \"bitmap heap scan\" was several times higher\nthan the total cost of indexscan (including heap), even with CPU costs at 0. I\napplied my \"bitmap correlation\" patch, which seems to give a more reasonable\nresult. In any case, the purpose of this patch was primarily diagnostic, and\nthe heap cost of index scan would be its total cost minus the cost of the\nbitmap indexscan node when enable_indexscan=off. The high cost attributed to\nbitmap heapscan is a topic for the other patch.\n\nJustin",
"msg_date": "Sat, 4 Jan 2020 10:50:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "allow disabling indexscans without disabling bitmapscans"
},
{
"msg_contents": "On Sat, Jan 04, 2020 at 10:50:47AM -0600, Justin Pryzby wrote:\n> > Doesn't enable_indexscan=off accomplish this already? It is possible but\n> > not terribly likely to switch from index to seq, rather than from index to\n> > bitmap. (Unless the index scan was being used to obtain an ordered result,\n> > but a hypothetical enable_bitmapscan=FORCE can't fix that).\n> \n> No, enable_indexscan=off implicitly disables bitmap index scans, since it does:\n\nI don't know how I went wrong, but the regress tests clued me in... it's as Jeff\nsaid.\n\nSorry for the noise.\n\nJustin\n\n\n",
"msg_date": "Sat, 4 Jan 2020 13:34:16 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: allow disabling indexscans without disabling bitmapscans"
},
{
"msg_contents": "commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER TABLE lock levels of storage parms\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: Mon Mar 6 16:48:12 2017 +0530\n\n <varlistentry>\n <term><literal>SET ( <replaceable class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n...\n- Changing fillfactor and autovacuum storage parameters acquires a <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n+ <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for \n+ fillfactor and autovacuum storage parameters, as well as the\n+ following planner related parameters:\n+ effective_io_concurrency, parallel_workers, seq_page_cost\n+ random_page_cost, n_distinct and n_distinct_inherited.\n\neffective_io_concurrency, seq_page_cost and random_page_cost cannot be set for\na table - reloptions.c shows that they've always been RELOPT_KIND_TABLESPACE.\n\nn_distinct lock mode seems to have been changed and documented at e5550d5f ;\n21d4e2e2 claimed to do the same, but the LOCKMODE is never used.\n\nSee also:\n\ncommit 21d4e2e20656381b4652eb675af4f6d65053607f Reduce lock levels for table storage params related to planning\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: Mon Mar 6 16:04:31 2017 +0530\n\ncommit 47167b7907a802ed39b179c8780b76359468f076 Reduce lock levels for ALTER TABLE SET autovacuum storage options\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: Fri Aug 14 14:19:28 2015 +0100\n\ncommit e5550d5fec66aa74caad1f79b79826ec64898688 Reduce lock levels of some ALTER TABLE cmds\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: Sun Apr 6 11:13:43 2014 -0400\n\ncommit 2dbbda02e7e688311e161a912a0ce00cde9bb6fc Reduce lock levels of CREATE TRIGGER and some ALTER TABLE, CREATE RULE actions.\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: Wed Jul 28 05:22:24 2010 +0000\n\ncommit d86d51a95810caebcea587498068ff32fe28293e Support ALTER TABLESPACE name SET/RESET ( tablespace_options ).\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Tue Jan 5 21:54:00 2010 +0000\n\nJustin",
"msg_date": "Sun, 5 Jan 2020 20:56:23 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "doc: alter table references bogus table-specific planner parameters"
},
{
"msg_contents": "On Mon, 6 Jan 2020 at 02:56, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER\n> TABLE lock levels of storage parms\n> Author: Simon Riggs <simon@2ndQuadrant.com>\n> Date: Mon Mar 6 16:48:12 2017 +0530\n>\n> <varlistentry>\n> <term><literal>SET ( <replaceable\n> class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable\n> class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n> ...\n> - Changing fillfactor and autovacuum storage parameters acquires a\n> <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n> + <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for\n> + fillfactor and autovacuum storage parameters, as well as the\n> + following planner related parameters:\n> + effective_io_concurrency, parallel_workers, seq_page_cost\n> + random_page_cost, n_distinct and n_distinct_inherited.\n>\n> effective_io_concurrency, seq_page_cost and random_page_cost cannot be set\n> for\n> a table - reloptions.c shows that they've always been\n> RELOPT_KIND_TABLESPACE.\n>\n\nRight, but if they were settable at table-level, the lock levels shown\nwould be accurate.\n\nI agree with the sentiment of the third doc change, but your patch removes\nthe mention of n_distinct, which isn't appropriate.\n\nThe second change in your patch alters the meaning of the sentence in a way\nthat is counter to the first change. The name of these parameters is\n\"Storage Parameters\" (in various places); I might agree with describing\nthem in text as \"storage or planner parameters\", but if you do that you\ncan't then just refer to \"storage parameters\" later, because if you do it\nimplies that planner parameters operate differently to storage parameters,\nwhich they don't.\n\n\n> n_distinct lock mode seems to have been changed and documented at e5550d5f\n> ;\n> 21d4e2e2 claimed to do the same, but the LOCKMODE is never used.\n>\n\nBut neither does it need to because we don't lock tablespaces.\n\nThanks for your comments.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Mon, 6 Jan 2020 at 02:56, Justin Pryzby <pryzby@telsasoft.com> wrote:commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER TABLE lock levels of storage parms\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: Mon Mar 6 16:48:12 2017 +0530\n\n <varlistentry>\n <term><literal>SET ( <replaceable class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n...\n- Changing fillfactor and autovacuum storage parameters acquires a <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n+ <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for \n+ fillfactor and autovacuum storage parameters, as well as the\n+ following planner related parameters:\n+ effective_io_concurrency, parallel_workers, seq_page_cost\n+ random_page_cost, n_distinct and n_distinct_inherited.\n\neffective_io_concurrency, seq_page_cost and random_page_cost cannot be set for\na table - reloptions.c shows that they've always been RELOPT_KIND_TABLESPACE.Right, but if they were settable at table-level, the lock levels shown would be accurate.I agree with the sentiment of the third doc change, but your patch removes the mention of n_distinct, which isn't appropriate.The second change in your patch alters the meaning of the sentence in a way that is counter to the first change. The name of these parameters is \"Storage Parameters\" (in various places); I might agree with describing them in text as \"storage or planner parameters\", but if you do that you can't then just refer to \"storage parameters\" later, because if you do it implies that planner parameters operate differently to storage parameters, which they don't. \nn_distinct lock mode seems to have been changed and documented at e5550d5f ;\n21d4e2e2 claimed to do the same, but the LOCKMODE is never used.But neither does it need to because we don't lock tablespaces.Thanks for your comments.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 6 Jan 2020 03:48:52 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
},
{
"msg_contents": "On Mon, Jan 06, 2020 at 03:48:52AM +0000, Simon Riggs wrote:\n> On Mon, 6 Jan 2020 at 02:56, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > commit 6f3a13ff058f15d565a30c16c0c2cb14cc994e42 Enhance docs for ALTER TABLE lock levels of storage parms\n> > Author: Simon Riggs <simon@2ndQuadrant.com>\n> > Date: Mon Mar 6 16:48:12 2017 +0530\n> >\n> > <varlistentry>\n> > <term><literal>SET ( <replaceable\n> > class=\"PARAMETER\">storage_parameter</replaceable> = <replaceable\n> > class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n> > ...\n> > - Changing fillfactor and autovacuum storage parameters acquires a\n> > <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n> > + <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for\n> > + fillfactor and autovacuum storage parameters, as well as the\n> > + following planner related parameters:\n> > + effective_io_concurrency, parallel_workers, seq_page_cost\n> > + random_page_cost, n_distinct and n_distinct_inherited.\n> >\n> > effective_io_concurrency, seq_page_cost and random_page_cost cannot be set\n> > for\n> > a table - reloptions.c shows that they've always been\n> > RELOPT_KIND_TABLESPACE.\n> \n> I agree with the sentiment of the third doc change, but your patch removes\n> the mention of n_distinct, which isn't appropriate.\n\nI think it's correct to remove n_distinct there, as it's documented previously,\nsince e5550d5f. That's a per-attribute option (not storage) and can't be\nspecified there.\n\n <varlistentry>\n <term><literal>SET ( <replaceable class=\"PARAMETER\">attribute_option</replaceable> = <replaceable class=\"PARAMETER\">value</replaceable> [, ... ] )</literal></term>\n <term><literal>RESET ( <replaceable class=\"PARAMETER\">attribute_option</replaceable> [, ... ] )</literal></term>\n <listitem>\n <para>\n This form sets or resets per-attribute options. Currently, the only\n...\n+ <para>\n+ Changing per-attribute options acquires a\n+ <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\n+ </para>\n\n> The second change in your patch alters the meaning of the sentence in a way\n> that is counter to the first change. The name of these parameters is\n> \"Storage Parameters\" (in various places); I might agree with describing\n> them in text as \"storage or planner parameters\", but if you do that you\n> can't then just refer to \"storage parameters\" later, because if you do it\n> implies that planner parameters operate differently to storage parameters,\n> which they don't.\n\nThe 2nd change is:\n\n for details on the available parameters. Note that the table contents\n- will not be modified immediately by this command; depending on the\n+ will not be modified immediately by setting its storage parameters; depending on the\n parameter you might need to rewrite the table to get the desired effects.\n\nI deliberately qualified that as referring only to \"storage params\" rather than\n\"this command\", since planner params never \"modify the table contents\".\nPossibly other instances in the document (and createtable) should be changed\nfor consistency.\n\nJustin\n\n\n",
"msg_date": "Sun, 5 Jan 2020 22:13:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
},
{
"msg_contents": "On Mon, 6 Jan 2020 at 04:13, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n>\n> > I agree with the sentiment of the third doc change, but your patch\n> removes\n> > the mention of n_distinct, which isn't appropriate.\n>\n> I think it's correct to remove n_distinct there, as it's documented\n> previously,\n> since e5550d5f. That's a per-attribute option (not storage) and can't be\n> specified there.\n>\n\nOK, then agreed.\n\n> The second change in your patch alters the meaning of the sentence in a\n> way\n> > that is counter to the first change. The name of these parameters is\n> > \"Storage Parameters\" (in various places); I might agree with describing\n> > them in text as \"storage or planner parameters\", but if you do that you\n> > can't then just refer to \"storage parameters\" later, because if you do it\n> > implies that planner parameters operate differently to storage\n> parameters,\n> > which they don't.\n>\n> The 2nd change is:\n>\n> for details on the available parameters. Note that the table\n> contents\n> - will not be modified immediately by this command; depending on the\n> + will not be modified immediately by setting its storage parameters;\n> depending on the\n> parameter you might need to rewrite the table to get the desired\n> effects.\n>\n> I deliberately qualified that as referring only to \"storage params\" rather\n> than\n> \"this command\", since planner params never \"modify the table contents\".\n> Possibly other instances in the document (and createtable) should be\n> changed\n> for consistency.\n>\n\nYes, but it's not a correction, just a different preference of wording.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Mon, 6 Jan 2020 at 04:13, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I agree with the sentiment of the third doc change, but your patch removes\n> the mention of n_distinct, which isn't appropriate.\n\nI think it's correct to remove n_distinct there, as it's documented previously,\nsince e5550d5f. That's a per-attribute option (not storage) and can't be\nspecified there.OK, then agreed.\n> The second change in your patch alters the meaning of the sentence in a way\n> that is counter to the first change. The name of these parameters is\n> \"Storage Parameters\" (in various places); I might agree with describing\n> them in text as \"storage or planner parameters\", but if you do that you\n> can't then just refer to \"storage parameters\" later, because if you do it\n> implies that planner parameters operate differently to storage parameters,\n> which they don't.\n\nThe 2nd change is:\n\n for details on the available parameters. Note that the table contents\n- will not be modified immediately by this command; depending on the\n+ will not be modified immediately by setting its storage parameters; depending on the\n parameter you might need to rewrite the table to get the desired effects.\n\nI deliberately qualified that as referring only to \"storage params\" rather than\n\"this command\", since planner params never \"modify the table contents\".\nPossibly other instances in the document (and createtable) should be changed\nfor consistency.Yes, but it's not a correction, just a different preference of wording.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 6 Jan 2020 04:33:46 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
}
] |
[
{
"msg_contents": "On Sun, Feb 17, 2019 at 11:29:56AM -0500, Jeff Janes wrote:\nhttps://www.postgresql.org/message-id/CAMkU%3D1zBJNVo2DGYBgLJqpu8fyjCE_ys%2Bmsr6pOEoiwA7y5jrA%40mail.gmail.com\n> What would I find very useful is [...] if the HashAggregate node under\n> \"explain analyze\" would report memory and bucket stats; and if the Aggregate\n> node would report...anything.\n\nFind attached my WIP attempt to implement this.\n\nJeff: can you suggest what details Aggregate should show ?\n\nJustin",
"msg_date": "Fri, 3 Jan 2020 10:19:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 10:19:25AM -0600, Justin Pryzby wrote:\n> On Sun, Feb 17, 2019 at 11:29:56AM -0500, Jeff Janes wrote:\n> https://www.postgresql.org/message-id/CAMkU%3D1zBJNVo2DGYBgLJqpu8fyjCE_ys%2Bmsr6pOEoiwA7y5jrA%40mail.gmail.com\n> > What would I find very useful is [...] if the HashAggregate node under\n> > \"explain analyze\" would report memory and bucket stats; and if the Aggregate\n> > node would report...anything.\n> \n> Find attached my WIP attempt to implement this.\n> \n> Jeff: can you suggest what details Aggregate should show ?\n\nRebased on top of 10013684970453a0ddc86050bba813c611114321\nAnd added https://commitfest.postgresql.org/27/2428/\n\n\n",
"msg_date": "Sun, 26 Jan 2020 08:14:25 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 08:14:25AM -0600, Justin Pryzby wrote:\n> On Fri, Jan 03, 2020 at 10:19:25AM -0600, Justin Pryzby wrote:\n> > On Sun, Feb 17, 2019 at 11:29:56AM -0500, Jeff Janes wrote:\n> > https://www.postgresql.org/message-id/CAMkU%3D1zBJNVo2DGYBgLJqpu8fyjCE_ys%2Bmsr6pOEoiwA7y5jrA%40mail.gmail.com\n> > > What would I find very useful is [...] if the HashAggregate node under\n> > > \"explain analyze\" would report memory and bucket stats; and if the Aggregate\n> > > node would report...anything.\n> > \n> > Find attached my WIP attempt to implement this.\n> > \n> > Jeff: can you suggest what details Aggregate should show ?\n> \n> Rebased on top of 10013684970453a0ddc86050bba813c611114321\n> And added https://commitfest.postgresql.org/27/2428/\n\nAttached for real.",
"msg_date": "Sun, 26 Jan 2020 11:17:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "Hi,\n\n\nOn 2020-01-03 10:19:26 -0600, Justin Pryzby wrote:\n> On Sun, Feb 17, 2019 at 11:29:56AM -0500, Jeff Janes wrote:\n> https://www.postgresql.org/message-id/CAMkU%3D1zBJNVo2DGYBgLJqpu8fyjCE_ys%2Bmsr6pOEoiwA7y5jrA%40mail.gmail.com\n> > What would I find very useful is [...] if the HashAggregate node under\n> > \"explain analyze\" would report memory and bucket stats; and if the Aggregate\n> > node would report...anything.\n\nYea, that'd be amazing. It probably should be something every\nexecGrouping.c using node can opt into.\n\n\nJustin: As far as I can tell, you're trying to share one instrumentation\nstate between hashagg and hashjoins. I'm doubtful that's a good\nidea. The cases are different enough that that's probably just going to\nbe complicated, without actually simplifying anything.\n\n\n> Jeff: can you suggest what details Aggregate should show ?\n\nMemory usage most importantly. Probably makes sense to differentiate\nbetween the memory for the hashtable itself, and the tuples in it (since\nthey're allocated separately, and just having an overly large hashtable\ndoesn't hurt that much if it's not filled).\n\n\n> diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c\n> index 3603c58..cf0fe3c 100644\n> --- a/src/backend/executor/execGrouping.c\n> +++ b/src/backend/executor/execGrouping.c\n> @@ -203,6 +203,11 @@ BuildTupleHashTableExt(PlanState *parent,\n> \t\thashtable->hash_iv = 0;\n> \n> \thashtable->hashtab = tuplehash_create(metacxt, nbuckets, hashtable);\n> +\thashtable->hinstrument.nbuckets_original = nbuckets;\n> +\thashtable->hinstrument.nbuckets = nbuckets;\n> +\thashtable->hinstrument.space_peak = entrysize * hashtable->hashtab->size;\n\nThat's not actually an accurate accounting of memory, because for filled\nentries a lot of memory is used to store actual tuples:\n\nstatic TupleHashEntryData *\nlookup_hash_entry(AggState *aggstate)\n...\n\t/* find or create the hashtable entry using the filtered tuple */\n\tentry = LookupTupleHashEntry(perhash->hashtable, hashslot, &isnew);\n\n\tif (isnew)\n\t{\n\t\tAggStatePerGroup pergroup;\n\t\tint\t\t\ttransno;\n\n\t\tpergroup = (AggStatePerGroup)\n\t\t\tMemoryContextAlloc(perhash->hashtable->tablecxt,\n\t\t\t\t\t\t\t sizeof(AggStatePerGroupData) * aggstate->numtrans);\n\t\tentry->additional = pergroup;\n\n\nSince the memory doesn't actually shrink unless the hashtable is\ndestroyed or reset, it'd probably be sensible to compute the memory\nusage either at reset, or at the end of the query.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Feb 2020 06:53:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Mon, Feb 03, 2020 at 06:53:01AM -0800, Andres Freund wrote:\n> On 2020-01-03 10:19:26 -0600, Justin Pryzby wrote:\n> > On Sun, Feb 17, 2019 at 11:29:56AM -0500, Jeff Janes wrote:\n> > https://www.postgresql.org/message-id/CAMkU%3D1zBJNVo2DGYBgLJqpu8fyjCE_ys%2Bmsr6pOEoiwA7y5jrA%40mail.gmail.com\n> > > What would I find very useful is [...] if the HashAggregate node under\n> > > \"explain analyze\" would report memory and bucket stats; and if the Aggregate\n> > > node would report...anything.\n> \n> Yea, that'd be amazing. It probably should be something every\n> execGrouping.c using node can opt into.\n\nDo you think it should be implemented in execGrouping/TupleHashTableData (as I\ndid) ? I also did an experiment moving into the higher level nodes, but I\nguess that's not actually desirable. There's currently different output from\ntests between the implementation using execGrouping.c and the one outside it,\nso there's at least an issue with grouping sets.\n\n> > +\thashtable->hinstrument.nbuckets_original = nbuckets;\n> > +\thashtable->hinstrument.nbuckets = nbuckets;\n> > +\thashtable->hinstrument.space_peak = entrysize * hashtable->hashtab->size;\n> \n> That's not actually an accurate accounting of memory, because for filled\n> entries a lot of memory is used to store actual tuples:\n\nThanks - I think I finally understood this.\n\nI updated some existing tests to show the new output. I imagine that's a\nthrowaway commit, and I should eventually add new tests for each of these node\ntypes under explain analyze.\n\nI've been testing the various nodes like:\n\n--heapscan:\nDROP TABLE t; CREATE TABLE t (i int unique) WITH(autovacuum_enabled=off); INSERT INTO t SELECT generate_series(1,99999); SET enable_seqscan=off; SET parallel_tuple_cost=0; SET parallel_setup_cost=0; SET enable_indexonlyscan=off; explain analyze verbose SELECT * FROM t WHERE i BETWEEN 999 and 99999999;\n\n--setop:\nexplain( analyze,verbose) SELECT * FROM generate_series(1,999) EXCEPT (SELECT NULL UNION ALL SELECT * FROM generate_series(1,99999));\n Buckets: 2048 (originally 256) Memory Usage: hashtable: 48kB, tuples: 8Kb\n\n--recursive union:\nexplain analyze verbose WITH RECURSIVE t(n) AS ( SELECT 'foo' UNION SELECT n || ' bar' FROM t WHERE length(n) < 9999) SELECT n, n IS OF (text) AS is_text FROM t;\n\n--subplan\nexplain analyze verbose SELECT i FROM generate_series(1,999)i WHERE (i,i) NOT IN (SELECT 1,1 UNION ALL SELECT j,j FROM generate_series(1,99999)j);\n Buckets: 262144 (originally 131072) Memory Usage: hashtable: 6144kB, tuples: 782Kb\nexplain analyze verbose select i FROM generate_series(1,999)i WHERE(1,i) NOT in (select i,null::int from t) ;\n\n--Agg:\nexplain (analyze,verbose) SELECT A,COUNT(1) FROM generate_series(1,99999)a GROUP BY 1;\n Buckets: 262144 (originally 256) Memory Usage: hashtable: 6144kB, tuples: 782Kb\n\nexplain (analyze, verbose) select i FROM generate_series(1,999)i WHERE(1,1) not in (select a,null from (SELECT generate_series(1,99999) a)x) ;\n\nexplain analyze verbose select * from (SELECT a FROM generate_series(1,99)a)v left join lateral (select v.a, four, ten, count(*) from (SELECT b four, 2 ten, b FROM generate_series(1,999)b)x group by cube(four,ten)) s on true order by v.a,four,ten;\n\n--Grouping sets:\nexplain analyze verbose select unique1,\n count(two), count(four), count(ten),\n count(hundred), count(thousand), count(twothousand),\n count(*)\n from tenk1 group by grouping sets (unique1,twothousand,thousand,hundred,ten,four,two);\n\n-- \nJustin",
"msg_date": "Sat, 15 Feb 2020 18:02:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "Updated:\n\n . remove from explain analyze those tests which would display sort\n Memory/Disk. Oops.\n . fix issue with the first patch showing zero \"tuples\" memory for some\n grouping sets.\n . reclassify memory as \"tuples\" if it has to do with \"members\". So hashtable\n size is now redundant with nbuckets (if you know\n sizeof(TupleHashEntryData));\n\n-- \nJustin",
"msg_date": "Sun, 16 Feb 2020 11:53:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Sun, Feb 16, 2020 at 11:53:07AM -0600, Justin Pryzby wrote:\n> Updated:\n> \n> . remove from explain analyze those tests which would display sort\n> Memory/Disk. Oops.\n\n . Rebased on top of 5b618e1f48aecc66e3a9f60289491da520faae19\n . Updated to avoid sort's Disk output, for real this time.\n . And fixed a syntax error in an intermediate commit.\n\n-- \nJustin",
"msg_date": "Wed, 19 Feb 2020 14:10:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "Hi,\n\nI've started looking at this patch, because I've been long missing the\ninformation about hashagg hash table, so I'm pleased someone is working\non this. In general I agree it may be useful to add similar information\nto other nodes using a hashtable, but IMHO the hashagg bit is the most\nuseful, so maybe we should focus on it first.\n\nA couple of comments after an initial review of the patches:\n\n\n1) explain.c API\n\nThe functions in explain.c (even the static ones) follow the convention\nthat the last parameter is (ExplainState *es). I think we should stick\nto this, so the new parameters should be added before it.\n\nFor example instead of \n\n static void ExplainNode(PlanState *planstate, List *ancestors,\n const char *relationship, const char *plan_name,\n ExplainState *es, SubPlanState *subplanstate);\n\nwe should do e.g.\n\n static void ExplainNode(PlanState *planstate, SubPlanState *subplanstate,\n List *ancestors,\n const char *relationship, const char *plan_name,\n ExplainState *es);\n\nAlso, the first show_grouping_sets should be renamed to aggstate to make\nit consistent with the type change.\n\n\n2) The hash_instrumentation is a bit inconsistent with what we already\nhave. We have Instrumentation, JitInstrumentation, WorkerInstrumentation\nand HashInstrumentation, so I suggest we follow the same pattern and call\nthis HashTableInstrumentation or something like that.\n\n\n3) Almost all executor nodes that are modified to include this new\ninstrumentation struct also include TupleHashTable, and the data are\nessentially about the hash table. So my question is why not to include\nthis into TupleHashTable - that would mean we don't need to modify any\nexecutor nodes, and it'd probably simplify code in explain.c too because\nwe could simply pass the hashtable.\n\nIt'd also simplify e.g. SubPlanState where we have to add two new fields\nand make sure to update the right one.\n\n\n4) The one exception to (3) is BitmapHeapScanState, which does include\nTIDBitmap and not TupleHashTable. And then we have tbm_instrumentation\nwhich \"fakes\" the data based on the pagetable. Maybe this is a sign that\nTIDBitmap needs a slightly different struct? Also, I'm not sure why we\nactually need tbm_instrumentation()? It just copies the instrumentation\ndata from TIDBitmap into the node level, but why couldn't we just look\nat the instrumentation data in TIDBitmap directly?\n\n\n5) I think the explain for grouping sets needs a rethink. The function\nshow_grouping_set_keys was originally meant to print just the keys, but\nnow it's also printing the hash table stats. IMO we need a new function\nprinting the grouping set info - calling show_grouping_set_keys to print\nthe keys, but then also printing the extra hashtable info.\n\n\n6) subplan explain\n\nI probably agree the hashtable info should be included in the subplan,\nnot in the parent node. Otherwise it's confusing, particularly when the\nnode has multiple subplans. The one thing I find a bit strange is this:\n\nexplain (analyze, timing off, summary off, costs off)\nselect 'foo'::text in (select 'bar'::name union all select 'bar'::name);\n QUERY PLAN\n----------------------------------------------\n Result (actual rows=1 loops=1)\n SubPlan 1\n Buckets: 4 (originally 2)\n Null hashtable: Buckets: 2\n -> Append (actual rows=2 loops=1)\n -> Result (actual rows=1 loops=1)\n -> Result (actual rows=1 loops=1)\n(7 rows)\n\nThat is, there's no indication why this would use a hash table, because\nthe \"hashed subplan\" is included only in verbose mode:\n\nexplain (analyze, verbose, timing off, summary off, costs off)\nselect 'foo'::text in (select 'bar'::name union all select 'bar'::name);\n QUERY PLAN\n----------------------------------------------------------------------------------\n Result (actual rows=1 loops=1)\n Output: (hashed SubPlan 1)\n SubPlan 1\n Buckets: 4 (originally 2) Memory Usage Hash: 1kB Memory Usage Tuples: 1kB\n Null hashtable: Buckets: 2 Memory Usage Hash: 1kB Memory Usage Tuples: 0kB\n -> Append (actual rows=2 loops=1)\n -> Result (actual rows=1 loops=1)\n Output: 'bar'::name\n -> Result (actual rows=1 loops=1)\n Output: 'bar'::name\n(10 rows)\n\nNot sure if this is an issue, maybe it's fine. But it's definitely\nstrange that we only print memory info in verbose mode - IMHO it's much\nmore useful info than the number of buckets etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 22 Feb 2020 22:53:35 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Sat, Feb 22, 2020 at 10:53:35PM +0100, Tomas Vondra wrote:\n> I've started looking at this patch, because I've been long missing the\n\nThanks for looking\n\nI have brief, initial comments before I revisit the patch.\n\n> 3) Almost all executor nodes that are modified to include this new\n> instrumentation struct also include TupleHashTable, and the data are\n> essentially about the hash table. So my question is why not to include\n> this into TupleHashTable - that would mean we don't need to modify any\n> executor nodes, and it'd probably simplify code in explain.c too because\n> we could simply pass the hashtable.\n\nI considered this. From 0004 commit message:\n\n| Also, if instrumentation were implemented in simplehash.h, I think every\n| insertion or deletion would need to check ->members and ->size (which isn't\n| necessary for Agg, but is necessary in the general case, and specifically for\n| tidbitmap, since it actually DELETEs hashtable entries). Or else simplehash\n| would need a new function like UpdateTupleHashStats, which the higher level nodes\n| would need to call after filling the hashtable or before deleting tuples, which\n| seems to defeat the purpose of implementing stats at a lower layer.\n\n> 4) The one exception to (3) is BitmapHeapScanState, which does include\n> TIDBitmap and not TupleHashTable. And then we have tbm_instrumentation\n> which \"fakes\" the data based on the pagetable. Maybe this is a sign that\n> TIDBitmap needs a slightly different struct?\n\nHm, I'd say that it \"collects\" the data that's not immediately present, not\nfake it. But maybe I did it poorly. Also, maybe TIDBitmap shouldn't be\nincluded in the patch..\n\n> Also, I'm not sure why we\n> actually need tbm_instrumentation()? 
It just copies the instrumentation\n> data from TIDBitmap into the node level, but why couldn't we just look\n> at the instrumentation data in TIDBitmap directly?\n\nSee 0004 commit message:\n\n| TIDBitmap is a private structure, so add an accessor function to return its\n| instrumentation, and duplicate instrumentation struct in BitmapHeapState.\n\nAlso, I don't know what anyone else thinks, but I think 0005 is a throwaway\ncommit. It's implemented more nicely in execGrouping.c.\n\n> But it's definitely strange that we only print memory info in verbose mode -\n> IMHO it's much more useful info than the number of buckets etc.\n\nBecause I wanted to be able to put \"explain analyze\" into regression tests\n(which can show: \"Buckets: 4 (originally 2)\"). But cannot get stable output\nfor any plan which uses Sort, without hacks like explain_sq_limit and\nexplain_parallel_sort_stats.\n\nActually, I wish there were a way to control Sort nodes' Memory/Disk output,\ntoo. I'm sure most of regression tests were meant to be run as explain(analyze NO),\nbut it'd be much better if analyze YES were reasonably easy in the general\ncase that might include Sort. If someone seconds that, I will start a separate\nthread.\n\n-- \nJustin Pryzby\n\n\n",
"msg_date": "Sat, 22 Feb 2020 17:00:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Sat, Feb 22, 2020 at 10:53:35PM +0100, Tomas Vondra wrote:\n> 1) explain.c API\n> \n> The functions in explain.c (even the static ones) follow the convention\n> that the last parameter is (ExplainState *es). I think we should stick\n> to this, so the new parameters should be added before it.\n\nI found it weird to have the \"constant\" arguments at the end rather than at the\nbeginning. (Also, these don't follow that convention: show_buffer_usage\nExplainSaveGroup ExplainRestoreGroup ExplainOneQuery ExplainPrintJIT).\n\nBut done.\n\n> Also, the first show_grouping_sets should be renamed to aggstate to make\n> it consistent with the type change.\n\nThe prototype wasn't updated - fixed.\n\n> 2) The hash_instrumentation is a bit inconsistent with what we already\n> have ..HashTableInstrumentation..\n\nThanks for thinking of a better name.\n\n> 5) I think the explain for grouping sets need a rething. Teh function\n> show_grouping_set_keys was originally meant to print just the keys, but\n> now it's also printing the hash table stats. IMO we need a new function\n> printing a grouping set info - calling show_grouping_set_keys to print\n> the keys, but then also printing the extra hashtable info.\n\nI renamed it, and did the rest in a separate patch for now, since I'm only\npartially convinced it's an improvement.\n\n> 6) subplan explain\n> \n> That is, there's no indication why would this use a hash table, because\n> the \"hashed subplan\" is included only in verbose mode:\n\nNeed to think about that..\n\n> Not sure if this is an issue, maybe it's fine. But it's definitely\n> strange that we only print memory info in verbose mode - IMHO it's much\n> more useful info than the number of buckets etc.\n\nYou're right that verbose isn't right for this.\n\nI wrote patches creating new explain options to allow stable output of \"explain\nanalyze\", by avoiding Memory/Disk. 
The only other way to handle it seems to be\nto avoid \"explain analyze\" in regression tests, which is common\npractice anyway, so I did that instead.\n\nI also fixed wrong output and wrong non-text formatting for grouping sets,\ntweaked output for subplan, and broke style rules less often.\n\n-- \nJustin",
"msg_date": "Mon, 24 Feb 2020 21:35:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "Updated for new tests in 58c47ccfff20b8c125903482725c1dbfd30beade\nand rebased.",
"msg_date": "Sun, 1 Mar 2020 09:45:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-01 09:45:40 -0600, Justin Pryzby wrote:\n> +/*\n> + * Show hash bucket stats and (optionally) memory.\n> + */\n> +static void\n> +show_tuplehash_info(HashTableInstrumentation *inst, ExplainState *es)\n> +{\n> +\tlong\tspacePeakKb_tuples = (inst->space_peak_tuples + 1023) / 1024,\n> +\t\tspacePeakKb_hash = (inst->space_peak_hash + 1023) / 1024;\n\nLet's not add further uses of long. It's terrible because it has\ndifferent widths on 64bit windows and everything else. Specify the\nwidths explicitly, or use something like size_t.\n\n\n> +\tif (es->format != EXPLAIN_FORMAT_TEXT)\n> +\t{\n> +\t\tExplainPropertyInteger(\"Hash Buckets\", NULL,\n> +\t\t\t\t\t\t\t inst->nbuckets, es);\n> +\t\tExplainPropertyInteger(\"Original Hash Buckets\", NULL,\n> +\t\t\t\t\t\t\t inst->nbuckets_original, es);\n> +\t\tExplainPropertyInteger(\"Peak Memory Usage (hashtable)\", \"kB\",\n> +\t\t\t\t\t\t\t spacePeakKb_hash, es);\n> +\t\tExplainPropertyInteger(\"Peak Memory Usage (tuples)\", \"kB\",\n> +\t\t\t\t\t\t\t spacePeakKb_tuples, es);\n\nAnd then you're passing the long to ExplainPropertyInteger which accepts\na int64, making the use of long above particularly suspicious.\n\nI wonder if it would make sense to add a ExplainPropertyBytes(), that\nwould do the rounding etc automatically. 
It could then also switch units\nas appropriate.\n\n\n> +\t}\n> +\telse if (!inst->nbuckets)\n> +\t\t; /* Do nothing */\n> +\telse\n> +\t{\n> +\t\tif (inst->nbuckets_original != inst->nbuckets)\n> +\t\t{\n> +\t\t\tExplainIndentText(es);\n> +\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\"Buckets: %ld (originally %ld)\",\n> +\t\t\t\t\t\tinst->nbuckets,\n> +\t\t\t\t\t\tinst->nbuckets_original);\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tExplainIndentText(es);\n> +\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\"Buckets: %ld\",\n> +\t\t\t\t\t\tinst->nbuckets);\n> +\t\t}\n> +\n> +\t\tif (es->analyze)\n> +\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\" Memory Usage: hashtable: %ldkB, tuples: %ldkB\",\n> +\t\t\t\t\tspacePeakKb_hash, spacePeakKb_tuples);\n> +\t\tappendStringInfoChar(es->str, '\\n');\n\nI'm not sure I like the alternative output formats here. All the other\nfields are separated with a comma, but the original size is in\nparens. I'd probably just format it as \"Buckets: %lld \" and then add\n\", Original Buckets: %lld\" when differing.\n\nAlso, %ld is problematic, because it's only 32bit wide on some platforms\n(including 64bit windows), but 64bit on others. 
The easiest way to deal\nwith that is to use %lld and cast the argument to long long - yes that's\na sad workaround.\n\n\n> +/* Update instrumentation stats */\n> +void\n> +UpdateTupleHashTableStats(TupleHashTable hashtable, bool initial)\n> +{\n> +\thashtable->instrument.nbuckets = hashtable->hashtab->size;\n> +\tif (initial)\n> +\t{\n> +\t\thashtable->instrument.nbuckets_original = hashtable->hashtab->size;\n> +\t\thashtable->instrument.space_peak_hash = hashtable->hashtab->size *\n> +\t\t\tsizeof(TupleHashEntryData);\n> +\t\thashtable->instrument.space_peak_tuples = 0;\n> +\t}\n> +\telse\n> +\t{\n> +#define maxself(a,b) a=Max(a,b)\n> +\t\t/* hashtable->entrysize includes additionalsize */\n> +\t\tmaxself(hashtable->instrument.space_peak_hash,\n> +\t\t\t\thashtable->hashtab->size * sizeof(TupleHashEntryData));\n> +\t\tmaxself(hashtable->instrument.space_peak_tuples,\n> +\t\t\t\thashtable->hashtab->members * hashtable->entrysize);\n> +#undef maxself\n> +\t}\n> +}\n\nNot a fan of this macro.\n\nI'm also not sure I understand what you're trying to do here?\n\n\n> diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out\n> index f457b5b150..b173b32cab 100644\n> --- a/src/test/regress/expected/aggregates.out\n> +++ b/src/test/regress/expected/aggregates.out\n> @@ -517,10 +517,11 @@ order by 1, 2;\n> -> HashAggregate\n> Output: s2.s2, sum((s1.s1 + s2.s2))\n> Group Key: s2.s2\n> + Buckets: 4\n> -> Function Scan on pg_catalog.generate_series s2\n> Output: s2.s2\n> Function Call: generate_series(1, 3)\n> -(14 rows)\n> +(15 rows)\n\nThese tests probably won't be portable. The number of hash buckets\ncalculated will e.g. depend on the size of the contained elements. And\nthat'll e.g. depend on whether pointers are 4 or 8 bytes.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 6 Mar 2020 09:58:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
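The two portability points Andres raises above - rounding byte counts up to kilobytes the way explain.c does, and printing 64-bit counters with `%lld` plus a cast rather than `%ld` - can be sketched in isolation. This is a minimal illustration, not code from the patch; `bytes_to_kb` and `show_buckets` are hypothetical names chosen here.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Round a byte count up to whole kilobytes, the same (n + 1023) / 1024
 * arithmetic the patch uses for its spacePeakKb_* values.  Using a fixed
 * width type avoids the trap that "long" is only 32 bits on 64-bit
 * Windows (LLP64) but 64 bits elsewhere. */
static int64_t
bytes_to_kb(int64_t bytes)
{
	return (bytes + 1023) / 1024;
}

/* Format the bucket counts in the text-mode style discussed above.
 * Casting to long long and printing with %lld is portable across LP64
 * and LLP64 platforms; %ld is not. */
static void
show_buckets(char *buf, size_t buflen,
			 int64_t nbuckets, int64_t nbuckets_original)
{
	if (nbuckets_original != nbuckets)
		snprintf(buf, buflen, "Buckets: %lld (originally %lld)",
				 (long long) nbuckets, (long long) nbuckets_original);
	else
		snprintf(buf, buflen, "Buckets: %lld", (long long) nbuckets);
}
```

The parenthesized "(originally N)" form matches the hashjoin precedent that Tomas points out downthread, rather than the comma-separated alternative Andres suggests.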
{
"msg_contents": "On Fri, Mar 06, 2020 at 09:58:59AM -0800, Andres Freund wrote:\n> ...\n>\n>> +\t}\n>> +\telse if (!inst->nbuckets)\n>> +\t\t; /* Do nothing */\n>> +\telse\n>> +\t{\n>> +\t\tif (inst->nbuckets_original != inst->nbuckets)\n>> +\t\t{\n>> +\t\t\tExplainIndentText(es);\n>> +\t\t\tappendStringInfo(es->str,\n>> +\t\t\t\t\t\t\"Buckets: %ld (originally %ld)\",\n>> +\t\t\t\t\t\tinst->nbuckets,\n>> +\t\t\t\t\t\tinst->nbuckets_original);\n>> +\t\t}\n>> +\t\telse\n>> +\t\t{\n>> +\t\t\tExplainIndentText(es);\n>> +\t\t\tappendStringInfo(es->str,\n>> +\t\t\t\t\t\t\"Buckets: %ld\",\n>> +\t\t\t\t\t\tinst->nbuckets);\n>> +\t\t}\n>> +\n>> +\t\tif (es->analyze)\n>> +\t\t\tappendStringInfo(es->str,\n>> +\t\t\t\t\t\" Memory Usage: hashtable: %ldkB, tuples: %ldkB\",\n>> +\t\t\t\t\tspacePeakKb_hash, spacePeakKb_tuples);\n>> +\t\tappendStringInfoChar(es->str, '\\n');\n>\n>I'm not sure I like the alternative output formats here. All the other\n>fields are separated with a comma, but the original size is in\n>parens. I'd probably just format it as \"Buckets: %lld \" and then add\n>\", Original Buckets: %lld\" when differing.\n>\n\nFWIW this copies hashjoin precedent, which does this:\n\n appendStringInfo(es->str,\n \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n hinstrument.nbuckets,\n\t\t ...\n\nI agree it's not ideal, but maybe let's not invent new ways to format\nthe same type of info.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Mar 2020 20:43:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Fri, Mar 06, 2020 at 09:58:59AM -0800, Andres Freund wrote:\n> > +\t\t\tExplainIndentText(es);\n> > +\t\t\tappendStringInfo(es->str,\n> > +\t\t\t\t\t\t\"Buckets: %ld (originally %ld)\",\n> > +\t\t\t\t\t\tinst->nbuckets,\n> > +\t\t\t\t\t\tinst->nbuckets_original);\n> \n> I'm not sure I like the alternative output formats here. All the other\n> fields are separated with a comma, but the original size is in\n> parens. I'd probably just format it as \"Buckets: %lld \" and then add\n> \", Original Buckets: %lld\" when differing.\n\nIt's done that way for consistency with hashJoin in show_hash_info().\n\n> > +/* Update instrumentation stats */\n> > +void\n> > +UpdateTupleHashTableStats(TupleHashTable hashtable, bool initial)\n> > +{\n> > +\thashtable->instrument.nbuckets = hashtable->hashtab->size;\n> > +\tif (initial)\n> > +\t{\n> > +\t\thashtable->instrument.nbuckets_original = hashtable->hashtab->size;\n> > +\t\thashtable->instrument.space_peak_hash = hashtable->hashtab->size *\n> > +\t\t\tsizeof(TupleHashEntryData);\n> > +\t\thashtable->instrument.space_peak_tuples = 0;\n> > +\t}\n> > +\telse\n> > +\t{\n> > +#define maxself(a,b) a=Max(a,b)\n> > +\t\t/* hashtable->entrysize includes additionalsize */\n> > +\t\tmaxself(hashtable->instrument.space_peak_hash,\n> > +\t\t\t\thashtable->hashtab->size * sizeof(TupleHashEntryData));\n> > +\t\tmaxself(hashtable->instrument.space_peak_tuples,\n> > +\t\t\t\thashtable->hashtab->members * hashtable->entrysize);\n> > +#undef maxself\n> > +\t}\n> > +}\n> \n> Not a fan of this macro.\n> \n> I'm also not sure I understand what you're trying to do here?\n\nI have to call UpdateTupleHashTableStats() from the callers at deliberate\nlocations. 
If the caller fills the hashtable all at once, I can populate the\nstats immediately after that, but if it's populated incrementally, then I need to\nupdate stats right before it's destroyed or reset, otherwise we can show tuple\nsize of the hashtable since its most recent reset, rather than a larger,\nprevious incarnation.\n\n> > diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out\n> > index f457b5b150..b173b32cab 100644\n> > --- a/src/test/regress/expected/aggregates.out\n> > +++ b/src/test/regress/expected/aggregates.out\n> > @@ -517,10 +517,11 @@ order by 1, 2;\n> > -> HashAggregate\n> > Output: s2.s2, sum((s1.s1 + s2.s2))\n> > Group Key: s2.s2\n> > + Buckets: 4\n> > -> Function Scan on pg_catalog.generate_series s2\n> > Output: s2.s2\n> > Function Call: generate_series(1, 3)\n> > -(14 rows)\n> > +(15 rows)\n> \n> These tests probably won't be portable. The number of hash buckets\n> calculated will e.g. depend onthe size of the contained elements. And\n> that'll e.g. will depend on whether pointers are 4 or 8 bytes.\n\nI was aware and afraid of that. Previously, I added this output only to\n\"explain analyze\", and (as a quick, interim implementation) changed various\ntests to use analyze, and memory only shown in \"verbose\" mode. But as Tomas\npointed out, that's consistent with what's done elsewhere.\n\nSo is the solution to show stats only during explain ANALYZE ?\n\nOr ... I have a patch to create a new explain(MACHINE) option to allow more\nstable output, by avoiding Memory/Disk. That doesn't attempt to make all\n\"explain analyze\" output stable - there's other issues, I think mostly related\nto parallel workers (see 4ea03f3f, 13e8b2ee). But does allow retiring\nexplain_sq_limit and explain_parallel_sort_stats. I'm including my patch to\nshow what I mean, but I didn't enable it for hashtable \"Buckets:\". I guess in\neither case, the tests shouldn't be included.\n\n-- \nJustin",
"msg_date": "Fri, 6 Mar 2020 15:33:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
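The logic the `maxself()` macro implements in the quoted hunk - keep a running maximum of the bucket-array and tuple footprints, sampled before any reset could shrink the table - can be written without the macro. The following is a standalone sketch: the struct loosely mirrors the patch's HashTableInstrumentation, but `max_u64`, `update_hash_stats`, and the explicit width/size parameters are illustrative assumptions here, not the patch's API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Loosely mirrors the patch's HashTableInstrumentation. */
typedef struct HashStats
{
	uint64_t	nbuckets;			/* current bucket count */
	uint64_t	nbuckets_original;	/* bucket count at creation */
	uint64_t	space_peak_hash;	/* peak bytes in the bucket array */
	uint64_t	space_peak_tuples;	/* peak bytes in stored entries */
} HashStats;

static uint64_t
max_u64(uint64_t a, uint64_t b)
{
	return (a > b) ? a : b;
}

/*
 * Record the current table geometry, keeping running maxima.  As the
 * message above explains, callers must sample this right before a
 * reset/destroy; otherwise the peaks of a larger, earlier incarnation
 * of the table would never be recorded.
 */
static void
update_hash_stats(HashStats *stats, uint64_t nbuckets, uint64_t nmembers,
				  uint64_t bucket_width, uint64_t entrysize, bool initial)
{
	stats->nbuckets = nbuckets;
	if (initial)
		stats->nbuckets_original = nbuckets;
	stats->space_peak_hash = max_u64(stats->space_peak_hash,
									 nbuckets * bucket_width);
	stats->space_peak_tuples = max_u64(stats->space_peak_tuples,
									   nmembers * entrysize);
}
```

Starting both peaks at zero lets the initial call reuse the same max logic, instead of special-casing the first fill as the patch does.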
{
"msg_contents": "Hi,\n\nOn 2020-03-06 15:33:10 -0600, Justin Pryzby wrote:\n> On Fri, Mar 06, 2020 at 09:58:59AM -0800, Andres Freund wrote:\n> > > +\t\t\tExplainIndentText(es);\n> > > +\t\t\tappendStringInfo(es->str,\n> > > +\t\t\t\t\t\t\"Buckets: %ld (originally %ld)\",\n> > > +\t\t\t\t\t\tinst->nbuckets,\n> > > +\t\t\t\t\t\tinst->nbuckets_original);\n> > \n> > I'm not sure I like the alternative output formats here. All the other\n> > fields are separated with a comma, but the original size is in\n> > parens. I'd probably just format it as \"Buckets: %lld \" and then add\n> > \", Original Buckets: %lld\" when differing.\n> \n> It's done that way for consistency with hashJoin in show_hash_info().\n\nFair. I don't like it, but it's not this patch's fault.\n\n\n> > > diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out\n> > > index f457b5b150..b173b32cab 100644\n> > > --- a/src/test/regress/expected/aggregates.out\n> > > +++ b/src/test/regress/expected/aggregates.out\n> > > @@ -517,10 +517,11 @@ order by 1, 2;\n> > > -> HashAggregate\n> > > Output: s2.s2, sum((s1.s1 + s2.s2))\n> > > Group Key: s2.s2\n> > > + Buckets: 4\n> > > -> Function Scan on pg_catalog.generate_series s2\n> > > Output: s2.s2\n> > > Function Call: generate_series(1, 3)\n> > > -(14 rows)\n> > > +(15 rows)\n> > \n> > These tests probably won't be portable. The number of hash buckets\n> > calculated will e.g. depend onthe size of the contained elements. And\n> > that'll e.g. will depend on whether pointers are 4 or 8 bytes.\n> \n> I was aware and afraid of that. Previously, I added this output only to\n> \"explain analyze\", and (as an quick, interim implementation) changed various\n> tests to use analyze, and memory only shown in \"verbose\" mode. But as Tomas\n> pointed out, that's consistent with what's done elsewhere.\n> \n> So is the solution to show stats only during explain ANALYZE ?\n> \n> Or ... 
I have a patch to create a new explain(MACHINE) option to allow more\n> stable output, by avoiding Memory/Disk. That doesn't attempt to make all\n> \"explain analyze\" output stable - there's other issues, I think mostly related\n> to parallel workers (see 4ea03f3f, 13e8b2ee). But does allow retiring\n> explain_sq_limit and explain_parallel_sort_stats. I'm including my patch to\n> show what I mean, but I didn't enable it for hashtable \"Buckets:\". I guess in\n> either case, the tests shouldn't be included.\n\nYea, there's been recent discussion about an argument like that. See\ne.g.\nhttps://www.postgresql.org/message-id/18494.1580079189%40sss.pgh.pa.us\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Mar 2020 10:33:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "\n+\t\t/* hashtable->entrysize includes additionalsize */\n+\t\thashtable->instrument.space_peak_hash = Max(\n+\t\t\thashtable->instrument.space_peak_hash,\n+\t\t\thashtable->hashtab->size *\nsizeof(TupleHashEntryData));\n+\n+\t\thashtable->instrument.space_peak_tuples = Max(\n+\t\t\thashtable->instrument.space_peak_tuples,\n+\t\t\t\thashtable->hashtab->members *\nhashtable->entrysize);\n\nI think, in general, we should avoid estimates/projections for\nreporting and try to get at a real number, like\nMemoryContextMemAllocated(). (Aside: I may want to tweak exactly what\nthat function reports so that it doesn't count the unused portion of\nthe last block.)\n\nFor instance, the report is still not accurate, because it doesn't\naccount for pass-by-ref transition state values.\n\nTo use memory-context-based reporting, it's hard to make the stats a\npart of the tuple hash table, because the tuple hash table doesn't own\nthe memory contexts (they are passed in). It's also hard to make it\nper-hashtable (e.g. for grouping sets), unless we put each grouping set\nin its own memory context.\n\nAlso, is there a reason you report two different memory values\n(hashtable and tuples)? I don't object, but it seems like a little too\nmuch detail.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 13 Mar 2020 10:15:46 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-13 10:15:46 -0700, Jeff Davis wrote:\n> Also, is there a reason you report two different memory values\n> (hashtable and tuples)? I don't object, but it seems like a little too\n> much detail.\n\nSeems useful to me - the hashtable is pre-allocated based on estimates,\nwhereas the tuples are allocated \"on demand\". So seeing the difference\nwill allow to investigate the more crucial issue...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Mar 2020 10:27:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Fri, 2020-03-13 at 10:27 -0700, Andres Freund wrote:\n> On 2020-03-13 10:15:46 -0700, Jeff Davis wrote:\n> > Also, is there a reason you report two different memory values\n> > (hashtable and tuples)? I don't object, but it seems like a little\n> > too\n> > much detail.\n> \n> Seems useful to me - the hashtable is pre-allocated based on\n> estimates,\n> whereas the tuples are allocated \"on demand\". So seeing the\n> difference\n> will allow to investigate the more crucial issue...\n\nThen do we also want to report separately on the by-ref transition\nvalues? That could be useful if you are using ARRAY_AGG and the states\ngrow larger than you might expect.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 13 Mar 2020 10:53:17 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-13 10:53:17 -0700, Jeff Davis wrote:\n> On Fri, 2020-03-13 at 10:27 -0700, Andres Freund wrote:\n> > On 2020-03-13 10:15:46 -0700, Jeff Davis wrote:\n> > > Also, is there a reason you report two different memory values\n> > > (hashtable and tuples)? I don't object, but it seems like a little\n> > > too\n> > > much detail.\n> > \n> > Seems useful to me - the hashtable is pre-allocated based on\n> > estimates,\n> > whereas the tuples are allocated \"on demand\". So seeing the\n> > difference\n> > will allow to investigate the more crucial issue...\n> \n> Then do we also want to report separately on the by-ref transition\n> values? That could be useful if you are using ARRAY_AGG and the states\n> grow larger than you might expect.\n\nI can see that being valuable - I've had to debug cases with too much\nmemory being used due to aggregate transitions before. Right now it'd be\nmixed in with tuples, I believe - and we'd need a separate context for\ntracking the transition values? Due to that I'm inclined to not report\nseparately for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Mar 2020 10:57:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 10:57:43AM -0700, Andres Freund wrote:\n> On 2020-03-13 10:53:17 -0700, Jeff Davis wrote:\n> > On Fri, 2020-03-13 at 10:27 -0700, Andres Freund wrote:\n> > > On 2020-03-13 10:15:46 -0700, Jeff Davis wrote:\n> > > > Also, is there a reason you report two different memory values\n> > > > (hashtable and tuples)? I don't object, but it seems like a little too\n> > > > much detail.\n> > > \n> > > Seems useful to me - the hashtable is pre-allocated based on estimates,\n> > > whereas the tuples are allocated \"on demand\". So seeing the difference\n> > > will allow to investigate the more crucial issue...\n\n> > Then do we also want to report separately on the by-ref transition\n> > values? That could be useful if you are using ARRAY_AGG and the states\n> > grow larger than you might expect.\n> \n> I can see that being valuable - I've had to debug cases with too much\n> memory being used due to aggregate transitions before. Right now it'd be\n> mixed in with tuples, I believe - and we'd need a separate context for\n> tracking the transition values? Due to that I'm inclined to not report\n> separately for now.\n\nI think that's already in a separate context indexed by grouping set:\nsrc/include/nodes/execnodes.h: ExprContext **aggcontexts; /* econtexts for long-lived data (per GS) */\n\nBut the hashtable and tuples are combined. 
I put them in separate contexts and\nrebased on top of 1f39bce021540fde00990af55b4432c55ef4b3c7.\n\nBut didn't do anything yet with the aggcontexts.\n\nNow I can get output like:\n\n|template1=# explain analyze SELECT i,COUNT(1) FROM t GROUP BY 1;\n| HashAggregate (cost=4769.99..6769.98 rows=199999 width=12) (actual time=266.465..27020.333 rows=199999 loops=1)\n| Group Key: i\n| Buckets: 524288 (originally 262144)\n| Peak Memory Usage: hashtable: 12297kB, tuples: 24576kB\n| Disk Usage: 192 kB\n| HashAgg Batches: 3874\n| -> Seq Scan on t (cost=0.00..3769.99 rows=199999 width=4) (actual time=13.043..64.017 rows=199999 loops=1)\n\nIt looks somewhat funny next to hash join, which puts everything on one line:\n\n|template1=# explain analyze SELECT i,COUNT(1) FROM t a JOIN t b USING(i) GROUP BY 1;\n| HashAggregate (cost=13789.95..15789.94 rows=199999 width=12) (actual time=657.733..27129.873 rows=199999 loops=1)\n| Group Key: a.i\n| Buckets: 524288 (originally 262144)\n| Peak Memory Usage: hashtable: 12297kB, tuples: 24576kB\n| Disk Usage: 192 kB\n| HashAgg Batches: 3874\n| -> Hash Join (cost=6269.98..12789.95 rows=199999 width=4) (actual time=135.932..426.071 rows=199999 loops=1)\n| Hash Cond: (a.i = b.i)\n| -> Seq Scan on t a (cost=0.00..3769.99 rows=199999 width=4) (actual time=3.265..47.598 rows=199999 loops=1)\n| -> Hash (cost=3769.99..3769.99 rows=199999 width=4) (actual time=131.881..131.882 rows=199999 loops=1)\n| Buckets: 262144 Batches: 1 Memory Usage: 9080kB\n| -> Seq Scan on t b (cost=0.00..3769.99 rows=199999 width=4) (actual time=3.273..40.163 rows=199999 loops=1)\n\n-- \nJustin",
"msg_date": "Fri, 20 Mar 2020 03:44:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 03:44:42AM -0500, Justin Pryzby wrote:\n> On Fri, Mar 13, 2020 at 10:57:43AM -0700, Andres Freund wrote:\n> > On 2020-03-13 10:53:17 -0700, Jeff Davis wrote:\n> > > On Fri, 2020-03-13 at 10:27 -0700, Andres Freund wrote:\n> > > > On 2020-03-13 10:15:46 -0700, Jeff Davis wrote:\n> > > > > Also, is there a reason you report two different memory values\n> > > > > (hashtable and tuples)? I don't object, but it seems like a little too\n> > > > > much detail.\n> > > > \n> > > > Seems useful to me - the hashtable is pre-allocated based on estimates,\n> > > > whereas the tuples are allocated \"on demand\". So seeing the difference\n> > > > will allow to investigate the more crucial issue...\n> \n> > > Then do we also want to report separately on the by-ref transition\n> > > values? That could be useful if you are using ARRAY_AGG and the states\n> > > grow larger than you might expect.\n> > \n> > I can see that being valuable - I've had to debug cases with too much\n> > memory being used due to aggregate transitions before. Right now it'd be\n> > mixed in with tuples, I believe - and we'd need a separate context for\n> > tracking the transition values? Due to that I'm inclined to not report\n> > separately for now.\n> \n> I think that's already in a separate context indexed by grouping set:\n> src/include/nodes/execnodes.h: ExprContext **aggcontexts; /* econtexts for long-lived data (per GS) */\n> \n> But the hashtable and tuples are combined. 
I put them in separate contexts and\nrebased on top of 1f39bce021540fde00990af55b4432c55ef4b3c7.\n\nI forgot to say that I'd also switched to using memory context based\naccounting.\n\n90% of the initial goal of this patch was handled by instrumentation added by\n\"hash spill to disk\" (1f39bce02), but this *also* adds:\n\n - separate accounting for tuples vs hashtable;\n - number of hash buckets;\n - handles other agg nodes, and bitmap scan;\n\nShould I continue pursuing this patch?\nDoes it still serve any significant purpose?\n\ntemplate1=# explain (analyze, costs off, summary off) SELECT a, COUNT(1) FROM generate_series(1,999999) a GROUP BY 1 ;\n HashAggregate (actual time=1070.713..2287.011 rows=999999 loops=1)\n Group Key: a\n Buckets: 32768 (originally 512)\n Peak Memory Usage: hashtable: 777kB, tuples: 4096kB\n Disk Usage: 22888 kB\n HashAgg Batches: 84\n -> Function Scan on generate_series a (actual time=238.270..519.832 rows=999999 loops=1)\n\ntemplate1=# explain analyze SELECT * FROM t WHERE a BETWEEN 999 AND 99999;\n Bitmap Heap Scan on t (cost=4213.01..8066.67 rows=197911 width=4) (actual time=26.803..84.693 rows=198002 loops=1)\n Recheck Cond: ((a >= 999) AND (a <= 99999))\n Heap Blocks: exact=878\n Buckets: 1024 (originally 256)\n Peak Memory Usage: hashtable: 48kB, tuples: 4kB\n\ntemplate1=# explain analyze SELECT generate_series(1,99999) EXCEPT SELECT generate_series(1,999);\n HashSetOp Except (cost=0.00..2272.49 rows=99999 width=8) (actual time=135.986..174.656 rows=99000 loops=1)\n Buckets: 262144 (originally 131072)\n Peak Memory Usage: hashtable: 6177kB, tuples: 8192kB\n\n@cfbot: rebased",
"msg_date": "Wed, 8 Apr 2020 16:00:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "On Wed, 2020-04-08 at 16:00 -0500, Justin Pryzby wrote:\n> 90% of the initial goal of this patch was handled by instrumentation\n> added by\n> \"hash spill to disk\" (1f39bce02), but this *also* adds:\n> \n> - separate accounting for tuples vs hashtable;\n> - number of hash buckets;\n> - handles other agg nodes, and bitmap scan;\n> \n> Should I continue pursuing this patch?\n> Does it still serve any significant purpose?\n\nThose things would be useful for me trying to tune the performance and\ncost model. I think we need to put some of these things under \"VERBOSE\"\nor maybe invent a new explain option to provide this level of detail,\nthough.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 08 Apr 2020 15:24:39 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "> On 9 Apr 2020, at 00:24, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Wed, 2020-04-08 at 16:00 -0500, Justin Pryzby wrote:\n>> 90% of the initial goal of this patch was handled by instrumentation\n>> added by\n>> \"hash spill to disk\" (1f39bce02), but this *also* adds:\n>> \n>> - separate accounting for tuples vs hashtable;\n>> - number of hash buckets;\n>> - handles other agg nodes, and bitmap scan;\n>> \n>> Should I continue pursuing this patch?\n>> Does it still serve any significant purpose?\n> \n> Those things would be useful for me trying to tune the performance and\n> cost model. I think we need to put some of these things under \"VERBOSE\"\n> or maybe invent a new explain option to provide this level of detail,\n> though.\n\nThis thread has stalled and the patch has been Waiting on Author since March,\nand skimming the thread there seems to be questions raised over the value\nproposition. Is there progress happening behind the scenes or should we close\nthis entry for now, to re-open in case there is renewed activity/interest?\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 12 Jul 2020 21:52:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
},
{
"msg_contents": "> On 12 Jul 2020, at 21:52, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> This thread has stalled and the patch has been Waiting on Author since March,\n> and skimming the thread there seems to be questions raised over the value\n> proposition. Is there progress happening behind the scenes or should we close\n> this entry for now, to re-open in case there is renewed activity/interest?\n\nWith not too many days of the commitfest left, I'm closing this in 2020-07.\nPlease feel free to add a new entry if there is renewed interest in this patch.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 23 Jul 2020 13:35:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: explain HashAggregate to report bucket and memory stats"
}
] |
[
{
"msg_contents": "This is a new bug in PG12. When you have a database with an OID above \nINT32_MAX (signed), then pg_basebackup fails thus:\n\npg_basebackup: error: could not get write-ahead log end position from \nserver: ERROR: value \"3000000000\" is out of range for type integer\n\nThe cause appears to be commit 6b9e875f7286d8535bff7955e5aa3602e188e436.\n\nA possible fix is attached. An alternative to using \nOidInputFunctionCall() would be exporting something like oidin_subr().\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 6 Jan 2020 09:07:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On Mon, Jan 06, 2020 at 09:07:26AM +0100, Peter Eisentraut wrote:\n> This is a new bug in PG12. When you have a database with an OID above\n> INT32_MAX (signed), then pg_basebackup fails thus:\n\nYep. Introduced by 6b9e875.\n\n> pg_basebackup: error: could not get write-ahead log end position from\n> server: ERROR: value \"3000000000\" is out of range for type integer\n> \n> The cause appears to be commit 6b9e875f7286d8535bff7955e5aa3602e188e436.\n> \n> A possible fix is attached. An alternative to using OidInputFunctionCall()\n> would be exporting something like oidin_subr().\n\nI think that you would save yourself from a lot of trouble if you do\nthe latter with a subroutine. Not quite like that based on the\nprocess context where the call is done, but remember 21f428eb..\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 17:20:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 9:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jan 06, 2020 at 09:07:26AM +0100, Peter Eisentraut wrote:\n> > This is a new bug in PG12. When you have a database with an OID above\n> > INT32_MAX (signed), then pg_basebackup fails thus:\n>\n> Yep. Introduced by 6b9e875.\n\nIndeed.\n\n> > pg_basebackup: error: could not get write-ahead log end position from\n> > server: ERROR: value \"3000000000\" is out of range for type integer\n> >\n> > The cause appears to be commit 6b9e875f7286d8535bff7955e5aa3602e188e436.\n> >\n> > A possible fix is attached. An alternative to using OidInputFunctionCall()\n> > would be exporting something like oidin_subr().\n>\n> I think that you would save yourself from a lot of trouble if you do\n> the latter with a subroutine. Not quite like that based on the\n> process context where the call is done, but remember 21f428eb..\n\n+0.5 to avoid calling OidInputFunctionCall()\n\n\n",
"msg_date": "Mon, 6 Jan 2020 09:31:28 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 9:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Jan 6, 2020 at 9:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Jan 06, 2020 at 09:07:26AM +0100, Peter Eisentraut wrote:\n> > > This is a new bug in PG12. When you have a database with an OID above\n> > > INT32_MAX (signed), then pg_basebackup fails thus:\n> >\n> > Yep. Introduced by 6b9e875.\n>\n> Indeed.\n\nYeah, clearly :/\n\n\n> > > pg_basebackup: error: could not get write-ahead log end position from\n> > > server: ERROR: value \"3000000000\" is out of range for type integer\n> > >\n> > > The cause appears to be commit 6b9e875f7286d8535bff7955e5aa3602e188e436.\n> > >\n> > > A possible fix is attached. An alternative to using OidInputFunctionCall()\n> > > would be exporting something like oidin_subr().\n> >\n> > I think that you would save yourself from a lot of trouble if you do\n> > the latter with a subroutine. Not quite like that based on the\n> > process context where the call is done, but remember 21f428eb..\n>\n> +0.5 to avoid calling OidInputFunctionCall()\n\nOr just directly using atol() instead of atoi()? Well maybe not\ndirectly but in a small wrapper that verifies it's not bigger than an\nunsigned?\n\nUnlike in cases where we use oidin etc, we are dealing with data that\nis \"mostly trusted\" here, aren't we? Meaning we could call atol() on\nit, and throw an error if it overflows, and be done with it?\nSubdirectories in the data directory aren't exactly \"untrusted enduser\ndata\"...\n\nI agree with the feelings that calling OidInputFunctionCall() from\nthis context leaves me slightly irked.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 6 Jan 2020 21:00:31 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On 2020-01-06 21:00, Magnus Hagander wrote:\n>> +0.5 to avoid calling OidInputFunctionCall()\n> \n> Or just directly using atol() instead of atoi()? Well maybe not\n> directly but in a small wrapper that verifies it's not bigger than an\n> unsigned?\n> \n> Unlike in cases where we use oidin etc, we are dealing with data that\n> is \"mostly trusted\" here, aren't we? Meaning we could call atol() on\n> it, and throw an error if it overflows, and be done with it?\n> Subdirectories in the data directory aren't exactly \"untrusted enduser\n> data\"...\n\nYeah, it looks like we are using strtoul() without additional error \nchecking in similar situations, so here is a patch doing it like that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 11 Jan 2020 08:21:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 08:21:11AM +0100, Peter Eisentraut wrote:\n> On 2020-01-06 21:00, Magnus Hagander wrote:\n> > > +0.5 to avoid calling OidInputFunctionCall()\n> >\n> > Or just directly using atol() instead of atoi()? Well maybe not\n> > directly but in a small wrapper that verifies it's not bigger than an\n> > unsigned?\n> >\n> > Unlike in cases where we use oidin etc, we are dealing with data that\n> > is \"mostly trusted\" here, aren't we? Meaning we could call atol() on\n> > it, and throw an error if it overflows, and be done with it?\n> > Subdirectories in the data directory aren't exactly \"untrusted enduser\n> > data\"...\n>\n> Yeah, it looks like we are using strtoul() without additional error checking\n> in similar situations, so here is a patch doing it like that.\n\n> -\t\t\t\t\t\t\t\ttrue, isDbDir ? pg_atoi(lastDir + 1, sizeof(Oid), 0) : InvalidOid);\n> +\t\t\t\t\t\t\t\ttrue, isDbDir ? (Oid) strtoul(lastDir + 1, NULL, 10) : InvalidOid);\n\nLooking at some other code, I just discovered the atooid() macro that already\ndoes the same, maybe it'd be better for consistency to use that instead?\n\n\n",
"msg_date": "Sat, 11 Jan 2020 17:44:37 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 5:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Jan 11, 2020 at 08:21:11AM +0100, Peter Eisentraut wrote:\n> > On 2020-01-06 21:00, Magnus Hagander wrote:\n> > > > +0.5 to avoid calling OidInputFunctionCall()\n> > >\n> > > Or just directly using atol() instead of atoi()? Well maybe not\n> > > directly but in a small wrapper that verifies it's not bigger than an\n> > > unsigned?\n> > >\n> > > Unlike in cases where we use oidin etc, we are dealing with data that\n> > > is \"mostly trusted\" here, aren't we? Meaning we could call atol() on\n> > > it, and throw an error if it overflows, and be done with it?\n> > > Subdirectories in the data directory aren't exactly \"untrusted enduser\n> > > data\"...\n> >\n> > Yeah, it looks like we are using strtoul() without additional error checking\n> > in similar situations, so here is a patch doing it like that.\n>\n> > - true, isDbDir ? pg_atoi(lastDir + 1, sizeof(Oid), 0) : InvalidOid);\n> > + true, isDbDir ? (Oid) strtoul(lastDir + 1, NULL, 10) : InvalidOid);\n>\n> Looking at some other code, I just discovered the atooid() macro that already\n> does the same, maybe it'd be better for consistency to use that instead?\n\n+1. While it does the same thing, consistency is good! :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 11 Jan 2020 17:47:41 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On 2020-01-11 17:47, Magnus Hagander wrote:\n> On Sat, Jan 11, 2020 at 5:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Sat, Jan 11, 2020 at 08:21:11AM +0100, Peter Eisentraut wrote:\n>>> On 2020-01-06 21:00, Magnus Hagander wrote:\n>>>>> +0.5 to avoid calling OidInputFunctionCall()\n>>>>\n>>>> Or just directly using atol() instead of atoi()? Well maybe not\n>>>> directly but in a small wrapper that verifies it's not bigger than an\n>>>> unsigned?\n>>>>\n>>>> Unlike in cases where we use oidin etc, we are dealing with data that\n>>>> is \"mostly trusted\" here, aren't we? Meaning we could call atol() on\n>>>> it, and throw an error if it overflows, and be done with it?\n>>>> Subdirectories in the data directory aren't exactly \"untrusted enduser\n>>>> data\"...\n>>>\n>>> Yeah, it looks like we are using strtoul() without additional error checking\n>>> in similar situations, so here is a patch doing it like that.\n>>\n>>> - true, isDbDir ? pg_atoi(lastDir + 1, sizeof(Oid), 0) : InvalidOid);\n>>> + true, isDbDir ? (Oid) strtoul(lastDir + 1, NULL, 10) : InvalidOid);\n>>\n>> Looking at some other code, I just discovered the atooid() macro that already\n>> does the same, maybe it'd be better for consistency to use that instead?\n> \n> +1. Whie it does the same thing, consistency is good! :)\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Jan 2020 13:49:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 1:49 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-01-11 17:47, Magnus Hagander wrote:\n> > On Sat, Jan 11, 2020 at 5:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >>\n> >> On Sat, Jan 11, 2020 at 08:21:11AM +0100, Peter Eisentraut wrote:\n> >>> On 2020-01-06 21:00, Magnus Hagander wrote:\n> >>>>> +0.5 to avoid calling OidInputFunctionCall()\n> >>>>\n> >>>> Or just directly using atol() instead of atoi()? Well maybe not\n> >>>> directly but in a small wrapper that verifies it's not bigger than an\n> >>>> unsigned?\n> >>>>\n> >>>> Unlike in cases where we use oidin etc, we are dealing with data that\n> >>>> is \"mostly trusted\" here, aren't we? Meaning we could call atol() on\n> >>>> it, and throw an error if it overflows, and be done with it?\n> >>>> Subdirectories in the data directory aren't exactly \"untrusted enduser\n> >>>> data\"...\n> >>>\n> >>> Yeah, it looks like we are using strtoul() without additional error checking\n> >>> in similar situations, so here is a patch doing it like that.\n> >>\n> >>> - true, isDbDir ? pg_atoi(lastDir + 1, sizeof(Oid), 0) : InvalidOid);\n> >>> + true, isDbDir ? (Oid) strtoul(lastDir + 1, NULL, 10) : InvalidOid);\n> >>\n> >> Looking at some other code, I just discovered the atooid() macro that already\n> >> does the same, maybe it'd be better for consistency to use that instead?\n> >\n> > +1. Whie it does the same thing, consistency is good! :)\n>\n> committed\n\nThanks!\n\n\n",
"msg_date": "Mon, 13 Jan 2020 14:07:06 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup fails on databases with high OIDs"
}
] |
[
{
"msg_contents": "ALTER TABLE ... SET STORAGE does not propagate to indexes, even though \nindexes created afterwards get the new storage setting. So depending on \nthe order of commands, you can get inconsistent storage settings between \nindexes and tables. For example:\n\ncreate table foo1 (a text);\nalter table foo1 alter column a set storage external;\ncreate index foo1i on foo1(a);\ninsert into foo1 values(repeat('a', 10000));\nERROR: index row requires 10016 bytes, maximum size is 8191\n\n(Storage \"external\" disables compression.)\n\nbut\n\ncreate table foo1 (a text);\ncreate index foo1i on foo1(a);\nalter table foo1 alter column a set storage external;\ninsert into foo1 values(repeat('a', 10000));\n-- no error\n\nAlso, this second state cannot be reproduced by pg_dump, so a possible \neffect is that such a database would fail to restore.\n\nAttached is a patch that attempts to fix this by propagating the storage \nchange to existing indexes. This triggers a few regression test \nfailures (going from no error to error), which I attempted to fix up, \nbut I haven't analyzed what the tests were trying to do, so it might \nneed another look.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 6 Jan 2020 13:32:53 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On 2020-01-06 13:32, Peter Eisentraut wrote:\n> Attached is a patch that attempts to fix this by propagating the storage\n> change to existing indexes. This triggers a few regression test\n> failures (going from no error to error), which I attempted to fix up,\n> but I haven't analyzed what the tests were trying to do, so it might\n> need another look.\n\nAttached is a more polished patch, with tests.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 24 Feb 2020 08:28:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "Hello Peter,\n\nOn Mon, Feb 24, 2020 at 12:59 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-01-06 13:32, Peter Eisentraut wrote:\n> > Attached is a patch that attempts to fix this by propagating the storage\n> > change to existing indexes. This triggers a few regression test\n> > failures (going from no error to error), which I attempted to fix up,\n> > but I haven't analyzed what the tests were trying to do, so it might\n> > need another look.\n>\n> Attached is a more polished patch, with tests.\n\nI've reproduced the issue on head. And, the patch seems to solve the\nproblem. The patch looks good to me. But, I've a small doubt regarding\nthe changes in test_decoding regression file.\n\ndiff --git a/contrib/test_decoding/sql/toast.sql\nb/contrib/test_decoding/sql/toast.sql\n..\n-INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 10000));\n-SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;\n+INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 269));\n+SELECT pg_column_size(toasted_key) > 2^11 FROM toasted_several;\n\nThis actually tests whether we can decode \"old\" tuples bigger than the\nmax heap tuple size correctly which is around 8KB. But, the above\nchanges will make the tuple size around 3KB. So, it'll not be able to\ntest that particular scenario.Thoughts?\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Feb 2020 16:51:13 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On 2020-02-24 12:21, Kuntal Ghosh wrote:\n> I've reproduced the issue on head. And, the patch seems to solve the\n> problem. The patch looks good to me. But, I've a small doubt regarding\n> the changes in test_decoding regression file.\n> \n> diff --git a/contrib/test_decoding/sql/toast.sql\n> b/contrib/test_decoding/sql/toast.sql\n> ..\n> -INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 10000));\n> -SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;\n> +INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 269));\n> +SELECT pg_column_size(toasted_key) > 2^11 FROM toasted_several;\n> \n> This actually tests whether we can decode \"old\" tuples bigger than the\n> max heap tuple size correctly which is around 8KB. But, the above\n> changes will make the tuple size around 3KB. So, it'll not be able to\n> test that particular scenario.Thoughts?\n\nOK, this is interesting. The details of this are somewhat unfamiliar to \nme, but it appears that due to TOAST_INDEX_HACK in indextuple.c, an \nindex tuple cannot be larger than 8191 bytes when untoasted (but not \nuncompressed).\n\nWhat the test case above is testing is a situation where the heap tuple \nis stored toasted uncompressed (storage external) but the index tuple is \nnot (probably compressed inline). This is exactly the situation that I \nwas contending should not be possible, because it cannot be dumped or \nrestored.\n\nAn alternative would be that we make this situation fully supported. \nThen we'd probably need at least ALTER INDEX ... ALTER COLUMN ... SET \nSTORAGE, and some pg_dump support.\n\nThoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Feb 2020 08:39:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On Tue, Feb 25, 2020 at 1:09 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-02-24 12:21, Kuntal Ghosh wrote:\n> > I've reproduced the issue on head. And, the patch seems to solve the\n> > problem. The patch looks good to me. But, I've a small doubt regarding\n> > the changes in test_decoding regression file.\n> >\n> > diff --git a/contrib/test_decoding/sql/toast.sql\n> > b/contrib/test_decoding/sql/toast.sql\n> > ..\n> > -INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 10000));\n> > -SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;\n> > +INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 269));\n> > +SELECT pg_column_size(toasted_key) > 2^11 FROM toasted_several;\n> >\n> > This actually tests whether we can decode \"old\" tuples bigger than the\n> > max heap tuple size correctly which is around 8KB. But, the above\n> > changes will make the tuple size around 3KB. So, it'll not be able to\n> > test that particular scenario.Thoughts?\n>\n> OK, this is interesting. The details of this are somewhat unfamiliar to\n> me, but it appears that due to TOAST_INDEX_HACK in indextuple.c, an\n> index tuple cannot be larger than 8191 bytes when untoasted (but not\n> uncompressed).\n>\n> What the test case above is testing is a situation where the heap tuple\n> is stored toasted uncompressed (storage external) but the index tuple is\n> not (probably compressed inline). This is exactly the situation that I\n> was contending should not be possible, because it cannot be dumped or\n> restored.\n>\nYeah. If we only commit this patch to fix the issue, we're going to\nput some restriction for the above situation, i.e., the index for an\nexternal attribute has to be stored as an external (i.e. uncompressed)\nvalue. So, a lot of existing workload might start failing after an\nupgrade.
I think there should be an option to store the index of an\nexternal attribute as a compressed inline value.\n\n> An alternative would be that we make this situation fully supported.\n> Then we'd probably need at least ALTER INDEX ... ALTER COLUMN ... SET\n> STORAGE, and some pg_dump support.\n>\n> Thoughts?\nYes. We need the support for this syntax along with the bug fix patch.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Feb 2020 18:33:47 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On 2020-Feb-25, Peter Eisentraut wrote:\n\n> An alternative would be that we make this situation fully supported. Then\n> we'd probably need at least ALTER INDEX ... ALTER COLUMN ... SET STORAGE,\n> and some pg_dump support.\n\nI think this is a more promising direction.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Mar 2020 13:17:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "> ALTER TABLE ... SET STORAGE does not propagate to indexes, even though\n> indexes created afterwards get the new storage setting. So depending on\n> the order of commands, you can get inconsistent storage settings between\n> indexes and tables.\n\nI've absolutely noticed this behavior, I just thought it was intentional\nfor some reason.\n\nHaving this behavior change as stated above would be very welcome in my\nopinion. It's always something i've had to manually think about in my\nmigration scripts, so it would be welcome from my view.\n\n-Adam\n",
"msg_date": "Mon, 30 Mar 2020 12:23:44 -0400",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On 2020-03-30 18:17, Alvaro Herrera wrote:\n> On 2020-Feb-25, Peter Eisentraut wrote:\n>> An alternative would be that we make this situation fully supported. Then\n>> we'd probably need at least ALTER INDEX ... ALTER COLUMN ... SET STORAGE,\n>> and some pg_dump support.\n> \n> I think this is a more promising direction.\n\nI have started implementing the ALTER INDEX command, which by itself \nisn't very hard, but it requires significant new infrastructure in \npg_dump, and probably also a bit of work in psql, and that's all a bit \ntoo much right now.\n\nAn alternative for the short term is the attached patch. It's the same \nas before, but I have hacked up the test_decoding test to achieve the \neffect of ALTER INDEX with direct catalog manipulation. This preserves \nthe spirit of the test case, but allows us to fix everything else about \nthis situation.\n\nOne thing to remember is that the current situation is broken. While \nyou can set index columns to have different storage than the \ncorresponding table columns, pg_dump does not preserve that, because it \ndumps indexes after ALTER TABLE commands. So at the moment, having \nthese two things different isn't really supported. The proposed patch \njust makes this behave consistently and allows adding an ALTER INDEX \ncommand later on if desired.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 9 Apr 2020 15:07:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "I'm surprised that this hasn't applied yet, because:\n\nOn 2020-Apr-09, Peter Eisentraut wrote:\n\n> One thing to remember is that the current situation is broken. While you\n> can set index columns to have different storage than the corresponding table\n> columns, pg_dump does not preserve that, because it dumps indexes after\n> ALTER TABLE commands. So at the moment, having these two things different\n> isn't really supported.\n\nSo I have to ask -- are you planning to get this patch pushed and\nbackpatched?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Apr 2020 19:56:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On 2020-04-22 01:56, Alvaro Herrera wrote:\n> I'm surprised that this hasn't applied yet, because:\n> \n> On 2020-Apr-09, Peter Eisentraut wrote:\n> \n>> One thing to remember is that the current situation is broken. While you\n>> can set index columns to have different storage than the corresponding table\n>> columns, pg_dump does not preserve that, because it dumps indexes after\n>> ALTER TABLE commands. So at the moment, having these two things different\n>> isn't really supported.\n> \n> So I have to ask -- are you planning to get this patch pushed and\n> backpatched?\n\nI think I should, but I figured I want to give some extra time for \npeople to consider the horror that I created in the test_decoding tests.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Apr 2020 16:26:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On 2020-04-22 16:26, Peter Eisentraut wrote:\n> On 2020-04-22 01:56, Alvaro Herrera wrote:\n>> I'm surprised that this hasn't applied yet, because:\n>>\n>> On 2020-Apr-09, Peter Eisentraut wrote:\n>>\n>>> One thing to remember is that the current situation is broken. While you\n>>> can set index columns to have different storage than the corresponding table\n>>> columns, pg_dump does not preserve that, because it dumps indexes after\n>>> ALTER TABLE commands. So at the moment, having these two things different\n>>> isn't really supported.\n>>\n>> So I have to ask -- are you planning to get this patch pushed and\n>> backpatched?\n> \n> I think I should, but I figured I want to give some extra time for\n> people to consider the horror that I created in the test_decoding tests.\n\nOK then, if there are no last-minute objects, I'll commit this for the \nupcoming minor releases.\n\nThis is the patch summary again:\n\nDate: Thu, 9 Apr 2020 14:10:01 +0200\nSubject: [PATCH v3] Propagate ALTER TABLE ... SET STORAGE to indexes\n\nWhen creating a new index, the attstorage setting of the table column\nis copied to regular (non-expression) index columns. But a later\nALTER TABLE ... SET STORAGE is not propagated to indexes, thus\ncreating an inconsistent and undumpable state.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 May 2020 16:37:55 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
},
{
"msg_contents": "On 2020-05-06 16:37, Peter Eisentraut wrote:\n> On 2020-04-22 16:26, Peter Eisentraut wrote:\n>> On 2020-04-22 01:56, Alvaro Herrera wrote:\n>>> I'm surprised that this hasn't applied yet, because:\n>>>\n>>> On 2020-Apr-09, Peter Eisentraut wrote:\n>>>\n>>>> One thing to remember is that the current situation is broken. While you\n>>>> can set index columns to have different storage than the corresponding table\n>>>> columns, pg_dump does not preserve that, because it dumps indexes after\n>>>> ALTER TABLE commands. So at the moment, having these two things different\n>>>> isn't really supported.\n>>>\n>>> So I have to ask -- are you planning to get this patch pushed and\n>>> backpatched?\n>>\n>> I think I should, but I figured I want to give some extra time for\n>> people to consider the horror that I created in the test_decoding tests.\n> \n> OK then, if there are no last-minute objects, I'll commit this for the\n> upcoming minor releases.\n\nI have committed this and backpatched to PG12 and PG11. Before that, \nthe catalog manipulation code is factored quite differently and it would \nbe more complicated to backpatch and I didn't find that worth it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 8 May 2020 10:17:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE ... SET STORAGE does not propagate to indexes"
}
] |
[
{
"msg_contents": "Greetings -hackers,\n\nGoogle Summer of Code is back for 2020! They have a similar set of\nrequirements, expectations, and timeline as last year.\n\nNow is the time to be working to get together a set of projects we'd\nlike to have GSoC students work on over the summer. Similar to last\nyear, we need to have a good set of projects for students to choose from\nin advance of the deadline for mentoring organizations.\n\nThe deadline for Mentoring organizations to apply is: February 5.\n\nThe list of accepted organizations will be published around February 20.\n\nUnsurprisingly, we'll need to have an Ideas page again, so I've gone\nahead and created one (copying last year's):\n\nhttps://wiki.postgresql.org/wiki/GSoC_2020\n\nGoogle discusses what makes a good \"Ideas\" list here:\n\nhttps://google.github.io/gsocguides/mentor/defining-a-project-ideas-list.html\n\nAll the entries are marked with '2019' to indicate they were pulled from\nlast year. If the project from last year is still relevant, please\nupdate it to be '2020' and make sure to update all of the information\n(in particular, make sure to list yourself as a mentor and remove the\nother mentors, as appropriate).\n\nNew entries are certainly welcome and encouraged, just be sure to note\nthem as '2020' when you add it.\n\nProjects from last year which were worked on but have significant\nfollow-on work to be completed are absolutely welcome as well- simply\nupdate the description appropriately and mark it as being for '2020'.\n\nWhen we get closer to actually submitting our application, I'll clean\nout the '2019' entries that didn't get any updates.
Also- if there are\nany projects that are no longer appropriate (maybe they were completed,\nfor example and no longer need work), please feel free to remove them.\nI took a whack at that myself but it's entirely possible I missed some\n(and if I removed any that shouldn't have been- feel free to add them\nback by copying from the 2019 page).\n\nAs a reminder, each idea on the page should be in the format that the\nother entries are in and should include:\n\n- Project title/one-line description\n- Brief, 2-5 sentence, description of the project (remember, these are\n 12-week projects)\n- Description of programming skills needed and estimation of the\n difficulty level\n- List of potential mentors\n- Expected Outcomes\n\nAs with last year, please consider PostgreSQL to be an \"Umbrella\"\nproject and that anything which would be considered \"PostgreSQL Family\"\nper the News/Announce policy [2] is likely to be acceptable as a\nPostgreSQL GSoC project.\n\nIn other words, if you're a contributor or developer on WAL-G, barman,\npgBackRest, the PostgreSQL website (pgweb), the PgEU/PgUS website code\n(pgeu-system), pgAdmin4, pgbouncer, pldebugger, the PG RPMs (pgrpms),\nthe JDBC driver, the ODBC driver, or any of the many other PG Family\nprojects, please feel free to add a project for consideration! If we\nget quite a few, we can organize the page further based on which\nproject or maybe what skills are needed or similar.\n\nLet's have another great year of GSoC with PostgreSQL!\n\nThanks!\n\nStephen\n\n[1]: https://developers.google.com/open-source/gsoc/timeline\n[2]: https://www.postgresql.org/about/policies/news-and-events/",
"msg_date": "Mon, 6 Jan 2020 17:45:18 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "GSoC 2020"
}
] |
[
{
"msg_contents": "Hello,\n\nWhen I tried to repack my bloated table an error occurred:\n\nFATAL: terminating connection due to idle-in-transaction timeout\nERROR: query failed: SSL connection has been closed unexpectedly\nDETAIL: query was: SAVEPOINT repack_sp1\n\nand this error is occurring in large tables only, and current table size which is running about 700GB\n\n/pg_repack --version\npg_repack 1.4.3\n\nDB version: PostgreSQL 9.6.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit\n\n\n\nAppreciate the assistance.\nFATAL: terminating connection due to idle-in-transaction timeout · Issue #222 · reorg/pg_repack\n\n| \n| \n| \n| | |\n\n |\n\n |\n| \n| | \nFATAL: terminating connection due to idle-in-transaction timeout · Issu...\n\nWhen I tried to repack my bloated table an error occurred: FATAL: terminating connection due to idle-in-transact...\n |\n\n |\n\n |\n\n\n\n\nRegards,Nagaraj\nHello,When I tried to repack my bloated table an error occurred:FATAL: terminating connection due to idle-in-transaction timeoutERROR: query failed: SSL connection has been closed unexpectedlyDETAIL: query was: SAVEPOINT repack_sp1and this error is occurring in large tables only, and current table size which is running about 700GB/pg_repack --versionpg_repack 1.4.3DB version: PostgreSQL 9.6.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bitAppreciate the assistance.FATAL: terminating connection due to idle-in-transaction timeout · Issue #222 · reorg/pg_repackFATAL: terminating connection due to idle-in-transaction timeout · Issu...When I tried to repack my bloated table an error occurred: FATAL: terminating connection due to idle-in-transact...Regards,Nagaraj",
"msg_date": "Tue, 7 Jan 2020 06:15:09 +0000 (UTC)",
"msg_from": "Nagaraj Raj <nagaraj.sf@yahoo.com>",
"msg_from_op": true,
"msg_subject": "pg_repack failure"
},
{
"msg_contents": "On Tue, Jan 07, 2020 at 06:15:09AM +0000, Nagaraj Raj wrote:\n> and this error is occurring in large tables only, and current table\n> size which is running about 700GB\n> \n> /pg_repack --version\n> pg_repack 1.4.3\n> \n> DB version: PostgreSQL 9.6.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit\n\nI think that you had better report that directly to the maintainers of\nthe tool here:\nhttps://github.com/reorg/pg_repack/\n--\nMichael",
"msg_date": "Tue, 7 Jan 2020 15:46:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_repack failure"
}
] |
[
{
"msg_contents": "I have a test where a user creates a temp table and then disconnect,\nconcurrently we try to do DROP OWNED BY CASCADE on the same user. Seems\nthis causes race condition between temptable deletion during disconnection\n(@RemoveTempRelations(myTempNamespace)) and DROP OWNED BY CASCADE operation\nwhich will try to remove same temp table when they find them as part of\npg_shdepend. Which will result in internal error cache lookup failed as\nbelow.\n\nDROP OWNED BY test_role CASCADE;\n2020-01-07 12:35:06.524 IST [26064] ERROR: cache lookup failed for\nrelation 41019\n2020-01-07 12:35:06.524 IST [26064] STATEMENT: DROP OWNED BY test_role\nCASCADE;\nreproduce.sql:8: ERROR: cache lookup failed for relation 41019\n\nTEST\n=====================\ncreate database test_db;\ncreate user test_superuser superuser;\n\\c test_db test_superuser\nCREATE ROLE test_role nosuperuser login password 'test_pwd' ;\n\\c test_db test_role\nCREATE TEMPORARY TABLE tmp_table(col1 int);\n\\c test_db test_superuser\nDROP OWNED BY test_role CASCADE;\n\n\n-- \nThanks and Regards\nMithun Chicklore Yogendra\nEnterpriseDB: http://www.enterprisedb.com\n\nI have a test where a user creates a temp table and then disconnect, concurrently we try to do DROP OWNED BY CASCADE on the same user. Seems this causes race condition between temptable deletion during disconnection (@RemoveTempRelations(myTempNamespace)) and DROP OWNED BY CASCADE operation which will try to remove same temp table when they find them as part of pg_shdepend. 
Which will result in internal error cache lookup failed as below.DROP OWNED BY test_role CASCADE;2020-01-07 12:35:06.524 IST [26064] ERROR: cache lookup failed for relation 410192020-01-07 12:35:06.524 IST [26064] STATEMENT: DROP OWNED BY test_role CASCADE;reproduce.sql:8: ERROR: cache lookup failed for relation 41019TEST=====================create database test_db;create user test_superuser superuser;\\c test_db test_superuserCREATE ROLE test_role nosuperuser login password 'test_pwd' ;\\c test_db test_roleCREATE TEMPORARY TABLE tmp_table(col1 int);\\c test_db test_superuserDROP OWNED BY test_role CASCADE;-- Thanks and RegardsMithun Chicklore YogendraEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 7 Jan 2020 12:52:00 +0530",
"msg_from": "Mithun Cy <mithun.cy@gmail.com>",
"msg_from_op": true,
"msg_subject": "DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "On 2020-Jan-07, Mithun Cy wrote:\n\n> I have a test where a user creates a temp table and then disconnect,\n> concurrently we try to do DROP OWNED BY CASCADE on the same user. Seems\n> this causes race condition between temptable deletion during disconnection\n> (@RemoveTempRelations(myTempNamespace)) and DROP OWNED BY CASCADE operation\n> which will try to remove same temp table when they find them as part of\n> pg_shdepend.\n\nCute.\n\nThis seems fiddly to handle better; maybe you'd have to have a new\nPERFORM_DELETION_* flag that says to ignore \"missing\" objects; so when\nyou go from shdepDropOwned, you pass that flag all the way down to\ndoDeletion(), so the objtype-specific function is called with\n\"missing_ok\", and ignore if the object has already gone away. That's\ntedious because none of the Remove* functions have the concept of\nmissing_ok.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Jan 2020 19:45:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 07:45:06PM -0300, Alvaro Herrera wrote:\n> This seems fiddly to handle better; maybe you'd have to have a new\n> PERFORM_DELETION_* flag that says to ignore \"missing\" objects; so when\n> you go from shdepDropOwned, you pass that flag all the way down to\n> doDeletion(), so the objtype-specific function is called with\n> \"missing_ok\", and ignore if the object has already gone away. That's\n> tedious because none of the Remove* functions have the concept of\n> missing_ok.\n\nYes, that would be invasive and I'd rather not backpatch such a change\nbut I don't see a better or cleaner way to handle that correctly\neither than the way you are describing. Looking at all the\nsubroutines removing the objects by OID, a patch among those lines is\nrepetitive, though not complicated to do.\n--\nMichael",
"msg_date": "Tue, 14 Jan 2020 09:19:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-07, Mithun Cy wrote:\n>> I have a test where a user creates a temp table and then disconnect,\n>> concurrently we try to do DROP OWNED BY CASCADE on the same user. Seems\n>> this causes race condition between temptable deletion during disconnection\n>> (@RemoveTempRelations(myTempNamespace)) and DROP OWNED BY CASCADE operation\n>> which will try to remove same temp table when they find them as part of\n>> pg_shdepend.\n\n> Cute.\n\nIs this really any worse than any other attempt to issue two DROPs against\nthe same object concurrently? Maybe we can just call it pilot error.\n\n> This seems fiddly to handle better; maybe you'd have to have a new\n> PERFORM_DELETION_* flag that says to ignore \"missing\" objects; so when\n> you go from shdepDropOwned, you pass that flag all the way down to\n> doDeletion(), so the objtype-specific function is called with\n> \"missing_ok\", and ignore if the object has already gone away. That's\n> tedious because none of the Remove* functions have the concept of\n> missing_ok.\n\nThat seems fundamentally wrong. By the time we've queued an object for\ndeletion in dependency.c, we have a lock on it, and we've verified that\nthe object is still there (cf. systable_recheck_tuple calls).\nIf shdepDropOwned is doing it differently, I'd say shdepDropOwned is\ndoing it wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jan 2020 19:27:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "On 2020-Jan-13, Tom Lane wrote:\n\n> That seems fundamentally wrong. By the time we've queued an object for\n> deletion in dependency.c, we have a lock on it, and we've verified that\n> the object is still there (cf. systable_recheck_tuple calls).\n> If shdepDropOwned is doing it differently, I'd say shdepDropOwned is\n> doing it wrong.\n\nHmm, it seems to be doing it differently. Maybe it should be acquiring\nlocks on all objects in that nested loop and verified them for\nexistence, so that when it calls performMultipleDeletions the objects\nare already locked, as you say.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Jan 2020 18:59:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "On 2020-Jan-14, Alvaro Herrera wrote:\n\n> On 2020-Jan-13, Tom Lane wrote:\n> \n> > That seems fundamentally wrong. By the time we've queued an object for\n> > deletion in dependency.c, we have a lock on it, and we've verified that\n> > the object is still there (cf. systable_recheck_tuple calls).\n> > If shdepDropOwned is doing it differently, I'd say shdepDropOwned is\n> > doing it wrong.\n> \n> Hmm, it seems to be doing it differently. Maybe it should be acquiring\n> locks on all objects in that nested loop and verified them for\n> existence, so that when it calls performMultipleDeletions the objects\n> are already locked, as you say.\n\nYeah, this solves the reported bug.\n\nThis is not a 100% solution: there's the cases when the user is removed\nfrom an ACL and from a policy, and those deletions are done directly\ninstead of accumulating to the end for a mass deletion.\n\nI had to export AcquireDeletionLock (previously a static in\ndependency.c). I wonder if I should export ReleaseDeletionLock too, for\nsymmetry.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 23 Jan 2020 14:14:23 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "On 2020-Jan-23, Alvaro Herrera wrote:\n\n> This is not a 100% solution: there's the cases when the user is removed\n> from an ACL and from a policy, and those deletions are done directly\n> instead of accumulating to the end for a mass deletion.\n> \n> I had to export AcquireDeletionLock (previously a static in\n> dependency.c). I wonder if I should export ReleaseDeletionLock too, for\n> symmetry.\n\nFWIW I'm going to withhold this bugfix until after the next set of\nminors are out. I'd rather not find out later that I have no way to fix\n9.4 if I break it, for a bug that has existed forever ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Feb 2020 18:13:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI have tested the patch with REL_12_STABLE for the given error scenario by running the \"CREATE TEMPORARY TABLE...\" and \"DROP OWNED BY...\" commands concurrently using parallel background workers. I have also tested a few related scenarios and the patch does seem to fix the reported bug. Ran make installcheck-world, no difference with and without patch.",
"msg_date": "Wed, 19 Feb 2020 13:59:21 +0000",
"msg_from": "ahsan hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-14, Alvaro Herrera wrote:\n>> Hmm, it seems to be doing it differently. Maybe it should be acquiring\n>> locks on all objects in that nested loop and verified them for\n>> existence, so that when it calls performMultipleDeletions the objects\n>> are already locked, as you say.\n\n> Yeah, this solves the reported bug.\n\nI looked this over and think it should be fine. There will be cases\nwhere we get a deadlock error, but such risks existed anyway, since\nwe'd have acquired all the same locks later in the process.\n\n> This is not a 100% solution: there's the cases when the user is removed\n> from an ACL and from a policy, and those deletions are done directly\n> instead of accumulating to the end for a mass deletion.\n\nLet's worry about that when and if we get field complaints.\n\n> I had to export AcquireDeletionLock (previously a static in\n> dependency.c). I wonder if I should export ReleaseDeletionLock too, for\n> symmetry.\n\nHmmm ... there is an argument for doing ReleaseDeletionLock in the code\npaths where you discover that the object's been deleted. Holding a lock\non a no-longer-existent object OID should be harmless from a deadlock\nstandpoint; but it does represent useless consumption of a shared lock\ntable entry, and that's a resource this operation could already burn\nwith abandon.\n\nAlso, if we're exporting these, it's worth expending a bit more\neffort on their header comments. In particular AcquireDeletionLock\nshould describe its flags argument; perhaps along the lines of\n\"Accepts the same flags as performDeletion (though currently only\nPERFORM_DELETION_CONCURRENTLY does anything)\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Apr 2020 15:08:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
},
{
"msg_contents": "On 2020-Apr-06, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jan-14, Alvaro Herrera wrote:\n> >> Hmm, it seems to be doing it differently. Maybe it should be acquiring\n> >> locks on all objects in that nested loop and verified them for\n> >> existence, so that when it calls performMultipleDeletions the objects\n> >> are already locked, as you say.\n> \n> > Yeah, this solves the reported bug.\n> \n> I looked this over and think it should be fine. There will be cases\n> where we get a deadlock error, but such risks existed anyway, since\n> we'd have acquired all the same locks later in the process.\n\nThanks for looking again. I have pushed this to all branches, with\nthese changes:\n\n> Hmmm ... there is an argument for doing ReleaseDeletionLock in the code\n> paths where you discover that the object's been deleted.\n\nAdded this. This of course required also exporting ReleaseDeletionLock,\nwhich closes my concern about exporting only half of that API.\n\n> Also, if we're exporting these, it's worth expending a bit more\n> effort on their header comments. In particular AcquireDeletionLock\n> should describe its flags argument; perhaps along the lines of\n> \"Accepts the same flags as performDeletion (though currently only\n> PERFORM_DELETION_CONCURRENTLY does anything)\".\n\nDid this too. I also changed the comment to indicate that, since\nthey're now exported APIs, they might grow the ability to lock shared\nobjects in the future. In fact, we have some places where we're using\nLockSharedObject directly to lock objects to drop; it seems reasonable\nto think that we should augment AcquireDeletionLock to handle those\nobjects and make those places use the new API.\n\nLastly: right now, only performMultipleDeletions passes the flags down\nto AcquireDeletionLock -- there are a couple places that drop objects\nand call AcquireDeletionLock with flags=0. 
There's no bug AFAICS\nbecause those cannot be called while running concurrent object drop.\nBut for correctness, those should pass flags too.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 May 2020 13:02:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP OWNED CASCADE vs Temp tables"
}
] |
[
{
"msg_contents": "Hi,\nI am getting a server crash on publication server on HEAD for the below\ntest case.\n\nCommit: b9c130a1fdf16cd99afb390c186d19acaea7d132\n\nData setup:\nPublication server:\nwal_level = logical\nmax_wal_senders = 10\nmax_replication_slots = 15\nwal_log_hints = on\nhot_standby_feedback = on\nwal_receiver_status_interval = 1\nlisten_addresses='*'\nlog_min_messages=debug1\nwal_sender_timeout = 0\nlogical_decoding_work_mem=64kB\n\nSubscription server:\nwal_level = logical\nwal_log_hints = on\nhot_standby_feedback = on\nwal_receiver_status_interval = 1\nlog_min_messages=debug1\nport=5433\nlogical_decoding_work_mem=64kB\n\nTest case:\nPublication server:\ncreate table test(a int);\ncreate publication test_pub for all tables;\nalter table test replica identity NOTHING ;\n\nSubscription server:\ncreate table test(a int);\ncreate subscription test_sub CONNECTION 'host=172.16.208.32 port=5432\ndbname=postgres user=centos' PUBLICATION test_pub WITH ( slot_name =\ntest_slot_sub);\n\nPublication server:\ninsert into test values(generate_series(1,5),'aa');\n\nAfter executing the DML in publication server ,it crashed with the\nmentioned assert.\n\n*Publication Server log File snippet:*\n2020-01-07 11:54:00.476 UTC [17417] DETAIL: Streaming transactions\ncommitting after 0/163CC30, reading WAL from 0/163CC30.\n2020-01-07 11:54:00.476 UTC [17417] LOG: logical decoding found consistent\npoint at 0/163CC30\n2020-01-07 11:54:00.476 UTC [17417] DETAIL: There are no running\ntransactions.\n*TRAP: FailedAssertion(\"rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_FULL || rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_INDEX\", File: \"proto.c\", Line: 148)*\npostgres: walsender centos 172.16.208.32(40292)\nidle(ExceptionalCondition+0x53)[0x8ca453]\npostgres: walsender centos 172.16.208.32(40292) idle[0x74c515]\n/home/centos/PG_master/postgresql/inst/lib/pgoutput.so(+0x2114)[0x7f3bb463d114]\npostgres: walsender 
centos 172.16.208.32(40292) idle[0x747fa8]\npostgres: walsender centos 172.16.208.32(40292)\nidle(ReorderBufferCommit+0x12ee)[0x75187e]\npostgres: walsender centos 172.16.208.32(40292) idle[0x7455a8]\npostgres: walsender centos 172.16.208.32(40292)\nidle(LogicalDecodingProcessRecord+0x2ea)[0x74593a]\npostgres: walsender centos 172.16.208.32(40292) idle[0x766c24]\npostgres: walsender centos 172.16.208.32(40292) idle[0x7693a2]\npostgres: walsender centos 172.16.208.32(40292)\nidle(exec_replication_command+0xbb1)[0x76a091]\npostgres: walsender centos 172.16.208.32(40292)\nidle(PostgresMain+0x4b9)[0x7b1099]\npostgres: walsender centos 172.16.208.32(40292) idle[0x482bc7]\npostgres: walsender centos 172.16.208.32(40292)\nidle(PostmasterMain+0xdbf)[0x73339f]\npostgres: walsender centos 172.16.208.32(40292) idle(main+0x44f)[0x48403f]\n/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f3bc53f23d5]\npostgres: walsender centos 172.16.208.32(40292) idle[0x4840a6]\n2020-01-07 11:54:00.802 UTC [17359] LOG: server process (PID 17417) was\nterminated by signal 6: Aborted\n2020-01-07 11:54:00.802 UTC [17359] LOG: terminating any other active\nserver processes\n2020-01-07 11:54:00.802 UTC [17413] WARNING: terminating connection\nbecause of crash of another server process\n2020-01-07 11:54:00.802 UTC [17413] DETAIL: The postmaster has commanded\nthis server process to roll back the current transaction and exit, because\nanother server process exited abnormally and possibly corrupted shared\nmemory.\n\n\nStack Trace:\nCore was generated by `postgres: walsender centos 172.16.208.32(40286) idle\n '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f3bc5406207 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-260.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\nkrb5-libs-1.15.1-34.el7.x86_64 libcom_err-1.42.9-13.el7.x86_64\nlibgcc-4.8.5-36.el7.x86_64 libselinux-2.5-14.1.el7.x86_64\nopenssl-libs-1.0.2k-16.el7.x86_64 
pcre-8.32-17.el7.x86_64\nzlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007f3bc5406207 in raise () from /lib64/libc.so.6\n#1 0x00007f3bc54078f8 in abort () from /lib64/libc.so.6\n#2 0x00000000008ca472 in ExceptionalCondition (\n conditionName=conditionName@entry=0xa5b9c8 \"rel->rd_rel->relreplident\n== REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_FULL || rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_INDEX\", errorType=errorType@entry=0x918fe9\n\"FailedAssertion\", fileName=fileName@entry=0xa5b903 \"proto.c\",\nlineNumber=lineNumber@entry=148) at assert.c:67\n#3 0x000000000074c515 in logicalrep_write_insert (out=0xe8c768,\nrel=rel@entry=0x7f3bc69548d8, newtuple=0x7f3bb3e3a068) at proto.c:146\n#4 0x00007f3bb463d114 in pgoutput_change (ctx=0xe8a8b0, txn=<optimized\nout>, relation=0x7f3bc69548d8, change=0xf3e018) at pgoutput.c:345\n#5 0x0000000000747fa8 in change_cb_wrapper (cache=<optimized out>,\ntxn=<optimized out>, relation=<optimized out>, change=<optimized out>) at\nlogical.c:754\n#6 0x000000000075187e in ReorderBufferCommit (rb=0xf2e090, xid=xid@entry=489,\ncommit_lsn=23318344, end_lsn=<optimized out>,\ncommit_time=commit_time@entry=631713235036663,\norigin_id=origin_id@entry=0,\n origin_lsn=origin_lsn@entry=0) at reorderbuffer.c:1661\n#7 0x00000000007455a8 in DecodeCommit (xid=489, parsed=0x7ffec53910a0,\nbuf=0x7ffec5391260, ctx=0xe8a8b0) at decode.c:637\n#8 DecodeXactOp (ctx=0xe8a8b0, buf=buf@entry=0x7ffec5391260) at\ndecode.c:245\n#9 0x000000000074593a in LogicalDecodingProcessRecord (ctx=0xe8a8b0,\nrecord=0xe8ab70) at decode.c:114\n#10 0x0000000000766c24 in XLogSendLogical () at walsender.c:2806\n#11 0x00000000007693a2 in WalSndLoop (send_data=send_data@entry=0x766bc0\n<XLogSendLogical>) at walsender.c:2230\n#12 0x000000000076a091 in StartLogicalReplication (cmd=0xee9d68) at\nwalsender.c:1153\n#13 exec_replication_command (cmd_string=cmd_string@entry=0xe652d0\n\"START_REPLICATION SLOT \\\"test_slot_sub\\\" LOGICAL 0/0 
(proto_version '1',\npublication_names '\\\"test_pub\\\"')\") at walsender.c:1576\n#14 0x00000000007b1099 in PostgresMain (argc=<optimized out>,\nargv=argv@entry=0xe90a58, dbname=0xe90960 \"postgres\", username=<optimized\nout>) at postgres.c:4287\n#15 0x0000000000482bc7 in BackendRun (port=<optimized out>, port=<optimized\nout>) at postmaster.c:4498\n#16 BackendStartup (port=0xe88920) at postmaster.c:4189\n#17 ServerLoop () at postmaster.c:1727\n#18 0x000000000073339f in PostmasterMain (argc=argc@entry=3,\nargv=argv@entry=0xe5fe20)\nat postmaster.c:1400\n#19 0x000000000048403f in main (argc=3, argv=0xe5fe20) at main.c:210\n\nThanks.\n--\nRegards,\nNeha Sharma\n\nHi,I am getting a server crash on publication server on HEAD for the below test case.Commit: b9c130a1fdf16cd99afb390c186d19acaea7d132Data setup:Publication server:wal_level = logicalmax_wal_senders = 10max_replication_slots = 15wal_log_hints = onhot_standby_feedback = onwal_receiver_status_interval = 1listen_addresses='*'log_min_messages=debug1wal_sender_timeout = 0logical_decoding_work_mem=64kBSubscription server:wal_level = logicalwal_log_hints = onhot_standby_feedback = onwal_receiver_status_interval = 1log_min_messages=debug1port=5433logical_decoding_work_mem=64kBTest case:Publication server:create table test(a int);create publication test_pub for all tables;alter table test replica identity NOTHING ;Subscription server:create table test(a int);create subscription test_sub CONNECTION 'host=172.16.208.32 port=5432 dbname=postgres user=centos' PUBLICATION test_pub WITH ( slot_name = test_slot_sub);Publication server:insert into test values(generate_series(1,5),'aa');After executing the DML in publication server ,it crashed with the mentioned assert.Publication Server log File snippet:2020-01-07 11:54:00.476 UTC [17417] DETAIL: Streaming transactions committing after 0/163CC30, reading WAL from 0/163CC30.2020-01-07 11:54:00.476 UTC [17417] LOG: logical decoding found consistent point at 
0/163CC302020-01-07 11:54:00.476 UTC [17417] DETAIL: There are no running transactions.TRAP: FailedAssertion(\"rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\", File: \"proto.c\", Line: 148)postgres: walsender centos 172.16.208.32(40292) idle(ExceptionalCondition+0x53)[0x8ca453]postgres: walsender centos 172.16.208.32(40292) idle[0x74c515]/home/centos/PG_master/postgresql/inst/lib/pgoutput.so(+0x2114)[0x7f3bb463d114]postgres: walsender centos 172.16.208.32(40292) idle[0x747fa8]postgres: walsender centos 172.16.208.32(40292) idle(ReorderBufferCommit+0x12ee)[0x75187e]postgres: walsender centos 172.16.208.32(40292) idle[0x7455a8]postgres: walsender centos 172.16.208.32(40292) idle(LogicalDecodingProcessRecord+0x2ea)[0x74593a]postgres: walsender centos 172.16.208.32(40292) idle[0x766c24]postgres: walsender centos 172.16.208.32(40292) idle[0x7693a2]postgres: walsender centos 172.16.208.32(40292) idle(exec_replication_command+0xbb1)[0x76a091]postgres: walsender centos 172.16.208.32(40292) idle(PostgresMain+0x4b9)[0x7b1099]postgres: walsender centos 172.16.208.32(40292) idle[0x482bc7]postgres: walsender centos 172.16.208.32(40292) idle(PostmasterMain+0xdbf)[0x73339f]postgres: walsender centos 172.16.208.32(40292) idle(main+0x44f)[0x48403f]/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f3bc53f23d5]postgres: walsender centos 172.16.208.32(40292) idle[0x4840a6]2020-01-07 11:54:00.802 UTC [17359] LOG: server process (PID 17417) was terminated by signal 6: Aborted2020-01-07 11:54:00.802 UTC [17359] LOG: terminating any other active server processes2020-01-07 11:54:00.802 UTC [17413] WARNING: terminating connection because of crash of another server process2020-01-07 11:54:00.802 UTC [17413] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted 
shared memory.Stack Trace:Core was generated by `postgres: walsender centos 172.16.208.32(40286) idle '.Program terminated with signal 6, Aborted.#0 0x00007f3bc5406207 in raise () from /lib64/libc.so.6Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-34.el7.x86_64 libcom_err-1.42.9-13.el7.x86_64 libgcc-4.8.5-36.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-16.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64(gdb) bt#0 0x00007f3bc5406207 in raise () from /lib64/libc.so.6#1 0x00007f3bc54078f8 in abort () from /lib64/libc.so.6#2 0x00000000008ca472 in ExceptionalCondition ( conditionName=conditionName@entry=0xa5b9c8 \"rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\", errorType=errorType@entry=0x918fe9 \"FailedAssertion\", fileName=fileName@entry=0xa5b903 \"proto.c\", lineNumber=lineNumber@entry=148) at assert.c:67#3 0x000000000074c515 in logicalrep_write_insert (out=0xe8c768, rel=rel@entry=0x7f3bc69548d8, newtuple=0x7f3bb3e3a068) at proto.c:146#4 0x00007f3bb463d114 in pgoutput_change (ctx=0xe8a8b0, txn=<optimized out>, relation=0x7f3bc69548d8, change=0xf3e018) at pgoutput.c:345#5 0x0000000000747fa8 in change_cb_wrapper (cache=<optimized out>, txn=<optimized out>, relation=<optimized out>, change=<optimized out>) at logical.c:754#6 0x000000000075187e in ReorderBufferCommit (rb=0xf2e090, xid=xid@entry=489, commit_lsn=23318344, end_lsn=<optimized out>, commit_time=commit_time@entry=631713235036663, origin_id=origin_id@entry=0, origin_lsn=origin_lsn@entry=0) at reorderbuffer.c:1661#7 0x00000000007455a8 in DecodeCommit (xid=489, parsed=0x7ffec53910a0, buf=0x7ffec5391260, ctx=0xe8a8b0) at decode.c:637#8 DecodeXactOp (ctx=0xe8a8b0, buf=buf@entry=0x7ffec5391260) at decode.c:245#9 0x000000000074593a in LogicalDecodingProcessRecord (ctx=0xe8a8b0, record=0xe8ab70) 
at decode.c:114#10 0x0000000000766c24 in XLogSendLogical () at walsender.c:2806#11 0x00000000007693a2 in WalSndLoop (send_data=send_data@entry=0x766bc0 <XLogSendLogical>) at walsender.c:2230#12 0x000000000076a091 in StartLogicalReplication (cmd=0xee9d68) at walsender.c:1153#13 exec_replication_command (cmd_string=cmd_string@entry=0xe652d0 \"START_REPLICATION SLOT \\\"test_slot_sub\\\" LOGICAL 0/0 (proto_version '1', publication_names '\\\"test_pub\\\"')\") at walsender.c:1576#14 0x00000000007b1099 in PostgresMain (argc=<optimized out>, argv=argv@entry=0xe90a58, dbname=0xe90960 \"postgres\", username=<optimized out>) at postgres.c:4287#15 0x0000000000482bc7 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4498#16 BackendStartup (port=0xe88920) at postmaster.c:4189#17 ServerLoop () at postmaster.c:1727#18 0x000000000073339f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xe5fe20) at postmaster.c:1400#19 0x000000000048403f in main (argc=3, argv=0xe5fe20) at main.c:210Thanks.--Regards,Neha Sharma",
"msg_date": "Tue, 7 Jan 2020 17:38:49 +0530",
"msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident\n == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\n REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Tue, Jan 07, 2020 at 05:38:49PM +0530, Neha Sharma wrote:\n> I am getting a server crash on publication server on HEAD for the below\n> test case.\n> \n> Test case:\n> Publication server:\n> create table test(a int);\n> create publication test_pub for all tables;\n> alter table test replica identity NOTHING ;\n> \n> Subscription server:\n> create table test(a int);\n> create subscription test_sub CONNECTION 'host=172.16.208.32 port=5432\n> dbname=postgres user=centos' PUBLICATION test_pub WITH ( slot_name =\n> test_slot_sub);\n> \n> Publication server:\n> insert into test values(generate_series(1,5),'aa');\n\nThis would not work as your relation has only one column. There are\nsome TAP tests for logical replication (none actually stressing\nNOTHING as replica identity), which do not fail, and I cannot\nreproduce the failure myself.\n\n> After executing the DML in publication server ,it crashed with the\n> mentioned assert.\n\nDo you have other objects defined on your schema on the publication or\nthe subscription side? Like, er, triggers?\n--\nMichael",
"msg_date": "Thu, 9 Jan 2020 15:08:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "Hi Michael,\nThanks for looking into the issue. Sorry by mistake I had mentioned the\nincorrect DML query,please use the query as mentioned below.\n\nOn Thu, Jan 9, 2020 at 11:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jan 07, 2020 at 05:38:49PM +0530, Neha Sharma wrote:\n> > I am getting a server crash on publication server on HEAD for the below\n> > test case.\n> >\n> > Test case:\n> > Publication server:\n> > create table test(a int);\n> > create publication test_pub for all tables;\n> > alter table test replica identity NOTHING ;\n> >\n> > Subscription server:\n> > create table test(a int);\n> > create subscription test_sub CONNECTION 'host=172.16.208.32 port=5432\n> > dbname=postgres user=centos' PUBLICATION test_pub WITH ( slot_name =\n> > test_slot_sub);\n> >\n> > Publication server:\n> > insert into test values(generate_series(1,5),'aa');\n>\n insert into test values(generate_series(1,5));\n\n>\n> This would not work as your relation has only one column. There are\n> some TAP tests for logical replication (none actually stressing\n> NOTHING as replica identity), which do not fail, and I cannot\n> reproduce the failure myself.\n>\n> > After executing the DML in publication server ,it crashed with the\n> > mentioned assert.\n>\n> Do you have other objects defined on your schema on the publication or\n> the subscription side? 
Like, er, triggers?\n>\nI had only one table in the publication server.\n\nI am able to reproduce the issue consistently.\n\n2020-01-09 07:14:31.727 UTC [20436] LOG: logical decoding found consistent\npoint at 0/1632FC0\n2020-01-09 07:14:31.727 UTC [20436] DETAIL: There are no running\ntransactions.\n*TRAP: FailedAssertion(\"rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_FULL || rel->rd_rel->relreplident ==\nREPLICA_IDENTITY_INDEX\", File: \"proto.c\", Line: 148)*\npostgres: walsender centos 172.16.208.32(40324)\nidle(ExceptionalCondition+0x53)[0x8ca453]\npostgres: walsender centos 172.16.208.32(40324) idle[0x74c515]\n/home/centos/PG_master/postgresql/inst/lib/pgoutput.so(+0x2114)[0x7fb105038114]\npostgres: walsender centos 172.16.208.32(40324) idle[0x747fa8]\npostgres: walsender centos 172.16.208.32(40324)\nidle(ReorderBufferCommit+0x12ee)[0x75187e]\npostgres: walsender centos 172.16.208.32(40324) idle[0x7455a8]\npostgres: walsender centos 172.16.208.32(40324)\nidle(LogicalDecodingProcessRecord+0x2ea)[0x74593a]\npostgres: walsender centos 172.16.208.32(40324) idle[0x766c24]\npostgres: walsender centos 172.16.208.32(40324) idle[0x7693a2]\npostgres: walsender centos 172.16.208.32(40324)\nidle(exec_replication_command+0xbb1)[0x76a091]\npostgres: walsender centos 172.16.208.32(40324)\nidle(PostgresMain+0x4b9)[0x7b1099]\npostgres: walsender centos 172.16.208.32(40324) idle[0x482bc7]\npostgres: walsender centos 172.16.208.32(40324)\nidle(PostmasterMain+0xdbf)[0x73339f]\npostgres: walsender centos 172.16.208.32(40324) idle(main+0x44f)[0x48403f]\n/lib64/libc.so.6(__libc_start_main+0xf5)[0x7fb115ded3d5]\npostgres: walsender centos 172.16.208.32(40324) idle[0x4840a6]\n2020-01-09 07:14:32.055 UTC [20357] LOG: server process (PID 20436) was\nterminated by signal 6: Aborted\n\n> --\n> Michael\n>",
"msg_date": "Thu, 9 Jan 2020 12:50:16 +0530",
"msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident\n == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\n REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 12:50 PM Neha Sharma\n<neha.sharma@enterprisedb.com> wrote:\n>\n> Hi Michael,\n> Thanks for looking into the issue. Sorry by mistake I had mentioned the incorrect DML query,please use the query as mentioned below.\n>\n> On Thu, Jan 9, 2020 at 11:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Tue, Jan 07, 2020 at 05:38:49PM +0530, Neha Sharma wrote:\n>> > I am getting a server crash on publication server on HEAD for the below\n>> > test case.\n>> >\n>> > Test case:\n>> > Publication server:\n>> > create table test(a int);\n>> > create publication test_pub for all tables;\n>> > alter table test replica identity NOTHING ;\n>> >\n>> > Subscription server:\n>> > create table test(a int);\n>> > create subscription test_sub CONNECTION 'host=172.16.208.32 port=5432\n>> > dbname=postgres user=centos' PUBLICATION test_pub WITH ( slot_name =\n>> > test_slot_sub);\n>> >\n>> > Publication server:\n>> > insert into test values(generate_series(1,5),'aa');\n>\n> insert into test values(generate_series(1,5));\n>>\n>>\n>> This would not work as your relation has only one column. There are\n>> some TAP tests for logical replication (none actually stressing\n>> NOTHING as replica identity), which do not fail, and I cannot\n>> reproduce the failure myself.\n\nI am able to reproduce the failure, I think the assert in the\n'logicalrep_write_insert' is not correct. 
IMHO even if the replica\nidentity is set to NOTHING we should be able to replicate INSERT?\n\nThis will fix the issue.\n\ndiff --git a/src/backend/replication/logical/proto.c\nb/src/backend/replication/logical/proto.c\nindex dcf7c08..471461c 100644\n--- a/src/backend/replication/logical/proto.c\n+++ b/src/backend/replication/logical/proto.c\n@@ -145,7 +145,8 @@ logicalrep_write_insert(StringInfo out, Relation\nrel, HeapTuple newtuple)\n\n Assert(rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n- rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX);\n+ rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX ||\n+ rel->rd_rel->relreplident == REPLICA_IDENTITY_NOTHING);\n\n /* use Oid as relation identifier */\n pq_sendint32(out, RelationGetRelid(rel));\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jan 2020 13:17:59 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident\n == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\n REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Thu, 9 Jan 2020 at 08:48, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jan 9, 2020 at 12:50 PM Neha Sharma\n> <neha.sharma@enterprisedb.com> wrote:\n> >\n> > Hi Michael,\n> > Thanks for looking into the issue. Sorry by mistake I had mentioned the incorrect DML query,please use the query as mentioned below.\n> >\n> > On Thu, Jan 9, 2020 at 11:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Tue, Jan 07, 2020 at 05:38:49PM +0530, Neha Sharma wrote:\n> >> > I am getting a server crash on publication server on HEAD for the below\n> >> > test case.\n> >> >\n> >> > Test case:\n> >> > Publication server:\n> >> > create table test(a int);\n> >> > create publication test_pub for all tables;\n> >> > alter table test replica identity NOTHING ;\n> >> >\n> >> > Subscription server:\n> >> > create table test(a int);\n> >> > create subscription test_sub CONNECTION 'host=172.16.208.32 port=5432\n> >> > dbname=postgres user=centos' PUBLICATION test_pub WITH ( slot_name =\n> >> > test_slot_sub);\n> >> >\n> >> > Publication server:\n> >> > insert into test values(generate_series(1,5),'aa');\n> >\n> > insert into test values(generate_series(1,5));\n> >>\n> >>\n> >> This would not work as your relation has only one column. There are\n> >> some TAP tests for logical replication (none actually stressing\n> >> NOTHING as replica identity), which do not fail, and I cannot\n> >> reproduce the failure myself.\n>\n> I am able to reproduce the failure, I think the assert in the\n> 'logicalrep_write_insert' is not correct. 
IMHO even if the replica\n> identity is set to NOTHING we should be able to replicate INSERT?\n>\nTrue that.\n> This will fix the issue.\n>\n> diff --git a/src/backend/replication/logical/proto.c\n> b/src/backend/replication/logical/proto.c\n> index dcf7c08..471461c 100644\n> --- a/src/backend/replication/logical/proto.c\n> +++ b/src/backend/replication/logical/proto.c\n> @@ -145,7 +145,8 @@ logicalrep_write_insert(StringInfo out, Relation\n> rel, HeapTuple newtuple)\n>\n> Assert(rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n> rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n> - rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX);\n> + rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX ||\n> + rel->rd_rel->relreplident == REPLICA_IDENTITY_NOTHING);\n>\n> /* use Oid as relation identifier */\n> pq_sendint32(out, RelationGetRelid(rel));\n>\n+1\n\n\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Thu, 9 Jan 2020 11:28:39 +0100",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident\n == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\n REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "Hi,\n\nOn 2020-01-09 13:17:59 +0530, Dilip Kumar wrote:\n> I am able to reproduce the failure, I think the assert in the\n> 'logicalrep_write_insert' is not correct. IMHO even if the replica\n> identity is set to NOTHING we should be able to replicate INSERT?\n> \n> This will fix the issue.\n> \n> diff --git a/src/backend/replication/logical/proto.c\n> b/src/backend/replication/logical/proto.c\n> index dcf7c08..471461c 100644\n> --- a/src/backend/replication/logical/proto.c\n> +++ b/src/backend/replication/logical/proto.c\n> @@ -145,7 +145,8 @@ logicalrep_write_insert(StringInfo out, Relation\n> rel, HeapTuple newtuple)\n> \n> Assert(rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n> rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n> - rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX);\n> + rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX ||\n> + rel->rd_rel->relreplident == REPLICA_IDENTITY_NOTHING);\n> \n> /* use Oid as relation identifier */\n> pq_sendint32(out, RelationGetRelid(rel));\n\nThere's not much point in having this assert, right? Given that it\ncovers all choices? Seems better to just drop it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Jan 2020 09:13:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Thu, 9 Jan 2020 at 10:43 PM, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-01-09 13:17:59 +0530, Dilip Kumar wrote:\n> > I am able to reproduce the failure, I think the assert in the\n> > 'logicalrep_write_insert' is not correct. IMHO even if the replica\n> > identity is set to NOTHING we should be able to replicate INSERT?\n> >\n> > This will fix the issue.\n> >\n> > diff --git a/src/backend/replication/logical/proto.c\n> > b/src/backend/replication/logical/proto.c\n> > index dcf7c08..471461c 100644\n> > --- a/src/backend/replication/logical/proto.c\n> > +++ b/src/backend/replication/logical/proto.c\n> > @@ -145,7 +145,8 @@ logicalrep_write_insert(StringInfo out, Relation\n> > rel, HeapTuple newtuple)\n> >\n> > Assert(rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n> > rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n> > - rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX);\n> > + rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX ||\n> > + rel->rd_rel->relreplident == REPLICA_IDENTITY_NOTHING);\n> >\n> > /* use Oid as relation identifier */\n> > pq_sendint32(out, RelationGetRelid(rel));\n>\n> There's not much point in having this assert, right? Given that it\n> covers all choices? Seems better to just drop it.\n\nYeah right!\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 10 Jan 2020 07:30:34 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident\n == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\n REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 07:30:34AM +0530, Dilip Kumar wrote:\n> On Thu, 9 Jan 2020 at 10:43 PM, Andres Freund <andres@anarazel.de> wrote:\n>> There's not much point in having this assert, right? Given that it\n>> covers all choices? Seems better to just drop it.\n>\n> Yeah right!\n\nRefreshing my mind on that... The two remaining assertions still make\nsense for update and delete changes per the restrictions in place in\nCheckCmdReplicaIdentity(), and there is a gap with the regression\ntests. So combining all that I get the attached patch (origin point\nis 665d1fa). Thoughts? \n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 14:01:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 10:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 10, 2020 at 07:30:34AM +0530, Dilip Kumar wrote:\n> > On Thu, 9 Jan 2020 at 10:43 PM, Andres Freund <andres@anarazel.de> wrote:\n> >> There's not much point in having this assert, right? Given that it\n> >> covers all choices? Seems better to just drop it.\n> >\n> > Yeah right!\n>\n> Refreshing my mind on that... The two remaining assertions still make\n> sense for update and delete changes per the restrictions in place in\n> CheckCmdReplicaIdentity(),\n\nRight\n\n and there is a gap with the regression\n> tests. So combining all that I get the attached patch (origin point\n> is 665d1fa). Thoughts?\n\nLGTM\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Jan 2020 11:01:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident\n == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident ==\n REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 11:01:13AM +0530, Dilip Kumar wrote:\n> On Fri, Jan 10, 2020 at 10:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> and there is a gap with the regression\n>> tests. So combining all that I get the attached patch (origin point\n>> is 665d1fa). Thoughts?\n> \n> LGTM\n\nThanks for the lookup. I'll look at that again in a couple of days\nand hopefully wrap it by then.\n--\nMichael",
"msg_date": "Sat, 11 Jan 2020 10:34:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 10:34:20AM +0900, Michael Paquier wrote:\n> Thanks for the lookup. I'll look at that again in a couple of days\n> and hopefully wrap it by then.\n\nAnd done.\n--\nMichael",
"msg_date": "Sun, 12 Jan 2020 23:56:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Logical Replication] TRAP:\n FailedAssertion(\"rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX\""
}
]
[
{
"msg_contents": "Hello, I wanted some guidance/suggestions about creating an spgist\nextension. For context, i am a grad student doing research that involves\ncomparing the performance of different indexes for spatial data. We've\nbuilt a system that uses Postgres and one of the data structures we want to\nuse is a loose quadtree, but there is no implementation of this data\nstructure in spgist. The reason why I think this is pretty do-able is that\nit is quite similar to a quadtree on boxes, which is implemented in\nsrc/backend/utils/adt/geo_spgist.c.\n\nAdditionally, I found by grepping through the repo for the existing\nfunctions in spgist/box_ops operator class that several catalog files need\nto be updated to reflect a new operator class in spgist. The files that I\nbelieve need to be changed to create a new\nspgist_loose_box_ops operator class are:\n\nsrc/include/catalog/pg_amop.dat\nsrc/include/catalog/pg_amproc.dat\nsrc/include/catalog/pg_opclass.dat\nsrc/include/catalog/pg_opfamily.dat\n\n\nI've poked around quite a bit in the spgist code and have tried making\nminimal changes to geo_spgist.c, but I haven't done any development on\npostgres before, so i'm running into some issues that I couldn't find help\nwith on the postgres slack, by searching the mailing list, or by scouring\nthe development wikis. For example, I wanted to just print out some data to\nsee what quadrant a box is being placed into in the geo_spgist.c code. I\nunderstand that printing to stdout won't work in postgres, but I thought\nthat I could possibly write some data to the logfile. I tried updating a\nfunction to use both elog and ereport and re-built the code. However, I\ncan't get anything to print out to the logfile no matter what I try. Does\nanyone have tips for printing out and debugging in general for postgres\ndevelopment?\n\n\nAny tips or guidance would be much appreciated. 
Also, if there's a\ndifferent route I should go to turn this into a proposal for a patch please\nlet me know. I'm new to postgres dev.\n\nBest,\nPeter",
"msg_date": "Tue, 7 Jan 2020 11:33:31 -0500",
"msg_from": "Peter Griggs <petergriggs33@gmail.com>",
"msg_from_op": true,
"msg_subject": "[QUESTION/PROPOSAL] loose quadtree in spgist"
},
{
"msg_contents": "On Tue, Jan 07, 2020 at 11:33:31AM -0500, Peter Griggs wrote:\n>Hello, I wanted some guidance/suggestions about creating an spgist\n>extension. For context, i am a grad student doing research that involves\n>comparing the performance of different indexes for spatial data. We've\n>built a system that uses Postgres and one of the data structures we want to\n>use is a loose quadtree, but there is no implementation of this data\n>structure in spgist. The reason why I think this is pretty do-able is that\n>it is quite similar to a quadtree on boxes, which is implemented in\n>src/backend/utils/adt/geo_spgist.c.\n>\n>Additionally, I found by grepping through the repo for the existing\n>functions in spgist/box_ops operator class that several catalog files need\n>to be updated to reflect a new operator class in spgist. The files that I\n>believe need to be changed to create a new\n>spgist_loose_box_ops operator class are:\n>\n>src/include/catalog/pg_amop.dat\n>src/include/catalog/pg_amproc.dat\n>src/include/catalog/pg_opclass.dat\n>src/include/catalog/pg_opfamily.dat\n>\n\nYou should probably try using CREATE OPERATOR CLASS command [1], not\nmodify the catalogs directly. That's only necessary for built-in index\ntypes (i.e. available right after initdb). 
But you mentioned you're\nworking on an extension, so the command is the right thing to do (after\nall, you don't know OIDs of objects from the extension).\n\n[1] https://www.postgresql.org/docs/current/sql-createopclass.html\n\n>\n>I've poked around quite a bit in the spgist code and have tried making\n>minimal changes to geo_spgist.c, but I haven't done any development on\n>postgres before, so i'm running into some issues that I couldn't find help\n>with on the postgres slack, by searching the mailing list, or by scouring\n>the development wikis.\n\nWell, learning the ropes may take a bit of time, and pgsql-hackers is\nprobably the right place to ask ...\n\n>For example, I wanted to just print out some data to\n>see what quadrant a box is being placed into in the geo_spgist.c code. I\n>understand that printing to stdout won't work in postgres, but I thought\n>that I could possibly write some data to the logfile. I tried updating a\n>function to use both elog and ereport and re-built the code. However, I\n>can't get anything to print out to the logfile no matter what I try. Does\n>anyone have tips for printing out and debugging in general for postgres\n>development?\n>\n\nWell, elog/ereport are the easiest approach (it's what I'd do), and they\ndo about the same thing. The main difference is that ereport allows\ntranslations of messages to other languages, while elog is for internal\nthings that should not happen (unexpected errors, ...). For debugging\njust use elog(), I guess.\n\nIt's hard to say why you're not getting anything logged, because you\nhaven't shown us any code. My guess is that you're using a log level that\nis not high enough to make it into the log file.\n\nThe default config in postgresql.conf says\n\n log_min_messages = warning\n\nwhich means the level has to be at least WARNING to make it into the\nfile. So either WARNING, ERROR, LOG, FATAL, PANIC. 
So for example\n\n elog(INFO, \"test message\");\n\nwon't do anything, but\n\n elog(LOG, \"test message\");\n\nwill write stuff to the log file. If you use WARNING, you'll actually\nget the message on the client console (well, there's client_min_messages\nbut you get the idea).\n\n>\n>Any tips or guidance would be much appreciated. Also, if there's a\n>different route I should go to turn this into a proposal for a patch\n>please let me know. I'm new to postgres dev.\n>\n\nA general recommendation is to show snippets of code, so that people on\nthis list actually can help without too much guessing what you're doing.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 7 Jan 2020 23:56:34 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTION/PROPOSAL] loose quadtree in spgist"
},
{
"msg_contents": "Thank you for the tips Tomas, I really appreciate it. You're definitely\nright that I should include code snippets, so here's the code i'm trying to\nchange.\n\nIn the getQuadrant function in the file src/backend/utils/adt/geo_spgist.c,\nI only added some elog statements to see the quadrant that a box is placed\ninto using the current code. getQuadrant is called several times by the\nspg_box_quad_picksplit function, which is used when inserting into the\nquadtree. With this change, I can still build postgres but when I try to\ntrigger the code, nothing gets printed to my logfile. Here's my process for\ntrying to trigger this code:\n\n1. delete the current postgres installation by removing /usr/local/pgsql\n2. re-build from source by following documentation\n3. create a database with a table that has two columns: (id int, b box)\n4. insert some boxes into the table and build an index on it using \"CREATE\nINDEX box_quad_idx ON quad USING spgist(b);\"\n\nAnd here's the function I modified:\n\n/*\n * Calculate the quadrant\n *\n * The quadrant is 8 bit unsigned integer with 4 least bits in use.\n * This function accepts BOXes as input. They are not casted to\n * RangeBoxes, yet. All 4 bits are set by comparing a corner of the box.\n * This makes 16 quadrants in total.\n */\nstatic uint8\ngetQuadrant(BOX *centroid, BOX *inBox)\n{\n\tuint8\t\tquadrant = 0;\n\n\t/* the BOX coordinates are float8, so %f is needed here, not %d */\n\telog(LOG, \"BOX (minx, miny) = (%f, %f)\", centroid->low.x, centroid->low.y);\n\telog(LOG, \"BOX (maxx, maxy) = (%f, %f)\", centroid->high.x, centroid->high.y);\n\n\tif (inBox->low.x > centroid->low.x)\n\t\tquadrant |= 0x8;\n\n\tif (inBox->high.x > centroid->high.x)\n\t\tquadrant |= 0x4;\n\n\tif (inBox->low.y > centroid->low.y)\n\t\tquadrant |= 0x2;\n\n\tif (inBox->high.y > centroid->high.y)\n\t\tquadrant |= 0x1;\n\n\telog(LOG, \"Quadrant bitvector value is: %d\", quadrant);\n\n\treturn quadrant;\n}\n\n\nOn Tue, Jan 7, 2020 at 5:56 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Tue, Jan 07, 2020 at 11:33:31AM -0500, Peter Griggs wrote:\n> >Hello, I wanted some guidance/suggestions about creating an spgist\n> >extension. For context, i am a grad student doing research that involves\n> >comparing the performance of different indexes for spatial data. We've\n> >built a system that uses Postgres and one of the data structures we want\n> to\n> >use is a loose quadtree, but there is no implementation of this data\n> >structure in spgist. The reason why I think this is pretty do-able is that\n> >it is quite similar to a quadtree on boxes, which is implemented in\n> >src/backend/utils/adt/geo_spgist.c.\n> >\n> >Additionally, I found by grepping through the repo for the existing\n> >functions in spgist/box_ops operator class that several catalog files need\n> >to be updated to reflect a new operator class in spgist. The files that I\n> >believe need to be changed to create a new\n> >spgist_loose_box_ops operator class are:\n> >\n> >src/include/catalog/pg_amop.dat\n> >src/include/catalog/pg_amproc.dat\n> >src/include/catalog/pg_opclass.dat\n> >src/include/catalog/pg_opfamily.dat\n> >\n>\n> You should probably try using CREATE OPERATOR CLASS command [1], not\n> modify the catalogs directly. 
That's only necessary for built-in index\n> types (i.e. available right after initdb). But you mentioned you're\n> working on an extension, so the command is the right thing to do (after\n> all, you don't know OIDs of objects from the extension).\n>\n> [1] https://www.postgresql.org/docs/current/sql-createopclass.html\n>\n> >\n> >I've poked around quite a bit in the spgist code and have tried making\n> >minimal changes to geo_spgist.c, but I haven't done any development on\n> >postgres before, so i'm running into some issues that I couldn't find help\n> >with on the postgres slack, by searching the mailing list, or by scouring\n> >the development wikis.\n>\n> Well, learning the ropes may take a bit of time, and pgsql-hackers is\n> probably the right place to ask ...\n>\n> >For example, I wanted to just print out some data to\n> >see what quadrant a box is being placed into in the geo_spgist.c code. I\n> >understand that printing to stdout won't work in postgres, but I thought\n> >that I could possibly write some data to the logfile. I tried updating a\n> >function to use both elog and ereport and re-built the code. However, I\n> >can't get anything to print out to the logfile no matter what I try. Does\n> >anyone have tips for printing out and debugging in general for postgres\n> >development?\n> >\n>\n> Well, elog/ereport are the easiest approach (it's what I'd do), and they\n> do about the same thing. The main difference is that ereport allows\n> translations of messages to other languages, while elog is for internal\n> things that should not happen (unexpected errors, ...). For debugging\n> just use elog(), I guess.\n>\n> It's hard to say why you're not getting anything logged, because you\n> haven't shown us any code. 
My guess is that you're uring log level that\n> is not high enough to make it into the log file.\n>\n> The default config in postgresql.conf says\n>\n> log_min_messages = warning\n>\n> which means the level has to be at least WARNING to make it into the\n> file. So either WARNING, ERROR, LOG, FATAL, PANIC. So for example\n>\n> elog(INFO, \"test message\");\n>\n> won't do anything, but\n>\n> elog(LOG, \"test message\");\n>\n> will write stuff to the log file. If you use WARNING, you'll actually\n> get the message on the client console (well, there's client_min_messages\n> but you get the idea).\n>\n> >\n> >Any tips or guidance would be much appreciated. Also, if there's a\n> >different route I should go to turn this into a proposal for a patch\n> >please let me know. I'm new to postgres dev.\n> >\n>\n> A general recommendation is to show snippets of code, so that people on\n> this list actually can help without too much guessing what you're doing.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nPeter Griggs\nMasters of Engineering (Meng) in Computer Science\nMassachusetts Institute of Technology | 2020",
"msg_date": "Wed, 8 Jan 2020 14:36:14 -0500",
"msg_from": "Peter Griggs <petergriggs33@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTION/PROPOSAL] loose quadtree in spgist"
},
{
"msg_contents": "Peter Griggs <petergriggs33@gmail.com> writes:\n> In the getQuadrant function in the file src/backend/utils/adt/geo_spgist.c,\n> I only added some elog statements to see the quadrant that a box is placed\n> into using the current code. getQuadrant is called several times by the\n> spg_box_quad_picksplit function, which is used when inserting into the\n> quadtree. With this change, I can still build postgres but when I try to\n> trigger the code, nothing gets printed to my logfile.\n\nPerhaps you're looking in the wrong logfile. elog(LOG) should definitely\nproduce output unless you're using very strange settings.\n\nAnother possibility is that the specific test case you're using doesn't\nactually reach this function. I'm not totally sure, but I think that\nSPGiST might not call the datatype-specific choose or picksplit functions\nuntil it's got more than one index page's worth of data.\n\n> \telog(LOG, \"BOX (minx, miny) = (%d, %d)\\n\", centroid->low.x, centroid->low.y);\n\nA couple thoughts here:\n\n* From memory, the x and y values of a BOX are float8, so don't you want\nto use %g or %f instead of %d?\n\n* You don't want to end an elog with \\n, that'll just add extra blank\nlines to the log.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 15:07:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [QUESTION/PROPOSAL] loose quadtree in spgist"
},
{
"msg_contents": "As an update, you were totally right Tom, SPGIST loads all of the tuples\nonto the root page which doesn't call picksplit until there's a page's\nworth of data. I just wasn't inserting enough tuples to see my elog values\nappear in the log, but now I am and they do!\n\nThe hint from Tomas before to use the CREATE OPERATOR CLASS command was\nspot on. That documentation lead me to this page (\nhttps://www.postgresql.org/docs/11/xindex.html), which looked like the sql\nI need to include in the extension to get the new loose quadtree index to\nbuild. What I did was create a \"loose_quadtree\" folder for my extension in\nthe /contrib folder and followed the format of another extension by\nincluding a Makefile, loose_quadtree.control file, loose_quadtree.sql, and\nmy loose_quadtree.c file, in which I just put copies of the box quadtree\nfunctions from geo_spgist.c with a bit of extra logging code. Now, I can\nbuild the extension with make and then run 'CREATE EXTENSION\nloose_quadtree;', then index using 'spgist(b loose_quadtree_ops)' operator\nclass! Now, I have to actually figure out how to change the logic within\nthe functions from geo_spgist.c.\n\nI was wondering if you had advice on how to handle implementing insertion\nfor the loose quadtree index. For some background, a loose quadtree is\nsimilar to a quadtree over boxes, except that the length of a side becomes\nk*L where k>1. Throughout this, I assume that our space is a square (take\nthe minimum bounding square of all of the boxes). Usually, a value of K=2\nis used. Since, each loose quadtree cell is 2x its normal size, a given\nlevel can hold any object that has a radius of <=1/4 of the cell side\nlength, regardless of the object's position. We can do a bit of math and\nfigure out what level an object should be inserted into the tree in O(1)\ntime. I'm including a picture of the level selection algorithm below, but\nits just a re-formulation of what i've said above. 
My overall question is\nhow to do this in spgist. From what I understand in the spgist insertion\nalgorithm, the level selection should be done in the choose() function\nbecause choose() is called when we are trying to insert a leaf tuple into an\ninner tuple that has one or more levels under it. Currently, it seems like\nthe spg_box_quad_choose() function descends recursively into a quadtree\nnode. What I would like to do is to have it jump straight to the level it\nwants to insert into using the loose quadtree level selection algorithm and\nthen find which cell it should add to by comparing its center coordinates.\n\n[Following image from: Thatcher Ulrich. Loose octrees. In Mark DeLoura,\neditor, Game Programming Gems, pages 444–453. Charles River Media, 2000]\n\n[image: Screen Shot 2020-01-14 at 6.15.09 PM.png]\n\nBest,\nPeter\n\n\n\n\n\nOn Wed, Jan 8, 2020 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Griggs <petergriggs33@gmail.com> writes:\n> > In the getQuadrant function in the file\n> src/backend/utils/adt/geo_spgist.c,\n> > I only added some elog statements to see the quadrant that a box is\n> placed\n> > into using the current code. getQuadrant is called several times by the\n> > spg_box_quad_picksplit function, which is used when inserting into the\n> > quadtree. With this change, I can still build postgres but when I try to\n> > trigger the code, nothing gets printed to my logfile.\n>\n> Perhaps you're looking in the wrong logfile. elog(LOG) should definitely\n> produce output unless you're using very strange settings.\n>\n> Another possibility is that the specific test case you're using doesn't\n> actually reach this function. 
I'm not totally sure, but I think that\n> SPGiST might not call the datatype-specific choose or picksplit functions\n> until it's got more than one index page's worth of data.\n>\n> > elog(LOG, \"BOX (minx, miny) = (%d, %d)\\n\", centroid->low.x,\n> centroid->low.y);\n>\n> A couple thoughts here:\n>\n> * From memory, the x and y values of a BOX are float8, so don't you want\n> to use %g or %f instead of %d?\n>\n> * You don't want to end an elog with \\n, that'll just add extra blank\n> lines to the log.\n>\n> regards, tom lane\n>\n\n\n-- \nPeter Griggs\nMasters of Engineering (Meng) in Computer Science\nMassachusetts Institute of Technology | 2020",
"msg_date": "Thu, 16 Jan 2020 00:32:34 -0500",
"msg_from": "Peter Griggs <petergriggs33@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTION/PROPOSAL] loose quadtree in spgist"
},
{
"msg_contents": "Hi, I was wondering if someone could help me understand what the\n\"allTheSame\" attribute in the spgChooseIn struct is.\nDoes it mean that the inner tuple contains either only inner tuples or only\nleaf nodes? Or is it saying that the tuples in an inner tuple are all in\nthe same quadrant?\n\nThis is a code snippet from /src/include/access/spgist.h:\n/*\n * Argument structs for spg_choose method\n */\ntypedef struct spgChooseIn\n{\n Datum datum; /* original datum to be indexed */\n Datum leafDatum; /* current datum to be stored at leaf */\n int level; /* current level (counting from zero) */\n\n /* Data from current inner tuple */\n bool allTheSame; /* tuple is marked all-the-same? */\n bool hasPrefix; /* tuple has a prefix? */\n Datum prefixDatum; /* if so, the prefix value */\n int nNodes; /* number of nodes in the inner tuple */\n Datum *nodeLabels; /* node label values (NULL if none) */\n} spgChooseIn;\n\nOn Thu, Jan 16, 2020 at 12:32 AM Peter Griggs <petergriggs33@gmail.com>\nwrote:\n\n> As an update, you were totally right Tom, SPGIST loads all of the tuples\n> onto the root page which doesn't call picksplit until there's a page's\n> worth of data. I just wasn't inserting enough tuples to see my elog values\n> appear in the log, but now I am and they do!\n>\n> The hint from Tomas before to use the CREATE OPERATOR CLASS command was\n> spot on. That documentation lead me to this page (\n> https://www.postgresql.org/docs/11/xindex.html), which looked like the\n> sql I need to include in the extension to get the new loose quadtree index\n> to build. What I did was create a \"loose_quadtree\" folder for my extension\n> in the /contrib folder and followed the format of another extension by\n> including a Makefile, loose_quadtree.control file, loose_quadtree.sql, and\n> my loose_quadtree.c file, in which I just put copies of the box quadtree\n> functions from geo_spgist.c with a bit of extra logging code. 
Now, I can\n> build the extension with make and then run 'CREATE EXTENSION\n> loose_quadtree;', then index using 'spgist(b loose_quadtree_ops)' operator\n> class! Now, I have to actually figure out how to change the logic within\n> the functions from geo_spgist.c.\n>\n> I was wondering if you had advice on how to handle implementing insertion\n> for the loose quadtree index. For some background, a loose quadtree is\n> similar to a quadtree over boxes, except that the length of a side becomes\n> k*L where k>1. Throughout this, I assume that our space is a square (take\n> the minimum bounding square of all of the boxes). Usually, a value of K=2\n> is used. Since, each loose quadtree cell is 2x its normal size, a given\n> level can hold any object that has a radius of <=1/4 of the cell side\n> length, regardless of the object's position. We can do a bit of math and\n> figure out what level an object should be inserted into the tree in O(1)\n> time. I'm including a picture of the level selection algorithm below, but\n> its just a re-formulation of what i've said above. My overall question is\n> how to do this in spgist. From what I understand in the spgist insertion\n> algorithm, the level selection would should done in the choose() function\n> because choose() is called when we are trying to insert a leaf tuple into a\n> inner tuple that has one or more levels under it. Currently, it seems like\n> the spg_box_quad_choose() function descends recursively into a quadtree\n> node. What I would like to do is to have it jump straight to the level it\n> wants to insert into using the loose quadtree level selection algorithm and\n> then find which cell it should add to by comparing its center coordinates.\n>\n> [Following image from: Thatcher Ulrich. Loose octrees. In Mark DeLoura,\n> editor, Game Programming Gems, pages 444–453. 
Charles River Media, 2000]\n>\n> [image: Screen Shot 2020-01-14 at 6.15.09 PM.png]\n>\n> Best,\n> Peter\n>\n>\n>\n>\n>\n> On Wed, Jan 8, 2020 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Peter Griggs <petergriggs33@gmail.com> writes:\n>> > In the getQuadrant function in the file\n>> src/backend/utils/adt/geo_spgist.c,\n>> > I only added some elog statements to see the quadrant that a box is\n>> placed\n>> > into using the current code. getQuadrant is called several times by the\n>> > spg_box_quad_picksplit function, which is used when inserting into the\n>> > quadtree. With this change, I can still build postgres but when I try to\n>> > trigger the code, nothing gets printed to my logfile.\n>>\n>> Perhaps you're looking in the wrong logfile. elog(LOG) should definitely\n>> produce output unless you're using very strange settings.\n>>\n>> Another possibility is that the specific test case you're using doesn't\n>> actually reach this function. I'm not totally sure, but I think that\n>> SPGiST might not call the datatype-specific choose or picksplit functions\n>> until it's got more than one index page's worth of data.\n>>\n>> > elog(LOG, \"BOX (minx, miny) = (%d, %d)\\n\", centroid->low.x,\n>> centroid->low.y);\n>>\n>> A couple thoughts here:\n>>\n>> * From memory, the x and y values of a BOX are float8, so don't you want\n>> to use %g or %f instead of %d?\n>>\n>> * You don't want to end an elog with \\n, that'll just add extra blank\n>> lines to the log.\n>>\n>> regards, tom lane\n>>\n>\n>\n> --\n> Peter Griggs\n> Masters of Engineering (Meng) in Computer Science\n> Massachusetts Institute of Technology | 2020\n>\n\n\n-- \nPeter Griggs\nMasters of Engineering (Meng) in Computer Science\nMassachusetts Institute of Technology | 2020",
"msg_date": "Tue, 28 Jan 2020 01:23:33 -0500",
"msg_from": "Peter Griggs <petergriggs33@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [QUESTION/PROPOSAL] loose quadtree in spgist"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed to my annoyance that 'make -j4 -C src/interfaces/libpq'\ndoesn't work in a clean checkout, because it can't find\nlibpgcommon_shlib and libpgport_shlib. It looks like that's because the\nsubmake-libpgport dependency is declared on the all-lib target, not on\nthe shlib itself. Moving it to SHLIB_PREREQS instead fixes it, patch\nfor which is attached.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl",
"msg_date": "Wed, 08 Jan 2020 13:33:13 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Fixing parallel make of libpq"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 01:33:13PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> I noticed to my annoyance that 'make -j4 -C src/interfaces/libpq'\n> doesn't work in a clean checkout, because it can't find\n> libpgcommon_shlib and libpgport_shlib. It looks like that's because the\n> submake-libpgport dependency is declared on the all-lib target, not on\n> the shlib itself. Moving it to SHLIB_PREREQS instead fixes it, patch\n> for which is attached.\n\nHmm. That logically makes sense. Isn't that a side effect of 7143b3e\nthen? Now, FWIW, I am not able to reproduce it here, after trying on\ntwo different machines, various parallel job numbers (up to 32), and a\ncouple of dozen attempts. Perhaps somebody else can see the failures?\n--\nMichael",
"msg_date": "Thu, 9 Jan 2020 15:38:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fixing parallel make of libpq"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Wed, Jan 08, 2020 at 01:33:13PM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> I noticed to my annoyance that 'make -j4 -C src/interfaces/libpq'\n>> doesn't work in a clean checkout, because it can't find\n>> libpgcommon_shlib and libpgport_shlib. It looks like that's because the\n>> submake-libpgport dependency is declared on the all-lib target, not on\n>> the shlib itself. Moving it to SHLIB_PREREQS instead fixes it, patch\n>> for which is attached.\n>\n> Hmm. That logically makes sense. Isn't that a side effect of 7143b3e\n> then? Now, FWIW, I am not able to reproduce it here, after trying on\n> two different machines, various parallel job numbers (up to 32), and a\n> couple of dozen attempts. Perhaps somebody else can see the failures?\n\nIt fails reliably for me on Debian Buster, with make 4.2.1-1.2and -j4.\nThe command to reproduce it is:\n\n git clean -xfd && ./configure && make -Otarget -j4 -C src/interfaces/libpq/\n\nAttached is the output of the `make` step with and without the patch.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. 
- Calle Dybedahl\n\n\nmake -C ../../../src/backend generated-headers\nmake -C ../../../src/port pg_config_paths.h\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define PGBINDIR \\\"/usr/local/pgsql/bin\\\"\" >pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake -C catalog distprep generated-header-symlinks\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define PGSHAREDIR \\\"/usr/local/pgsql/share\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define SYSCONFDIR \\\"/usr/local/pgsql/etc\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define INCLUDEDIR \\\"/usr/local/pgsql/include\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define PKGINCLUDEDIR \\\"/usr/local/pgsql/include\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define INCLUDEDIRSERVER \\\"/usr/local/pgsql/include/server\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define LIBDIR \\\"/usr/local/pgsql/lib\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define PKGLIBDIR \\\"/usr/local/pgsql/lib\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define LOCALEDIR 
\\\"/usr/local/pgsql/share/locale\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define DOCDIR \\\"/usr/local/pgsql/share/doc/\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define HTMLDIR \\\"/usr/local/pgsql/share/doc/\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\necho \"#define MANDIR \\\"/usr/local/pgsql/share/man\\\"\" >>pg_config_paths.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -DFRONTEND -DUNSAFE_STAT_OK -I. -I../../../src/include -D_GNU_SOURCE -I../../../src/port -I../../../src/port -DSO_MAJOR_VERSION=5 -c -o fe-auth-scram.o fe-auth-scram.c\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake -C utils distprep generated-header-symlinks\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -DFRONTEND -DUNSAFE_STAT_OK -I. 
-I../../../src/include -D_GNU_SOURCE -I../../../src/port -I../../../src/port -DSO_MAJOR_VERSION=5 -c -o fe-exec.o fe-exec.c\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -DFRONTEND -DUNSAFE_STAT_OK -I. -I../../../src/include -D_GNU_SOURCE -I../../../src/port -I../../../src/port -DSO_MAJOR_VERSION=5 -c -o fe-misc.o fe-misc.c\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake -C parser gram.h\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -DFRONTEND -DUNSAFE_STAT_OK -I. 
-I../../../src/include -D_GNU_SOURCE -I../../../src/port -I../../../src/port -DSO_MAJOR_VERSION=5 -c -o fe-lobj.o fe-lobj.c
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
make -C storage/lmgr lwlocknames.h lwlocknames.c
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/storage/lmgr'
'/home/ilmari/perl5/perlbrew/perls/30.0/bin/perl' ./generate-lwlocknames.pl ../../../../src/backend/storage/lmgr/lwlocknames.txt
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/storage/lmgr'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/storage/lmgr'
touch lwlocknames.c
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/storage/lmgr'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/utils'
'/home/ilmari/perl5/perlbrew/perls/30.0/bin/perl' ./generate-errcodes.pl ../../../src/backend/utils/errcodes.txt > errcodes.h
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/utils'
make[1]: Entering directory '/home/ilmari/src/postgresql/src/backend'
prereqdir=`cd 'storage/lmgr/' >/dev/null && pwd` && \
  cd '../../src/include/storage/' && rm -f lwlocknames.h && \
  ln -s "$prereqdir/lwlocknames.h" .
make[1]: Leaving directory '/home/ilmari/src/postgresql/src/backend'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/utils'
sed -f ./Gen_dummy_probes.sed probes.d >probes.h
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/utils'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/utils'
cd '../../../src/include/utils/' && rm -f probes.h && \
  ln -s "../../../src/backend/utils/probes.h" .
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/utils'
make: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -DFRONTEND -DUNSAFE_STAT_OK -I. -I../../../src/include -D_GNU_SOURCE -I../../../src/port -I../../../src/port -DSO_MAJOR_VERSION=5 -c -o fe-print.o fe-print.c
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
[... identical gcc invocations for fe-protocol2.c, fe-protocol3.c, fe-secure.c, legacy-pqsignal.c, libpq-events.c, pqexpbuffer.c and fe-auth.c trimmed ...]
make: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
rm -f encnames.c && ln -s ../../../src/backend/utils/mb/encnames.c .
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
make: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
rm -f wchar.c && ln -s ../../../src/backend/utils/mb/wchar.c .
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
make: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
( echo '{ global:'; gawk '/^[^#]/ {printf "%s;\n",$1}' exports.txt; echo ' local: *; };' ) >exports.list
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
[... identical gcc invocations for fe-connect.c, encnames.c and wchar.c trimmed ...]
make: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
echo 'Name: libpq' >libpq.pc
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
[... remaining echo lines appending Description, Url, Version: 13devel, Requires, Requires.private, Cflags, Libs and Libs.private to libpq.pc trimmed ...]
make: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -shared -Wl,-soname,libpq.so.5 -Wl,--version-script=exports.list -o libpq.so.5.13 fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o fe-secure.o legacy-pqsignal.o libpq-events.o pqexpbuffer.o fe-auth.o encnames.o wchar.o -L../../../src/port -L../../../src/common -lpgcommon_shlib -lpgport_shlib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags -lm
/usr/bin/ld: cannot find -lpgcommon_shlib
/usr/bin/ld: cannot find -lpgport_shlib
collect2: error: ld returned 1 exit status
make: *** [../../../src/Makefile.shlib:293: libpq.so.5.13] Error 1
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
make: *** Waiting for unfinished jobs....
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/utils'
'/home/ilmari/perl5/perlbrew/perls/30.0/bin/perl' -I ../../../src/backend/catalog Gen_fmgrtab.pl
--include-path=../../../src/include/ ../../../src/include/catalog/pg_proc.dat
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/utils'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/utils'
touch fmgr-stamp
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/utils'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/utils'
prereqdir=`cd './' >/dev/null && pwd` && \
cd '../../../src/include/utils/' && for file in fmgroids.h fmgrprotos.h errcodes.h; do \
  rm -f $file && ln -s "$prereqdir/$file" . ; \
done
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/utils'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/utils'
touch ../../../src/include/utils/header-stamp
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/utils'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/parser'
'/home/ilmari/perl5/perlbrew/perls/30.0/bin/perl' ./check_keywords.pl gram.y ../../../src/include/parser/kwlist.h
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/parser'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/catalog'
'/home/ilmari/perl5/perlbrew/perls/30.0/bin/perl' genbki.pl --include-path=../../../src/include/ \
	--set-version=13 ../../../src/include/catalog/pg_proc.h ../../../src/include/catalog/pg_type.h [... remaining catalog header arguments trimmed ...] ../../../src/include/catalog/toasting.h ../../../src/include/catalog/indexing.h
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/catalog'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/catalog'
touch bki-stamp
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/catalog'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/catalog'
prereqdir=`cd './' >/dev/null && pwd` && \
cd '../../../src/include/catalog/' && for file in pg_proc_d.h pg_type_d.h [... remaining *_d.h file names trimmed ...] schemapg.h; do \
  rm -f $file && ln -s "$prereqdir/$file" . ; \
done
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/catalog'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/catalog'
touch ../../../src/include/catalog/header-stamp
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/catalog'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/parser'
/usr/bin/bison -Wno-deprecated -d -o gram.c gram.y
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/parser'
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/parser'
touch gram.h
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/parser'
make[1]: Entering directory '/home/ilmari/src/postgresql/src/backend'
prereqdir=`cd 'parser/' >/dev/null && pwd` && \
  cd '../../src/include/parser/' && rm -f gram.h && \
  ln -s "$prereqdir/gram.h" .
make[1]: Leaving directory '/home/ilmari/src/postgresql/src/backend'

make -C ../../../src/backend generated-headers
make -C ../../../src/port pg_config_paths.h
make[1]: Entering directory '/home/ilmari/src/postgresql/src/port'
echo "#define PGBINDIR \"/usr/local/pgsql/bin\"" >pg_config_paths.h
make[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'
make -C catalog distprep generated-header-symlinks
[... echo "#define ..." >>pg_config_paths.h lines for PGSHAREDIR, SYSCONFDIR and INCLUDEDIR trimmed ...]
make[1]: Entering directory
'/home/ilmari/src/postgresql/src/port'
echo "#define PKGINCLUDEDIR \"/usr/local/pgsql/include\"" >>pg_config_paths.h
make[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'
[... remaining echo "#define ..." >>pg_config_paths.h lines for INCLUDEDIRSERVER, LIBDIR, PKGLIBDIR, LOCALEDIR, DOCDIR, HTMLDIR and MANDIR trimmed ...]
make: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -DFRONTEND -DUNSAFE_STAT_OK -I. -I../../../src/include -D_GNU_SOURCE -I../../../src/port -I../../../src/port -DSO_MAJOR_VERSION=5 -c -o fe-auth-scram.o fe-auth-scram.c
make: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'
[... second round of identical gcc invocations (fe-misc.c, fe-exec.c, fe-print.c, fe-protocol2.c, fe-protocol3.c, fe-lobj.c, legacy-pqsignal.c, fe-secure.c, libpq-events.c, pqexpbuffer.c, fe-auth.c, fe-connect.c, encnames.c, wchar.c), interleaved with 'make -C utils distprep generated-header-symlinks', 'make -C parser gram.h', generate-errcodes.pl, Gen_dummy_probes.sed, the encnames.c/wchar.c symlinks, exports.list generation and the libpq.pc echo lines, trimmed ...]
make -C storage/lmgr lwlocknames.h lwlocknames.c
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/storage/lmgr'
'/home/ilmari/perl5/perlbrew/perls/30.0/bin/perl' ./generate-lwlocknames.pl ../../../../src/backend/storage/lmgr/lwlocknames.txt
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/storage/lmgr'
[... second round of lwlocknames.h/probes.h symlinks, Gen_fmgrtab.pl, fmgr-stamp, fmgroids.h/fmgrprotos.h/errcodes.h symlinks, header-stamp, check_keywords.pl and genbki.pl output trimmed ...]
make[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/catalog'
touch bki-stamp
make[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/catalog'
make[2]: Entering directory 
'/home/ilmari/src/postgresql/src/backend/catalog'\nprereqdir=`cd './' >/dev/null && pwd` && \\\ncd '../../../src/include/catalog/' && for file in pg_proc_d.h pg_type_d.h pg_attribute_d.h pg_class_d.h pg_attrdef_d.h pg_constraint_d.h pg_inherits_d.h pg_index_d.h pg_operator_d.h pg_opfamily_d.h pg_opclass_d.h pg_am_d.h pg_amop_d.h pg_amproc_d.h pg_language_d.h pg_largeobject_metadata_d.h pg_largeobject_d.h pg_aggregate_d.h pg_statistic_ext_d.h pg_statistic_ext_data_d.h pg_statistic_d.h pg_rewrite_d.h pg_trigger_d.h pg_event_trigger_d.h pg_description_d.h pg_cast_d.h pg_enum_d.h pg_namespace_d.h pg_conversion_d.h pg_depend_d.h pg_database_d.h pg_db_role_setting_d.h pg_tablespace_d.h pg_pltemplate_d.h pg_authid_d.h pg_auth_members_d.h pg_shdepend_d.h pg_shdescription_d.h pg_ts_config_d.h pg_ts_config_map_d.h pg_ts_dict_d.h pg_ts_parser_d.h pg_ts_template_d.h pg_extension_d.h pg_foreign_data_wrapper_d.h pg_foreign_server_d.h pg_user_mapping_d.h pg_foreign_table_d.h pg_policy_d.h pg_replication_origin_d.h pg_default_acl_d.h pg_init_privs_d.h pg_seclabel_d.h pg_shseclabel_d.h pg_collation_d.h pg_partitioned_table_d.h pg_range_d.h pg_transform_d.h pg_sequence_d.h pg_publication_d.h pg_publication_rel_d.h pg_subscription_d.h pg_subscription_rel_d.h schemapg.h; do \\\n rm -f $file && ln -s \"$prereqdir/$file\" . 
; \\\ndone\nmake[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/catalog'\nmake[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/catalog'\ntouch ../../../src/include/catalog/header-stamp\nmake[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/catalog'\nmake[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/parser'\n/usr/bin/bison -Wno-deprecated -d -o gram.c gram.y\nmake[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/parser'\nmake[2]: Entering directory '/home/ilmari/src/postgresql/src/backend/parser'\ntouch gram.h\nmake[2]: Leaving directory '/home/ilmari/src/postgresql/src/backend/parser'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/backend'\nprereqdir=`cd 'parser/' >/dev/null && pwd` && \\\n cd '../../src/include/parser/' && rm -f gram.h && \\\n ln -s \"$prereqdir/gram.h\" .\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/backend'\nmake -C ../../../src/port all\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o strlcat.o strlcat.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o strlcpy.o strlcpy.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory 
'/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o getpeereid.o getpeereid.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o fls.o fls.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -msse4.2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pg_crc32c_sse42.o pg_crc32c_sse42.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pg_crc32c_sse42_choose.o pg_crc32c_sse42_choose.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory 
'/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pg_crc32c_sb8.o pg_crc32c_sb8.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o chklocale.o chklocale.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o erand48.o erand48.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o inet_net_ntop.o inet_net_ntop.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc 
-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pg_bitutils.o pg_bitutils.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o noblock.o noblock.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pg_strong_random.o pg_strong_random.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pgmkdirp.o pgmkdirp.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pgcheckdir.o pgcheckdir.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pgsleep.o pgsleep.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pgstrcasecmp.o pgstrcasecmp.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla 
-Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pqsignal.o pqsignal.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o pgstrsignal.o pgstrsignal.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o qsort.o qsort.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o quotes.o quotes.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o qsort_arg.o qsort_arg.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o snprintf.o snprintf.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o sprompt.o sprompt.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o strerror.o strerror.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard 
-Wno-format-truncation -Wno-stringop-truncation -O2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o tar.o tar.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c -o thread.o thread.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c fls.c -o fls_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c strlcat.c -o strlcat_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard 
-Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c getpeereid.c -o getpeereid_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c strlcpy.c -o strlcpy_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -msse4.2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pg_crc32c_sse42.c -o pg_crc32c_sse42_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pg_crc32c_sb8.c -o pg_crc32c_sb8_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv 
-fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pg_crc32c_sse42_choose.c -o pg_crc32c_sse42_choose_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c chklocale.c -o chklocale_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c erand48.c -o erand48_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c inet_net_ntop.c -o inet_net_ntop_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c noblock.c -o noblock_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c path.c -o path_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pg_bitutils.c -o pg_bitutils_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pgmkdirp.c -o pgmkdirp_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pg_strong_random.c -o pg_strong_random_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pgcheckdir.c -o pgcheckdir_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pgstrcasecmp.c -o pgstrcasecmp_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -c pgsleep.c -o pgsleep_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/port'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/port'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute 
[make log trimmed: for each file in src/port, make re-enters and leaves the directory once per compiler invocation, building the frontend (*_shlib.o), server (*_srv.o) and plain (*.o) objects with identical flags (fls, getpeereid, strlcat, strlcpy, pg_crc32c_sse42, pg_crc32c_sb8, pg_crc32c_sse42_choose, chklocale, erand48, inet_net_ntop, noblock, path, pg_bitutils, pg_strong_random, pgcheckdir, pgmkdirp, pgsleep, pgstrcasecmp, pgstrsignal, pqsignal, qsort, qsort_arg, quotes, snprintf, sprompt, strerror, tar, thread), then archives them into libpgport_shlib.a, libpgport.a and libpgport_srv.a; it then descends into src/common and compiles the frontend objects the same way, one Entering/Leaving directory pair per invocation (base64.o, config_info.o, controldata_utils.o, exec.o, d2s.o, f2s.o, file_perm.o, ip.o, link-canary.o, md5.o, pg_lzcompress.o, pgfnames.o, kwlookup.o, psprintf.o, rmtree.o, relpath.o, ...)]
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o saslprep.o saslprep.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o string.o string.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o stringinfo.o stringinfo.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o scram-common.o scram-common.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o username.o username.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o unicode_norm.o unicode_norm.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o wait_error.o wait_error.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o sha2.o sha2.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o file_utils.o file_utils.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o fe_memutils.o fe_memutils.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c base64.c -o base64_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o logging.o logging.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\n'/home/ilmari/perl5/perlbrew/perls/30.0/bin/perl' -I ../../src/tools ../../src/tools/gen_keywordlist.pl --extern ../../src/include/parser/kwlist.h\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c config_info.c -o config_info_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c controldata_utils.c -o controldata_utils_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o restricted_token.o restricted_token.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -Wno-declaration-after-statement -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c d2s.c -o d2s_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c exec.c -o exec_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c file_perm.c -o file_perm_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c ip.c -o ip_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -Wno-declaration-after-statement -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c f2s.c -o f2s_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c kwlookup.c -o kwlookup_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c -o keywords.o keywords.c\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c link-canary.c -o link-canary_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c md5.c -o md5_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c pg_lzcompress.c -o pg_lzcompress_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c pgfnames.c -o pgfnames_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c psprintf.c -o psprintf_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c relpath.c -o relpath_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c rmtree.c -o rmtree_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c saslprep.c -o saslprep_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c scram-common.c -o scram-common_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c string.c -o string_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c stringinfo.c -o stringinfo_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c wait_error.c -o wait_error_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c unicode_norm.c -o unicode_norm_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c username.c -o username_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c sha2.c -o sha2_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c file_utils.c -o file_utils_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c fe_memutils.c -o fe_memutils_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c restricted_token.c -o restricted_token_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c logging.c -o logging_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c config_info.c -o config_info_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c base64.c -o base64_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -Wno-declaration-after-statement -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c d2s.c -o d2s_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c controldata_utils.c -o controldata_utils_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c exec.c -o exec_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -Wno-declaration-after-statement -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c f2s.c -o f2s_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c file_perm.c -o file_perm_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c kwlookup.c -o kwlookup_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c keywords.c -o keywords_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c ip.c -o ip_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c md5.c -o md5_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c link-canary.c -o link-canary_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c pg_lzcompress.c -o pg_lzcompress_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c pgfnames.c -o pgfnames_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c relpath.c -o relpath_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c psprintf.c -o psprintf_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c rmtree.c -o rmtree_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c saslprep.c -o saslprep_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c scram-common.c -o scram-common_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c string.c -o string_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c username.c -o username_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c stringinfo.c -o stringinfo_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c unicode_norm.c -o unicode_norm_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\nrm -f libpgcommon.a\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c wait_error.c -o wait_error_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c sha2.c -o sha2_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\nar crs libpgcommon.a base64.o config_info.o controldata_utils.o d2s.o exec.o f2s.o file_perm.o ip.o keywords.o kwlookup.o link-canary.o md5.o pg_lzcompress.o pgfnames.o psprintf.o relpath.o rmtree.o saslprep.o scram-common.o string.o stringinfo.o unicode_norm.o username.o wait_error.o sha2.o fe_memutils.o file_utils.o logging.o restricted_token.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\nrm -f libpgcommon_srv.a\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -DFRONTEND -I. 
-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CONFIGURE=\"\\\"\\\"\" -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl -lm \\\"\" -c keywords.c -o keywords_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\nar crs libpgcommon_srv.a base64_srv.o config_info_srv.o controldata_utils_srv.o d2s_srv.o exec_srv.o f2s_srv.o file_perm_srv.o ip_srv.o keywords_srv.o kwlookup_srv.o link-canary_srv.o md5_srv.o pg_lzcompress_srv.o pgfnames_srv.o psprintf_srv.o relpath_srv.o rmtree_srv.o saslprep_srv.o scram-common_srv.o string_srv.o stringinfo_srv.o unicode_norm_srv.o username_srv.o wait_error_srv.o sha2_srv.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\nrm -f libpgcommon_shlib.a\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake[1]: Entering directory '/home/ilmari/src/postgresql/src/common'\nar crs libpgcommon_shlib.a base64_shlib.o config_info_shlib.o controldata_utils_shlib.o d2s_shlib.o exec_shlib.o f2s_shlib.o file_perm_shlib.o ip_shlib.o keywords_shlib.o kwlookup_shlib.o link-canary_shlib.o md5_shlib.o pg_lzcompress_shlib.o pgfnames_shlib.o psprintf_shlib.o relpath_shlib.o rmtree_shlib.o saslprep_shlib.o scram-common_shlib.o string_shlib.o stringinfo_shlib.o unicode_norm_shlib.o username_shlib.o 
wait_error_shlib.o sha2_shlib.o fe_memutils_shlib.o file_utils_shlib.o logging_shlib.o restricted_token_shlib.o\nmake[1]: Leaving directory '/home/ilmari/src/postgresql/src/common'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nrm -f libpq.a\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nar crs libpq.a fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o fe-secure.o legacy-pqsignal.o libpq-events.o pqexpbuffer.o fe-auth.o encnames.o wchar.o\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nranlib libpq.a\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\ntouch libpq.a\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -fPIC -shared -Wl,-soname,libpq.so.5 -Wl,--version-script=exports.list -o libpq.so.5.13 fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o fe-secure.o legacy-pqsignal.o libpq-events.o pqexpbuffer.o fe-auth.o encnames.o wchar.o -L../../../src/port -L../../../src/common -lpgcommon_shlib -lpgport_shlib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags -lm \nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory 
'/home/ilmari/src/postgresql/src/interfaces/libpq'\nrm -f libpq.so.5\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nln -s libpq.so.5.13 libpq.so.5\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nrm -f libpq.so\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nmake: Entering directory '/home/ilmari/src/postgresql/src/interfaces/libpq'\nln -s libpq.so.5.13 libpq.so\nmake: Leaving directory '/home/ilmari/src/postgresql/src/interfaces/libpq'",
"msg_date": "Thu, 09 Jan 2020 11:03:23 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Re: Fixing parallel make of libpq"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Hmm. That logically makes sense. Isn't that a side effect of 7143b3e\n>> then? Now, FWIW, I am not able to reproduce it here, after trying on\n>> two different machines, various parallel job numbers (up to 32), and a\n>> couple of dozen attempts. Perhaps somebody else can see the failures?\n\n> It fails reliably for me on Debian Buster, with make 4.2.1-1.2and -j4.\n\nYeah, it's also reliable for me on Fedora 30:\n\n$ make -s clean\n$ make -s -j4 -C src/interfaces/libpq\n/usr/bin/ld: cannot find -lpgcommon_shlib\n/usr/bin/ld: cannot find -lpgport_shlib\ncollect2: error: ld returned 1 exit status\nmake: *** [../../../src/Makefile.shlib:293: libpq.so.5.13] Error 1\nmake: *** Waiting for unfinished jobs....\n\nOn a RHEL6 box, the same test only draws a complaint about\n-lpgcommon_shlib, so it does seem like there's some make version\ndependency in here. And of course the whole thing is a race condition\nanyway, so naturally it's going to be pretty context-sensitive.\n\nMy thoughts about the patch:\n\n1) Changing from an \"|\"-style dependency to a plain dependency seems\nlike a semantics change. I've never been totally clear on the\ndifference though. I think Peter introduced our use of the \"|\" style,\nso maybe he can comment.\n\n2) The same coding pattern is used in a bunch of other places, so if\nthis spot is broken, there probably are a lot of others that need a\nsimilar change. On the other hand, there may not be that many\ndirectories that are likely places to start a parallel build from,\nso maybe we don't care elsewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 09:17:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing parallel make of libpq"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> My thoughts about the patch:\n>\n> 1) Changing from an \"|\"-style dependency to a plain dependency seems\n> like a semantics change. I've never been totally clear on the\n> difference though. I think Peter introduced our use of the \"|\" style,\n> so maybe he can comment.\n\nMakefile.shlib puts $(SHLIB_PREREQS) after the \"|\":\n\n$ grep SHLIB_PREREQS src/Makefile.shlib\n# SHLIB_PREREQS Order-only prerequisites for library build target\n$(stlib): $(OBJS) | $(SHLIB_PREREQS)\n$(shlib): $(OBJS) | $(SHLIB_PREREQS)\n$(shlib): $(OBJS) | $(SHLIB_PREREQS)\n$(shlib): $(OBJS) | $(SHLIB_PREREQS)\n$(shlib): $(OBJS) | $(SHLIB_PREREQS)\n$(shlib): $(OBJS) $(DLL_DEFFILE) | $(SHLIB_PREREQS)\n\n\n> 2) The same coding pattern is used in a bunch of other places, so if\n> this spot is broken, there probably are a lot of others that need a\n> similar change. On the other hand, there may not be that many\n> directories that are likely places to start a parallel build from,\n> so maybe we don't care elsewhere.\n\nGrepping the Makefiles for ':.*submake-' shows that they are on the\nactual build artifact target, libpq was just the outlier having it on\nthe phony \"all\" target. For example pg_basebackup:\n\npg_basebackup: pg_basebackup.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils\n $(CC) $(CFLAGS) pg_basebackup.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\npg_receivewal: pg_receivewal.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils\n $(CC) $(CFLAGS) pg_receivewal.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\npg_recvlogical: pg_recvlogical.o $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils\n $(CC) $(CFLAGS) pg_recvlogical.o $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n",
"msg_date": "Thu, 09 Jan 2020 14:35:19 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Re: Fixing parallel make of libpq"
},
{
"msg_contents": "On 2020-01-09 15:17, Tom Lane wrote:\n> 1) Changing from an \"|\"-style dependency to a plain dependency seems\n> like a semantics change. I've never been totally clear on the\n> difference though. I think Peter introduced our use of the \"|\" style,\n> so maybe he can comment.\n\nIf you have a phony target as a prerequisite of a real-file target, you \nshould make that an order-only (\"|\") prerequisite. Otherwise the \nreal-file target rules will *always* be run, on account of the phony \ntarget prerequisite.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 Jan 2020 13:29:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing parallel make of libpq"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-01-09 15:17, Tom Lane wrote:\n>> 1) Changing from an \"|\"-style dependency to a plain dependency seems\n>> like a semantics change. I've never been totally clear on the\n>> difference though. I think Peter introduced our use of the \"|\" style,\n>> so maybe he can comment.\n\n> If you have a phony target as a prerequisite of a real-file target, you \n> should make that an order-only (\"|\") prerequisite. Otherwise the \n> real-file target rules will *always* be run, on account of the phony \n> target prerequisite.\n\nOK, got that. But that doesn't directly answer the question of whether\nit's wrong to use a phony target as an order-only prerequisite of\nanother phony target. Grepping around for other possible issues,\nI see that you recently added\n\nupdate-unicode: | submake-generated-headers submake-libpgport\n\t$(MAKE) -C src/common/unicode $@\n\t$(MAKE) -C contrib/unaccent $@\n\nDoesn't that also have parallel-make hazards, if libpq does?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 10:08:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing parallel make of libpq"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> OK, got that. But that doesn't directly answer the question of whether\n> it's wrong to use a phony target as an order-only prerequisite of\n> another phony target. Grepping around for other possible issues,\n> I see that you recently added\n>\n> update-unicode: | submake-generated-headers submake-libpgport\n> \t$(MAKE) -C src/common/unicode $@\n> \t$(MAKE) -C contrib/unaccent $@\n>\n> Doesn't that also have parallel-make hazards, if libpq does?\n\nThe part of 'update-unicode' that needs the generated headers and\nlibpgport is src/common/unicode/norm_test, which is depended by on by\nthe normalization-check target in the same directory. Running 'make -C\nsrc/common/unicode normalization-check' in a freshly-configured tree\ndoes indeed fail, independent of parallelism or the update-unicode\ntarget.\n\nAdding the deps to the norm_test target fixes 'make -C\nsrc/common/unicode normalization-check', but 'make -C src/common/unicode\nupdate-unicode' still fails, because submake-generated-headers only does\nits thing in the top-level make invocation, so that needs an explicit\ndep as well.\n\nPlease find a patch attached. However, I don't think it's fair to block\nfixing the actual libpq parallel-make bug that has bitten me several\ntimes on auditing the entire source tree for vaguely related issues that\nnobody has complained about yet.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen",
"msg_date": "Mon, 24 Feb 2020 00:31:59 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Re: Fixing parallel make of libpq"
}
] |
[
{
"msg_contents": "I spent some time studying the question of how we classify queries as\neither read-only or not, and our various definitions of read-only, and\nfound some bugs. Specifically:\n\n1. check_xact_readonly() prohibits most kinds of DDL in read-only\ntransactions, but a bunch of recently-added commands were not added to\nthe list. The missing node types are T_CreatePolicyStmt,\nT_AlterPolicyStmt, T_CreateAmStmt, T_CreateStatsStmt,\nT_AlterStatsStmt, and T_AlterCollationStmt, which means you can run\nthese commands in a read-only transaction with no problem and even\nattempt to run them on a standby. The ones I tested on a standby all\nfail with random-ish error messages due to lower-level checks, but\nthat's not a great situation.\n\n2. There are comments in utility.c which assert that certain commands\nare \"forbidden in parallel mode due to CommandIsReadOnly.\" That's\ntechnically correct, but silly and misleading.These commands wouldn't\nbe running in parallel mode unless they were running inside of a\nfunction or procedure or something, and, if they are,\nCommandIsReadOnly() checks in spi.c or functions.c would prevent not\nonly these commands but, in fact, all utility commands, so calling out\nthose particular ones is just adding confusion. Also, the underlying\nrestriction is unnecessary, because there's no good reason to prevent\nthe use of things like SHOW and DO in parallel mode, yet we currently\ndo.\n\nThe problems mentioned under (1) are technically the fault of the\npeople who wrote, reviewed, and committed the patches which added\nthose command types, and the problems mentioned under (2) are\nbasically my fault, dating back to the original ParallelContext patch.\nHowever, I think that all of them can be tracked back to a more\nfundamental underlying cause, which is that the way that the various\nrestrictions on read-write queries are implemented is pretty\nconfusing. Attached is a patch I wrote to try to improve things. 
It\ncentralizes three decisions that are currently made in different\nplaces in a single place: (a) can this be run in a read only\ntransaction? (b) can it run in parallel mode? (c) can it run on a\nstandby? -- and along the way, it fixes the problems mentioned above\nand tries to supply slightly improved comments. Perhaps we should\nback-patch fixes at least for (1) even if this gets committed, but I\nguess my first question is what people think of this approach to the\nproblem.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 8 Jan 2020 14:09:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "our checks for read-only queries are not great"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I spent some time studying the question of how we classify queries as\n> either read-only or not, and our various definitions of read-only, and\n> found some bugs. ...\n> However, I think that all of them can be tracked back to a more\n> fundamental underlying cause, which is that the way that the various\n> restrictions on read-write queries are implemented is pretty\n> confusing. Attached is a patch I wrote to try to improve things.\n\nHmm. I like the idea of deciding this in one place and insisting that\nthat one place have a switch case for every statement type. That'll\naddress the root issue that people fail to think about this when adding\nnew statements.\n\nI'm less enamored of some of the specific decisions here. Notably\n\n* I find COMMAND_IS_WEAKLY_READ_ONLY to be a more confusing concept\nthan what it replaces. The test for LockStmt is an example --- the\ncomment talks about restricting locks during recovery, which is fine and\nunderstandable, but then it's completely unobvious that the actual code\nimplements that behavior rather than some other one.\n\n* ALTER SYSTEM SET is readonly? Say what? Even if that's how the current\ncode behaves, it's a damfool idea and we should change it. I think that\nthe semantics we are really trying to implement for read-only is \"has no\neffects visible outside the current session\" --- this explains, for\nexample, why copying into a temp table is OK. ALTER SYSTEM SET certainly\nisn't that.\n\nI haven't read all of the code; those were just a couple points that\njumped out at me.\n\nI think if we can sort out the notation for how the restrictions\nare expressed, this'll be a good improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 14:57:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Wed, Jan 8, 2020 at 2:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm. I like the idea of deciding this in one place and insisting that\n> that one place have a switch case for every statement type. That'll\n> address the root issue that people fail to think about this when adding\n> new statements.\n\nRight. Assuming they test their new command at least one time, they\nshould notice. :-)\n\n> I'm less enamored of some of the specific decisions here. Notably\n>\n> * I find COMMAND_IS_WEAKLY_READ_ONLY to be a more confusing concept\n> than what it replaces. The test for LockStmt is an example --- the\n> comment talks about restricting locks during recovery, which is fine and\n> understandable, but then it's completely unobvious that the actual code\n> implements that behavior rather than some other one.\n\nUh, suggestions?\n\n> * ALTER SYSTEM SET is readonly? Say what? Even if that's how the current\n> code behaves, it's a damfool idea and we should change it. I think that\n> the semantics we are really trying to implement for read-only is \"has no\n> effects visible outside the current session\" --- this explains, for\n> example, why copying into a temp table is OK. ALTER SYSTEM SET certainly\n> isn't that.\n\nIt would be extremely lame and a huge usability regression to\narbitrary restrict ALTER SYSTEM SET on standby nodes for no reason.\nEditing the postgresql.auto.conf file works just fine there, and is a\ntotally sensible thing to want to do. You could argue for restricting\nit in parallel mode just out of general paranoia, but I don't favor\nthat approach.\n\n> I think if we can sort out the notation for how the restrictions\n> are expressed, this'll be a good improvement.\n\nThanks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:11:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jan 8, 2020 at 2:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * I find COMMAND_IS_WEAKLY_READ_ONLY to be a more confusing concept\n>> than what it replaces. The test for LockStmt is an example --- the\n>> comment talks about restricting locks during recovery, which is fine and\n>> understandable, but then it's completely unobvious that the actual code\n>> implements that behavior rather than some other one.\n\n> Uh, suggestions?\n\nCOMMAND_NOT_IN_RECOVERY, maybe?\n\n>> * ALTER SYSTEM SET is readonly? Say what?\n\n> It would be extremely lame and a huge usability regression to\n> arbitrary restrict ALTER SYSTEM SET on standby nodes for no reason.\n\nI didn't say that it shouldn't be allowed on standby nodes. I said\nit shouldn't be allowed in transactions that have explicitly declared\nthemselves to be read-only. Maybe we need to disaggregate those\nconcepts a bit more --- a refactoring such as this would be a fine\ntime to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 15:26:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Wed, Jan 8, 2020 at 3:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Jan 8, 2020 at 2:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> * I find COMMAND_IS_WEAKLY_READ_ONLY to be a more confusing concept\n> >> than what it replaces. The test for LockStmt is an example --- the\n> >> comment talks about restricting locks during recovery, which is fine and\n> >> understandable, but then it's completely unobvious that the actual code\n> >> implements that behavior rather than some other one.\n>\n> > Uh, suggestions?\n>\n> COMMAND_NOT_IN_RECOVERY, maybe?\n\nWell, maybe I should just get rid of COMMAND_IS_WEAKLY_READ_ONLY and\nreturn individual COMMAND_OK_IN_* flags for those cases.\n\n> >> * ALTER SYSTEM SET is readonly? Say what?\n>\n> > It would be extremely lame and a huge usability regression to\n> > arbitrary restrict ALTER SYSTEM SET on standby nodes for no reason.\n>\n> I didn't say that it shouldn't be allowed on standby nodes.\n\nOh, OK. I guess I misunderstood.\n\n> I said\n> it shouldn't be allowed in transactions that have explicitly declared\n> themselves to be read-only. Maybe we need to disaggregate those\n> concepts a bit more --- a refactoring such as this would be a fine\n> time to do that.\n\nYeah, the current rules are pretty weird. Aside from ALTER SYSTEM ..\nSET, read-only transaction seem to allow writes to temporary relations\nand sequences, plus CLUSTER, REINDEX, VACUUM, PREPARE, ROLLBACK\nPREPARED, and COMMIT PREPARED, all of which sound a lot like writes to\nme. They also allow LISTEN and SET which are have transactional\nbehavior in general but for some reason don't feel they need to\nrespect the R/O property. I worry that if we start whacking these\nbehaviors around we'll get complaints, so I'm cautious about doing\nthat. 
At the least, we would need to have a real clear definition, and\nif there is such a definition that covers the current cases, I can't\nguess what it is. Forget ALTER SYSTEM for a minute -- why is it OK to\nrewrite a table via CLUSTER in a R/O transaction, but not OK to do an\nALTER TABLE that changes the clustering index? Why is it not OK to\nLISTEN on a standby (and presumably not get any notifications until a\npromotion occurs) but OK to UNLISTEN? Whatever reasons may have\njustified the current choice of behaviors are probably lost in the\nsands of time; they are for sure unknown to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:37:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n(I'm also overall in favor of the direction this is going, so general +1\nfrom me, and I took a quick look through the patch and didn't\nparticularly see anything I didn't like besides what's commented on\nbelow.)\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Jan 8, 2020 at 3:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > On Wed, Jan 8, 2020 at 2:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> * I find COMMAND_IS_WEAKLY_READ_ONLY to be a more confusing concept\n> > >> than what it replaces. The test for LockStmt is an example --- the\n> > >> comment talks about restricting locks during recovery, which is fine and\n> > >> understandable, but then it's completely unobvious that the actual code\n> > >> implements that behavior rather than some other one.\n> >\n> > > Uh, suggestions?\n> >\n> > COMMAND_NOT_IN_RECOVERY, maybe?\n> \n> Well, maybe I should just get rid of COMMAND_IS_WEAKLY_READ_ONLY and\n> return individual COMMAND_OK_IN_* flags for those cases.\n\nYeah, I don't like the WEAKLY_READ_ONLY idea either- explicitly having\nCOMMAND_OK_IN_X seems cleaner.\n\n> > >> * ALTER SYSTEM SET is readonly? Say what?\n> >\n> > > It would be extremely lame and a huge usability regression to\n> > > arbitrary restrict ALTER SYSTEM SET on standby nodes for no reason.\n> >\n> > I didn't say that it shouldn't be allowed on standby nodes.\n> \n> Oh, OK. I guess I misunderstood.\n\nI agree that we want ALTER SYSTEM SET to work on standbys, but it seems\nthere isn't really disagreement there.\n\n> > I said\n> > it shouldn't be allowed in transactions that have explicitly declared\n> > themselves to be read-only. Maybe we need to disaggregate those\n> > concepts a bit more --- a refactoring such as this would be a fine\n> > time to do that.\n> \n> Yeah, the current rules are pretty weird. 
Aside from ALTER SYSTEM ..\n> SET, read-only transaction seem to allow writes to temporary relations\n> and sequences, plus CLUSTER, REINDEX, VACUUM, PREPARE, ROLLBACK\n> PREPARED, and COMMIT PREPARED, all of which sound a lot like writes to\n> me. They also allow LISTEN and SET which are have transactional\n> behavior in general but for some reason don't feel they need to\n> respect the R/O property. I worry that if we start whacking these\n> behaviors around we'll get complaints, so I'm cautious about doing\n> that. At the least, we would need to have a real clear definition, and\n> if there is such a definition that covers the current cases, I can't\n> guess what it is. Forget ALTER SYSTEM for a minute -- why is it OK to\n> rewrite a table via CLUSTER in a R/O transaction, but not OK to do an\n> ALTER TABLE that changes the clustering index? Why is it not OK to\n> LISTEN on a standby (and presumably not get any notifications until a\n> promotion occurs) but OK to UNLISTEN? Whatever reasons may have\n> justified the current choice of behaviors are probably lost in the\n> sands of time; they are for sure unknown to me.\n\nThat a 'read-only' transaction can call CLUSTER is definitely bizarre to\nme. As relates to 'UN-SOMETHING', having those be allowed makes sense,\nto me anyway, since connection poolers like to do those things and it\nshould be a no-op more-or-less by definition. SET isn't changing data\nblocks, so that also seems ok for a read-only transaction.. but, yeah,\nthere's no real great hard-and-fast-rule we've been following.\n\nWould we be able to make a rule of \"can't change on-disk stuff, except\nfor things in temporary schemas\" and have it stick without a lot of\ncomplaints? Seems like that would address Tom's ALTER SYSTEM SET\nconcern, and would mean CLUSTER/REINDEX/VACUUM are disallowed in a\nbackwards-incompatible way (though I think I'm fine with that..), and\nSET would still be allowed (which strikes me as correct too). 
I'm not\nquite sure how I feel about LISTEN, but that it could possibly actually\nbe used post-promotion and doesn't change on-disk stuff makes me feel\nlike it actually probably should be allowed.\n\nJust looking at what was mentioned here- if there's other cases where\nthis idea falls flat then let's discuss them.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 8 Jan 2020 17:57:53 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Wed, Jan 8, 2020 at 5:57 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Yeah, I don't like the WEAKLY_READ_ONLY idea either- explicitly having\n> COMMAND_OK_IN_X seems cleaner.\n\nDone that way. v2 attached.\n\n> Would we be able to make a rule of \"can't change on-disk stuff, except\n> for things in temporary schemas\" and have it stick without a lot of\n> complaints? Seems like that would address Tom's ALTER SYSTEM SET\n> concern, and would mean CLUSTER/REINDEX/VACUUM are disallowed in a\n> backwards-incompatible way (though I think I'm fine with that..), and\n> SET would still be allowed (which strikes me as correct too). I'm not\n> quite sure how I feel about LISTEN, but that it could possibly actually\n> be used post-promotion and doesn't change on-disk stuff makes me feel\n> like it actually probably should be allowed.\n\nI think we can make any rule we like, but I think we should have some\nmeasure of broad agreement on it. I'd like to go ahead with this for\nnow and then further changes can continue to be discussed and debated.\nHopefully we'll get a few more people to weigh in, too, because\ndeciding something like this based on what a handful of people doesn't\nseem like a good idea to me.\n\nI'd be really interested to hear if anyone knows the history behind\nallowing CLUSTER, REINDEX, VACUUM, and some operations on temp tables.\nIt seems to have been that way for a long time. I wonder if it was a\ndeliberate choice or something that just happened semi-accidentally.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 9 Jan 2020 13:57:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'd be really interested to hear if anyone knows the history behind\n> allowing CLUSTER, REINDEX, VACUUM, and some operations on temp tables.\n> It seems to have been that way for a long time. I wonder if it was a\n> deliberate choice or something that just happened semi-accidentally.\n\nWithin a \"read-only\" xact you mean? I believe that allowing DML writes\nwas intentional. As for the utility commands, I suspect that it was in\npart accidental (error of omission?), and then if anyone thought hard\nabout it they decided that allowing DML writes to temp tables justifies\nthose operations too.\n\nHave you tried excavating in our git history to see when the relevant\npermission tests originated?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 14:24:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 2:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I'd be really interested to hear if anyone knows the history behind\n> > allowing CLUSTER, REINDEX, VACUUM, and some operations on temp tables.\n> > It seems to have been that way for a long time. I wonder if it was a\n> > deliberate choice or something that just happened semi-accidentally.\n>\n> Within a \"read-only\" xact you mean? I believe that allowing DML writes\n> was intentional. As for the utility commands, I suspect that it was in\n> part accidental (error of omission?), and then if anyone thought hard\n> about it they decided that allowing DML writes to temp tables justifies\n> those operations too.\n>\n> Have you tried excavating in our git history to see when the relevant\n> permission tests originated?\n\ncheck_xact_readonly() with a long list of command tags originated in\nthe same commit that added read-only transactions. CLUSTER, REINDEX,\nand VACUUM weren't included in the list of prohibited operations then,\neither, but it's unclear whether that was a deliberate omission or an\noversight. That commit also thought that COPY FROM - and queries -\nshould allow temp tables. But there's nothing in the commit that seems\nto explain why, unless the commit message itself is a hint:\n\ncommit b65cd562402ed9d3206d501cc74dc38bc421b2ce\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: Fri Jan 10 22:03:30 2003 +0000\n\n Read-only transactions, as defined in SQL.\n\nMaybe the SQL standard has something to say about this?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 14:55:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Maybe the SQL standard has something to say about this?\n\n[ pokes around ... ] Yeah, it does, and I'd say it's pretty clearly\nin agreement with what Peter did, so far as DML ops go. For instance,\nthis bit from SQL99's description of DELETE:\n\n 1) If the access mode of the current SQL-transaction or the access\n mode of the branch of the current SQL-transaction at the current\n SQL-connection is read-only, and T is not a temporary table,\n then an exception condition is raised: invalid transaction state\n - read-only SQL-transaction.\n\nUPDATE and INSERT say the same. (I didn't look at later spec versions,\nsince Peter's 2003 commit was probably based on SQL99.)\n\nYou could argue about exactly how to extend that to non-spec\nutility commands, but for the most part allowing them seems\nto make sense if DML is allowed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 15:07:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Maybe the SQL standard has something to say about this?\n>\n> [ pokes around ... ] Yeah, it does, and I'd say it's pretty clearly\n> in agreement with what Peter did, so far as DML ops go. For instance,\n> this bit from SQL99's description of DELETE:\n>\n> 1) If the access mode of the current SQL-transaction or the access\n> mode of the branch of the current SQL-transaction at the current\n> SQL-connection is read-only, and T is not a temporary table,\n> then an exception condition is raised: invalid transaction state\n> - read-only SQL-transaction.\n>\n> UPDATE and INSERT say the same. (I didn't look at later spec versions,\n> since Peter's 2003 commit was probably based on SQL99.)\n\nOK. That's good to know.\n\n> You could argue about exactly how to extend that to non-spec\n> utility commands, but for the most part allowing them seems\n> to make sense if DML is allowed.\n\nBut I think we allow them on all tables, not just temp tables, so I\ndon't think I understand this argument.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 15:37:23 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 3:37 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > You could argue about exactly how to extend that to non-spec\n> > utility commands, but for the most part allowing them seems\n> > to make sense if DML is allowed.\n>\n> But I think we allow them on all tables, not just temp tables, so I\n> don't think I understand this argument.\n\nOh, wait: I'm conflating two things. The current behavior extends the\nspec behavior to COPY in a logical way.\n\nBut it also allows CLUSTER, REINDEX, and VACUUM on any table. The spec\npresumably has no view on that, nor does the passage you quoted seem\nto apply here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 15:38:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jan 9, 2020 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> You could argue about exactly how to extend that to non-spec\n>> utility commands, but for the most part allowing them seems\n>> to make sense if DML is allowed.\n\n> But I think we allow them on all tables, not just temp tables, so I\n> don't think I understand this argument.\n\nOh, I misunderstood your concern.\n\nPeter might remember more clearly, but I have a feeling that we\nconcluded that the intent of the spec was for read-only-ness to\ndisallow globally-visible changes in the visible database contents.\nVACUUM, for example, does not cause any visible change, so it\nshould be admissible. REINDEX ditto. (We ignore here the possibility\nof such things causing, say, a change in the order in which rows are\nreturned, since that's beneath the spec's notice to begin with.)\nANALYZE ditto, except to the extent that if you look at pg_stats\nyou might see something different --- but again, system catalog\ncontents are outside the spec's purview.\n\nYou could extend this line of argument, perhaps, far enough to justify\nALTER SYSTEM SET as well. But I don't like that because some GUCs have\nvisible effects on the results that an ordinary query minding its own\nbusiness can get. Timezone is perhaps the poster child there, or\nsearch_path. If we were to subdivide the GUCs into \"affects\nimplementation details only\" vs \"can affect query semantics\",\nI'd hold still for allowing ALTER SYSTEM SET on the former group.\nDoubt it's worth the trouble to distinguish, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 15:52:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On 2020-01-09 21:52, Tom Lane wrote:\n> Peter might remember more clearly, but I have a feeling that we\n> concluded that the intent of the spec was for read-only-ness to\n> disallow globally-visible changes in the visible database contents.\n\nI don't really remember, but that was basically the opinion I had \narrived at as I was reading through this current thread. Roughly \nspeaking, anything that changes the database state (data or schema) in a \nway that would be reflected in a pg_dump output is not read-only.\n\n> VACUUM, for example, does not cause any visible change, so it\n> should be admissible. REINDEX ditto. (We ignore here the possibility\n> of such things causing, say, a change in the order in which rows are\n> returned, since that's beneath the spec's notice to begin with.)\n\nagreed\n\n> ANALYZE ditto, except to the extent that if you look at pg_stats\n> you might see something different --- but again, system catalog\n> contents are outside the spec's purview.\n\nagreed\n\n> You could extend this line of argument, perhaps, far enough to justify\n> ALTER SYSTEM SET as well. But I don't like that because some GUCs have\n> visible effects on the results that an ordinary query minding its own\n> business can get. Timezone is perhaps the poster child there, or\n> search_path. If we were to subdivide the GUCs into \"affects\n> implementation details only\" vs \"can affect query semantics\",\n> I'd hold still for allowing ALTER SYSTEM SET on the former group.\n> Doubt it's worth the trouble to distinguish, though.\n\nALTER SYSTEM is read only in my mind.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 Jan 2020 13:23:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 7:23 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I don't really remember, but that was basically the opinion I had\n> arrived at as I was reading through this current thread. Roughly\n> speaking, anything that changes the database state (data or schema) in a\n> way that would be reflected in a pg_dump output is not read-only.\n\nThis rule very nearly matches the current behavior: it explains why\ntemp table operations are allowed, and why ALTER SYSTEM is allowed,\nand why REINDEX etc. are allowed. However, there's a notable\nexception: PREPARE, COMMIT PREPARED, and ROLLBACK PREPARED are allowed\nin a read-only transaction. Under the \"doesn't change pg_dump output\"\ncriteria, the first and third ones should be permitted but COMMIT\nPREPARED should be denied, except maybe if the prepared transaction\ndidn't do any writes (and in that case, why did we bother preparing\nit?). Despite that, this rule does a way better job explaining the\ncurrent behavior than anything else suggested so far.\n\nAccordingly, here's v3, with comments adjusted to match this new\nexplanation for the current behavior. This seems way better than what\nI had before, because it actually explains why stuff is the way it is\nrather than just appealing to history.\n\nBTW, there's a pending patch that allows CLUSTER to change the\ntablespace of an object while rewriting it. If we want to be strict\nabout it, that variant would need to be disallowed in read only mode,\nunder this definition. (I also think that it's lousy syntax and ought\nto be spelled ALTER TABLE x SET TABLESPACE foo, CLUSTER or something\nlike that rather than anything beginning with CLUSTER, but I seem to\nbe losing that argument.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 10 Jan 2020 08:41:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I don't really remember, but that was basically the opinion I had \n> arrived at as I was reading through this current thread. Roughly \n> speaking, anything that changes the database state (data or schema) in a \n> way that would be reflected in a pg_dump output is not read-only.\n\nOK, although I'd put some emphasis on \"roughly speaking\".\n\n> ALTER SYSTEM is read only in my mind.\n\nI'm still having trouble with this conclusion. I think it can only\nbe justified by a very narrow reading of \"reflected in pg_dump\"\nthat relies on the specific factorization we've chosen for upgrade\noperations, ie that postgresql.conf mods have to be carried across\nby hand. But that's mostly historical baggage, rather than a sane\nbasis for defining \"read only\". If somebody comes up with a patch\nthat causes \"pg_dumpall -g\" to include ALTER SYSTEM SET commands for\nwhatever is in postgresql.auto.conf (not an unreasonable idea BTW),\nwill you then decide that ALTER SYSTEM SET is no longer read-only?\nOr, perhaps, reject such a patch on the grounds that it breaks this\narbitrary definition of read-only-ness?\n\nAs another example, do we need to consider that replacing pg_hba.conf\nvia pg_write_file should be allowed in a \"read only\" transaction?\n\nThese conclusions seem obviously silly to me, but perhaps YMMV.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 09:29:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 9:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If somebody comes up with a patch\n> that causes \"pg_dumpall -g\" to include ALTER SYSTEM SET commands for\n> whatever is in postgresql.auto.conf (not an unreasonable idea BTW),\n> will you then decide that ALTER SYSTEM SET is no longer read-only?\n> Or, perhaps, reject such a patch on the grounds that it breaks this\n> arbitrary definition of read-only-ness?\n\nI would vote to reject such a patch as a confused muddle. I mean,\ngenerally, the expectation right now is that if you move your data\nfrom the current cluster to a new one by pg_dump, pg_upgrade, or even\nby promoting a standby, you're responsible for making sure that\npostgresql.conf and postgresql.auto.conf get copied over separately.\nIn the last case, the backup that created the standby will have copied\nthe postgresql.conf from the master as it existed at that time, but\npropagating any subsequent changes is up to you. Now, if we now decide\nto shove ALTER SYSTEM SET commands into pg_dumpall output, then\nsuddenly you're changing that rule, and it's not very clear what the\nnew rule is.\n\nNow, our current approach is fairly arguable. Given that GUCs on\ndatabases, users, functions, etc. are stored in the catalogs and\nsubject to backup, restore, replication, etc., one might well expect\nthat global settings would be handled the same way. I tend to think\nthat would be nicer, though it would require solving the problem of\nhow to back out bad changes that make the database not start up.\nRegardless of what you or anybody thinks about that, though, it's not\nhow it works today and would require some serious engineering if we\nwanted to make it happen.\n\n> As another example, do we need to consider that replacing pg_hba.conf\n> via pg_write_file should be allowed in a \"read only\" transaction?\n\nI don't really see what the problem with that is. 
It bothers me a lot\nmore that CLUSTER can be run in a read-only transaction -- which\nactually changes stuff inside the database, even if not necessarily in\na user-visible way -- than it does that somebody might be able to use\nthe database to change something that isn't really part of the\ndatabase anyway. And pg_hba.conf, like postgresql.conf, is largely\ntreated as an input to the database rather than part of it.\n\nSomebody could create a user-defined function that launches a\nsatellite into orbit and that is, I would argue, a write operation in\nthe truest sense. You have changed the state of the world in a lasting\nway, and you cannot take it back. But, it's not writing to the\ndatabase, so as far as read-only transactions are concerned, who\ncares?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 10 Jan 2020 14:23:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jan 10, 2020 at 9:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > If somebody comes up with a patch\n> > that causes \"pg_dumpall -g\" to include ALTER SYSTEM SET commands for\n> > whatever is in postgresql.auto.conf (not an unreasonable idea BTW),\n> > will you then decide that ALTER SYSTEM SET is no longer read-only?\n> > Or, perhaps, reject such a patch on the grounds that it breaks this\n> > arbitrary definition of read-only-ness?\n> \n> I would vote to reject such a patch as a confused muddle. I mean,\n> generally, the expectation right now is that if you move your data\n> from the current cluster to a new one by pg_dump, pg_upgrade, or even\n> by promoting a standby, you're responsible for making sure that\n> postgresql.conf and postgresql.auto.conf get copied over separately.\n\nI really don't like that the ALTER SYSTEM SET/postgresql.auto.conf stuff\nhas to be handled in some way external to logical export/import, or\nexternal to pg_upgrade (particularly in link mode), so I generally see\nwhere Tom's coming from with that suggestion.\n\nIn general, I don't think people should be expected to hand-muck around\nwith anything in the data directory.\n\n> In the last case, the backup that created the standby will have copied\n> the postgresql.conf from the master as it existed at that time, but\n> propagating any subsequent changes is up to you. Now, if we now decide\n> to shove ALTER SYSTEM SET commands into pg_dumpall output, then\n> suddenly you're changing that rule, and it's not very clear what the\n> new rule is.\n\nI'd like a rule of \"users don't muck with the data directory\", and we\nare nearly there when you include sensible packaging such as what Debian\nprovides- by moving postrgesql.conf, log files, etc, outside of the data\ndirectory. 
For things that can't be moved out of the data directory\nthough, like postgresql.auto.conf, we should be handling those\ntransparently to the user.\n\nI agree that there are some interesting cases to consider here though-\nlike doing a pg_dumpall against a standby resulting in something\ndifferent than if you did it against the primary because the\npostgresql.auto.conf is different between them (something that I'm\ngenerally supportive of having, and it seems everyone else is too). I\nthink having an option to control if the postgresql.auto.conf settings\nare included or not in the pg_dumpall output would be a reasonable way\nto deal with that though.\n\n> Now, our current approach is fairly arguable. Given that GUCs on\n> databases, users, functions, etc. are stored in the catalogs and\n> subject to backup, restore, replication, etc., one might well expect\n> that global settings would be handled the same way. I tend to think\n> that would be nicer, though it would require solving the problem of\n> how to back out bad changes that make the database not start up.\n> Regardless of what you or anybody thinks about that, though, it's not\n> how it works today and would require some serious engineering if we\n> wanted to make it happen.\n\nThis sounds an awful lot like the arguments that I tried to make when\nALTER SYSTEM SET was first going in, but what's done is done and there's\nnot much to do but make the best of it as I can't imagine there'd be\nmuch support for ripping it out.\n\nI don't really agree about the need for 'some serious engineering'\neither, but having an option for it, sure.\n\nI do also tend to agree with Tom about making ALTER SYSTEM SET be\nprohibited in explicitly read-only transactions, but still allowing it\nto be run against replicas as that's a handy thing to be able to do.\n\n> > As another example, do we need to consider that replacing pg_hba.conf\n> > via pg_write_file should be allowed in a \"read only\" transaction?\n> \n> I don't 
really see what the problem with that is. It bothers me a lot\n> more that CLUSTER can be run in a read-only transaction -- which\n> actually changes stuff inside the database, even if not necessarily in\n> a user-visible way -- than it does that somebody might be able to use\n> the database to change something that isn't really part of the\n> database anyway. And pg_hba.conf, like postgresql.conf, is largely\n> treated as an input to the database rather than part of it.\n\nI don't like that CLUSTER can be run in a read-only transaction either\n(though it seems like downthread maybe some people are fine with\nthat..). I'm also coming around to the idea that pg_write_file()\nprobably shouldn't be allowed either, and probably not COPY TO either\n(except to stdout, since that's a result, not a change operation).\n\n> Somebody could create a user-defined function that launches a\n> satellite into orbit and that is, I would argue, a write operation in\n> the truest sense. You have changed the state of the world in a lasting\n> way, and you cannot take it back. But, it's not writing to the\n> database, so as far as read-only transactions are concerned, who\n> cares?\n\nI suppose there's another thing to think about in this discussion, which\nare FDWs, if the idea is that read-only means \"I don't want to make\nchanges in THIS database\". I don't really feel like that's what marking\na transaction as 'read only' is intended to mean though.\n\nWhen I think of starting a read-only transaction, I feel like it's\nusually with the idea of \"I want to play it safe and I don't want this\ntransaction to make ANY changes\". I'm feeling more inclined that we\nshould be going out of our way to make darn sure that we respect that\nrequest of the user, no matter what it is they're running. 
We can't\nprevent user-created C-level functions from launching satellites, but\nthat's an untrusted language and therefore it's up to the function\nauthor to manage the transaction and privilege system properly anyway,\nnot ours.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 10 Jan 2020 15:22:26 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Fri, 2020-01-10 at 09:29 -0500, Tom Lane wrote:\n> > ALTER SYSTEM is read only in my mind.\n> \n> I'm still having trouble with this conclusion. I think it can only\n> be justified by a very narrow reading of \"reflected in pg_dump\"\n> that relies on the specific factorization we've chosen for upgrade\n> operations, ie that postgresql.conf mods have to be carried across\n> by hand. But that's mostly historical baggage, rather than a sane\n> basis for defining \"read only\". If somebody comes up with a patch\n> that causes \"pg_dumpall -g\" to include ALTER SYSTEM SET commands for\n> whatever is in postgresql.auto.conf (not an unreasonable idea BTW),\n> will you then decide that ALTER SYSTEM SET is no longer read-only?\n\nI think that having ALTER SYSTEM commands in pg_dumpall output\nwould be a problem. It would cause all kinds of problems whenever\nparameters change. Thinking of the transition \"checkpoint_segments\"\n-> \"max_wal_size\", you'd have to build some translation magic into pg_dump.\nBesides, such a feature would make it harder to restore a dump taken\nwith version x into version x + n for n > 0.\n\n> Or, perhaps, reject such a patch on the grounds that it breaks this\n> arbitrary definition of read-only-ness?\n\nI agree with Robert that such a patch should be rejected on other\ngrounds.\n\nConcerning the topic of the thread, I personally have come to think\nthat changing GUCs is *not* writing to the database. But that is based\non the fact that you can change GUCs on streaming replication standbys,\nand it may be surprising to a newcomer.\n\nPerhaps it would be good to consider this question:\nDo we call something \"read-only\" if it changes nothing, or do we call it\n\"read-only\" if it is allowed on a streaming replication standby?\nThe first would be more correct, but the second may be more convenient.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Sun, 12 Jan 2020 17:25:38 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Perhaps it would be good to consider this question:\n> Do we call something \"read-only\" if it changes nothing, or do we call it\n> \"read-only\" if it is allowed on a streaming replication standby?\n> The first would be more correct, but the second may be more convenient.\n\nYeah, this is really the larger point at stake. I'm not sure that\n\"read-only\" and \"allowed on standby\" should be identical, nor even\nthat one should be an exact subset of the other. They're certainly\nby-and-large the same sets of operations, but there might be\nexceptions that belong to only one set. \"read-only\" is driven by\n(some reading of) the SQL standard, while \"allowed on standby\" is\ndriven by implementation limitations, so I think it'd be dangerous\nto commit ourselves to those being identical.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Jan 2020 12:06:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On 2020-01-10 14:41, Robert Haas wrote:\n> This rule very nearly matches the current behavior: it explains why\n> temp table operations are allowed, and why ALTER SYSTEM is allowed,\n> and why REINDEX etc. are allowed. However, there's a notable\n> exception: PREPARE, COMMIT PREPARED, and ROLLBACK PREPARED are allowed\n> in a read-only transaction. Under the \"doesn't change pg_dump output\"\n> criteria, the first and third ones should be permitted but COMMIT\n> PREPARED should be denied, except maybe if the prepared transaction\n> didn't do any writes (and in that case, why did we bother preparing\n> it?). Despite that, this rule does a way better job explaining the\n> current behavior than anything else suggested so far.\n\nI don't follow. Does pg_dump dump prepared transactions?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Jan 2020 11:57:53 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> On Fri, 2020-01-10 at 09:29 -0500, Tom Lane wrote:\n> > > ALTER SYSTEM is read only in my mind.\n> > \n> > I'm still having trouble with this conclusion. I think it can only\n> > be justified by a very narrow reading of \"reflected in pg_dump\"\n> > that relies on the specific factorization we've chosen for upgrade\n> > operations, ie that postgresql.conf mods have to be carried across\n> > by hand. But that's mostly historical baggage, rather than a sane\n> > basis for defining \"read only\". If somebody comes up with a patch\n> > that causes \"pg_dumpall -g\" to include ALTER SYSTEM SET commands for\n> > whatever is in postgresql.auto.conf (not an unreasonable idea BTW),\n> > will you then decide that ALTER SYSTEM SET is no longer read-only?\n> \n> I think that having ALTER SYSTEM commands in pg_dumpall output\n> would be a problem. It would cause all kinds of problems whenever\n> parameters change. Thinking of the transition \"checkpoint_segments\"\n> -> \"max_wal_size\", you'd have to build some translation magic into pg_dump.\n> Besides, such a feature would make it harder to restore a dump taken\n> with version x into version x + n for n > 0.\n\npg_dump already specifically has understanding of how to deal with old\noptions in other things when constructing a dump for a given version-\nand we already have issues that a dump taken with pg_dump X has a good\nchance of not being able to be restored into a PG X+1, that's why\nit's recommended to use the pg_dump for the version of PG you're\nintending to restore into, so I don't particularly agree with any of the\narguments presented above.\n\n> > Or, perhaps, reject such a patch on the grounds that it breaks this\n> > arbitrary definition of read-only-ness?\n> \n> I agree with Robert that such a patch should be rejected on other\n> grounds.\n> \n> Concerning the topic of the thread, I personally have come to think\n> that 
changing GUCs is *not* writing to the database. But that is based\n> on the fact that you can change GUCs on streaming replication standbys,\n> and it may be surprising to a newcomer.\n> \n> Perhaps it would be good to consider this question:\n> Do we call something \"read-only\" if it changes nothing, or do we call it\n> \"read-only\" if it is allowed on a streaming replication standby?\n> The first would be more correct, but the second may be more convenient.\n\nThe two are distinct from each other and one doesn't imply the other. I\ndon't think we need to, or really want to, force them to be the same.\n\nWhen we're talking about a \"read-only\" transaction that the user has\nspecifically asked be \"read-only\" then, imv anyway, we should be pretty\nstringent regarding what that's allowed to do and shouldn't be allowing\nthat to change state in the system which other processes will see after\nthe transaction is over.\n\nA transaction (on a primary or a replica) doesn't need to be started as\nexplicitly \"read-only\" and perhaps we should change the language when we\nare starting up to say \"database is ready to accept replica connections\"\nor something instead of \"read-only\" connections to clarify that.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 13 Jan 2020 13:56:30 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 5:57 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-01-10 14:41, Robert Haas wrote:\n> > This rule very nearly matches the current behavior: it explains why\n> > temp table operations are allowed, and why ALTER SYSTEM is allowed,\n> > and why REINDEX etc. are allowed. However, there's a notable\n> > exception: PREPARE, COMMIT PREPARED, and ROLLBACK PREPARED are allowed\n> > in a read-only transaction. Under the \"doesn't change pg_dump output\"\n> > criteria, the first and third ones should be permitted but COMMIT\n> > PREPARED should be denied, except maybe if the prepared transaction\n> > didn't do any writes (and in that case, why did we bother preparing\n> > it?). Despite that, this rule does a way better job explaining the\n> > current behavior than anything else suggested so far.\n>\n> I don't follow. Does pg_dump dump prepared transactions?\n\nNo, but committing one changes the database contents as seen by a\nsubsequent pg_dump.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Jan 2020 14:14:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Mon, 2020-01-13 at 13:56 -0500, Stephen Frost wrote:\n> > I think that having ALTER SYSTEM commands in pg_dumpall output\n> > would be a problem. It would cause all kinds of problems whenever\n> > parameters change. Thinking of the transition \"checkpoint_segments\"\n> > -> \"max_wal_size\", you'd have to build some translation magic into pg_dump.\n> > Besides, such a feature would make it harder to restore a dump taken\n> > with version x into version x + n for n > 0.\n> \n> pg_dump already specifically has understanding of how to deal with old\n> options in other things when constructing a dump for a given version-\n> and we already have issues that a dump taken with pg_dump X has a good\n> chance of not being able to be restored into a PG X+1, that's why\n> it's recommended to use the pg_dump for the version of PG you're\n> intending to restore into, so I don't particularly agree with any of the\n> arguments presented above.\n\nRight.\nBut increasing the difficulty of restoring a version x pg_dump with\nversion x + 1 is still not a thing we should lightly do.\n\nNote that the docs currently say \"it is recommended to use pg_dumpall\nfrom the newer version\". They don't say \"it is not supported\".\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 13 Jan 2020 20:37:57 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> On Mon, 2020-01-13 at 13:56 -0500, Stephen Frost wrote:\n> > > I think that having ALTER SYSTEM commands in pg_dumpall output\n> > > would be a problem. It would cause all kinds of problems whenever\n> > > parameters change. Thinking of the transition \"checkpoint_segments\"\n> > > -> \"max_wal_size\", you'd have to build some translation magic into pg_dump.\n> > > Besides, such a feature would make it harder to restore a dump taken\n> > > with version x into version x + n for n > 0.\n> > \n> > pg_dump already specifically has understanding of how to deal with old\n> > options in other things when constructing a dump for a given version-\n> > and we already have issues that a dump taken with pg_dump X has a good\n> > chance of not being able to be restored into a PG X+1, that's why\n> > it's recommended to use the pg_dump for the version of PG you're\n> > intending to restore into, so I don't particularly agree with any of the\n> > arguments presented above.\n> \n> Right.\n> But increasing the difficulty of restoring a version x pg_dump with\n> version x + 1 is still not a thing we should lightly do.\n\nI've never heard that and I don't agree with it being a justification\nfor blocking sensible progress.\n\n> Note that the docs currently say \"it is recommended to use pg_dumpall\n> from the newer version\". They don't say \"it is not supported\".\n\nIt's recommended due to exactly the reasons presented and no one is\nsaying that such isn't supported- but we don't and aren't going to\nguarantee that it's going to work. We absolutely know of cases where it\njust won't work, today. If that's justification for saying it's not\nsupported, then fine, let's change the language to say that.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 13 Jan 2020 15:00:02 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 3:00 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I've never heard that and I don't agree with it being a justification\n> for blocking sensible progress.\n\nSpeaking of sensible progress, I think we've drifted off on a tangent\nhere about ALTER SYSTEM. As I understand it, nobody's opposed to the\nmost recent version (v3) of the proposed patch, which also makes no\ndefinitional changes relative to the status quo, but does fix some\nbugs, and makes things a little nicer for parallel query, too. So I'd\nlike to go ahead and commit that.\n\nDiscussion of what to do about ALTER SYSTEM can continue, although I\nfeel perhaps the current discussion isn't particularly productive. On\nthe one hand, the argument that it isn't read only because it writes\ndata someplace doesn't convince me: practically every command can\ncause some kind of write some place, e.g. SELECT can write WAL for at\nleast 2 different reasons, and that does not make it not read-only,\nnor does the fact that it updates the table statistics. The question\nof what data is being written must be relevant. On the other hand, I'm\nunpersuaded by the arguments so far that including ALTER SYSTEM\ncommands in pg_dump output would be anything other than a train wreck,\nthough doing it optionally and not by default might be OK. However,\nthe main thing for me is that messing around with the behavior of\nALTER SYSTEM in either of those ways or some other is not what this\npatch is about. I'm just proposing to refactor the code to fix the\nexisting bugs and make it much less likely that future patches will\ncreate new ones, and I think reclassifying or redesigning ALTER SYSTEM\nought to be done, if at all, separately.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 Jan 2020 11:02:13 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Speaking of sensible progress, I think we've drifted off on a tangent\n> here about ALTER SYSTEM.\n\nAgreed, that's not terribly relevant for the proposed patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 12:34:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Speaking of sensible progress, I think we've drifted off on a tangent\n> > here about ALTER SYSTEM.\n> \n> Agreed, that's not terribly relevant for the proposed patch.\n\nI agree that the proposed patch seems alright by itself, as the changes\nit's making to existing behavior seem to all be bug-fixes and pretty\nclear improvements not really related to 'read-only' transactions.\n\nIt's unfortunate that we haven't been able to work through to some kind\nof agreement around what \"SET TRANSACTION READ ONLY\" means, so that\nusers of it can know what to expect.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 14 Jan 2020 13:46:57 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On 2020-01-13 20:14, Robert Haas wrote:\n> On Mon, Jan 13, 2020 at 5:57 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2020-01-10 14:41, Robert Haas wrote:\n>>> This rule very nearly matches the current behavior: it explains why\n>>> temp table operations are allowed, and why ALTER SYSTEM is allowed,\n>>> and why REINDEX etc. are allowed. However, there's a notable\n>>> exception: PREPARE, COMMIT PREPARED, and ROLLBACK PREPARED are allowed\n>>> in a read-only transaction. Under the \"doesn't change pg_dump output\"\n>>> criteria, the first and third ones should be permitted but COMMIT\n>>> PREPARED should be denied, except maybe if the prepared transaction\n>>> didn't do any writes (and in that case, why did we bother preparing\n>>> it?). Despite that, this rule does a way better job explaining the\n>>> current behavior than anything else suggested so far.\n>>\n>> I don't follow. Does pg_dump dump prepared transactions?\n> \n> No, but committing one changes the database contents as seen by a\n> subsequent pg_dump.\n\nWell, if the transaction was declared read-only, then committing it \n(directly or 2PC) shouldn't change anything. This appears to be a \ncircular argument.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 16:25:37 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 10:25 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Well, if the transaction was declared read-only, then committing it\n> (directly or 2PC) shouldn't change anything. This appears to be a\n> circular argument.\n\nI don't think it's a circular argument. Suppose that someone decrees\nthat, as of 5pm Eastern time, no more read-write transactions are\npermitted. And because the person issuing the decree has a lot of\npower, everybody obeys. Now, every pg_dump taken after that time will\nbe semantically equivalent to every other pg_dump taken after that\ntime, with one tiny exception. That exception is that someone could\nstill do COMMIT PREPARED of a read-write transaction that was prepared\nbefore 5pm. If the goal of the powerful person who issued the decree\nwas to make sure that the database couldn't change - e.g. so they\ncould COPY each table individually without keeping a snapshot open and\nstill get a consistent backup - they might fail to achieve it if, as\nof the moment of the freeze, there are some prepared write\ntransactions.\n\nI'm not saying we have to change the behavior or anything. I'm just\nsaying that there seems to be one, and only one, way to make the\napparent contents of the database change in a read-only transaction\nright now. And that's a COMMIT PREPARED of a read-write transaction.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Jan 2020 13:54:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Jan 15, 2020 at 10:25 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > Well, if the transaction was declared read-only, then committing it\n> > (directly or 2PC) shouldn't change anything. This appears to be a\n> > circular argument.\n> \n> I don't think it's a circular argument. Suppose that someone decrees\n> that, as of 5pm Eastern time, no more read-write transactions are\n> permitted. And because the person issuing the decree has a lot of\n> power, everybody obeys. Now, every pg_dump taken after that time will\n> be semantically equivalent to every other pg_dump taken after that\n> time, with one tiny exception. That exception is that someone could\n> still do COMMIT PREPARED of a read-write transaction that was prepared\n> before 5pm. If the goal of the powerful person who issued the decree\n> was to make sure that the database couldn't change - e.g. so they\n> could COPY each table individually without keeping a snapshot open and\n> still get a consistent backup - they might fail to achieve it if, as\n> of the moment of the freeze, there are some prepared write\n> transactions.\n> \n> I'm not saying we have to change the behavior or anything. I'm just\n> saying that there seems to be one, and only one, way to make the\n> apparent contents of the database change in a read-only transaction\n> right now. And that's a COMMIT PREPARED of a read-write transaction.\n\nYeah, allowing a read-only transaction to start and then commit a\nread-write transaction doesn't seem sensible. I'd be in favor of\nchanging that.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 15 Jan 2020 14:01:53 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 1:46 PM Stephen Frost <sfrost@snowman.net> wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > Speaking of sensible progress, I think we've drifted off on a tangent\n> > > here about ALTER SYSTEM.\n> >\n> > Agreed, that's not terribly relevant for the proposed patch.\n>\n> I agree that the proposed patch seems alright by itself, as the changes\n> it's making to existing behavior seem to all be bug-fixes and pretty\n> clear improvements not really related to 'read-only' transactions.\n\nThere seems to be no disagreement on this point, so I have committed the patch.\n\n> It's unfortunate that we haven't been able to work through to some kind\n> of agreement around what \"SET TRANSACTION READ ONLY\" means, so that\n> users of it can know what to expect.\n\nI at least feel like we have a pretty good handle on what it was\nintended to mean; that is, \"doesn't cause semantically significant\nchanges to pg_dump output.\" I do hear some skepticism as to whether\nthat's the best definition, but it has pretty good explanatory power\nrelative to the current state of the code, which is something.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 12:14:55 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Jan 14, 2020 at 1:46 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > Robert Haas <robertmhaas@gmail.com> writes:\n> > > > Speaking of sensible progress, I think we've drifted off on a tangent\n> > > > here about ALTER SYSTEM.\n> > >\n> > > Agreed, that's not terribly relevant for the proposed patch.\n> >\n> > I agree that the proposed patch seems alright by itself, as the changes\n> > it's making to existing behavior seem to all be bug-fixes and pretty\n> > clear improvements not really related to 'read-only' transactions.\n> \n> There seems to be no disagreement on this point, so I have committed the patch.\n\nWorks for me.\n\n> > It's unfortunate that we haven't been able to work through to some kind\n> > of agreement around what \"SET TRANSACTION READ ONLY\" means, so that\n> > users of it can know what to expect.\n> \n> I at least feel like we have a pretty good handle on what it was\n> intended to mean; that is, \"doesn't cause semantically significant\n> changes to pg_dump output.\" I do hear some skepticism as to whether\n> that's the best definition, but it has pretty good explanatory power\n> relative to the current state of the code, which is something.\n\nI think I agree with you regarding the original intent, though even\nthere, as discussed elsewhere, it seems like there's perhaps either a\nbug or a disagreement about the specifics of what that means when it\nrelates to committing a 2-phase transaction. Still, setting that aside\nfor the moment, do we feel like this is enough to be able to update our\ndocumentation with?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 16 Jan 2020 12:22:52 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 12:22 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I think I agree with you regarding the original intent, though even\n> there, as discussed elsewhere, it seems like there's perhaps either a\n> bug or a disagreement about the specifics of what that means when it\n> relates to committing a 2-phase transaction. Still, setting that aside\n> for the moment, do we feel like this is enough to be able to update our\n> documentation with?\n\nI think that would be possible. What did you have in mind?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 15:22:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 01:56:30PM -0500, Stephen Frost wrote:\n> > I think that having ALTER SYSTEM commands in pg_dumpall output\n> > would be a problem. It would cause all kinds of problems whenever\n> > parameters change. Thinking of the transition \"checkpoint_segments\"\n> > -> \"max_wal_size\", you'd have to build some translation magic into pg_dump.\n> > Besides, such a feature would make it harder to restore a dump taken\n> > with version x into version x + n for n > 0.\n> \n> pg_dump already specifically has understanding of how to deal with old\n> options in other things when constructing a dump for a given version-\n> and we already have issues that a dump taken with pg_dump X has a good\n> chance of not being able to be restored into a PG X+1, that's why\n> it's recommended to use the pg_dump for the version of PG you're\n> intending to restore into, so I don't particularly agree with any of the\n> arguments presented above.\n\nOne issue is that system table GUC settings (e.g., per-database,\nper-user) cannot include postgresql.conf-only settings, like\nmax_wal_size, so system table GUC settings are less likely to be\nrenamed than postgresql.auto.conf settings. FYI, we are more inclined\nto allow postgresql.conf-only changes than others because there is less\nimpact on applications.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 18 Jan 2020 20:47:59 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Mon, Jan 13, 2020 at 01:56:30PM -0500, Stephen Frost wrote:\n> > > I think that having ALTER SYSTEM commands in pg_dumpall output\n> > > would be a problem. It would cause all kinds of problems whenever\n> > > parameters change. Thinking of the transition \"checkpoint_segments\"\n> > > -> \"max_wal_size\", you'd have to build some translation magic into pg_dump.\n> > > Besides, such a feature would make it harder to restore a dump taken\n> > > with version x into version x + n for n > 0.\n> > \n> > pg_dump already specifically has understanding of how to deal with old\n> > options in other things when constructing a dump for a given version-\n> > and we already have issues that a dump taken with pg_dump X has a good\n> > chance of not being able to be restored into a PG X+1, that's why\n> > it's recommended to use the pg_dump for the version of PG you're\n> > intending to restore into, so I don't particularly agree with any of the\n> > arguments presented above.\n> \n> One issue is that system table GUC settings (e.g., per-database,\n> per-user) cannot include postgresql.conf-only settings, like\n> max_wal_size, so system table GUC settings are less likely to be\n> renamed than postgresql.auto.conf settings. FYI, we are more inclined\n> to allow postgresql.conf-only changes than others because there is less\n> impact on applications.\n\nI'm a bit unclear about what's being suggested here. 
When you are\ntalking about 'applications', are you referring specifically to pg_dump\nand pg_restore, or are you talking about regular user applications?\n\nIf you're referring to pg_dump/restore, then what I'm understanding from\nyour comments is that if we made pg_dump/restore aware of ALTER SYSTEM\nand were made to support it that we would then be less inclined to\nchange the names of postgresql.conf-only settings because, if we do so,\nwe have to update pg_dump/restore.\n\nI can see some argument in that direction but my initial reaction is\nthat I don't feel like the bar would really be moved very far, and, if\nwe came up with some mapping from one to the other for those, I actually\nthink it'd be really helpful downstream for packagers and such who\nroutinely are dealing with updating from an older postgresql.conf file\nto a newer one when an upgrade is done.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 27 Jan 2020 14:26:42 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 02:26:42PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Mon, Jan 13, 2020 at 01:56:30PM -0500, Stephen Frost wrote:\n> > > > I think that having ALTER SYSTEM commands in pg_dumpall output\n> > > > would be a problem. It would cause all kinds of problems whenever\n> > > > parameters change. Thinking of the transition \"checkpoint_segments\"\n> > > > -> \"max_wal_size\", you'd have to build some translation magic into pg_dump.\n> > > > Besides, such a feature would make it harder to restore a dump taken\n> > > > with version x into version x + n for n > 0.\n> > > \n> > > pg_dump already specifically has understanding of how to deal with old\n> > > options in other things when constructing a dump for a given version-\n> > > and we already have issues that a dump taken with pg_dump X has a good\n> > > chance of not being able to be restored into a PG X+1, that's why\n> > > it's recommended to use the pg_dump for the version of PG you're\n> > > intending to restore into, so I don't particularly agree with any of the\n> > > arguments presented above.\n> > \n> > One issue is that system table GUC settings (e.g., per-database,\n> > per-user) cannot include postgresql.conf-only settings, like\n> > max_wal_size, so system table GUC settings are less likely to be\n> > renamed than postgresql.auto.conf settings. FYI, we are more inclined\n> > to allow postgresql.conf-only changes than others because there is less\n> > impact on applications.\n> \n> I'm a bit unclear about what's being suggested here. When you are\n> talking about 'applications', are you referring specifically to pg_dump\n> and pg_restore, or are you talking about regular user applications?\n\nSorry for the late reply. 
I meant all applications.\n\n> If you're referring to pg_dump/restore, then what I'm understanding from\n> your comments is that if we made pg_dump/restore aware of ALTER SYSTEM\n> and were made to support it that we would then be less inclined to\n> change the names of postgresql.conf-only settings because, if we do so,\n> we have to update pg_dump/restore.\n> \n> I can see some argument in that direction but my initial reaction is\n> that I don't feel like the bar would really be moved very far, and, if\n> we came up with some mapping from one to the other for those, I actually\n> think it'd be really helpful downstream for packagers and such who\n> routinely are dealing with updating from an older postgresql.conf file\n> to a newer one when an upgrade is done.\n\nI should have given more examples. Changing GUC variables like\nsearch_path or work_mem, which can be set in per-database, per-user, and\nper-session contexts, is more disruptive than changing GUCs that can\nonly be set in postgresql.conf, like max_wal_size. My point is that\nnot all GUC changes have the same level of disruption.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 5 Mar 2020 18:48:20 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: our checks for read-only queries are not great"
}
] |
[
{
"msg_contents": "Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings\n\nThis allows different users to authenticate with different certificates.\n\nAuthor: Craig Ringer\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f5fd995a1a24e6571d26b1e29c4dc179112b1003\n\nModified Files\n--------------\ncontrib/postgres_fdw/expected/postgres_fdw.out | 12 ++++++++++++\ncontrib/postgres_fdw/option.c | 9 +++++++++\ncontrib/postgres_fdw/sql/postgres_fdw.sql | 13 +++++++++++++\ndoc/src/sgml/postgres-fdw.sgml | 12 ++++++++++--\n4 files changed, 44 insertions(+), 2 deletions(-)",
"msg_date": "Thu, 09 Jan 2020 08:11:33 +0000",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "pgsql: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 3:11 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings\n>\n> This allows different users to authenticate with different certificates.\n>\n> Author: Craig Ringer\n>\n> https://git.postgresql.org/pg/commitdiff/f5fd995a1a24e6571d26b1e29c4dc179112b1003\n\nDoes this mean that a non-superuser can induce postgres_fdw to read an\narbitrary file from the local filesystem?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 09:18:45 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings"
},
{
"msg_contents": "Re: Robert Haas 2020-01-09 <CA+TgmoZEjyv_PD=2cinkbDA_chyLNAcBPL_9bKJQ6bc=nw+FHA@mail.gmail.com>\n> Does this mean that a non-superuser can induce postgres_fdw to read an\n> arbitrary file from the local filesystem?\n\nYes, see my comments in the \"Allow 'sslkey' and 'sslcert' in\npostgres_fdw user mappings\" thread.\n\nChristoph\n\n\n",
"msg_date": "Thu, 9 Jan 2020 15:38:45 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings"
},
{
"msg_contents": "On Thu, 9 Jan 2020 at 22:38, Christoph Berg <myon@debian.org> wrote:\n\n> Re: Robert Haas 2020-01-09 <CA+TgmoZEjyv_PD=2cinkbDA_chyLNAcBPL_9bKJQ6bc=\n> nw+FHA@mail.gmail.com>\n> > Does this mean that a non-superuser can induce postgres_fdw to read an\n> > arbitrary file from the local filesystem?\n>\n> Yes, see my comments in the \"Allow 'sslkey' and 'sslcert' in\n> postgres_fdw user mappings\" thread.\n\n\nUgh, I misread your comment.\n\nYou raise a sensible concern.\n\nThese options should be treated the same as the proposed option to allow\npasswordless connections: disallow creation or alteration of FDW connection\nstrings that use them by non-superusers. So a superuser can define a user\nmapping that uses these options, but normal users may not.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 20 Jan 2020 15:48:37 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings"
},
{
"msg_contents": "\nOn 1/20/20 2:48 AM, Craig Ringer wrote:\n> On Thu, 9 Jan 2020 at 22:38, Christoph Berg <myon@debian.org\n> <mailto:myon@debian.org>> wrote:\n>\n> Re: Robert Haas 2020-01-09\n> <CA+TgmoZEjyv_PD=2cinkbDA_chyLNAcBPL_9bKJQ6bc=nw+FHA@mail.gmail.com\n> <mailto:nw%2BFHA@mail.gmail.com>>\n> > Does this mean that a non-superuser can induce postgres_fdw to\n> read an\n> > arbitrary file from the local filesystem?\n>\n> Yes, see my comments in the \"Allow 'sslkey' and 'sslcert' in\n> postgres_fdw user mappings\" thread.\n>\n>\n> Ugh, I misread your comment.\n>\n> You raise a sensible concern.\n>\n> These options should be treated the same as the proposed option to\n> allow passwordless connections: disallow creation or alteration of FDW\n> connection strings that use them by non-superusers. So a superuser can\n> define a user mapping that uses these options, but normal users may not.\n>\n>\n\n\nAlready done.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 20 Jan 2020 04:00:34 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings"
}
] |
[
{
"msg_contents": "Hi all,\r\n\r\nAttached is a patch to resolve parallel hash join performance issue. This is my first time to contribute patch to PostgreSQL community, I referred one of previous thread as template to report the issue and patch. Please let me know if need more information of the problem and patch.\r\n\r\nA. Problem Summary\r\nWhen we ran query which was executed by hash join operation, we can not achieve good performance improvement with more number of threads. More specifically, when we ran query02 of TPC-DS workload using scale 500 (500GB dataset), execution time using 8 threads was 124.6 sec while time using 28 threads was 103.5 sec. Here is execution time by different number of threads:\r\n\r\nnumber of thread: 1 4 8 16 28\r\ntime used(sec): 460.4 211 124.6 101.9 103.5\r\n\r\nThe test was made on a server with 384GB DRAM, 56 cores/112 HT. Data has been cached into OS page cache, so there was no disk I/O during execution, and there were enough physical CPU cores to support 28 threads to run in parallel.\r\n\r\nWe investigated this problem with perf c2c (http://man7.org/linux/man-pages/man1/perf-c2c.1.html) tool, confirmed the problem was caused by false sharing cache coherence. And we located the code write cache line is at line 457 of nodeHashjoin.c (pg version 12.0).\r\n\r\nB. Patch\r\n change line 457 in ExecHashJoinImpl function of nodeHashJoin.c. (be applicable to both 12.0 and 12.1)\r\n original code:\r\n HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));\r\n changed to:\r\n if (!HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple)))\r\n {\r\n HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));\r\n }\r\n Compared with original code, modified code can avoid unnecessary write to memory/cache.\r\n\r\nC. Test case:\r\n\r\n 1. Use https://github.com/pivotalguru/TPC-DS to setup TPC-DS benchmark for postgreSQL\r\n 2. run below command to ensure query will be executed with expected parallelism:\r\n psql postgres -h localhost -U postgres -c \"alter table web_sales set (parallel_workers =28);\";psql postgres -h localhost -U postgres -c \"alter table catalog_sales set (parallel_workers =28);\"\r\n 3. run query: psql postgres -h localhost -U postgres -f 102.tpcds.02.sql first to ensure data is loaded into page cache.\r\n 4. run query \" time psql postgres -h localhost -U postgres -f 102.tpcds.02.sql\" again to measure performance.\r\n\r\nD. Result\r\nWith the modified code, performance of hash join operation can scale better with number of threads. Here is result of query02 after patch. For example, performance improved ~2.5x when run 28 threads.\r\n\r\nnumber of thread: 1 4 8 16 28\r\ntime used(sec): 465.1 193.1 97.9 55.9 41\r\n\r\nI attached 5 files for more information:\r\n\r\n 1. query_plan_q2_no_opt_28_thread: query plan using 28 threads without patch\r\n 2. query_plan_q2_opt_28_thread: query plan using 28 threads with patch\r\n 3. perf_c2c_no_opt.txt: perf c2c output before patch\r\n 4. perf_c2c_opt.txt: perf c2c output after patch\r\n 5. git diff of the patch\r\n\r\n\r\nThanks and Best Regards\r\n\r\nDeng, Gang (邓刚)\r\nIAGS-CPDP-CEE PRC Enterprise\r\nMobile: 13161337000\r\nOffice: 010-57511964",
"msg_date": "Thu, 9 Jan 2020 08:53:42 +0000",
"msg_from": "\"Deng, Gang\" <gang.deng@intel.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Resolve Parallel Hash Join Performance Issue"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 10:04 PM Deng, Gang <gang.deng@intel.com> wrote:\n> Attached is a patch to resolve parallel hash join performance issue. This is my first time to contribute patch to PostgreSQL community, I referred one of previous thread as template to report the issue and patch. Please let me know if need more information of the problem and patch.\n\nThank you very much for investigating this and for your report.\n\n> HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));\n>\n> changed to:\n>\n> if (!HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple)))\n>\n> {\n>\n> HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));\n>\n> }\n>\n> Compared with original code, modified code can avoid unnecessary write to memory/cache.\n\nRight, I see. The funny thing is that the match bit is not even used\nin this query (it's used for right and full hash join, and those\naren't supported for parallel joins yet). Hmm. So, instead of the\ntest you proposed, an alternative would be to use if (!parallel).\nThat's a value that will be constant-folded, so that there will be no\nbranch in the generated code (see the pg_attribute_always_inline\ntrick). If, in a future release, we need the match bit for parallel\nhash join because we add parallel right/full hash join support, we\ncould do it the way you showed, but only if it's one of those join\ntypes, using another constant parameter.\n\n> D. Result\n>\n> With the modified code, performance of hash join operation can scale better with number of threads. Here is result of query02 after patch. For example, performance improved ~2.5x when run 28 threads.\n>\n> number of thread: 1 4 8 16 28\n> time used(sec): 465.1 193.1 97.9 55.9 41\n\nWow. That is a very nice improvement.\n\n\n",
"msg_date": "Thu, 9 Jan 2020 23:04:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Resolve Parallel Hash Join Performance Issue"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Right, I see. The funny thing is that the match bit is not even used\n> in this query (it's used for right and full hash join, and those\n> aren't supported for parallel joins yet). Hmm. So, instead of the\n> test you proposed, an alternative would be to use if (!parallel).\n> That's a value that will be constant-folded, so that there will be no\n> branch in the generated code (see the pg_attribute_always_inline\n> trick). If, in a future release, we need the match bit for parallel\n> hash join because we add parallel right/full hash join support, we\n> could do it the way you showed, but only if it's one of those join\n> types, using another constant parameter.\n\nCan we base the test off the match type today, and avoid leaving\nsomething that will need to be fixed later?\n\nI'm pretty sure that the existing coding is my fault, and that it's\nlike that because I reasoned that setting the bit was too cheap\nto justify having a test-and-branch around it. Apparently that's\nnot true anymore in a parallel join, but I have to say that it's\nunclear why. In any case, the reasoning probably still holds good\nin non-parallel cases, so it'd be a shame to introduce a run-time\ntest if we can avoid it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 09:43:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Resolve Parallel Hash Join Performance Issue"
},
{
"msg_contents": "Thank you for the comment. Yes, I agree the alternative of using '(!parallel)', so that no need to test the bit. Will someone submit patch to for it accordingly?\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <thomas.munro@gmail.com> \r\nSent: Thursday, January 9, 2020 6:04 PM\r\nTo: Deng, Gang <gang.deng@intel.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: [PATCH] Resolve Parallel Hash Join Performance Issue\r\n\r\nOn Thu, Jan 9, 2020 at 10:04 PM Deng, Gang <gang.deng@intel.com> wrote:\r\n> Attached is a patch to resolve parallel hash join performance issue. This is my first time to contribute patch to PostgreSQL community, I referred one of previous thread as template to report the issue and patch. Please let me know if need more information of the problem and patch.\r\n\r\nThank you very much for investigating this and for your report.\r\n\r\n> HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));\r\n>\r\n> changed to:\r\n>\r\n> if \r\n> (!HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple)))\r\n>\r\n> {\r\n>\r\n> \r\n> HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));\r\n>\r\n> }\r\n>\r\n> Compared with original code, modified code can avoid unnecessary write to memory/cache.\r\n\r\nRight, I see. The funny thing is that the match bit is not even used in this query (it's used for right and full hash join, and those aren't supported for parallel joins yet). Hmm. So, instead of the test you proposed, an alternative would be to use if (!parallel).\r\nThat's a value that will be constant-folded, so that there will be no branch in the generated code (see the pg_attribute_always_inline trick). If, in a future release, we need the match bit for parallel hash join because we add parallel right/full hash join support, we could do it the way you showed, but only if it's one of those join types, using another constant parameter.\r\n\r\n> D. Result\r\n>\r\n> With the modified code, performance of hash join operation can scale better with number of threads. Here is result of query02 after patch. For example, performance improved ~2.5x when run 28 threads.\r\n>\r\n> number of thread: 1 4 8 16 28\r\n> time used(sec): 465.1 193.1 97.9 55.9 41\r\n\r\nWow. That is a very nice improvement.\r\n",
"msg_date": "Fri, 10 Jan 2020 00:52:42 +0000",
"msg_from": "\"Deng, Gang\" <gang.deng@intel.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Resolve Parallel Hash Join Performance Issue"
},
{
"msg_contents": "Regarding to the reason of setting bit was not cheap anymore in parallel join. As I explain in my original mail, it is because 'false sharing cache coherence'. In short word, setting of the bit will cause the whole cache line (64 bytes) dirty. So that all CPU cores contain the cache line have to load it again, which will waste much cpu time. Article https://software.intel.com/en-us/articles/avoiding-and-identifying-false-sharing-among-threads explain more detail.\n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Thursday, January 9, 2020 10:43 PM\nTo: Thomas Munro <thomas.munro@gmail.com>\nCc: Deng, Gang <gang.deng@intel.com>; pgsql-hackers@postgresql.org\nSubject: Re: [PATCH] Resolve Parallel Hash Join Performance Issue\n\nThomas Munro <thomas.munro@gmail.com> writes:\n> Right, I see. The funny thing is that the match bit is not even used \n> in this query (it's used for right and full hash join, and those \n> aren't supported for parallel joins yet). Hmm. So, instead of the \n> test you proposed, an alternative would be to use if (!parallel).\n> That's a value that will be constant-folded, so that there will be no \n> branch in the generated code (see the pg_attribute_always_inline \n> trick). If, in a future release, we need the match bit for parallel \n> hash join because we add parallel right/full hash join support, we \n> could do it the way you showed, but only if it's one of those join \n> types, using another constant parameter.\n\nCan we base the test off the match type today, and avoid leaving something that will need to be fixed later?\n\nI'm pretty sure that the existing coding is my fault, and that it's like that because I reasoned that setting the bit was too cheap to justify having a test-and-branch around it. Apparently that's not true anymore in a parallel join, but I have to say that it's unclear why. In any case, the reasoning probably still holds good in non-parallel cases, so it'd be a shame to introduce a run-time test if we can avoid it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 01:18:39 +0000",
"msg_from": "\"Deng, Gang\" <gang.deng@intel.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Resolve Parallel Hash Join Performance Issue"
},
{
"msg_contents": "(Replies to both Gang and Tom below).\n\nOn Fri, Jan 10, 2020 at 1:52 PM Deng, Gang <gang.deng@intel.com> wrote:\n> Thank you for the comment. Yes, I agree the alternative of using '(!parallel)', so that no need to test the bit. Will someone submit patch to for it accordingly?\n\nHere's a patch like that.\n\nOn Fri, Jan 10, 2020 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Right, I see. The funny thing is that the match bit is not even used\n> > in this query (it's used for right and full hash join, and those\n> > aren't supported for parallel joins yet). Hmm. So, instead of the\n> > test you proposed, an alternative would be to use if (!parallel).\n> > That's a value that will be constant-folded, so that there will be no\n> > branch in the generated code (see the pg_attribute_always_inline\n> > trick). If, in a future release, we need the match bit for parallel\n> > hash join because we add parallel right/full hash join support, we\n> > could do it the way you showed, but only if it's one of those join\n> > types, using another constant parameter.\n>\n> Can we base the test off the match type today, and avoid leaving\n> something that will need to be fixed later?\n\nI agree that it'd be nicer to use the logically correct thing, namely\na test of HJ_FILL_INNER(node), but that'd be a run-time check. I'd\nlike to back-patch this and figured that we don't want to add new\nbranches too casually.\n\nI have an experimental patch where \"fill_inner\" and \"fill_outer\" are\ncompile-time constants and you can skip various bits of code without\nbranching (part of a larger experiment to figure out which of many\nparameters are worth specialising at a cost of a couple of KB of text\nper combination, including the ability to use wider hashes so that\nmonster sized joins work better). Then I could test the logically\ncorrect thing explicitly without branches.",
"msg_date": "Tue, 21 Jan 2020 18:20:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Resolve Parallel Hash Join Performance Issue"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 6:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jan 10, 2020 at 1:52 PM Deng, Gang <gang.deng@intel.com> wrote:\n> > Thank you for the comment. Yes, I agree the alternative of using '(!parallel)', so that no need to test the bit. Will someone submit patch to for it accordingly?\n>\n> Here's a patch like that.\n\nPushed. Thanks again for the report!\n\nI didn't try the TPC-DS query, but could see a small improvement from\nthis on various simple queries, especially with a fairly small hash\ntable and a large outer relation, when many cores are probing.\n\n(Off topic for this thread, but after burning a few hours on a 72-way\nbox investigating various things including this, I was reminded of the\nperformance drop-off for joins with large hash tables that happens\nsomewhere around 8-16 workers. That's because we can't give 32KB\nchunks out fast enough, and if you increase the chunk size it helps\nonly a bit. That really needs some work; maybe something like a\nseparation of reservation and allocation, so that multiple segments\ncan be created in parallel while respecting limits, or something like\nthat. The other thing I was reminded of: FreeBSD blows Linux out of\nthe water on big parallel hash joins on identical hardware; I didn't\ndig further today but I suspect this may be down to lack of huge pages\n(TLB misses), and perhaps also those pesky fallocate() calls. I'm\nstarting to wonder if we should have a new GUC shared_work_mem that\nreserves a wodge of shm in the main region, and hand out 'fast DSM\nsegments' from there, or some other higher level abstraction that's\nwired into the resource release system; they would benefit from\nhuge_pages=try on Linux, they'd be entirely allocated (in the VM\nsense) and there'd be no system calls, though admittedly there'd be\nmore ways for things to go wrong...)\n\n\n",
"msg_date": "Mon, 27 Jan 2020 15:09:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Resolve Parallel Hash Join Performance Issue"
}
] |
[
{
"msg_contents": "Hi, we have trouble to detect true root corruptions on replicas. I made a patch for resolving it with the locking meta page and potential root page. I heard that amcheck has an invariant about locking no more than 1 page at a moment for avoiding deadlocks. Is there possible a deadlock situation?",
"msg_date": "Thu, 9 Jan 2020 13:55:17 +0500",
"msg_from": "godjan • <g0dj4n@gmail.com>",
"msg_from_op": true,
"msg_subject": "Verify true root on replicas with amcheck"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 12:55 AM godjan • <g0dj4n@gmail.com> wrote:\n> Hi, we have trouble to detect true root corruptions on replicas. I made a patch for resolving it with the locking meta page and potential root page.\n\nWhat do you mean by true root corruption? What is the cause of the\nproblem? What symptom does it have in your application?\n\nWhile I was the one that wrote the existing !readonly/parent check for\nthe true root (a check which your patch makes work with the regular\nbt_check_index() function), I wasn't thinking of any particular\ncorruption scenario at the time. I wrote the check simply because it\nwas easy to do so (with a heavyweight ShareLock on the index).\n\n> I heard that amcheck has an invariant about locking no more than 1 page at a moment for avoiding deadlocks. Is there possible a deadlock situation?\n\nThis is a conservative principle that I came up with when I wrote the\noriginal version of amcheck. It's not strictly necessary, but it\nseemed like a good idea. It should be safe to \"couple\" buffer locks in\na way that matches the B-Tree code -- as long as it is thought through\nvery carefully. I am probably going to relax the rule for one specific\ncase soon -- see:\n\nhttps://postgr.es/m/F7527087-6E95-4077-B964-D2CAFEF6224B@yandex-team.ru\n\nYour patch looks like it gets it right (it won't deadlock with other\nsessions that access the metapage), but I hesitate to commit it\nwithout a strong justification. Acquiring multiple buffer locks\nconcurrently is worth avoiding wherever possible.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Jan 2020 16:40:47 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Verify true root on replicas with amcheck"
},
{
"msg_contents": "On 1/9/20 3:55 AM, godjan • wrote:\n> Hi, we have trouble to detect true root corruptions on replicas. I made a patch for resolving it with the locking meta page and potential root page. I heard that amcheck has an invariant about locking no more than 1 page at a moment for avoiding deadlocks. Is there possible a deadlock situation?\n\nThis patch no longer applies cleanly: \nhttp://cfbot.cputube.org/patch_27_2418.log\n\nThe CF entry has been updated to Waiting on Author.\n\nAlso, it would be a good idea to answer Peter's questions down-thread if \nyou are interested in moving this patch forward.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 1 Apr 2020 11:17:11 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Verify true root on replicas with amcheck"
},
{
"msg_contents": "On 1/16/20 7:40 PM, Peter Geoghegan wrote:\n> On Thu, Jan 9, 2020 at 12:55 AM godjan • <g0dj4n@gmail.com> wrote:\n> \n>> I heard that amcheck has an invariant about locking no more than 1 page at a moment for avoiding deadlocks. Is there possible a deadlock situation?\n> \n> This is a conservative principle that I came up with when I wrote the\n> original version of amcheck. It's not strictly necessary, but it\n> seemed like a good idea. It should be safe to \"couple\" buffer locks in\n> a way that matches the B-Tree code -- as long as it is thought through\n> very carefully. I am probably going to relax the rule for one specific\n> case soon -- see:\n> \n> https://postgr.es/m/F7527087-6E95-4077-B964-D2CAFEF6224B@yandex-team.ru\n> \n> Your patch looks like it gets it right (it won't deadlock with other\n> sessions that access the metapage), but I hesitate to commit it\n> without a strong justification. Acquiring multiple buffer locks\n> concurrently is worth avoiding wherever possible.\n\nI have marked this patch Returned with Feedback since it has been \nsitting for a while with no response from the author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 8 Apr 2020 10:26:38 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Verify true root on replicas with amcheck"
}
] |
[
{
"msg_contents": "Hi folks,\n\nwith 12.1, after a couple of queries, at a random place, the clientlib\ndoes produce a failed query without giving reason or error-message [1].\nThen when retrying, the clientlib switches off signal handling and\nsits inactive in memory (needs kill -9).\n\nThe server log shows no error or other hint.\nThe behaviour happens rarely with trust access, and almost always when\nusing Kerberos5 (Heimdal as included in FreeBSD).\n\n11.5 clientlib has none of this behaviour and seems to work fine, like\n10.10 did.\n\nEnvironment:\n\tOS \tFreeBSD 11.3\n\tApplic.\tRuby-on-Rails, ruby=2.5.7, gem 'pg'=1.2.2\n\t (it makes no difference if that one is compiled with\n\t\tthe 12.1 or the 10.10 library)\n\tServer 12.1\n\n[1] the message from ruby is\n PG::ConnectionBad: PQconsumeInput() : <query>\n\nrgds,\nPMc\n\n\n",
"msg_date": "Thu, 9 Jan 2020 19:18:22 +0100",
"msg_from": "Peter <pmc@citylink.dinoex.sub.org>",
"msg_from_op": true,
"msg_subject": "12.1 not useable: clientlib fails after a dozen queries (GSSAPI ?)"
},
{
"msg_contents": "On 1/9/20 10:18 AM, Peter wrote:\n> Hi folks,\n> \n> with 12.1, after a couple of queries, at a random place, the clientlib\n> does produce a failed query without giving reason or error-message [1].\n> Then when retrying, the clientlib switches off signal handling and\n> sits inactive in memory (needs kill -9).\n> \n> The server log shows no error or other hint.\n> The behaviour happens rarely with trust access, and almost always when\n> using Kerberos5 (Heimdal as included in FreeBSD).\n> \n> 11.5 clientlib has none of this behaviour and seems to work fine, like\n> 10.10 did.\n\nMight want to take at below:\n\nhttps://github.com/ged/ruby-pg/issues/311\n\n> \n> Environment:\n> \tOS \tFreeBSD 11.3\n> \tApplic.\tRuby-on-Rails, ruby=2.5.7, gem 'pg'=1.2.2\n> \t (it makes no difference if that one is compiled with\n> \t\tthe 12.1 or the 10.10 library)\n> \tServer 12.1\n> \n> [1] the message from ruby is\n> PG::ConnectionBad: PQconsumeInput() : <query>\n> \n> rgds,\n> PMc\n> \n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n",
"msg_date": "Thu, 9 Jan 2020 10:47:00 -0800",
"msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI ?)"
},
{
"msg_contents": "Peter <pmc@citylink.dinoex.sub.org> writes:\n> with 12.1, after a couple of queries, at a random place, the clientlib\n> does produce a failed query without giving reason or error-message [1].\n> Then when retrying, the clientlib switches off signal handling and\n> sits inactive in memory (needs kill -9).\n\nSeems like you'd better raise this with the author(s) of the \"pg\"\nRuby gem. Perhaps they read this mailing list, but more likely\nthey have a specific bug reporting mechanism somewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 13:48:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI ?)"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 10:47:00AM -0800, Adrian Klaver wrote:\n! \n! Might want to take at below:\n! \n! https://github.com/ged/ruby-pg/issues/311\n\nThanks a lot! That option\n> gssencmode: \"disable\"\nseems to solve the issue.\n\nBut I think the people there are concerned by a different issue: they\nare bothering about fork(), while my flaw appears also when I do *NOT*\ndo fork. Also the picture is slightly different; they get segfaults, I\nget misbehaviour.\n\nrgds,\nPMc\n\n\n",
"msg_date": "Thu, 9 Jan 2020 21:53:17 +0100",
"msg_from": "Peter <pmc@citylink.dinoex.sub.org>",
"msg_from_op": true,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI ?)"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 01:48:01PM -0500, Tom Lane wrote:\n! Peter <pmc@citylink.dinoex.sub.org> writes:\n! > with 12.1, after a couple of queries, at a random place, the clientlib\n! > does produce a failed query without giving reason or error-message [1].\n! > Then when retrying, the clientlib switches off signal handling and\n! > sits inactive in memory (needs kill -9).\n! \n! Seems like you'd better raise this with the author(s) of the \"pg\"\n! Ruby gem. Perhaps they read this mailing list, but more likely\n! they have a specific bug reporting mechanism somewhere.\n\nTom,\n I don't think this has anything to do with \"pg\". Just checked: I get\ngarbage and misbehaviour on the \"psql\" command line tool also:\n\n$ psql -h myhost flowmdev\npsql (12.1)\nGSSAPI-encrypted connection\nType \"help\" for help.\n\nflowmdev=> select * from flows;\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nflowmdev=> select * from flows;\nserver sent data (\"D\" message) without prior row description (\"T\" message)\nflowmdev=> select * from flows;\nmessage type 0x54 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\nmessage type 0x44 arrived from server while idle\n id | name | ... <here finally starts the data as expected>\n\n\nTo the contrary:\n\n$ PGGSSENCMODE=\"disable\" psql -h myhost flowmdev\npsql (12.1)\nType \"help\" for help.\n\nflowmdev=> select * from flows;\n id | name | ... <all working as normal>\n\n\nrgds,\nPMc\n\n\n",
"msg_date": "Thu, 9 Jan 2020 21:53:19 +0100",
"msg_from": "Peter <pmc@citylink.dinoex.sub.org>",
"msg_from_op": true,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Peter <pmc@citylink.dinoex.sub.org> writes:\n> I don't think this has anything to do with \"pg\". Just checked: I get\n> garbage and misbehaviour on the \"psql\" command line tool also:\n\n> $ psql -h myhost flowmdev\n> psql (12.1)\n> GSSAPI-encrypted connection\n> Type \"help\" for help.\n\n> flowmdev=> select * from flows;\n> message type 0x44 arrived from server while idle\n> message type 0x44 arrived from server while idle\n> message type 0x44 arrived from server while idle\n\nOh ... that does look pretty broken. However, we've had no other similar\nreports, so there must be something unique to your configuration. Busted\nGSSAPI library, or some ABI inconsistency, perhaps? What platform are you\non, and how did you build or obtain this Postgres code?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 16:31:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 04:31:44PM -0500, Tom Lane wrote:\n! Peter <pmc@citylink.dinoex.sub.org> writes:\n! > flowmdev=> select * from flows;\n! > message type 0x44 arrived from server while idle\n! > message type 0x44 arrived from server while idle\n! > message type 0x44 arrived from server while idle\n! \n! Oh ... that does look pretty broken. However, we've had no other similar\n! reports, so there must be something unique to your configuration. Busted\n! GSSAPI library, or some ABI inconsistency, perhaps? What platform are you\n! on, and how did you build or obtain this Postgres code?\n\nThis is a FreeBSD 11.3-p3 r351611 built from source. Postgres is built\nfrom\nhttps://svn0.eu.freebsd.org/ports/branches/2019Q4 (rel. 12r1) or\nhttps://svn0.eu.freebsd.org/ports/branches/2020Q1 (rel. 12.1)\nwith \"make package install\".\nI have a build environment for base&ports that forces recompiles on\nany change and should make ABI inconsistencies quite hard to create.\n\nAll local patches are versioned and documented; there are none that\nI could imagine influencing this.\nThere are no patches on postgres. Also no patches on the GSSAPI.\nThere are a couple of patches on the Heimdal, to fix broken\ncommandline parsing, broken pidfile handling and broken daemonization.\nNone of them touches the core functionality (like key handling).\n\nBut I just recognize something of interest (which I had taken for\ngranted when importing the database): the flaw does NOT appear when\naccessing the database from the server's local system (with TCP and\nGSSAPI encryption active). Only from remote system.\n\nBut then, if I go on the local system, and change the mtu:\n# ifconfig lo0 mtu 1500\nand restart the server, then I get the exact same errors locally.\n\nI don't get a clue of that, it doesn't make sense. With the default\nlo0 mtu of 16384 the packets go on the network with the full 8256\nbytes you send. 
With mtu 1500 they are split into 1448 byte pieces;\nbut TCP is supposed to handle this transparently. And what difference\nwould the encryption make with this?\n> net.inet.tcp.sendspace: 32768\n> net.inet.tcp.recvspace: 65536\nThese are also bigger. No, I don't understand that.\n\nThe only thing - these are all VIMAGE jails. VIMAGE was considered\n'experimental' some time ago, and went productive in FreeBSD 12.0, \nand 11.3 is lower and later than 12.0 - whatever that concedes.\n\nAnother thing I found out: the slower the network, the worse the\nerrors. So might it be nobody complained just because those people\nusually having GSSAPI also have very fast machines and networks\nnowadays?\n\nWhen I go to packet-radio speed:\n# ipfw pipe 4 config bw 10kbit/s\n\nthen I can see the query returning empty at the first received bytes:\nflowmdev=# select * from flows;\nflowmdev=# \n\nand not even waiting the 8 seconds for the first block to arrive.\n\n\nrgds,\nPMc\n\n\n",
"msg_date": "Fri, 10 Jan 2020 03:23:42 +0100",
"msg_from": "Peter <pmc@citylink.dinoex.sub.org>",
"msg_from_op": true,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "[ redirecting to -hackers ]\n\nPeter <pmc@citylink.dinoex.sub.org> writes:\n> But I just recognize something of interest (which I had taken for\n> granted when importing the database): the flaw does NOT appear when\n> accessing the database from the server's local system (with TCP and\n> GSSAPI encryption active). Only from remote system.\n> But then, if I go on the local system, and change the mtu:\n> # ifconfig lo0 mtu 1500\n> and restart the server, then I get the exact same errors locally.\n\nOh-ho, that is interesting.\n\nLooking at our regression tests for gssenc, I observe that they\nonly try to transport a negligible amount of data (viz, single-row\nboolean results). So we'd not notice any problem that involved\nmultiple-packet messages.\n\nI modified the kerberos test so that it tries a query with a less\nnegligible amount of data, and what I find is:\n\n* passes on Fedora 30, with either default or 1500 mtu\n* passes on FreeBSD 12.0 with default mtu\n* FAILS on FreeBSD 12.0 with mtu = 1500\n\nI haven't run it further to ground than that, but there's definitely\nsomething fishy here. Based on just these results one would be hard\npressed to say if it's our bug or FreeBSD's, but your report that you\ndon't see the failure with PG 11 makes it sound like our problem.\n\nOTOH, I also find that there's some hysteresis in the behavior:\nonce it's failed, reverting the mtu to the default setting doesn't\nnecessarily make subsequent runs pass. It's really hard to explain\nthat behavior if it's our bug.\n\nI tested today's HEAD of our code with up-to-date FreeBSD 12.0-RELEASE-p12\nrunning on amd64 bare metal, no jails or emulators or VIMAGE or anything.\n\nAttached are proposed test patch, as well as client-side regression log\noutput from a failure. (There's no evidence of distress in the\npostmaster log, same as your report.)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 10 Jan 2020 12:59:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "I wrote:\n> I haven't run it further to ground than that, but there's definitely\n> something fishy here. Based on just these results one would be hard\n> pressed to say if it's our bug or FreeBSD's, but your report that you\n> don't see the failure with PG 11 makes it sound like our problem.\n\nAh, I have it: whoever wrote pg_GSS_read() failed to pay attention\nto the fact that setting errno is a critical part of its API.\nSometimes it returns -1 while leaving errno in a state that causes\npqReadData to draw the wrong conclusions. In particular that can\nhappen when it reads an incomplete packet, and that's very timing\ndependent, which is why this is so ticklish to reproduce.\n\nI'll have a patch in a little while.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 14:25:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
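[The errno contract Tom diagnoses above can be made concrete with a small toy model. This is not the real libpq code: the struct, function name, and 1-byte-length packet format are invented for illustration. The point it demonstrates is that a secure-read routine which returns -1 on an incomplete packet must also set errno to EWOULDBLOCK, so a pqReadData-style caller retries rather than misreading stale errno as a hard failure.]

```c
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

typedef struct
{
    uint8_t buf[256];           /* raw bytes received from the socket so far */
    size_t  have;               /* how many of them are valid */
} ToyConn;

/*
 * Toy packet format: a 1-byte length followed by that many payload bytes.
 * On "no complete packet yet" we must return -1 AND set errno; leaving
 * errno stale is exactly the bug described in the message above.
 */
ssize_t
toy_gss_read(ToyConn *conn, void *out, size_t outlen)
{
    size_t plen;

    if (conn->have < 1 || conn->have < (size_t) conn->buf[0] + 1)
    {
        errno = EWOULDBLOCK;    /* the critical part of the API */
        return -1;
    }
    plen = conn->buf[0];
    if (plen > outlen)
        plen = outlen;
    memcpy(out, conn->buf + 1, plen);
    conn->have = 0;             /* toy bookkeeping: consume everything */
    return (ssize_t) plen;
}
```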
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> I wrote:\n> > I haven't run it further to ground than that, but there's definitely\n> > something fishy here. Based on just these results one would be hard\n> > pressed to say if it's our bug or FreeBSD's, but your report that you\n> > don't see the failure with PG 11 makes it sound like our problem.\n> \n> Ah, I have it: whoever wrote pg_GSS_read() failed to pay attention\n> to the fact that setting errno is a critical part of its API.\n> Sometimes it returns -1 while leaving errno in a state that causes\n> pqReadData to draw the wrong conclusions. In particular that can\n> happen when it reads an incomplete packet, and that's very timing\n> dependent, which is why this is so ticklish to reproduce.\n\nAh-hah. Not sure if that was Robbie or myself (probably me, really,\nsince I rewrote a great deal of that code). I agree that the regression\ntests don't test with very much data, but I tested pushing quite a bit\nof data through and didn't see any issues with my testing. Apparently I\nwas getting pretty lucky. :/\n\n> I'll have a patch in a little while.\n\nThat's fantastic, thanks!\n\nStephen",
"msg_date": "Fri, 10 Jan 2020 15:38:07 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Ah-hah. Not sure if that was Robbie or myself (probably me, really,\n> since I rewrote a great deal of that code). I agree that the regression\n> tests don't test with very much data, but I tested pushing quite a bit\n> of data through and didn't see any issues with my testing. Apparently I\n> was getting pretty lucky. :/\n\nYou were *very* lucky, because this code is absolutely full of mistakes\nrelated to incomplete reads, inadequate or outright wrong error handling,\netc.\n\nI was nearly done cleaning that up, when it sank into me that\nfe-secure-gssapi.c uses static buffers for partially-read or\npartially-encoded data. That means that any client trying to use\nmultiple GSSAPI-encrypted connections is very likely to see breakage\ndue to different connections trying to use the same buffers concurrently.\nI wonder whether that doesn't explain the complaints mentioned upthread\nfrom the Ruby folks.\n\n(be-secure-gssapi.c is coded identically, but there it's OK since\nany backend only has one client connection to deal with.)\n\n>> I'll have a patch in a little while.\n\n> That's fantastic, thanks!\n\nThis is gonna take longer than I thought.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 15:58:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
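[The static-buffer hazard described above points at an obvious fix direction: allocate the partial-read/partial-encrypt buffers per connection instead of in file-static storage. A hypothetical sketch, with names and sizes invented for illustration; it also shows why malloc helps with the alignment issue mentioned later in the thread, since malloc'd storage is maxaligned while a static char array need not be.]

```c
#include <stdlib.h>
#include <string.h>

#define TOY_GSS_BUFFER_SIZE 16384   /* illustrative size only */

/* Per-connection buffer state, so two concurrent GSSAPI-encrypted
 * connections cannot clobber each other's partial data. */
typedef struct ToyConnBuffers
{
    char  *gss_send_buf;        /* partially encrypted outgoing data */
    char  *gss_recv_buf;        /* partially read incoming packet */
    size_t send_len;
    size_t recv_len;
} ToyConnBuffers;

int
conn_buffers_init(ToyConnBuffers *b)
{
    b->gss_send_buf = malloc(TOY_GSS_BUFFER_SIZE);
    b->gss_recv_buf = malloc(TOY_GSS_BUFFER_SIZE);
    b->send_len = 0;
    b->recv_len = 0;
    return (b->gss_send_buf && b->gss_recv_buf) ? 0 : -1;
}

void
conn_buffers_free(ToyConnBuffers *b)
{
    free(b->gss_send_buf);
    free(b->gss_recv_buf);
}
```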
{
"msg_contents": "On Fri, Jan 10, 2020 at 12:59:09PM -0500, Tom Lane wrote:\n! [ redirecting to -hackers ]\n\n! I modified the kerberos test so that it tries a query with a less\n! negligible amount of data, and what I find is:\n! \n! * passes on Fedora 30, with either default or 1500 mtu\n! * passes on FreeBSD 12.0 with default mtu\n! * FAILS on FreeBSD 12.0 with mtu = 1500\n\nSo, it is not related to only VIMAGE @ R11.3, and -more important to\nme- it is not only happening in my kitchen. Thank You very much :)\n\n! OTOH, I also find that there's some hysteresis in the behavior:\n! once it's failed, reverting the mtu to the default setting doesn't\n! necessarily make subsequent runs pass. It's really hard to explain\n! that behavior if it's our bug.\n\nThat's affirmative. Made me go astray a few times when trying to\nisolate it.\n\nOn Fri, Jan 10, 2020 at 02:25:22PM -0500, Tom Lane wrote:\n! Ah, I have it: whoever wrote pg_GSS_read() failed to pay attention\n! to the fact that setting errno is a critical part of its API.\n! Sometimes it returns -1 while leaving errno in a state that causes\n\nWow, that's fast. My probability-guess this morning was either some\nhard-coded 8192-byte buffer, or something taking an [EWOULDBLOCK] for\nOK. Then I decided to not look into the code, as You will be much\nfaster anyway, and there are other pieces of software where I do\nnot have such a competent peer to talk to...\n\nAnyway, thanks a lot!\nPMc\n\n\n",
"msg_date": "Fri, 10 Jan 2020 22:03:37 +0100",
"msg_from": "Peter <pmc@citylink.dinoex.sub.org>",
"msg_from_op": true,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Jan 10, 2020 at 15:58 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Ah-hah. Not sure if that was Robbie or myself (probably me, really,\n> > since I rewrote a great deal of that code). I agree that the regression\n> > tests don't test with very much data, but I tested pushing quite a bit\n> > of data through and didn't see any issues with my testing. Apparently I\n> > was getting pretty lucky. :/\n>\n> You were *very* lucky, because this code is absolutely full of mistakes\n> related to incomplete reads, inadequate or outright wrong error handling,\n> etc.\n\n\nI guess so.. I specifically remember running into problems transferring\nlarge data sets before I rewrote the code but after doing so it was\nreliable (for me anyway...).\n\nI was nearly done cleaning that up, when it sank into me that\n> fe-secure-gssapi.c uses static buffers for partially-read or\n> partially-encoded data. That means that any client trying to use\n> multiple GSSAPI-encrypted connections is very likely to see breakage\n> due to different connections trying to use the same buffers concurrently.\n\n\nUghhh. That’s a completely valid point and one I should have thought of.\n\nI wonder whether that doesn't explain the complaints mentioned upthread\n> from the Ruby folks.\n\n\nNo- the segfault issue has been demonstrated to be able to reproduce\nwithout any PG code involved at all, and it also involved threads with only\none connection, at least as I recall (on my phone atm).\n\n(be-secure-gssapi.c is coded identically, but there it's OK since\n> any backend only has one client connection to deal with.)\n\n\nRight... 
I actually wrote the backend code first and then largely copied\nit to the front end, and then adjusted it, but obviously insufficiently as\nI had been thinking of just the one connection that the backend has to deal\nwith.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 10 Jan 2020 16:09:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Here's a draft patch that cleans up all the logic errors I could find.\n\nI also expanded the previous patch for the kerberos test so that it\nverifies that we can upload a nontrivial amount of data to the server,\nas well as download.\n\nI also spent a fair amount of effort on removing cosmetic differences\nbetween the comparable logic in be-secure-gssapi.c and fe-secure-gssapi.c,\nsuch that the two files can now be diff'd to confirm that be_gssapi_write\nand be_gssapi_read implement identical logic to pg_GSS_write and\npg_GSS_read. (They did not, before :-(.)\n\nThis does not deal with the problem that libpq shouldn't be using\nstatic data space for this purpose. It seems reasonable to me to\nleave that for a separate patch.\n\nThis passes tests for me, on my FreeBSD build with lo0 mtu = 1500.\nIt wouldn't hurt to get some more mileage on it though. Peter,\nI didn't follow how to set up the \"packet radio speed\" environment\nthat you mentioned, but if you could beat on this with that setup\nit would surely be useful.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 10 Jan 2020 22:51:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "I wrote:\n> Here's a draft patch that cleans up all the logic errors I could find.\n\nSo last night I was assuming that this problem just requires more careful\nattention to what to return in the error exit paths. In the light of\nmorning, though, I realize that the algorithms involved in\nbe-secure-gssapi.c and fe-secure-gssapi.c are just fundamentally wrong:\n\n* On the read side, the code will keep looping until it gets a no-data\nerror from the underlying socket call. This is silly. In every or\nalmost every use, the caller's read length request corresponds to the\nsize of a buffer that's meant to be larger than typical messages, so\nthat betting that we're going to fill that buffer completely is the\nwrong way to bet. Meanwhile, it's fairly likely that the incoming\nencrypted packet's length *does* correspond to some actual message\nboundary; that would only not happen if the sender is forced to break\nup a message, which ought to be a minority situation, else our buffer\nsize choices are too small. So it's very likely that the looping just\nresults in doubling the number of read() calls that are made, with\nhalf of them failing with EWOULDBLOCK. What we should do instead is\nreturn to the caller whenever we finish handing back the decrypted\ncontents of a packet. We can do the read() on the next call, after\nthe caller's dealt with that data.\n\n* On the write side, if the code encrypts some data and then gets\nEWOULDBLOCK trying to write it, it will tell the caller that it\nsuccessfully wrote that data. If that was all the data the caller\nhad to write (again, not so unlikely) this is a catastrophic\nmistake, because the caller will be satisfied and will go to sleep,\nrather than calling again to attempt another write. What we *must*\ndo is to reflect the write failure verbatim whether or not we\nencrypted some data. 
We must remember how much data we encrypted\nand then discount that much of the caller's supplied data next time.\nThere are hints in the existing comments that somebody understood\nthis at one point, but the code isn't acting that way today.\n\nI expect that I can prove point B by hot-wiring pqsecure_raw_write\nto randomly return EWOULDBLOCK (instead of making any write attempt)\nevery so often. I think strace will be enough to confirm point A,\nbut haven't tried it yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jan 2020 10:28:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
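[The write-side rule stated above — report only bytes whose encrypted packets were fully sent, and surface the would-block condition verbatim if nothing went out — can be modeled in a few lines. This is a toy: the fixed chunk size stands in for GSSAPI packetization and a byte budget stands in for the socket; the real fix additionally has to remember encrypted-but-unsent data across calls, which is omitted here.]

```c
#include <errno.h>
#include <sys/types.h>

#define CHUNK 4                 /* toy "encryption packet" payload size */

typedef struct
{
    size_t transport_budget;    /* bytes the mock socket will accept */
} ToyWriteState;

ssize_t
toy_gss_write(ToyWriteState *st, const char *data, size_t len)
{
    size_t reported = 0;        /* bytes we may tell the caller are written */

    while (reported < len)
    {
        size_t plen = (len - reported < CHUNK) ? len - reported : CHUNK;

        if (st->transport_budget < plen)
        {
            /* This packet cannot go out in full.  Report only fully
             * sent packets; if none were sent, reflect the would-block
             * failure so the caller knows it must call again. */
            if (reported > 0)
                return (ssize_t) reported;      /* partial write */
            errno = EWOULDBLOCK;
            return -1;
        }
        st->transport_budget -= plen;           /* "send" the packet */
        reported += plen;
    }
    return (ssize_t) reported;
}
```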
{
"msg_contents": "I wrote:\n> So last night I was assuming that this problem just requires more careful\n> attention to what to return in the error exit paths. In the light of\n> morning, though, I realize that the algorithms involved in\n> be-secure-gssapi.c and fe-secure-gssapi.c are just fundamentally wrong:\n\nHere's a revised patch that attempts to deal with those issues.\n(Still doesn't touch the static-buffer issue, though.)\n\nThe 0002 patch isn't meant for commit, but testing with that gives me\na whole lot more confidence that the gssapi code deals with EWOULDBLOCK\ncorrectly.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 11 Jan 2020 15:37:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "I wrote:\n> (Still doesn't touch the static-buffer issue, though.)\n\nAnd here's a delta patch for that. This fixes an additional portability\nhazard, which is that there were various places doing stuff like\n\n\t\tinput.length = ntohl(*(uint32 *) PqGSSRecvBuffer);\n\nThat's a SIGBUS hazard on alignment-picky hardware, because there is\nno guarantee that a variable that's just declared \"char ...[...]\"\nwill have any particular alignment. But malloc'ing the space will\nprovide maxaligned storage.\n\nMy FreeBSD testing has now given me enough confidence in these patches\nthat I'm just going to go ahead and push them. But, if you'd like to\ndo some more testing in advance of 12.2 release, please do.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 11 Jan 2020 16:42:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
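[The alignment hazard mentioned above has a standard portable spelling: memcpy the length word into an aligned local variable instead of casting the buffer pointer, which is safe regardless of where the bytes sit. A minimal sketch (function name invented):]

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Read a 32-bit network-order length from a byte buffer without
 * assuming the buffer is 4-byte aligned; memcpy carries no alignment
 * requirement, unlike dereferencing a cast uint32 pointer. */
uint32_t
read_packet_length(const char *buf)
{
    uint32_t netlen;

    memcpy(&netlen, buf, sizeof(netlen));
    return ntohl(netlen);
}
```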
{
"msg_contents": "On Fri, Jan 10, 2020 at 10:51:50PM -0500, Tom Lane wrote:\n! Here's a draft patch that cleans up all the logic errors I could find.\n\nOkiee, thank You!\nLet's see (was a bit busy yesterday trying to upgrade pgadmin3 -\ndifficult matter), now lets sort this out:\n\nWith the first patch applied (as from Friday - applied only on the\nclient side), the application did appear to work well.\n\nBut then, when engaging bandwidth-limiting to some modem-speed, it did\nnot work: psql would receive all (or most of) the data from a SELECT,\nbut then somehow not recognize the end of it and sit there and wait for\nwhatever:\n\n> flowmdev=> select * from flows;\n> ^CCancel request sent\n> ^CCancel request sent\n\n\nNow with the new patches 0001+0003 applied, on both server & client,\nall now running 12.1 release, on a first run I did not perceive\na malfunction, bandwidth limited or not.\nI'll leave them applied, but this here will not experience serious\nloads; You'll need somebody else to test for that...\n\n\nrgds,\nPMc\n\n\n",
"msg_date": "Sun, 12 Jan 2020 19:36:33 +0100",
"msg_from": "Peter <pmc@citylink.dinoex.sub.org>",
"msg_from_op": true,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Peter <pmc@citylink.dinoex.sub.org> writes:\n> With the first patch applied (as from Friday - applied only on the\n> client side), the application did appear to work well.\n> But then, when engaging bandwidth-limiting to some modem-speed, it did\n> not work: psql would receive all (or most of) the data from a SELECT,\n> but then somehow not recognize the end of it and sit there and wait for\n> whatever:\n\nYeah, that's just the behavior I'd expect (and was able to reproduce\nhere) because of the additional design problem.\n\n> Now with the new patches 0001+0003 applied, on both server & client,\n> all now running 12.1 release, on a first run I did not perceive\n> a malfunction, bandwidth limited or not.\n> I'll leave them applied, but this here will not experience serious\n> loads; You'll need somebody else to test for that...\n\nCool, let us know if you do see any problems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Jan 2020 14:46:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> I wrote:\n> > Here's a draft patch that cleans up all the logic errors I could find.\n> \n> So last night I was assuming that this problem just requires more careful\n> attention to what to return in the error exit paths. In the light of\n> morning, though, I realize that the algorithms involved in\n> be-secure-gssapi.c and fe-secure-gssapi.c are just fundamentally wrong:\n> \n> * On the read side, the code will keep looping until it gets a no-data\n> error from the underlying socket call. This is silly. In every or\n> almost every use, the caller's read length request corresponds to the\n> size of a buffer that's meant to be larger than typical messages, so\n> that betting that we're going to fill that buffer completely is the\n> wrong way to bet. Meanwhile, it's fairly likely that the incoming\n> encrypted packet's length *does* correspond to some actual message\n> boundary; that would only not happen if the sender is forced to break\n> up a message, which ought to be a minority situation, else our buffer\n> size choices are too small. So it's very likely that the looping just\n> results in doubling the number of read() calls that are made, with\n> half of them failing with EWOULDBLOCK. What we should do instead is\n> return to the caller whenever we finish handing back the decrypted\n> contents of a packet. We can do the read() on the next call, after\n> the caller's dealt with that data.\n\nYeah, I agree that this is a better approach. Doing unnecessary\nread()'s certainly isn't ideal but beyond being silly it doesn't sound\nlike this was fundamentally broken..? (yes, the error cases certainly\nweren't properly being handled, I understand that)\n\n> * On the write side, if the code encrypts some data and then gets\n> EWOULDBLOCK trying to write it, it will tell the caller that it\n> successfully wrote that data. 
If that was all the data the caller\n> had to write (again, not so unlikely) this is a catastrophic\n> mistake, because the caller will be satisfied and will go to sleep,\n> rather than calling again to attempt another write. What we *must*\n> do is to reflect the write failure verbatim whether or not we\n> encrypted some data. We must remember how much data we encrypted\n> and then discount that much of the caller's supplied data next time.\n> There are hints in the existing comments that somebody understood\n> this at one point, but the code isn't acting that way today.\n\nThat's a case I hadn't considered and you're right- the algorithm\ncertainly wouldn't work in such a case. I don't recall specifically if\nthe code had handled it better previously, or not, but I do recall there\nwas something previously about being given a buffer and then having the\nAPI defined as \"give me back the exact same buffer because I had to\nstop\" and I recall finding that to ugly, but I get it now, seeing this\nissue. I'd certainly be happier if there was a better alternative but I\ndon't know that there really is.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 14 Jan 2020 15:12:07 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> ... We must remember how much data we encrypted\n>> and then discount that much of the caller's supplied data next time.\n>> There are hints in the existing comments that somebody understood\n>> this at one point, but the code isn't acting that way today.\n\n> That's a case I hadn't considered and you're right- the algorithm\n> certainly wouldn't work in such a case. I don't recall specifically if\n> the code had handled it better previously, or not, but I do recall there\n> was something previously about being given a buffer and then having the\n> API defined as \"give me back the exact same buffer because I had to\n> stop\" and I recall finding that to ugly, but I get it now, seeing this\n> issue. I'd certainly be happier if there was a better alternative but I\n> don't know that there really is.\n\nYeah. The only bright spot is that there's no reason for the caller\nto change its mind about what it wants to write, so that this restriction\ndoesn't really affect anything. (The next call might potentially add\n*more* data at the end, but that's fine.)\n\nI realized when I got into it that my sketch above also considered only\npart of the problem. In the general case, we might've encrypted some data\nfrom the current write request and successfully sent it, and then\nencrypted some more data but been unable to (fully) send that packet.\nIn this situation, it's best to report that we wrote however much data\ncorresponds to the fully sent packet(s). That way the caller can discard\nthat data from its buffer. We can't report the data corresponding to the\nin-progress packet as being written, though, or we have the\nmight-not-get-another-call problem. Fortunately the API already has the\nnotion of a partial write, since the underlying socket calls do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 15:22:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> ... We must remember how much data we encrypted\n> >> and then discount that much of the caller's supplied data next time.\n> >> There are hints in the existing comments that somebody understood\n> >> this at one point, but the code isn't acting that way today.\n> \n> > That's a case I hadn't considered and you're right- the algorithm\n> > certainly wouldn't work in such a case. I don't recall specifically if\n> > the code had handled it better previously, or not, but I do recall there\n> > was something previously about being given a buffer and then having the\n> > API defined as \"give me back the exact same buffer because I had to\n> > stop\" and I recall finding that to ugly, but I get it now, seeing this\n> > issue. I'd certainly be happier if there was a better alternative but I\n> > don't know that there really is.\n> \n> Yeah. The only bright spot is that there's no reason for the caller\n> to change its mind about what it wants to write, so that this restriction\n> doesn't really affect anything. (The next call might potentially add\n> *more* data at the end, but that's fine.)\n\nRight, makes sense.\n\n> I realized when I got into it that my sketch above also considered only\n> part of the problem. In the general case, we might've encrypted some data\n> from the current write request and successfully sent it, and then\n> encrypted some more data but been unable to (fully) send that packet.\n> In this situation, it's best to report that we wrote however much data\n> corresponds to the fully sent packet(s). That way the caller can discard\n> that data from its buffer. We can't report the data corresponding to the\n> in-progress packet as being written, though, or we have the\n> might-not-get-another-call problem. 
Fortunately the API already has the\n> notion of a partial write, since the underlying socket calls do.\n\nYeah, I see how that's also an issue and agree that it makes sense to\nreport back what's been written and sent as a partial write, and not\nreport back everything we've \"consumed\" since we might not get called\nagain in that case.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 14 Jan 2020 15:45:17 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI\n ?)"
}
] |
[
{
"msg_contents": "A customer of ours complained that if you have an inactive primary,\nmonitoring the apply lag on a standby reports monotonically increasing\nlag. The reason for this is that the apply lag is only updated on\nCOMMIT records, which of course don't occur in inactive servers.\nBut CHECKPOINT records do occur, so the WAL insert pointer continues to\nmove forward, which is what causes the spurious lag.\n\n(I think newer releases are protected from this problem because they\ndon't emit checkpoints during periods of inactivity. I didn't verify\nthis.)\n\nThis patch fixes the problem by using the checkpoint timestamp to update\nthe lag tracker in the standby. This requires a little change in where\nthis update is invoked, because previously it was done only for the XACT\nrmgr; this makes the patch a little bigger than it should.\n\n-- \n�lvaro Herrera PostgreSQL Expert, https://www.2ndQuadrant.com/",
"msg_date": "Fri, 10 Jan 2020 11:08:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-10, Alvaro Herrera wrote:\n\n> A customer of ours complained that if you have an inactive primary,\n> monitoring the apply lag on a standby reports monotonically increasing\n> lag. The reason for this is that the apply lag is only updated on\n> COMMIT records, which of course don't occur in inactive servers.\n> But CHECKPOINT records do occur, so the WAL insert pointer continues to\n> move forward, which is what causes the spurious lag.\n> \n> (I think newer releases are protected from this problem because they\n> don't emit checkpoints during periods of inactivity. I didn't verify\n> this.)\n> \n> This patch fixes the problem by using the checkpoint timestamp to update\n> the lag tracker in the standby. This requires a little change in where\n> this update is invoked, because previously it was done only for the XACT\n> rmgr; this makes the patch a little bigger than it should.\n\nHere's a version of the patch that applies to current master. It does\nfix the problem that CHECKPOINT wal records are not considered when\ndetermining time-of-latest-record.\n\nHowever, it does *not* fix the monitoring problem I mentioned (which\nrelied on comparing pg_last_xact_replay_timestamp() to 'now()') ...\nbecause commit 6ef2eba3f57f (pg10) made an idle server not emit\ncheckpoint records anymore. That is, my parenthical remark was\ncompletely wrong: the new versions not only are \"protected\", but also\nthis fix doesn't fix them. Luckily, the way to fix monitoring for\nservers of versions 10 and later is to use the new replay_lag (etc)\ncolumns in pg_stat_replication, commit 6912acc04f0b (also pg10).\n\nI am inclined to apply this to all branches unless there are strong\nobjections, because the current code seems pretty arbitrary anyway.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 27 Jan 2020 17:34:19 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "Actually looking again, getRecordTimestamp is looking pretty strange.\nIt looks much more natural by using nested switch/case blocks, as with\nthis diff. I think the compiler does a better job this way too.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 Jan 2020 18:06:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "Hello.\n\nAt Mon, 27 Jan 2020 18:06:27 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> Actually looking again, getRecordTimestamp is looking pretty strange.\n> It looks much more natural by using nested switch/case blocks, as with\n> this diff. I think the compiler does a better job this way too.\n\nAgreed. Anyway I looked the latest version.\n\nThe aim of the patch seem reasonable. XLOG_END_OF_RECOVERY and\nXLOG_XACT_PREPARE also have a timestamp but it doesn't help much. (But\ncould be included for consistency.)\n\nThe timestamp of a checkpoint record is the start time of a checkpoint\n(and doesn't have subseconds part, but it's not a problem.). That\nmeans that the timestamp is usually far behind the time at the record\nhas been inserted. As the result the stored timestamp can go back by a\nsignificant internval. I don't think that causes an actual problem but\nthe movement looks wierd as the result of\npg_last_xact_replay_timestamp().\n\nAsides from the backward movement, a timestamp from other than\ncommit/abort records in recvoeryLastXTime affects the following code.\n\nxlog.c:7329: (similar code exists at line 9332)\n> ereport(LOG,\n> \t\t(errmsg(\"redo done at %X/%X\",\n> \t\t\t\t(uint32) (ReadRecPtr >> 32), (uint32) ReadRecPtr)));\n> xtime = GetLatestXTime();\n> if (xtime)\n> \tereport(LOG,\n> \t\t\t(errmsg(\"last completed transaction was at log time %s\",\n> \t\t\t\t\ttimestamptz_to_str(xtime))));\n\nThis code assumes (and the name GetLatestXTime() suggests, I first\nnoticed that here..) that the timestamp comes from commit/abort logs,\nso otherwise it shows a wrong timestamp. We shouldn't update the\nvariable by other than that kind of records.\n\n\nIf (I don't think that comes true..) 
we set the timestamp from other\nthan that kind of record, the names and the comments of the functions\nshould be changed.\n\n> /*\n> * Save timestamp of latest processed commit/abort record.\n> *\n> * We keep this in XLogCtl, not a simple static variable, so that it can be\n> * seen by processes other than the startup process. Note in particular\n> * that CreateRestartPoint is executed in the checkpointer.\n> */\n> static void\n> SetLatestXTime(TimestampTz xtime)\n...\n> /*\n> * Fetch timestamp of latest processed commit/abort record.\n> */\n> TimestampTz\n> GetLatestXTime(void)\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Jan 2020 18:12:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-28, Kyotaro Horiguchi wrote:\n\n> At Mon, 27 Jan 2020 18:06:27 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> > Actually looking again, getRecordTimestamp is looking pretty strange.\n> > It looks much more natural by using nested switch/case blocks, as with\n> > this diff. I think the compiler does a better job this way too.\n> \n> Agreed. Anyway I looked the latest version.\n> \n> The aim of the patch seem reasonable. XLOG_END_OF_RECOVERY and\n> XLOG_XACT_PREPARE also have a timestamp but it doesn't help much. (But\n> could be included for consistency.)\n\nHello, thanks for looking.\n\n> The timestamp of a checkpoint record is the start time of a checkpoint\n> (and doesn't have subseconds part, but it's not a problem.). That\n> means that the timestamp is usually far behind the time at the record\n> has been inserted. As the result the stored timestamp can go back by a\n> significant internval. I don't think that causes an actual problem but\n> the movement looks wierd as the result of\n> pg_last_xact_replay_timestamp().\n\nOuch ... yeah, it should be set only if it doesn't go backwards.\n\n> xlog.c:7329: (similar code exists at line 9332)\n> > ereport(LOG,\n> > \t\t(errmsg(\"redo done at %X/%X\",\n> > \t\t\t\t(uint32) (ReadRecPtr >> 32), (uint32) ReadRecPtr)));\n> > xtime = GetLatestXTime();\n> > if (xtime)\n> > \tereport(LOG,\n> > \t\t\t(errmsg(\"last completed transaction was at log time %s\",\n> > \t\t\t\t\ttimestamptz_to_str(xtime))));\n> \n> This code assumes (and the name GetLatestXTime() suggests, I first\n> noticed that here..) that the timestamp comes from commit/abort logs,\n> so otherwise it shows a wrong timestamp. We shouldn't update the\n> variable by other than that kind of records.\n\nHmm, that's terrible. GetLatestXTime() being displayed user-visibly for\n\"last transaction completion\" but having it include unrelated things\nsuch as restore points is terrible. 
One idea is to split it in\ntwo: one exclusively for transaction commit/abort, and another for all\nWAL activity. That way, the former can be used for that message, and\nthe latter for standby replay reports. However, that might be\noverengineering, if the only thing that uses the former is that one LOG\nmessage; instead changing the log message to state that the time is for\nother activity, as you suggest, is simpler and has no downside that I\ncan see.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:18:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-27, Alvaro Herrera wrote:\n\n> Actually looking again, getRecordTimestamp is looking pretty strange.\n> It looks much more natural by using nested switch/case blocks, as with\n> this diff. I think the compiler does a better job this way too.\n\nI hadn't noticed I forgot to attach the diff here :-(\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 28 Jan 2020 11:18:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "At Tue, 28 Jan 2020 11:18:50 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jan-27, Alvaro Herrera wrote:\n> \n> > Actually looking again, getRecordTimestamp is looking pretty strange.\n> > It looks much more natural by using nested switch/case blocks, as with\n> > this diff. I think the compiler does a better job this way too.\n> \n> I hadn't noticed I forgot to attach the diff here :-(\n\nYeay, that patch bases the apply-lag patch:) And contains\nXLOG_CHECKPOINT_*. But otherwise looks good to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 29 Jan 2020 13:52:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "At Tue, 28 Jan 2020 11:18:14 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> > xlog.c:7329: (similar code exists at line 9332)\n> > > ereport(LOG,\n> > > \t\t(errmsg(\"redo done at %X/%X\",\n> > > \t\t\t\t(uint32) (ReadRecPtr >> 32), (uint32) ReadRecPtr)));\n> > > xtime = GetLatestXTime();\n> > > if (xtime)\n> > > \tereport(LOG,\n> > > \t\t\t(errmsg(\"last completed transaction was at log time %s\",\n> > > \t\t\t\t\ttimestamptz_to_str(xtime))));\n> > \n> > This code assumes (and the name GetLatestXTime() suggests, I first\n> > noticed that here..) that the timestamp comes from commit/abort logs,\n> > so otherwise it shows a wrong timestamp. We shouldn't update the\n> > variable by other than that kind of records.\n> \n> Hmm, that's terrible. GetLatestXTime() being displayed user-visibly for\n> \"last transaction completion\" but having it include unrelated things\n> such as restore points is terrible. One idea is to should split it in\n> two: one exclusively for transaction commit/abort, and another for all\n> WAL activity. That way, the former can be used for that message, and\n> the latter for standby replay reports. However, that might be\n> overengineering, if the only thing that the former is that one LOG\n> message; instead changing the log message to state that the time is for\n> other activity, as you suggest, is simpler and has no downside that I\n> can see.\n\nPerhaps we can use ControlData->checkPointCopy.time instead. It misses\ncheckpoint records intermittently but works in general.\n\nBut as more significant issue, nowadays PostgreSQL doesn't run a\ncheckpoint if it is really inactive (that is, if no \"important\" WAL\nrecords have issued.).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 29 Jan 2020 14:03:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-29, Kyotaro Horiguchi wrote:\n\n> But as more significant issue, nowadays PostgreSQL doesn't run a\n> checkpoint if it is really inactive (that is, if no \"important\" WAL\n> records have issued.).\n\nYeah, I mentioned this in message\n<20200127203419.GA15216@alvherre.pgsql>. The solution for monitoring\npurposes is to use the new \"lag\" columns in pg_stat_replication. But\nthat's not available in older releases (prior to 10), so this change is\nstill useful.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:48:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-28, Kyotaro Horiguchi wrote:\n\n> The aim of the patch seem reasonable. XLOG_END_OF_RECOVERY and\n> XLOG_XACT_PREPARE also have a timestamp but it doesn't help much. (But\n> could be included for consistency.)\n\nHmm I think I should definitely include those.\n\n> The timestamp of a checkpoint record is the start time of a checkpoint\n> (and doesn't have subseconds part, but it's not a problem.). That\n> means that the timestamp is usually far behind the time at the record\n> has been inserted. As the result the stored timestamp can go back by a\n> significant internval. I don't think that causes an actual problem but\n> the movement looks wierd as the result of\n> pg_last_xact_replay_timestamp().\n\nA problem I see with this is that setting the timestamp is done with\nXLogCtl->lock held; since now we need to make the update conditional,\nwe'd have to read the current value, compare with the checkpoint time,\nthen set, all with the spinlock held. That seems way too expensive.\n\nA compromise might be to do the compare only when it's done for\ncheckpoint. These occur seldom enough that it shouldn't be a problem\n(as opposed to commit records, which can be very frequent).\n\n> Asides from the backward movement, a timestamp from other than\n> commit/abort records in recvoeryLastXTime affects the following code.\n> \n> xlog.c:7329: (similar code exists at line 9332)\n> > ereport(LOG,\n> > \t\t(errmsg(\"redo done at %X/%X\",\n> > \t\t\t\t(uint32) (ReadRecPtr >> 32), (uint32) ReadRecPtr)));\n> > xtime = GetLatestXTime();\n> > if (xtime)\n> > \tereport(LOG,\n> > \t\t\t(errmsg(\"last completed transaction was at log time %s\",\n> > \t\t\t\t\ttimestamptz_to_str(xtime))));\n> \n> This code assumes (and the name GetLatestXTime() suggests, I first\n> noticed that here..) that the timestamp comes from commit/abort logs,\n> so otherwise it shows a wrong timestamp. 
We shouldn't update the\n> variable by other than that kind of records.\n\nThinking about this some more, I think we should keep the message the\nsame in the back branches (avoid breaking anything that might be reading the log\n-- a remote but not nonexistent possibility), and adjust the message in\nmaster to be something like \"last timestamped WAL activity at time %s\",\nand document that it means commit, abort, restore label, checkpoint.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 29 Jan 2020 19:11:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "At Wed, 29 Jan 2020 19:11:31 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jan-28, Kyotaro Horiguchi wrote:\n> \n> > The aim of the patch seem reasonable. XLOG_END_OF_RECOVERY and\n> > XLOG_XACT_PREPARE also have a timestamp but it doesn't help much. (But\n> > could be included for consistency.)\n> \n> Hmm I think I should definitely include those.\n\nI agree to that, given the following change of log messages.\n\n> > The timestamp of a checkpoint record is the start time of a checkpoint\n> > (and doesn't have subseconds part, but it's not a problem.). That\n> > means that the timestamp is usually far behind the time at the record\n> > has been inserted. As the result the stored timestamp can go back by a\n> > significant internval. I don't think that causes an actual problem but\n> > the movement looks wierd as the result of\n> > pg_last_xact_replay_timestamp().\n> \n> A problem I see with this is that setting the timestamp is done with\n> XLogCtl->lock held; since now we need to make the update conditional,\n> we'd have to read the current value, compare with the checkpoint time,\n> then set, all with the spinlock held. That seems way too expensive.\n> \n> A compromise might be to do the compare only when it's done for\n> checkpoint. 
These occur seldom enough that it shouldn't be a problem\n> (as opposed to commit records, which can be very frequent).\n\nI think we don't need to do that, given the following change.\n\n> > Asides from the backward movement, a timestamp from other than\n> > commit/abort records in recvoeryLastXTime affects the following code.\n> > \n> > xlog.c:7329: (similar code exists at line 9332)\n> > > ereport(LOG,\n> > > \t\t(errmsg(\"redo done at %X/%X\",\n> > > \t\t\t\t(uint32) (ReadRecPtr >> 32), (uint32) ReadRecPtr)));\n> > > xtime = GetLatestXTime();\n> > > if (xtime)\n> > > \tereport(LOG,\n> > > \t\t\t(errmsg(\"last completed transaction was at log time %s\",\n> > > \t\t\t\t\ttimestamptz_to_str(xtime))));\n> > \n> > This code assumes (and the name GetLatestXTime() suggests, I first\n> > noticed that here..) that the timestamp comes from commit/abort logs,\n> > so otherwise it shows a wrong timestamp. We shouldn't update the\n> > variable by other than that kind of records.\n> \n> Thinking about this some more, I think we should do keep the message the\n> same backbranches (avoid breaking anything that might be reading the log\n> -- a remote but not inexistent possibility), and adjust the message in\n> master to be something like \"last timestamped WAL activity at time %s\",\n> and document that it means commit, abort, restore label, checkpoint.\n\nAgreed about backbranches. I'd like to preserve the word \"transaction\"\nas it is more familiar to users. How about something like the following?\n\n\"transactions are completed up to log time %s\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:19:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-30, Kyotaro Horiguchi wrote:\n\n> Agreed about backbranches. I'd like to preserve the word \"transaction\"\n> as it is more familiar to users. How about something like the follows?\n> \n> \"transactions are completed up to log time %s\"\n\nThat's a good point. I used the phrase \"transaction activity\", which\nseems sufficiently explicit to me.\n\nSo, the attached is the one for master; in back branches I would use the\nsame (plus minor conflict fixes), except that I would drop the message\nwording changes.\n\nThanks for the reviews so far,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 30 Jan 2020 17:45:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "At Thu, 30 Jan 2020 17:45:36 -0300, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Jan-30, Kyotaro Horiguchi wrote:\n> \n> > Agreed about backbranches. I'd like to preserve the word \"transaction\"\n> > as it is more familiar to users. How about something like the follows?\n> > \n> > \"transactions are completed up to log time %s\"\n> \n> That's a good point. I used the phrase \"transaction activity\", which\n> seems sufficiently explicit to me.\n> \n> So, the attached is the one for master; in back branches I would use the\n> same (plus minor conflict fixes), except that I would drop the message\n> wording changes.\n> \n> Thanks for the reviews so far,\n\nMy pleasure.\n\nregads.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 31 Jan 2020 14:35:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "\n\nOn 2020/01/31 5:45, Alvaro Herrera wrote:\n> On 2020-Jan-30, Kyotaro Horiguchi wrote:\n> \n>> Agreed about backbranches. I'd like to preserve the word \"transaction\"\n>> as it is more familiar to users. How about something like the follows?\n>>\n>> \"transactions are completed up to log time %s\"\n> \n> That's a good point. I used the phrase \"transaction activity\", which\n> seems sufficiently explicit to me.\n> \n> So, the attached is the one for master; in back branches I would use the\n> same (plus minor conflict fixes), except that I would drop the message\n> wording changes.\n\nYou're thinking to apply this change to the back branches? Sorry\nif my understanding is not right. But I don't think that back-patch\nis ok because it changes the documented existing behavior\nof pg_last_xact_replay_timestamp(). So it looks like the behavior\nchange not a bug fix.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 21:30:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-31, Fujii Masao wrote:\n\n> You're thinking to apply this change to the back branches? Sorry\n> if my understanding is not right. But I don't think that back-patch\n> is ok because it changes the documented existing behavior\n> of pg_last_xact_replay_timestamp(). So it looks like the behavior\n> change not a bug fix.\n\nYeah, I am thinking in backpatching it. The documented behavior is\nalready not what the code does. Do you have a situation where this\nchange would break something? If so, can you please explain what it is?\n\nI think (and I said it upthread) a 100% complete fix involves tracking\ntwo timestamps rather than one. I was thinking that that would be too\ninvasive because it changes XLogCtlData shmem struct ... but that struct\nis private to xlog.c, so I think it's fine to change the struct. The\nproblem though is that the user-visible change that I want to achieve is\npg_last_xact_replay_timestamp(), and it would be obviously wrong to use\nthe new XLogCtlData field rather than the existing one, as that would be\na behavior change in the same sense that you're now complaining about.\nSo I would achieve nothing.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 31 Jan 2020 10:40:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "\n\nOn 2020/01/31 22:40, Alvaro Herrera wrote:\n> On 2020-Jan-31, Fujii Masao wrote:\n> \n>> You're thinking to apply this change to the back branches? Sorry\n>> if my understanding is not right. But I don't think that back-patch\n>> is ok because it changes the documented existing behavior\n>> of pg_last_xact_replay_timestamp(). So it looks like the behavior\n>> change not a bug fix.\n> \n> Yeah, I am thinking in backpatching it. The documented behavior is\n> already not what the code does.\n\nMaybe you thought this because getRecordTimestamp() extracts the\ntimestamp from even WAL record of a restore point? That is, you're\nconcerned about that pg_last_xact_replay_timestamp() returns the\ntimestamp of not only commit/abort record but also restore point one.\nRight?\n\nAs far as I read the code, this problem doesn't occur because\nSetLatestXTime() is called only for commit/abort records, in\nrecoveryStopsAfter(). No?\n\n> Do you have a situation where this\n> change would break something? If so, can you please explain what it is?\n\nFor example, use the return value of pg_last_xact_replay_timestamp()\n(and also the timestamp in the log message output at the end of\nrecovery) as a HINT when setting recovery_target_time later.\nUse it to compare with the timestamp retrieved from the master server,\nin order to monitor the replication delay.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 23:29:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "On 2020-Jan-31, Fujii Masao wrote:\n> On 2020/01/31 22:40, Alvaro Herrera wrote:\n> > On 2020-Jan-31, Fujii Masao wrote:\n> > \n> > > You're thinking to apply this change to the back branches? Sorry\n> > > if my understanding is not right. But I don't think that back-patch\n> > > is ok because it changes the documented existing behavior\n> > > of pg_last_xact_replay_timestamp(). So it looks like the behavior\n> > > change not a bug fix.\n> > \n> > Yeah, I am thinking in backpatching it. The documented behavior is\n> > already not what the code does.\n> \n> Maybe you thought this because getRecordTimestamp() extracts the\n> timestamp from even WAL record of a restore point? That is, you're\n> concerned about that pg_last_xact_replay_timestamp() returns the\n> timestamp of not only commit/abort record but also restore point one.\n> Right?\n\nright.\n\n> As far as I read the code, this problem doesn't occur because\n> SetLatestXTime() is called only for commit/abort records, in\n> recoveryStopsAfter(). No?\n\n... uh, wow, you're right about that too. IMO this is extremely\nfragile, easy to break, and under-documented. But you're right, there's\nno bug there at present.\n\n> > Do you have a situation where this\n> > change would break something? If so, can you please explain what it is?\n> \n> For example, use the return value of pg_last_xact_replay_timestamp()\n> (and also the timestamp in the log message output at the end of\n> recovery) as a HINT when setting recovery_target_time later.\n\nHmm.\n\nI'm not sure how you would use it in that way. I mean, I understand how\nit *can* be used that way, but it seems too fragile to be done in\npractice, in a scenario that's not just laboratory games.\n\n> Use it to compare with the timestamp retrieved from the master server,\n> in order to monitor the replication delay.\n\nThat's precisely the use case that I'm aiming at. 
The timestamp\ncurrently is not useful because this usage breaks when the primary is\ninactive (no COMMIT records occur). During such periods of inactivity,\nCHECKPOINT records would keep the \"last xtime\" current. This has\nactually happened in a production setting, it's not a thought\nexperiment.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 31 Jan 2020 11:47:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: standby apply lag on inactive servers"
},
{
"msg_contents": "\n\nOn 2020/01/31 23:47, Alvaro Herrera wrote:\n> On 2020-Jan-31, Fujii Masao wrote:\n>> On 2020/01/31 22:40, Alvaro Herrera wrote:\n>>> On 2020-Jan-31, Fujii Masao wrote:\n>>>\n>>>> You're thinking to apply this change to the back branches? Sorry\n>>>> if my understanding is not right. But I don't think that back-patch\n>>>> is ok because it changes the documented existing behavior\n>>>> of pg_last_xact_replay_timestamp(). So it looks like the behavior\n>>>> change not a bug fix.\n>>>\n>>> Yeah, I am thinking in backpatching it. The documented behavior is\n>>> already not what the code does.\n>>\n>> Maybe you thought this because getRecordTimestamp() extracts the\n>> timestamp from even WAL record of a restore point? That is, you're\n>> concerned about that pg_last_xact_replay_timestamp() returns the\n>> timestamp of not only commit/abort record but also restore point one.\n>> Right?\n> \n> right.\n> \n>> As far as I read the code, this problem doesn't occur because\n>> SetLatestXTime() is called only for commit/abort records, in\n>> recoveryStopsAfter(). No?\n> \n> ... uh, wow, you're right about that too. IMO this is extremely\n> fragile, easy to break, and under-documented.\n\nYeah, it's worth improving the code.\n\n> But you're right, there's\n> no bug there at present.\n> \n>>> Do you have a situation where this\n>>> change would break something? If so, can you please explain what it is?\n>>\n>> For example, use the return value of pg_last_xact_replay_timestamp()\n>> (and also the timestamp in the log message output at the end of\n>> recovery) as a HINT when setting recovery_target_time later.\n> \n> Hmm.\n> \n> I'm not sure how you would use it in that way. 
I mean, I understand how\n> it *can* be used that way, but it seems too fragile to be done in\n> practice, in a scenario that's not just laboratory games.\n> \n>> Use it to compare with the timestamp retrieved from the master server,\n>> in order to monitor the replication delay.\n> \n> That's precisely the use case that I'm aiming at. The timestamp\n> currently is not useful because this usage breaks when the primary is\n> inactive (no COMMIT records occur). During such periods of inactivity,\n> CHECKPOINT records would keep the \"last xtime\" current. This has\n> actually happened in a production setting, it's not a thought\n> experiment.\n\nI've heard that someone periodically generates dummy tiny\ntransactions (say, every minute), as a band-aid solution,\nto avoid an inactive primary. Of course, this is not a perfect solution.\n\nThe idea that I proposed previously was to introduce\npg_last_xact_insert_timestamp() [1] into core. This function returns\nthe timestamp of commit / abort records on the *primary* side.\nSo we can retrieve that timestamp from the primary (e.g., by using dblink)\nand compare its result with pg_last_xact_replay_timestamp() to\ncalculate the delay in the standby.\n\nAnother idea is to include the commit / abort timestamp in the\nprimary-keepalive message that is periodically sent from the primary\nto the standby. Then, if we introduce a function returning\nthat timestamp on the standby side, we can easily compare\nthe commit / abort timestamps taken from both primary and\nstandby, on the standby.\n\n[1] https://www.postgresql.org/message-id/CAHGQGwF3ZjfuNEj5ka683KU5rQUBtSWtqFq7g1X0g34o+JXWBw@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Sat, 1 Feb 2020 01:23:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: standby apply lag on inactive servers"
}
] |
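The fix discussed in the thread above has two moving parts: take timestamps from more WAL record types, and, for checkpoint records (whose timestamp is the checkpoint *start* time and so can lag behind already-replayed commit timestamps), never let the tracked value move backwards. The following is a minimal standalone sketch of that monotonic-update rule only; the names mirror those mentioned in the thread but the code is illustrative, not the server's actual implementation, and a plain variable stands in for the shared-memory field that the real code updates under XLogCtl's spinlock.

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t TimestampTz;	/* stand-in for PostgreSQL's TimestampTz */

/* stand-in for the shared XLogCtl field; see note above */
static TimestampTz recoveryLastXTime = 0;

/*
 * Update the last-replayed timestamp, but only if it moves forward.
 * A checkpoint record's timestamp (checkpoint start time) may be older
 * than the latest replayed commit time, so it must not push the value back.
 */
static void
SetLatestXTimeIfForward(TimestampTz xtime)
{
	if (xtime > recoveryLastXTime)
		recoveryLastXTime = xtime;
}

static TimestampTz
GetLatestXTimeSketch(void)
{
	return recoveryLastXTime;
}
```

For example, replaying a commit stamped 100 and then a checkpoint record stamped 90 (its start time) leaves the reported value at 100, avoiding the backward movement Kyotaro Horiguchi pointed out.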
[
{
"msg_contents": "I was using COPY recently and was wondering why BINARY format is not much\n(if any) faster than the default format. Once I switched from mostly\nexporting ints to mostly exporting double precisions (7e6 rows of 100\ncolumns, randomly generated), it was faster, but not by as much as I\nintuitively thought it should be.\n\nRunning 'perf top' to profile a \"COPY BINARY .. TO '/dev/null'\" on a AWS\nm5.large machine running Ubuntu 18.04, with self compiled PostgreSQL:\n\nPostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit\n\nI saw that the hotspot was pq_begintypsend at 20%, which was twice the\npercentage as the next place winner (AllocSetAlloc). If I drill down into\nteh function, I see something like the below. I don't really speak\nassembly, but usually when I see an assembly instruction being especially\nhot and not being the inner most instruction in a loop, I blame it on CPU\ncache misses. But everything being touched here should already be well\ncached, since initStringInfo has just got done setting it up. 
And if not\nfor that, then by the 2nd invocation of appendStringInfoCharMacro it\ncertainly should be in the cache, yet that one is even slower than the 1st\nappendStringInfoCharMacro.\n\nWhy is this such a bottleneck?\n\npq_begintypsend /usr/local/pgsql/bin/postgres\n\n 0.15 | push %rbx\n 0.09 | mov %rdi,%rbx\n | initStringInfo(buf);\n 3.03 | callq initStringInfo\n | /* Reserve four bytes for the bytea length word */\n | appendStringInfoCharMacro(buf, '\\0');\n | movslq 0x8(%rbx),%rax\n 1.05 | lea 0x1(%rax),%edx\n 0.72 | cmp 0xc(%rbx),%edx\n | jge b0\n 2.92 | mov (%rbx),%rdx\n | movb $0x0,(%rdx,%rax,1)\n13.76 | mov 0x8(%rbx),%eax\n 0.81 | mov (%rbx),%rdx\n 0.52 | add $0x1,%eax\n 0.12 | mov %eax,0x8(%rbx)\n 2.85 | cltq\n 0.01 | movb $0x0,(%rdx,%rax,1)\n | appendStringInfoCharMacro(buf, '\\0');\n10.65 | movslq 0x8(%rbx),%rax\n | lea 0x1(%rax),%edx\n 0.90 | cmp 0xc(%rbx),%edx\n | jge ca\n 0.54 | 42: mov (%rbx),%rdx\n 1.84 | movb $0x0,(%rdx,%rax,1)\n13.88 | mov 0x8(%rbx),%eax\n 0.03 | mov (%rbx),%rdx\n | add $0x1,%eax\n 0.33 | mov %eax,0x8(%rbx)\n 2.60 | cltq\n 0.06 | movb $0x0,(%rdx,%rax,1)\n | appendStringInfoCharMacro(buf, '\\0');\n 3.21 | movslq 0x8(%rbx),%rax\n 0.23 | lea 0x1(%rax),%edx\n 1.74 | cmp 0xc(%rbx),%edx\n | jge e0\n 0.21 | 67: mov (%rbx),%rdx\n 1.18 | movb $0x0,(%rdx,%rax,1)\n 9.29 | mov 0x8(%rbx),%eax\n 0.18 | mov (%rbx),%rdx\n | add $0x1,%eax\n 0.19 | mov %eax,0x8(%rbx)\n 3.14 | cltq\n 0.12 | movb $0x0,(%rdx,%rax,1)\n | appendStringInfoCharMacro(buf, '\\0');\n 5.29 | movslq 0x8(%rbx),%rax\n 0.03 | lea 0x1(%rax),%edx\n 1.45 | cmp 0xc(%rbx),%edx\n | jge f6\n 0.41 | 8c: mov (%rbx),%rdx\n\nCheers,\n\nJeff",
"msg_date": "Sat, 11 Jan 2020 14:04:51 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> I saw that the hotspot was pq_begintypsend at 20%, which was twice the\n> percentage as the next place winner (AllocSetAlloc).\n\nWeird.\n\n> Why is this such a bottleneck?\n\nNot sure, but it seems like a pretty dumb way to push the stringinfo's\nlen forward. We're reading/updating the len word in each line, and\nif your perf measurements are to be believed, it's the re-fetches of\nthe len values that are bottlenecked --- maybe your CPU isn't too\nbright about that? The bytes of the string value are getting written\ntwice too, thanks to uselessly setting up a terminating nul each time.\n\nI'd be inclined to replace the appendStringInfoCharMacro calls with\nappendStringInfoSpaces(buf, 4) --- I don't think we care exactly what\nis inserted into those bytes at this point. And maybe\nappendStringInfoSpaces could stand some micro-optimization, too.\nUse a memset and a single len adjustment, perhaps?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jan 2020 15:19:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 03:19:37PM -0500, Tom Lane wrote:\n> Jeff Janes <jeff.janes@gmail.com> writes:\n> > I saw that the hotspot was pq_begintypsend at 20%, which was twice the\n> > percentage as the next place winner (AllocSetAlloc).\n> \n> Weird.\n> \n> > Why is this such a bottleneck?\n> \n> Not sure, but it seems like a pretty dumb way to push the stringinfo's\n> len forward. We're reading/updating the len word in each line, and\n> if your perf measurements are to be believed, it's the re-fetches of\n> the len values that are bottlenecked --- maybe your CPU isn't too\n> bright about that? The bytes of the string value are getting written\n> twice too, thanks to uselessly setting up a terminating nul each time.\n> \n> I'd be inclined to replace the appendStringInfoCharMacro calls with\n> appendStringInfoSpaces(buf, 4) --- I don't think we care exactly what\n> is inserted into those bytes at this point. And maybe\n> appendStringInfoSpaces could stand some micro-optimization, too.\n> Use a memset and a single len adjustment, perhaps?\n\nPlease find attached a patch that does both of the things you\nsuggested.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 12 Jan 2020 02:41:08 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Sat, Jan 11, 2020 at 03:19:37PM -0500, Tom Lane wrote:\n>> Jeff Janes <jeff.janes@gmail.com> writes:\n>>> I saw that the hotspot was pq_begintypsend at 20%, which was twice the\n>>> percentage as the next place winner (AllocSetAlloc).\n\n>> I'd be inclined to replace the appendStringInfoCharMacro calls with\n>> appendStringInfoSpaces(buf, 4) --- I don't think we care exactly what\n>> is inserted into those bytes at this point. And maybe\n>> appendStringInfoSpaces could stand some micro-optimization, too.\n>> Use a memset and a single len adjustment, perhaps?\n\n> Please find attached a patch that does it both of the things you\n> suggested.\n\nI've been fooling around with this here. On the test case Jeff\ndescribes --- COPY BINARY tab TO '/dev/null' where tab contains\n100 float8 columns filled from random() --- I can reproduce his\nresults. pq_begintypsend is the top hotspot and if perf's\nlocalization is accurate, it's the instructions that fetch\nstr->len that hurt the most. Still not very clear why that is...\n\nConverting pq_begintypsend to use appendStringInfoSpaces helps\na bit; it takes my test case from 91725 ms to 88847 ms, or about\n3% savings. Noodling around with appendStringInfoSpaces doesn't\nhelp noticeably; I tried memset, as well as open-coding (cf\npatch below) but the results were all well within the noise\nthreshold.\n\nI saw at this point that the remaining top spots were\nenlargeStringInfo and appendBinaryStringInfo, so I experimented\nwith inlining them (again, see patch below). That *did* move\nthe needle: down to 72691 ms, or 20% better than HEAD. 
Of\ncourse, that comes at a code-size cost, but it's only about\n13kB growth:\n\nbefore:\n$ size src/backend/postgres \n text data bss dec hex filename\n7485285 58088 203328 7746701 76348d src/backend/postgres\nafter:\n$ size src/backend/postgres \n text data bss dec hex filename\n7498652 58088 203328 7760068 7668c4 src/backend/postgres\n\nThat's under two-tenths of a percent. (This'd affect frontend\nexecutables too, and I didn't check them.)\n\nSeems like this is worth pursuing, especially if it can be\nshown to improve any other cases noticeably. It might be\nworth inlining some of the other trivial stringinfo functions,\nthough I'd tread carefully on that.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 11 Jan 2020 21:43:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
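The memset-based micro-optimization Tom suggested and then benchmarked above (reserve the four placeholder bytes with one memset and a single len adjustment, instead of four appendStringInfoCharMacro calls that each re-read and re-store the length word) can be sketched outside the PostgreSQL tree roughly as follows. The StrBuf struct and function names are simplified stand-ins for stringinfo.c, not the actual source:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for PostgreSQL's StringInfoData. */
typedef struct StrBuf
{
    char   *data;
    int     len;
    int     maxlen;
} StrBuf;

static void
strbuf_init(StrBuf *buf)
{
    buf->maxlen = 1024;
    buf->data = malloc(buf->maxlen);
    buf->len = 0;
    buf->data[0] = '\0';
}

/* Grow the buffer until 'needed' more bytes (plus a nul) fit. */
static void
strbuf_enlarge(StrBuf *buf, int needed)
{
    while (buf->len + needed + 1 > buf->maxlen)
        buf->maxlen *= 2;
    buf->data = realloc(buf->data, buf->maxlen);
}

/*
 * Append 'count' spaces with one memset and a single len adjustment,
 * rather than a per-character read-modify-write of buf->len.
 */
static void
strbuf_append_spaces(StrBuf *buf, int count)
{
    strbuf_enlarge(buf, count);
    memset(buf->data + buf->len, ' ', count);
    buf->len += count;
    buf->data[buf->len] = '\0';
}
```

The point of the rewrite is that the length field is loaded and stored once per call rather than once per byte, which is what the hot `mov 0x8(%rbx),%eax` instructions in Jeff's profile were doing.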
{
"msg_contents": "I wrote:\n> I saw at this point that the remaining top spots were\n> enlargeStringInfo and appendBinaryStringInfo, so I experimented\n> with inlining them (again, see patch below). That *did* move\n> the needle: down to 72691 ms, or 20% better than HEAD.\n\nOh ... marking the test in the inline part of enlargeStringInfo()\nas unlikely() helps quite a bit more: 66100 ms, a further 9% gain.\nMight be over-optimizing for this particular case, perhaps, but\nI think that's a reasonable marking given that we overallocate\nthe stringinfo buffer for most uses.\n\n(But ... I'm not finding these numbers to be super reproducible\nacross different ASLR layouts. So take it with a grain of salt.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jan 2020 22:32:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
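The inline-plus-unlikely() arrangement Tom describes can be illustrated with a generic growable buffer. Here `__builtin_expect` plays the role of postgres.h's unlikely(); everything below is a simplified sketch under those assumptions, not the stringinfo.c patch itself:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct StrBuf
{
    char   *data;
    int     len;
    int     maxlen;
} StrBuf;

static void
strbuf_init(StrBuf *buf, int initsize)
{
    buf->maxlen = initsize;
    buf->data = malloc(initsize);
    buf->len = 0;
    buf->data[0] = '\0';
}

/* Out-of-line slow path: doubles capacity until 'needed' more bytes fit. */
static void
strbuf_enlarge_slow(StrBuf *buf, int needed)
{
    while (buf->len + needed + 1 > buf->maxlen)
        buf->maxlen *= 2;
    buf->data = realloc(buf->data, buf->maxlen);
}

/*
 * Inline fast path: in the common, overallocated case this is just a
 * compare and a predicted-untaken branch, so an inlined caller can keep
 * ->len in a register across successive appends.
 */
static inline void
strbuf_enlarge(StrBuf *buf, int needed)
{
    if (__builtin_expect(buf->len + needed + 1 > buf->maxlen, 0))
        strbuf_enlarge_slow(buf, needed);
}

static inline void
strbuf_append_binary(StrBuf *buf, const char *data, int datalen)
{
    strbuf_enlarge(buf, datalen);
    memcpy(buf->data + buf->len, data, datalen);
    buf->len += datalen;
    buf->data[buf->len] = '\0';
}
```

Keeping only the cheap capacity check inline, with the growth loop out of line, is what lets the compiler avoid reloading ->len/->data around an opaque function call on every append.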
{
"msg_contents": "Hi,\n\nOn 2020-01-11 22:32:45 -0500, Tom Lane wrote:\n> I wrote:\n> > I saw at this point that the remaining top spots were\n> > enlargeStringInfo and appendBinaryStringInfo, so I experimented\n> > with inlining them (again, see patch below). That *did* move\n> > the needle: down to 72691 ms, or 20% better than HEAD.\n>\n> Oh ... marking the test in the inline part of enlargeStringInfo()\n> as unlikely() helps quite a bit more: 66100 ms, a further 9% gain.\n> Might be over-optimizing for this particular case, perhaps\n>\n> (But ... I'm not finding these numbers to be super reproducible\n> across different ASLR layouts. So take it with a grain of salt.)\n\nFWIW, I've also observed, in another thread (the node func generation\nthing [1]), that inlining enlargeStringInfo() helps a lot, especially\nwhen inlining some of its callers. Moving e.g. appendStringInfo() inline\nallows the compiler to sometimes optimize away the strlen. But if\ne.g. an inlined appendBinaryStringInfo() still calls enlargeStringInfo()\nunconditionally, successive appends cannot optimize away memory accesses\nfor ->len/->data.\n\nFor the case of send functions, we really ought to have at least\npq_begintypsend(), enlargeStringInfo() and pq_endtypsend() inline. That\nway the compiler ought to be able to avoid repeatedly loading/storing\n->len, after the initial initStringInfo() call. Might even make sense to\nalso have initStringInfo() inline, because then the compiler would\nprobably never actually materialize the StringInfoData (and would\nautomatically have good aliasing information too).\n\n\nThe commit referenced above is obviously quite WIP-ey, and contains\nthings that should be split into separate commits. 
But I think it might\nbe worth moving more functions into the header, like I've done in that\ncommit.\n\nThe commit also adds appendStringInfoU?Int(32,64) operations - I've\nunsurprisingly found these to be *considerably* faster than going through\nappendStringInfoString().\n\n\n> but I think that's a reasonable marking given that we overallocate\n> the stringinfo buffer for most uses.\n\nWonder if it's worth providing a function to initialize the stringinfo\ndifferently for the many cases where we have at least a very good idea\nof how long the string will be. It's sad to allocate 1kb just for\ne.g. int4send to send an integer plus length.\n\nGreetings,\n\nAndres Freund\n\n[1] https://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=commit;h=127e860cf65f50434e0bb97acbba4b0ea6f38cfd\n\n\n",
"msg_date": "Mon, 13 Jan 2020 15:18:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-13 15:18:04 -0800, Andres Freund wrote:\n> On 2020-01-11 22:32:45 -0500, Tom Lane wrote:\n> > I wrote:\n> > > I saw at this point that the remaining top spots were\n> > > enlargeStringInfo and appendBinaryStringInfo, so I experimented\n> > > with inlining them (again, see patch below). That *did* move\n> > > the needle: down to 72691 ms, or 20% better than HEAD.\n> >\n> > Oh ... marking the test in the inline part of enlargeStringInfo()\n> > as unlikely() helps quite a bit more: 66100 ms, a further 9% gain.\n> > Might be over-optimizing for this particular case, perhaps\n> >\n> > (But ... I'm not finding these numbers to be super reproducible\n> > across different ASLR layouts. So take it with a grain of salt.)\n>\n> FWIW, I've also observed, in another thread (the node func generation\n> thing [1]), that inlining enlargeStringInfo() helps a lot, especially\n> when inlining some of its callers. Moving e.g. appendStringInfo() inline\n> allows the compiler to sometimes optimize away the strlen. But if\n> e.g. an inlined appendBinaryStringInfo() still calls enlargeStringInfo()\n> unconditionally, successive appends cannot optimize away memory accesses\n> for ->len/->data.\n\nWith a set of patches doing so, int4send itself is not a significant\nfactor for my test benchmark [1] anymore. 
The assembly looks about as\ngood as one could hope, I think:\n\n# save rbx on the stack\n 0x00000000004b7f90 <+0>:\tpush %rbx\n 0x00000000004b7f91 <+1>:\tsub $0x20,%rsp\n# store integer to be sent into rbx\n 0x00000000004b7f95 <+5>:\tmov 0x20(%rdi),%rbx\n# palloc length argument\n 0x00000000004b7f99 <+9>:\tmov $0x9,%edi\n 0x00000000004b7f9e <+14>:\tcallq 0x5d9aa0 <palloc>\n# store integer in buffer (ebx is 4 byte portion of rbx)\n 0x00000000004b7fa3 <+19>:\tmovbe %ebx,0x4(%rax)\n# store varlena header\n 0x00000000004b7fa8 <+24>:\tmovl $0x20,(%rax)\n# restore stack and rbx registers\n 0x00000000004b7fae <+30>:\tadd $0x20,%rsp\n 0x00000000004b7fb2 <+34>:\tpop %rbx\n 0x00000000004b7fb3 <+35>:\tretq\n\nAll the $0x20 constants are a bit confusing, but they just happen to be\nthe same for int4send. It's the size of the stack frame,\noffset for FunctionCallInfoBaseData->args[0], the varlena header (and then the stack\nframe again) respectively.\n\nNote that I had to annotate palloc with __attribute__((malloc)) to make\nthe compiler understand that palloc's returned value will not alias with\nanything problematic (e.g. the potential of aliasing with fcinfo\nprevents optimizing to the above without that annotation). I think such\nannotations would be a good idea anyway, precisely because they allow\nthe compiler to optimize code significantly better.\n\n\nThese together yield about a 1.8x speedup for me. The profile shows\nthat the overhead now is overwhelmingly elsewhere:\n+ 26.30% postgres postgres [.] CopyOneRowTo\n+ 13.40% postgres postgres [.] tts_buffer_heap_getsomeattrs\n+ 10.61% postgres postgres [.] AllocSetAlloc\n+ 9.26% postgres libc-2.29.so [.] __memmove_avx_unaligned_erms\n+ 7.32% postgres postgres [.] SendFunctionCall\n+ 6.02% postgres postgres [.] palloc\n+ 4.45% postgres postgres [.] int4send\n+ 3.68% postgres libc-2.29.so [.] _IO_fwrite\n+ 2.71% postgres postgres [.] heapgettup_pagemode\n+ 1.96% postgres postgres [.] 
AllocSetReset\n+ 1.83% postgres postgres [.] CopySendEndOfRow\n+ 1.75% postgres libc-2.29.so [.] _IO_file_xsputn@@GLIBC_2.2.5\n+ 1.60% postgres postgres [.] ExecStoreBufferHeapTuple\n+ 1.57% postgres postgres [.] DoCopyTo\n+ 1.16% postgres postgres [.] memcpy@plt\n+ 1.07% postgres postgres [.] heapgetpage\n\nEven without using the new pq_begintypesend_ex()/initStringInfoEx(), the\ngenerated code is still considerably better than before, yielding a\n1.58x speedup. The allocator overhead unsurprisingly is higher:\n+ 24.93% postgres postgres [.] CopyOneRowTo\n+ 17.10% postgres postgres [.] AllocSetAlloc\n+ 10.09% postgres postgres [.] tts_buffer_heap_getsomeattrs\n+ 6.50% postgres libc-2.29.so [.] __memmove_avx_unaligned_erms\n+ 5.99% postgres postgres [.] SendFunctionCall\n+ 5.11% postgres postgres [.] palloc\n+ 3.95% postgres libc-2.29.so [.] _int_malloc\n+ 3.38% postgres postgres [.] int4send\n+ 2.54% postgres postgres [.] heapgettup_pagemode\n+ 2.11% postgres libc-2.29.so [.] _int_free\n+ 2.06% postgres postgres [.] MemoryContextReset\n+ 2.02% postgres postgres [.] AllocSetReset\n+ 1.97% postgres libc-2.29.so [.] _IO_fwrite\n+ 1.47% postgres postgres [.] DoCopyTo\n+ 1.14% postgres postgres [.] ExecStoreBufferHeapTuple\n+ 1.06% postgres libc-2.29.so [.] _IO_file_xsputn@@GLIBC_2.2.5\n+ 1.04% postgres libc-2.29.so [.] 
malloc\n\n\nAdding a few pg_restrict*, and using appendBinaryStringInfoNT instead of\nappendBinaryStringInfo in CopySend* gains another 1.05x.\n\n\nThis does result in some code growth, but given the size of the\nimprovements, and that the improvements are significant even without\ncode changes to callsites, that seems worth it.\n\nbefore:\n text\t data\t bss\t dec\t hex\tfilename\n8482739\t 172304\t 204240\t8859283\t 872e93\tsrc/backend/postgres\nafter:\n text\t data\t bss\t dec\t hex\tfilename\n8604300\t 172304\t 204240\t8980844\t 89096c\tsrc/backend/postgres\n\nRegards,\n\nAndres\n\n[1]\nCREATE TABLE lotsaints4(c01 int4 NOT NULL, c02 int4 NOT NULL, c03 int4 NOT NULL, c04 int4 NOT NULL, c05 int4 NOT NULL, c06 int4 NOT NULL, c07 int4 NOT NULL, c08 int4 NOT NULL, c09 int4 NOT NULL, c10 int4 NOT NULL);\nINSERT INTO lotsaints4 SELECT ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int FROM generate_series(1, 2000000);\nVACUUM FREEZE lotsaints4;\nCOPY lotsaints4 TO '/dev/null' WITH binary;\n\nCREATE TABLE lotsaints8(c01 int8 NOT NULL, c02 int8 NOT NULL, c03 int8 NOT NULL, c04 int8 NOT NULL, c05 int8 NOT NULL, c06 int8 NOT NULL, c07 int8 NOT NULL, c08 int8 NOT NULL, c09 int8 NOT NULL, c10 int8 NOT NULL);\nINSERT INTO lotsaints8 SELECT ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int, ((random() * 2^31) - 1)::int FROM generate_series(1, 2000000);\nVACUUM FREEZE lotsaints8;\nCOPY lotsaints8 TO '/dev/null' WITH binary;",
"msg_date": "Tue, 14 Jan 2020 14:45:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n>> FWIW, I've also observed, in another thread (the node func generation\n>> thing [1]), that inlining enlargeStringInfo() helps a lot, especially\n>> when inlining some of its callers. Moving e.g. appendStringInfo() inline\n>> allows the compiler to sometimes optimize away the strlen. But if\n>> e.g. an inlined appendBinaryStringInfo() still calls enlargeStringInfo()\n>> unconditionally, successive appends cannot optimize away memory accesses\n>> for ->len/->data.\n\n> With a set of patches doing so, int4send itself is not a significant\n> factor for my test benchmark [1] anymore.\n\nThis thread seems to have died out, possibly because the last set of\npatches that Andres posted was sufficiently complicated and invasive\nthat nobody wanted to review it. I thought about this again after\nseeing that [1] is mostly about pq_begintypsend overhead, and had\nan epiphany: there isn't really a strong reason for pq_begintypsend\nto be inserting bits into the buffer at all. The bytes will be\nfilled by pq_endtypsend, and nothing in between should be touching\nthem. So I propose 0001 attached. It's poking into the stringinfo\nabstraction a bit more than I would want to do if there weren't a\ncompelling performance reason to do so, but there evidently is.\n\nWith 0001, pq_begintypsend drops from being the top single routine\nin a profile of a test case like [1] to being well down the list.\nThe next biggest cost compared to text-format output is that\nprinttup() itself is noticeably more expensive. A lot of the extra\ncost there seems to be from pq_sendint32(), which is getting inlined\ninto printtup(), and there probably isn't much we can do to make that\ncheaper. 
But eliminating a common subexpression as in 0002 below does\nhelp noticeably, at least with the rather old gcc I'm using.\n\nFor me, the combination of these two eliminates most but not quite\nall of the cost penalty of binary over text output as seen in [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAMovtNoHFod2jMAKQjjxv209PCTJx5Kc66anwWvX0mEiaXwgmA%40mail.gmail.com",
"msg_date": "Mon, 18 May 2020 12:38:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
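The 0001 idea, having pq_begintypsend merely reserve the four length bytes and letting pq_endtypsend fill them in later, can be mocked up as follows. This reuses a simplified buffer rather than PostgreSQL's real StringInfo/bytea machinery, and the typsend_begin/typsend_end names are illustrative only:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct StrBuf
{
    char   *data;
    int     len;
    int     maxlen;
} StrBuf;

static void
strbuf_init(StrBuf *buf)
{
    buf->maxlen = 1024;
    buf->data = malloc(buf->maxlen);
    buf->len = 0;
}

static void
strbuf_enlarge(StrBuf *buf, int needed)
{
    while (buf->len + needed > buf->maxlen)
        buf->maxlen *= 2;
    buf->data = realloc(buf->data, buf->maxlen);
}

/*
 * Reserve four bytes for the length word WITHOUT writing them: only the
 * length field moves, so no stores touch the buffer here at all.
 */
static void
typsend_begin(StrBuf *buf)
{
    strbuf_enlarge(buf, 4);
    buf->len += 4;
}

/* Backfill the reserved bytes with the payload length, big-endian. */
static void
typsend_end(StrBuf *buf)
{
    uint32_t payload = (uint32_t) (buf->len - 4);

    buf->data[0] = (char) (payload >> 24);
    buf->data[1] = (char) (payload >> 16);
    buf->data[2] = (char) (payload >> 8);
    buf->data[3] = (char) payload;
}

static void
strbuf_append(StrBuf *buf, const void *data, int datalen)
{
    strbuf_enlarge(buf, datalen);
    memcpy(buf->data + buf->len, data, datalen);
    buf->len += datalen;
}
```

Since nothing reads the reserved bytes between begin and end, deferring the stores removes the read-modify-write traffic on the length word that made the original pq_begintypsend hot.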
{
"msg_contents": "On Mon, May 18, 2020 at 13:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> >> FWIW, I've also observed, in another thread (the node func generation\n> >> thing [1]), that inlining enlargeStringInfo() helps a lot, especially\n> >> when inlining some of its callers. Moving e.g. appendStringInfo() inline\n> >> allows the compiler to sometimes optimize away the strlen. But if\n> >> e.g. an inlined appendBinaryStringInfo() still calls enlargeStringInfo()\n> >> unconditionally, successive appends cannot optimize away memory accesses\n> >> for ->len/->data.\n>\n> > With a set of patches doing so, int4send itself is not a significant\n> > factor for my test benchmark [1] anymore.\n>\n> This thread seems to have died out, possibly because the last set of\n> patches that Andres posted was sufficiently complicated and invasive\n> that nobody wanted to review it. I thought about this again after\n> seeing that [1] is mostly about pq_begintypsend overhead, and had\n> an epiphany: there isn't really a strong reason for pq_begintypsend\n> to be inserting bits into the buffer at all. The bytes will be\n> filled by pq_endtypsend, and nothing in between should be touching\n> them. So I propose 0001 attached. It's poking into the stringinfo\n> abstraction a bit more than I would want to do if there weren't a\n> compelling performance reason to do so, but there evidently is.\n>\n> With 0001, pq_begintypsend drops from being the top single routine\n> in a profile of a test case like [1] to being well down the list.\n> The next biggest cost compared to text-format output is that\n> printtup() itself is noticeably more expensive. A lot of the extra\n> cost there seems to be from pq_sendint32(), which is getting inlined\n> into printtup(), and there probably isn't much we can do to make that\n> cheaper. 
But eliminating a common subexpression as in 0002 below does\n> help noticeably, at least with the rather old gcc I'm using.\n>\nAgain, I see problems with the types declared in Postgres.\n1. pq_sendint32 (StringInfo buf, uint32 i)\n2. extern void pq_sendbytes (StringInfo buf, const char * data, int\ndatalen);\n\nWouldn't it be better to declare outputlen (0002) as uint32?\nTo avoid converting from (int) to (uint32), even if afterwards there is a\nconversion from (uint32) to (int)?\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 18 May 2020 14:54:03 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Again, I see problems with the types declared in Postgres.\n> 1. pq_sendint32 (StringInfo buf, uint32 i)\n> 2. extern void pq_sendbytes (StringInfo buf, const char * data, int\n> datalen);\n\nWe could spend the next ten years cleaning up minor discrepancies\nlike that, and have nothing much to show for the work.\n\n> To avoid converting from (int) to (uint32), even if afterwards there is a\n> conversion from (uint32) to (int)?\n\nYou do realize that that conversion costs nothing?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 May 2020 14:08:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2020-05-18 12:38:05 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> FWIW, I've also observed, in another thread (the node func generation\n> >> thing [1]), that inlining enlargeStringInfo() helps a lot, especially\n> >> when inlining some of its callers. Moving e.g. appendStringInfo() inline\n> >> allows the compiler to sometimes optimize away the strlen. But if\n> >> e.g. an inlined appendBinaryStringInfo() still calls enlargeStringInfo()\n> >> unconditionally, successive appends cannot optimize away memory accesses\n> >> for ->len/->data.\n>\n> > With a set of patches doing so, int4send itself is not a significant\n> > factor for my test benchmark [1] anymore.\n>\n> This thread seems to have died out, possibly because the last set of\n> patches that Andres posted was sufficiently complicated and invasive\n> that nobody wanted to review it.\n\nWell, I wasn't really planning to try to get that patchset into 13, and\nit wasn't in the CF...\n\n\n> I thought about this again after seeing that [1] is mostly about\n> pq_begintypsend overhead\n\nI'm not really convinced that that's the whole problem. Using the\nbenchmark from upthread, I get (median of three):\nmaster: 1181.581\nyours: 1171.445\nmine: 598.031\n\nThat's a very significant difference, imo. It helps a bit with the\nbenchmark from your [1], but not that much.\n\n\n> With 0001, pq_begintypsend drops from being the top single routine\n> in a profile of a test case like [1] to being well down the list.\n> The next biggest cost compared to text-format output is that\n> printtup() itself is noticeably more expensive. A lot of the extra\n> cost there seems to be from pq_sendint32(), which is getting inlined\n> into printtup(), and there probably isn't much we can do to make that\n> cheaper. 
But eliminating a common subexpression as in 0002 below does\n> help noticeably, at least with the rather old gcc I'm using.\n\nI think there's plenty more we can do:\n\nFirst, it's unnecessary to re-initialize a FunctionCallInfo for every\nsend/recv/out/in call. Instead we can reuse the same over and over.\n\n\nAfter that, the biggest remaining overhead for Jack's test is the palloc\nfor the stringinfo, as far as I can see. I've complained about that\nbefore...\n\nI've just hacked up a modification where, for send functions,\nfcinfo->context contains a stringinfo set up by printtup/CopyTo. That,\ncombined with using a FunctionCallInfo set up beforehand, instead of\nre-initializing it in every printtup cycle, results in a pretty good\nsaving.\n\nMaking the binary protocol 20% faster than text, in Jack's testcase. And\nmy lotsaints4 test goes further down to ~410ms (this is 2.5x faster\nthan where we started).\n\nNow obviously, the hack with passing a StringInfo in ->context is just\nthat, a hack. A somewhat gross one even. But I think it pretty clearly\nshows the problem and the way out.\n\nI don't know what the best non-gross solution for the overhead of the\nout/send functions is. There seems to be at least the following\nmajor options (and lots of variants thereof):\n\n1) Just continue to incur significant overhead for every datum\n2) Accept the ugliness of passing in a buffer via\n FunctionCallInfo->context. Change nearly all in-core send functions\n over to that.\n3) Pass string buffer through a new INTERNAL argument to send/output\n function, allow both old/new style send functions. Use a wrapper\n function to adapt the \"old style\" to the \"new style\".\n4) Like 3, but make the new argument optional, and use an ad-hoc\n stringbuffer if not provided. I don't like the unnecessary branches\n this adds.\n\nThe biggest problem after that is that we waste a lot of time memcpying\nstuff around repeatedly. 
There is:\n1) send function: datum -> per datum stringinfo\n2) printtup: per datum stringinfo -> per row stringinfo\n3) socket_putmessage: per row stringinfo -> PqSendBuffer\n4) send(): PqSendBuffer -> kernel buffer\n\nIt's obviously hard to avoid 1) and 4) in the common case, but the\nnumber of other copies seem pretty clearly excessive.\n\n\nIf we change the signature of the out/send function to always target a\nstring buffer, we could pretty easily avoid 2), and for out functions\nwe'd not have to redundantly call strlen (as the argument to\npq_sendcountedtext) anymore, which seems substantial too.\n\nAs I argued before, I think it's unnecessary to have a separate buffer\nbetween 3-4). We should construct the outgoing message inside the send\nbuffer. I still don't understand what \"recursion\" danger there is,\nnothing below printtup should ever send protocol messages, no?\n\n\nSometimes there's also 0) in the above, when detoasting a datum...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 2 Jun 2020 18:55:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Tue, Jun 2, 2020 at 9:56 PM Andres Freund <andres@anarazel.de> wrote:\n> The biggest problem after that is that we waste a lot of time memcpying\n> stuff around repeatedly. There is:\n> 1) send function: datum -> per datum stringinfo\n> 2) printtup: per datum stringinfo -> per row stringinfo\n> 3) socket_putmessage: per row stringinfo -> PqSendBuffer\n> 4) send(): PqSendBuffer -> kernel buffer\n>\n> It's obviously hard to avoid 1) and 4) in the common case, but the\n> number of other copies seem pretty clearly excessive.\n\nI too have seen recent benchmarking data where this was a big problem.\nBasically, you need a workload where the server doesn't have much or\nany actual query processing to do, but is just returning a lot of\nstuff to a really fast client - e.g. a locally connected client.\nThat's not necessarily the most common case but, if you have it, all\nthis extra copying is really pretty expensive.\n\nMy first thought was to wonder about changing all of our send/output\nfunctions to write into a buffer passed as an argument rather than\nreturning something which we then have to copy into a different\nbuffer, but that would be a somewhat painful change, so it is probably\nbetter to first pursue the idea of getting rid of some of the other\ncopies that happen in more centralized places (e.g. printtup). I\nwonder if we could replace the whole\npq_beginmessage...()/pq_send....()/pq_endmessage...() system with\nsomething a bit better-designed. For instance, suppose we get rid of\nthe idea that the caller supplies the buffer, and we move the\nresponsibility for error recovery into the pqcomm layer. So you do\nsomething like:\n\nmy_message = xyz_beginmessage('D');\nxyz_sendint32(my_message, 42);\nxyz_endmessage(my_message);\n\nMaybe what happens here under the hood is we keep a pool of free\nmessage buffers sitting around, and you just grab one and put your\ndata into it. 
When you end the message we add it to a list of used\nmessage buffers that are waiting to be sent, and once we send the data\nit goes back on the free list. If an error occurs after\nxyz_beginmessage() and before xyz_endmessage(), we put the buffer back\non the free list. That would allow us to merge (2) and (3) into a\nsingle copy. To go further, we could allow send/output functions to\nopt in to receiving a message buffer rather than returning a value,\nand then we could get rid of (1) for types that participate. (4) seems\nunavoidable, AFAIK.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 3 Jun 2020 11:30:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-03 11:30:42 -0400, Robert Haas wrote:\n> I too have seen recent benchmarking data where this was a big problem.\n> Basically, you need a workload where the server doesn't have much or\n> any actual query processing to do, but is just returning a lot of\n> stuff to a really fast client - e.g. a locally connected client.\n> That's not necessarily the most common case but, if you have it, all\n> this extra copying is really pretty expensive.\n\nEven when the query actually is doing something, it's still quite\npossible to get the memcpies to be measurable (say > 10% of\ncycles). Obviously not in a huge aggregating query. Even in something\nlike pgbench -M prepared -S, which is obviously spending most of its\ncycles elsewhere, the patches upthread improve throughput by ~1.5% (and\nthat's without eliding several unnecessary copies).\n\n\n> My first thought was to wonder about changing all of our send/output\n> functions to write into a buffer passed as an argument rather than\n> returning something which we then have to copy into a different\n> buffer, but that would be a somewhat painful change, so it is probably\n> better to first pursue the idea of getting rid of some of the other\n> copies that happen in more centralized places (e.g. printtup).\n\nFor those I think the allocator overhead is a bigger issue than the\nmemcpy itself. I wonder how much we could transparently hide in\npq_begintypsend()/pq_endtypsend().\n\n\n> I\n> wonder if we could replace the whole\n> pq_beginmessage...()/pq_send....()/pq_endmessage...() system with\n> something a bit better-designed. For instance, suppose we get rid of\n> the idea that the caller supplies the buffer, and we move the\n> responsibility for error recovery into the pqcomm layer. 
So you do\n> something like:\n> \n> my_message = xyz_beginmessage('D');\n> xyz_sendint32(my_message, 42);\n> xyz_endmessage(my_message);\n> \n> Maybe what happens here under the hood is we keep a pool of free\n> message buffers sitting around, and you just grab one and put your\n> data into it.\n\nWhy do we need multiple buffers? ISTM we don't want to just send\nmessages at endmsg() time, because that implies unnecessary syscall\noverhead. Nor do we want to imply the overhead of the copy from the\nmessage buffer to the network buffer.\n\nTo me that seems to imply that the best approach would be to have\nPqSendBuffer be something stringbuffer like, and have pg_beginmessage()\nrecord the starting position of the current message somewhere\n(->cursor?). When an error is thrown, we reset the position to be where\nthe in-progress message would have begun.\n\nI've previously outlined a slightly more complicated scheme, where we\nhave \"proxy\" stringinfos that point into another stringinfo, instead of\ntheir own buffer. And know how to resize the \"outer\" buffer when\nneeded. That'd have some advantages, but I'm not sure it's really\nneeded.\n\n\nThere's some disadvantages with what I describe above, in particular\nwhen dealing with send() sending only parts of our network buffer. We\ncouldn't cheaply reuse the already sent memory in that case.\n\nI've before wondered / suggested that we should have StringInfos not\ninsist on having one consecutive buffer (which obviously implies needing\nto copy contents when growing). Instead it should have a list of buffers\ncontaining chunks of the data, and never copy contents around while the\nstring is being built. We'd only allocate a buffer big enough for all\ndata when the caller actually wants to have all the resulting data in\none string (rather than using an API that can iterate over chunks).\n\nFor the network buffer case that'd allow us to reuse the earlier buffers\neven in the \"partial send\" case. 
And more generally it'd allow us to be\nless wasteful with buffer sizes, and perhaps even have a small \"inline\"\nbuffer inside StringInfoData avoiding unnecessary memory allocations in\na lot of cases where the common case is only a small amount of data\nbeing used. And I think the overhead while appending data to such a\nstringinfo should be negligible, because it'd just require the exact\nsame checks we already have to do for enlargeStringInfo().\n\n\n> (4) seems unavoidable AFAIK.\n\nNot entirely. Linux can do zero-copy sends, but it does require somewhat\ncomplicated black magic rituals. Including more complex buffer\nmanagement for the application, because the memory containing the\nto-be-sent data cannot be reused until the kernel notifies that it's\ndone with the buffer.\n\nSee https://www.kernel.org/doc/html/latest/networking/msg_zerocopy.html\n\nThat might be something worth pursuing in the future (since it, I think,\nbasically avoids spending any cpu cycles on copying data around in the\nhappy path, relying on DMA instead), but I think for now there's much\nbigger fish to fry.\n\nI am hoping that somebody will write a nicer abstraction for zero-copy\nsends using io_uring, avoiding the need for a separate completion queue\nby simply only signalling completion for the sendmsg operation once the\nbuffer isn't needed anymore. There's no corresponding completion logic\nfor normal sendmsg() calls, so it makes sense that something had to be\ninvented before something like io_uring existed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Jun 2020 11:10:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Wed, Jun 3, 2020 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> Why do we need multiple buffers? ISTM we don't want to just send\n> messages at endmsg() time, because that implies unnecessary syscall\n> overhead. Nor do we want to imply the overhead of the copy from the\n> message buffer to the network buffer.\n\nIt would only matter if there are multiple messages being constructed\nat the same time, and that's probably not common, but maybe there's\nsome way it can happen. It doesn't seem like it really costs anything\nto allow for it, and it might be useful sometime. For instance,\nconsider your idea of using Linux black magic to do zero-copy sends.\nNow you either need multiple buffers, or you need one big buffer that\nyou can recycle a bit at a time.\n\n> To me that seems to imply that the best approach would be to have\n> PqSendBuffer be something stringbuffer like, and have pg_beginmessage()\n> record the starting position of the current message somewhere\n> (->cursor?). When an error is thrown, we reset the position to be where\n> the in-progress message would have begun.\n\nYeah, I thought about that, but then how do you detect the case where two\ndifferent people try to undertake message construction at the same\ntime?\n\nLike, with the idea I was proposing, you could still decide to limit\nyourself to 1 buffer at the same time, and just elog() if someone\ntries to allocate a second buffer when you've already reached the\nmaximum number of allocated buffers (i.e. one). But if you just have\none buffer in a global variable and everybody writes into it, you\nmight not notice if some unrelated code writes data into that buffer\nin the middle of someone else's message construction. Doing it the way\nI proposed, writing data requires passing a buffer pointer, so you can\nbe sure that somebody had to get the buffer from somewhere... 
and any\nrules you want to enforce can be enforced at that point.\n\n> I've before wondered / suggested that we should have StringInfos not\n> insist on having one consecutive buffer (which obviously implies needing\n> to copy contents when growing). Instead it should have a list of buffers\n> containing chunks of the data, and never copy contents around while the\n> string is being built. We'd only allocate a buffer big enough for all\n> data when the caller actually wants to have all the resulting data in\n> one string (rather than using an API that can iterate over chunks).\n\nIt's a thought. I doubt it's worth it for small amounts of data, but\nfor large amounts it might be. On the other hand, a better idea still\nmight be to size the buffer correctly from the start...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Jun 2020 11:46:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2020-06-09 11:46:09 -0400, Robert Haas wrote:\n> On Wed, Jun 3, 2020 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > Why do we need multiple buffers? ISTM we don't want to just send\n> > messages at endmsg() time, because that implies unnecessary syscall\n> > overhead. Nor do we want to imply the overhead of the copy from the\n> > message buffer to the network buffer.\n> \n> It would only matter if there are multiple messages being constructed\n> at the same time, and that's probably not common, but maybe there's\n> some way it can happen.\n\nISTM that it'd be pretty broken if it could happen. We cannot have two\ndifferent parts of the system send messages to the client\nindependently. The protocol is pretty stateful...\n\n\n> > To me that seems to imply that the best approach would be to have\n> > PqSendBuffer be something stringbuffer like, and have pg_beginmessage()\n> > record the starting position of the current message somewhere\n> > (->cursor?). When an error is thrown, we reset the position to be where\n> > the in-progress message would have begun.\n> \n> Yeah, I thought about that, but then how do you detect the case where two\n> different people try to undertake message construction at the same\n> time?\n\nSet a boolean and assert out if one already is in progress? We'd need\nsome state to know where to reset the position to on error anyway.\n\n\n> Like, with the idea I was proposing, you could still decide to limit\n> yourself to 1 buffer at the same time, and just elog() if someone\n> tries to allocate a second buffer when you've already reached the\n> maximum number of allocated buffers (i.e. one). But if you just have\n> one buffer in a global variable and everybody writes into it, you\n> might not notice if some unrelated code writes data into that buffer\n> in the middle of someone else's message construction. 
Doing it the way\n> I proposed, writing data requires passing a buffer pointer, so you can\n> be sure that somebody had to get the buffer from somewhere... and any\n> rules you want to enforce can be enforced at that point.\n\nI'd hope that we'd encapsulate the buffer management into file local\nvariables in pqcomm.c or such, and that code outside of that cannot\naccess the out buffer directly without using the appropriate helpers.\n\n\n> > I've before wondered / suggested that we should have StringInfos not\n> > insist on having one consecutive buffer (which obviously implies needing\n> > to copy contents when growing). Instead it should have a list of buffers\n> > containing chunks of the data, and never copy contents around while the\n> > string is being built. We'd only allocate a buffer big enough for all\n> > data when the caller actually wants to have all the resulting data in\n> > one string (rather than using an API that can iterate over chunks).\n> \n> It's a thought. I doubt it's worth it for small amounts of data, but\n> for large amounts it might be. On the other hand, a better idea still\n> might be to size the buffer correctly from the start...\n\nI think those are complementary. I do agree that it's useful to size\nstringinfos more appropriately immediately (there's an upthread patch\nadding a version of initStringInfo() that does so, quite useful for\nsmall stringinfos in particular). But there's enough cases where that's\nnot really knowable ahead of time that I think it'd be quite useful to\nhave support for the type of buffer I describe above.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Jun 2020 12:23:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Tue, Jun 9, 2020 at 3:23 PM Andres Freund <andres@anarazel.de> wrote:\n> ISTM that it'd be pretty broken if it could happen. We cannot have two\n> different parts of the system send messages to the client\n> independently. The protocol is pretty stateful...\n\nThere's a difference between building messages concurrently and\nsending them concurrently.\n\n> Set a boolean and assert out if one already is in progress? We'd need\n> some state to know where to reset the position to on error anyway.\n\nSure, that's basically just different notation for the same thing. I\nmight prefer my notation over yours, but you might prefer the reverse.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 9 Jun 2020 17:06:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Tue, Jun 2, 2020 at 9:56 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't know what the best non-gross solution for the overhead of the\n> out/send functions is. There seem to be at least the following\n> major options (and lots of variants thereof):\n>\n> 1) Just continue to incur significant overhead for every datum\n> 2) Accept the ugliness of passing in a buffer via\n> FunctionCallInfo->context. Change nearly all in-core send functions\n> over to that.\n> 3) Pass the string buffer through a new INTERNAL argument to send/output\n> functions, allow both old/new style send functions. Use a wrapper\n> function to adapt the \"old style\" to the \"new style\".\n> 4) Like 3, but make the new argument optional, and use an ad-hoc\n> stringbuffer if not provided. I don't like the unnecessary branches\n> this adds.\n\nI ran into this problem in another context today while poking at some\npg_basebackup stuff. There's another way of solving this problem which\nI think we should consider: just get rid of the per-row stringinfo and\npush the bytes directly from wherever they are into PqSendBuffer. Once\nwe start doing this, we can't error out, because internal_flush()\nmight've been called, sending a partial message to the client. Trying\nto now switch to sending an ErrorResponse will break protocol sync.\nBut it seems possible to avoid that. Just call all of the output\nfunctions first, and also do any required encoding conversions\n(pq_sendcountedtext -> pg_server_to_client). Then, do a bunch of\npq_putbytes() calls to shove the message out -- there's the small\nmatter of an assertion failure, but whatever. This effectively\ncollapses two copies into one. Or alternatively, build up an array of\niovecs and then have a variant of pq_putmessage(), like\npq_putmessage_iovec(), that knows what to do with them.\n\nOne advantage of this approach is that it approximately doubles the\nsize of the DataRow message we can send. 
We're currently limited to\n<1GB because of palloc, but the wire protocol just needs it to be <2GB\nso that a signed integer does not overflow. It would be nice to buy\nmore than a factor of two here, but that would require a wire protocol\nchange, and 2x is not bad.\n\nAnother advantage of this approach is that it doesn't require passing\nStringInfos all over the place. For the use case that I was looking\nat, that appears awkward. I'm not saying I couldn't make it work, but\nit wouldn't be my first choice. Right now, I've got data bubbling down\na chain of handlers until it eventually gets sent off to the client;\nwith your approach, I think I'd need to bubble buffers up and then\nbubble data down, which seems quite a bit more complex.\n\nA disadvantage of this approach is that we still end up doing three\ncopies: one from the datum to the per-datum StringInfo, a second into\nPqSendBuffer, and a third from there to the kernel. However, we could\nprobably improve on this. Whenever we internal_flush(), consider\nwhether the chunk of data we're in the process of copying (the current\ncall to pq_putbytes(), or the current iovec) has enough bytes\nremaining to completely refill the buffer. If so, secure_write() a\nbuffer's worth of bytes (or more) directly, bypassing PqSendBuffer.\nThat way, we avoid individual system calls (or calls to OpenSSL or\nGSS) for small numbers of bytes, but we also avoid extra copying when\ntransmitting larger amounts of data.\n\nEven with that optimization, this still seems like it could end up\nbeing less efficient than your proposal (surprise, surprise). 
If we've\ngot a preallocated buffer which we won't be forced to resize during\nmessage construction -- and for DataRow messages we can get there just\nby keeping the buffer around, so that we only need to reallocate when\nwe see a larger message than we've ever seen before -- and we write\nall the data directly into that buffer and then send it from there\nstraight to the kernel, we only ever do 2 copies, whereas what I'm\nproposing sometimes does 3 copies and sometimes only 2.\n\nWhile I admit that's not great, it seems likely to still be a\nsignificant win over what we have now, and it's a *lot* less invasive\nthan your proposal. Not only does your approach require changing all\nof the type-output and type-sending functions inside and outside core\nto use this new model, admittedly with the possibility of backward\ncompatibility, but it also means that we could need similarly invasive\nchanges in any other place that wants to use this new style of message\nconstruction. You can't write any data anywhere that you might want to\nlater incorporate into a protocol message unless you write it into a\nStringInfo; and not only that, but you have to be able to get the\nright amount of data into the right place in the StringInfo right from\nthe start. I think that in some cases that will require fairly complex\norchestration.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 31 Jul 2020 10:36:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Here is some review for the first few patches in this series.\n\nI am generally in favor of applying 0001-0003 no matter what else we\ndecide to do here. However, as might be expected, I have a few\nquestions and comments.\n\nRegarding 0001:\n\nI dislike the name initStringInfoEx() because we do not use the \"Ex\"\nconvention for function names anywhere in the code base. We do\nsometimes use \"extended\", so this could be initStringInfoExtended(),\nperhaps, or something more specific, like initStringInfoWithLength().\n\nRegarding the FIXME in that function, I suggest that this should be\nthe caller's job. Otherwise, there will probably be some caller which\ndoesn't want the add-one behavior, and then said caller will subtract\none to compensate, and that will be silly.\n\nI am not familiar with pg_restrict and don't entirely understand the\nmotivation for it here. I suspect you have done some experiments and\nfigured out that it produces better code, but it would be interesting\nto hear more about how you got there. Perhaps there could even be some\nbrief comments about it. Locutions like this are particularly\nconfusing to me:\n\n+static inline void\n+resetStringInfo(StringInfoData *pg_restrict str)\n+{\n+ *(char *pg_restrict) (str->data) = '\\0';\n+ str->len = 0;\n+ str->cursor = 0;\n+}\n\nI don't understand how this can be saving anything. I think the\nrestrict definitions here mean that str->data does not overlap with\nstr itself, but considering that these are unconditional stores, so\nwhat? If the code were doing something like memset(str->data, 0,\nstr->len) then I'd get it: it might be useful to know for optimization\npurposes that the memset isn't overwriting str->len. But what code can\nwe produce for this that wouldn't be valid if str->data = &str? 
I\nassume this is my lack of understanding rather than an actual problem\nwith the patch, but I would be grateful if you could explain.\n\nIt is easier to see why the pg_restrict stuff you've introduced into\nappendBinaryStringInfoNT is potentially helpful: e.g. in\nappendBinaryStringInfoNT, it promises that memcpy can't clobber\nstr->len, so the compiler is free to reorder without changing the\nresults. Or so I imagine. But then the one in appendBinaryStringInfo()\nconfuses me again: if str->data is already declared as a restricted\npointer, then why do we need to cast str->data + str->len to be\nrestricted also?\n\nIn appendStringInfoChar, why do we need to cast to restrict twice? Can\nwe not just do something like this:\n\nchar *pg_restrict ep = str->data+str->len;\nep[0] = ch;\nep[1] = '\\0';\n\nRegarding 0002:\n\nTotally mechanical, seems fine.\n\nRegarding 0003:\n\nFor the same reasons as above, I suggest renaming pq_begintypsend_ex()\nto pq_begintypsend_extended() or pq_begintypsend_with_length() or\nsomething of that sort, rather than using _ex.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 31 Jul 2020 11:14:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2020-07-31 11:14:46 -0400, Robert Haas wrote:\n> Here is some review for the first few patches in this series.\n\nThanks!\n\n\n> I am generally in favor of applying 0001-0003 no matter what else we\n> decide to do here. However, as might be expected, I have a few\n> questions and comments.\n> \n> Regarding 0001:\n> \n> I dislike the name initStringInfoEx() because we do not use the \"Ex\"\n> convention for function names anywhere in the code base. We do\n> sometimes use \"extended\", so this could be initStringInfoExtended(),\n> perhaps, or something more specific, like initStringInfoWithLength().\n\nI dislike the length of the function name, but ...\n\n\n> Regarding the FIXME in that function, I suggest that this should be\n> the caller's job. Otherwise, there will probably be some caller which\n> doesn't want the add-one behavior, and then said caller will subtract\n> one to compensate, and that will be silly.\n\nFair point.\n\n\n> I am not familiar with pg_restrict and don't entirely understand the\n> motivation for it here. I suspect you have done some experiments and\n> figured out that it produces better code, but it would be interesting\n> to hear more about how you got there. Perhaps there could even be some\n> brief comments about it. Locutions like this are particularly\n> confusing to me:\n> \n> +static inline void\n> +resetStringInfo(StringInfoData *pg_restrict str)\n> +{\n> + *(char *pg_restrict) (str->data) = '\\0';\n> + str->len = 0;\n> + str->cursor = 0;\n> +}\n\nThe restrict tells the compiler that 'str' and 'str->data' won't be\npointing to the same memory. Which can simplify the code it's\ngenerating. E.g. it'll allow the compiler to keep str->data in a\nregister, instead of reloading it for the next\nappendStringInfo*. Without the restrict it can't, because the\n*str->data = '\\0' store could otherwise overwrite parts of the value\nof ->data itself, if ->data pointed into the StringInfo. 
Similarly, str->data could be overwritten\nby str->len in some other cases.\n\nPart of the reason we need to add the markers is that we compile with\n-fno-strict-aliasing. But even if we weren't, this case wouldn't be\nsolved without an explicit marker even then, because char * is allowed\nto alias...\n\nBesides keeping ->data in a register, the restrict can also just\nentirely elide the null byte write in some cases, e.g. because the\nresetStringInfo() is followed by an appendStringInfoString(, \"constant\").\n\n\n> I don't understand how this can be saving anything. I think the\n> restrict definitions here mean that str->data does not overlap with\n> str itself, but considering that these are unconditional stores, so\n> what? If the code were doing something like memset(str->data, 0,\n> str->len) then I'd get it: it might be useful to know for optimization\n> purposes that the memset isn't overwriting str->len. But what code can\n> we produce for this that wouldn't be valid if str->data = &str? I\n> assume this is my lack of understanding rather than an actual problem\n> with the patch, but I would be grateful if you could explain.\n\nI hope the above makes this make sense now? It's about subsequent uses of\nthe StringInfo, rather than the body of resetStringInfo itself.\n\n\n> It is easier to see why the pg_restrict stuff you've introduced into\n> appendBinaryStringInfoNT is potentially helpful: e.g. in\n> appendBinaryStringInfoNT, it promises that memcpy can't clobber\n> str->len, so the compiler is free to reorder without changing the\n> results. Or so I imagine. But then the one in appendBinaryStringInfo()\n> confuses me again: if str->data is already declared as a restricted\n> pointer, then why do we need to cast str->data + str->len to be\n> restricted also?\n\nBut str->data isn't declared restricted without the explicit use of\nrestrict? 
str is restrict'ed, but it doesn't apply \"recursively\" to all\npointers contained therein.\n\n\n> In appendStringInfoChar, why do we need to cast to restrict twice? Can\n> we not just do something like this:\n> \n> char *pg_restrict ep = str->data+str->len;\n> ep[0] = ch;\n> ep[1] = '\\0';\n\nI don't think that'd tell the compiler that this couldn't overlap with\nstr itself? A single 'restrict' can never (?) help, you need *two*\nthings that are marked as not overlapping in any way.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 31 Jul 2020 08:50:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Fri, Jul 31, 2020 at 11:50 AM Andres Freund <andres@anarazel.de> wrote:\n> I hope the above makes this make sense now? It's about subsequent uses of\n> the StringInfo, rather than the body of resetStringInfo itself.\n\nThat does make sense, except that\nhttps://en.cppreference.com/w/c/language/restrict says \"During each\nexecution of a block in which a restricted pointer P is declared\n(typically each execution of a function body in which P is a function\nparameter), if some object that is accessible through P (directly or\nindirectly) is modified, by any means, then all accesses to that\nobject (both reads and writes) in that block must occur through P\n(directly or indirectly), otherwise the behavior is undefined.\" So my\ninterpretation of this was that it couldn't really affect what\nhappened outside of the function itself, even if the compiler chose to\nperform inlining. But I think I see what you're saying: the *inference*\nis only valid with respect to restrict pointers in a particular\nfunction, but what can be optimized as a result of that inference may\nbe something further afield, if inlining is performed. Perhaps we\ncould add a comment about this, e.g.\n\nMarking these pointers with pg_restrict tells the compiler that str\nand str->data can't overlap, which may allow the compiler to optimize\nbetter when this code is inlined. For example, it may be possible to\nkeep str->data in a register across consecutive appendStringInfoString\noperations.\n\nSince pg_restrict is not widely used, I think it's worth adding this\nkind of annotation, lest other hackers get confused. I'm probably not\nthe only one who isn't on top of this.\n\n> > In appendStringInfoChar, why do we need to cast to restrict twice? Can\n> > we not just do something like this:\n> >\n> > char *pg_restrict ep = str->data+str->len;\n> > ep[0] = ch;\n> > ep[1] = '\\0';\n>\n> I don't think that'd tell the compiler that this couldn't overlap with\n> str itself? 
A single 'restrict' can never (?) help, you need *two*\n> things that are marked as not overlapping in any way.\n\nBut what's the difference between:\n\n+ *(char *pg_restrict) (str->data + str->len) = ch;\n+ str->len++;\n+ *(char *pg_restrict) (str->data + str->len) = '\\0';\n\nAnd:\n\nchar *pg_restrict ep = str->data+str->len;\nep[0] = ch;\nep[1] = '\\0';\n++str->len;\n\nWhether or not str itself is marked restricted is another question;\nwhat I'm talking about is why we need to repeat (char *pg_restrict)\n(str->data + str->len).\n\nI don't have any further comment on the remainder of your reply.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 31 Jul 2020 12:28:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2020-07-31 12:28:04 -0400, Robert Haas wrote:\n> On Fri, Jul 31, 2020 at 11:50 AM Andres Freund <andres@anarazel.de> wrote:\n> > I hope the above makes this make sense now? It's about subsequent uses of\n> > the StringInfo, rather than the body of resetStringInfo itself.\n>\n> That does make sense, except that\n> https://en.cppreference.com/w/c/language/restrict says \"During each\n> execution of a block in which a restricted pointer P is declared\n> (typically each execution of a function body in which P is a function\n> parameter), if some object that is accessible through P (directly or\n> indirectly) is modified, by any means, then all accesses to that\n> object (both reads and writes) in that block must occur through P\n> (directly or indirectly), otherwise the behavior is undefined.\" So my\n> interpretation of this was that it couldn't really affect what\n> happened outside of the function itself, even if the compiler chose to\n> perform inlining. But I think I see what you're saying: the *inference*\n> is only valid with respect to restrict pointers in a particular\n> function, but what can be optimized as a result of that inference may\n> be something further afield, if inlining is performed.\n\nRight. There are two aspects:\n\n1) By looking at the function, with the restrict, the compiler can infer\n more about the behaviour of the function. E.g. knowing that ->len\n has a specific value, or that ->data[n] does. That information then\n can be used together with subsequent operations, e.g. avoiding a\n re-read of ->len. That could work in some cases even if subsequent\n operations were *not* marked up with restrict.\n\n2) The restrict signals to the compiler that we guarantee (i.e. it would\n be undefined behaviour if not) that the pointers do not\n overlap. 
Which means it can assume that in some of the calling code\n as well, if it can analyze that ->data isn't changed, for example.\n\n\n> Perhaps we could add a comment about this, e.g.\n> Marking these pointers with pg_restrict tells the compiler that str\n> and str->data can't overlap, which may allow the compiler to optimize\n> better when this code is inlined. For example, it may be possible to\n> keep str->data in a register across consecutive appendStringInfoString\n> operations.\n>\n> Since pg_restrict is not widely used, I think it's worth adding this\n> kind of annotation, lest other hackers get confused. I'm probably not\n> the only one who isn't on top of this.\n\nWould it make more sense to have a bit of an explanation at\npg_restrict's definition, instead of having it at (eventually) multiple\nplaces?\n\n\n> > > In appendStringInfoChar, why do we need to cast to restrict twice? Can\n> > > we not just do something like this:\n> > >\n> > > char *pg_restrict ep = str->data+str->len;\n> > > ep[0] = ch;\n> > > ep[1] = '\\0';\n> >\n> > I don't think that'd tell the compiler that this couldn't overlap with\n> > str itself? A single 'restrict' can never (?) help, you need *two*\n> > things that are marked as not overlapping in any way.\n>\n> But what's the difference between:\n>\n> + *(char *pg_restrict) (str->data + str->len) = ch;\n> + str->len++;\n> + *(char *pg_restrict) (str->data + str->len) = '\\0';\n>\n> And:\n>\n> char *pg_restrict ep = str->data+str->len;\n> ep[0] = ch;\n> ep[1] = '\\0';\n> ++str->len;\n>\n> Whether or not str itself is marked restricted is another question;\n> what I'm talking about is why we need to repeat (char *pg_restrict)\n> (str->data + str->len).\n\nAh, I misunderstood. Yea, there's no reason not to do that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 31 Jul 2020 10:00:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Fri, Jul 31, 2020 at 1:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > Perhaps we could add a comment about this, e.g.\n> > Marking these pointers with pg_restrict tells the compiler that str\n> > and str->data can't overlap, which may allow the compiler to optimize\n> > better when this code is inlined. For example, it may be possible to\n> > keep str->data in a register across consecutive appendStringInfoString\n> > operations.\n> >\n> > Since pg_restrict is not widely used, I think it's worth adding this\n> > kind of annotation, lest other hackers get confused. I'm probably not\n> > the only one who isn't on top of this.\n>\n> Would it make more sense to have a bit of an explanation at\n> pg_restrict's definition, instead of having it at (eventually) multiple\n> places?\n\nI think, at least for the first few, it might be better to have a more\nspecific explanation at the point of use, as it may be easier to\nunderstand in specific cases than in general. I imagine this only\nreally makes sense for places that are pretty hot.\n\n> Ah, I misunderstood. Yea, there's no reason not to do that.\n\nOK, then I vote for that version, as I think it looks nicer.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 31 Jul 2020 13:35:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nI was reminded of the patchset I had posted in this thread by\nhttps://postgr.es/m/679d5455cbbb0af667ccb753da51a475bae1eaed.camel%40cybertec.at\n\nOn 2020-07-31 13:35:43 -0400, Robert Haas wrote:\n> On Fri, Jul 31, 2020 at 1:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Perhaps we could add a comment about this, e.g.\n> > > Marking these pointers with pg_restrict tells the compiler that str\n> > > and str->data can't overlap, which may allow the compiler to optimize\n> > > better when this code is inlined. For example, it may be possible to\n> > > keep str->data in a register across consecutive appendStringInfoString\n> > > operations.\n> > >\n> > > Since pg_restrict is not widely used, I think it's worth adding this\n> > > kind of annotation, lest other hackers get confused. I'm probably not\n> > > the only one who isn't on top of this.\n> >\n> > Would it make more sense to have a bit of an explanation at\n> > pg_restrict's definition, instead of having it at (eventually) multiple\n> > places?\n> \n> I think, at least for the first few, it might be better to have a more\n> specific explanation at the point of use, as it may be easier to\n> understand in specific cases than in general. I imagine this only\n> really makes sense for places that are pretty hot.\n\nWhenever I looked at adding these comments, it felt wrong. We end up with\nrepetitive boilerplate stuff as quite a few functions use it. I've thus not\naddressed this aspect in the attached rebased version. Perhaps a compromise\nwould be to add such a comment to the top of stringinfo.h?\n\n\n> > Ah, I misunderstood. Yea, there's no reason not to do that.\n> \n> OK, then I vote for that version, as I think it looks nicer.\n\nDone.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 17 Feb 2024 17:59:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nIn <20240218015955.rmw5mcmobt5hbene@awork3.anarazel.de>\n \"Re: Why is pq_begintypsend so slow?\" on Sat, 17 Feb 2024 17:59:55 -0800,\n Andres Freund <andres@anarazel.de> wrote:\n\n> v3-0008-wip-make-in-out-send-recv-calls-in-copy.c-cheaper.patch\n\nIt seems that this is an alternative approach of [1].\n\n[1] https://www.postgresql.org/message-id/flat/20240215.153421.96888103784986803.kou%40clear-code.com#34df359e6d255795d16814ce138cc995\n\n\n+typedef struct CopyInAttributeInfo\n+{\n+\tAttrNumber\tnum;\n+\tconst char *name;\n+\n+\tOid\t\t\ttypioparam;\n+\tint32\t\ttypmod;\n+\n+\tFmgrInfo\tin_finfo;\n+\tunion\n+\t{\n+\t\tFunctionCallInfoBaseData fcinfo;\n+\t\tchar\t\tfcinfo_data[SizeForFunctionCallInfo(3)];\n+\t}\t\t\tin_fcinfo;\n\nDo we need one FunctionCallInfoBaseData for each attribute?\nHow about sharing one FunctionCallInfoBaseData by all\nattributes like [1]?\n\n\n@@ -956,20 +956,47 @@ NextCopyFrom(CopyFromState cstate, ExprContext *econtext,\n \n \t\t\t\tvalues[m] = ExecEvalExpr(defexprs[m], econtext, &nulls[m]);\n \t\t\t}\n-\n-\t\t\t/*\n-\t\t\t * If ON_ERROR is specified with IGNORE, skip rows with soft\n-\t\t\t * errors\n-\t\t\t */\n-\t\t\telse if (!InputFunctionCallSafe(&in_functions[m],\n-\t\t\t\t\t\t\t\t\t\t\tstring,\n-\t\t\t\t\t\t\t\t\t\t\ttypioparams[m],\n-\t\t\t\t\t\t\t\t\t\t\tatt->atttypmod,\n-\t\t\t\t\t\t\t\t\t\t\t(Node *) cstate->escontext,\n-\t\t\t\t\t\t\t\t\t\t\t&values[m]))\n\nInlining InputFuncallCallSafe() here to use pre-initialized\nfcinfo will decrease maintainability. Because we abstract\nInputFunctionCall family in fmgr.c. 
If we define a\nInputFunctionCall variant here, we need to change both of\nfmgr.c and here when InputFunctionCall family is changed.\nHow about defining details in fmgr.c and call it here\ninstead like [1]?\n\n+\t\t\t\tfcinfo->args[0].value = CStringGetDatum(string);\n+\t\t\t\tfcinfo->args[0].isnull = false;\n+\t\t\t\tfcinfo->args[1].value = ObjectIdGetDatum(attr->typioparam);\n+\t\t\t\tfcinfo->args[1].isnull = false;\n+\t\t\t\tfcinfo->args[2].value = Int32GetDatum(attr->typmod);\n+\t\t\t\tfcinfo->args[2].isnull = false;\n\nI think that \"fcinfo->isnull = false;\" is also needed like\n[1].\n\n@@ -1966,7 +1992,7 @@ CopyReadBinaryAttribute(CopyFromState cstate, FmgrInfo *flinfo,\n \tif (fld_size == -1)\n \t{\n \t\t*isnull = true;\n-\t\treturn ReceiveFunctionCall(flinfo, NULL, typioparam, typmod);\n+\t\treturn ReceiveFunctionCall(fcinfo->flinfo, NULL, attr->typioparam, attr->typmod);\n\nWhy pre-initialized fcinfo isn't used here?\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Sun, 18 Feb 2024 17:38:09 +0900 (JST)",
"msg_from": "Sutou Kouhei <kou@clear-code.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-18 17:38:09 +0900, Sutou Kouhei wrote:\n> In <20240218015955.rmw5mcmobt5hbene@awork3.anarazel.de>\n> \"Re: Why is pq_begintypsend so slow?\" on Sat, 17 Feb 2024 17:59:55 -0800,\n> Andres Freund <andres@anarazel.de> wrote:\n> \n> > v3-0008-wip-make-in-out-send-recv-calls-in-copy.c-cheaper.patch\n> \n> It seems that this is an alternative approach of [1].\n\nNote that what I posted was a very lightly polished rebase of a ~4 year old\npatchset.\n\n> [1] https://www.postgresql.org/message-id/flat/20240215.153421.96888103784986803.kou%40clear-code.com#34df359e6d255795d16814ce138cc995\n> \n> \n> +typedef struct CopyInAttributeInfo\n> +{\n> +\tAttrNumber\tnum;\n> +\tconst char *name;\n> +\n> +\tOid\t\t\ttypioparam;\n> +\tint32\t\ttypmod;\n> +\n> +\tFmgrInfo\tin_finfo;\n> +\tunion\n> +\t{\n> +\t\tFunctionCallInfoBaseData fcinfo;\n> +\t\tchar\t\tfcinfo_data[SizeForFunctionCallInfo(3)];\n> +\t}\t\t\tin_fcinfo;\n> \n> Do we need one FunctionCallInfoBaseData for each attribute?\n> How about sharing one FunctionCallInfoBaseData by all\n> attributes like [1]?\n\nThat makes no sense to me. You're throwing away most of the possible gains by\nhaving to update the FunctionCallInfo fields on every call. You're saving\nneglegible amounts of memory at a substantial runtime cost.\n\n\n> @@ -956,20 +956,47 @@ NextCopyFrom(CopyFromState cstate, ExprContext *econtext,\n> \n> \t\t\t\tvalues[m] = ExecEvalExpr(defexprs[m], econtext, &nulls[m]);\n> \t\t\t}\n> -\n> -\t\t\t/*\n> -\t\t\t * If ON_ERROR is specified with IGNORE, skip rows with soft\n> -\t\t\t * errors\n> -\t\t\t */\n> -\t\t\telse if (!InputFunctionCallSafe(&in_functions[m],\n> -\t\t\t\t\t\t\t\t\t\t\tstring,\n> -\t\t\t\t\t\t\t\t\t\t\ttypioparams[m],\n> -\t\t\t\t\t\t\t\t\t\t\tatt->atttypmod,\n> -\t\t\t\t\t\t\t\t\t\t\t(Node *) cstate->escontext,\n> -\t\t\t\t\t\t\t\t\t\t\t&values[m]))\n> \n> Inlining InputFuncallCallSafe() here to use pre-initialized\n> fcinfo will decrease maintainability. 
Because we abstract\n> InputFunctionCall family in fmgr.c. If we define a\n> InputFunctionCall variant here, we need to change both of\n> fmgr.c and here when InputFunctionCall family is changed.\n> How about defining details in fmgr.c and call it here\n> instead like [1]?\n\nI'm not sure I buy that that is a problem. It's not like my approach was\nactually bypassing fmgr abstractions alltogether - instead it just used the\nlower level APIs, because it's a performance sensitive area.\n\n\n> @@ -1966,7 +1992,7 @@ CopyReadBinaryAttribute(CopyFromState cstate, FmgrInfo *flinfo,\n> \tif (fld_size == -1)\n> \t{\n> \t\t*isnull = true;\n> -\t\treturn ReceiveFunctionCall(flinfo, NULL, typioparam, typmod);\n> +\t\treturn ReceiveFunctionCall(fcinfo->flinfo, NULL, attr->typioparam, attr->typmod);\n> \n> Why pre-initialized fcinfo isn't used here?\n\nBecause it's a prototype and because I don't think it's a common path.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 18 Feb 2024 12:09:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "On Sun, Feb 18, 2024 at 12:09:06PM -0800, Andres Freund wrote:\n> On 2024-02-18 17:38:09 +0900, Sutou Kouhei wrote:\n>> @@ -1966,7 +1992,7 @@ CopyReadBinaryAttribute(CopyFromState cstate, FmgrInfo *flinfo,\n>> \tif (fld_size == -1)\n>> \t{\n>> \t\t*isnull = true;\n>> -\t\treturn ReceiveFunctionCall(flinfo, NULL, typioparam, typmod);\n>> +\t\treturn ReceiveFunctionCall(fcinfo->flinfo, NULL, attr->typioparam, attr->typmod);\n>> \n>> Why pre-initialized fcinfo isn't used here?\n> \n> Because it's a prototype and because I don't think it's a common path.\n\n0008 and 0010 (a bit) are the only patches of the set that touch some\nof the areas that would be impacted by the refactoring to use\ncallbacks in the COPY code, still I don't see anything that could not\nbe changed in what's updated here, the one-row callback in COPY FROM\nbeing the most touched. So I don't quite see why each effort could\nnot happen on their own?\n\nOr Andres, do you think that any improvements you've been proposing in\nthis area should happen before we consider refactoring the COPY code\nto plug in the callbacks? I'm a bit confused by the situation, TBH.\n--\nMichael",
"msg_date": "Mon, 19 Feb 2024 07:36:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nIn <20240218200906.zvihkrs46yl2juzf@awork3.anarazel.de>\n \"Re: Why is pq_begintypsend so slow?\" on Sun, 18 Feb 2024 12:09:06 -0800,\n Andres Freund <andres@anarazel.de> wrote:\n\n>> [1] https://www.postgresql.org/message-id/flat/20240215.153421.96888103784986803.kou%40clear-code.com#34df359e6d255795d16814ce138cc995\n\n>> Do we need one FunctionCallInfoBaseData for each attribute?\n>> How about sharing one FunctionCallInfoBaseData by all\n>> attributes like [1]?\n> \n> That makes no sense to me. You're throwing away most of the possible gains by\n> having to update the FunctionCallInfo fields on every call. You're saving\n> neglegible amounts of memory at a substantial runtime cost.\n\nThe number of updated fields of your approach and [1] are\nsame:\n\nYour approach: 6 (I think that \"fcinfo->isnull = false\" is\nneeded though.)\n\n+\t\t\t\tfcinfo->args[0].value = CStringGetDatum(string);\n+\t\t\t\tfcinfo->args[0].isnull = false;\n+\t\t\t\tfcinfo->args[1].value = ObjectIdGetDatum(attr->typioparam);\n+\t\t\t\tfcinfo->args[1].isnull = false;\n+\t\t\t\tfcinfo->args[2].value = Int32GetDatum(attr->typmod);\n+\t\t\t\tfcinfo->args[2].isnull = false;\n\n[1]: 6 (including \"fcinfo->isnull = false\")\n\n+\tfcinfo->flinfo = flinfo;\n+\tfcinfo->context = escontext;\n+\tfcinfo->isnull = false;\n+\tfcinfo->args[0].value = CStringGetDatum(str);\n+\tfcinfo->args[1].value = ObjectIdGetDatum(typioparam);\n+\tfcinfo->args[2].value = Int32GetDatum(typmod);\n\n\n>> Inlining InputFuncallCallSafe() here to use pre-initialized\n>> fcinfo will decrease maintainability. Because we abstract\n>> InputFunctionCall family in fmgr.c. If we define a\n>> InputFunctionCall variant here, we need to change both of\n>> fmgr.c and here when InputFunctionCall family is changed.\n>> How about defining details in fmgr.c and call it here\n>> instead like [1]?\n> \n> I'm not sure I buy that that is a problem. 
It's not like my approach was\n> actually bypassing fmgr abstractions alltogether - instead it just used the\n> lower level APIs, because it's a performance sensitive area.\n\n[1] provides some optimized abstractions, which are\nimplemented with lower level APIs, without breaking the\nabstractions.\n\nNote that I don't have a strong opinion how to implement\nthis optimization. If other developers think this approach\nmakes sense for this optimization, I don't object it.\n\n>> @@ -1966,7 +1992,7 @@ CopyReadBinaryAttribute(CopyFromState cstate, FmgrInfo *flinfo,\n>> \tif (fld_size == -1)\n>> \t{\n>> \t\t*isnull = true;\n>> -\t\treturn ReceiveFunctionCall(flinfo, NULL, typioparam, typmod);\n>> +\t\treturn ReceiveFunctionCall(fcinfo->flinfo, NULL, attr->typioparam, attr->typmod);\n>> \n>> Why pre-initialized fcinfo isn't used here?\n> \n> Because it's a prototype and because I don't think it's a common path.\n\nHow about adding a comment why we don't need to optimize\nthis case?\n\n\nI don't have a strong opinion how to implement this\noptimization as I said above. It seems that you like your\napproach. So I withdraw [1]. Could you complete this\noptimization? Can we proceed making COPY format extensible\nwithout this optimization? It seems that it'll take a little\ntime to complete this optimization because your patch is\nstill WIP. And it seems that you can work on it after making\nCOPY format extensible.\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:02:52 +0900 (JST)",
"msg_from": "Sutou Kouhei <kou@clear-code.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-19 10:02:52 +0900, Sutou Kouhei wrote:\n> In <20240218200906.zvihkrs46yl2juzf@awork3.anarazel.de>\n> \"Re: Why is pq_begintypsend so slow?\" on Sun, 18 Feb 2024 12:09:06 -0800,\n> Andres Freund <andres@anarazel.de> wrote:\n> \n> >> [1] https://www.postgresql.org/message-id/flat/20240215.153421.96888103784986803.kou%40clear-code.com#34df359e6d255795d16814ce138cc995\n> \n> >> Do we need one FunctionCallInfoBaseData for each attribute?\n> >> How about sharing one FunctionCallInfoBaseData by all\n> >> attributes like [1]?\n> > \n> > That makes no sense to me. You're throwing away most of the possible gains by\n> > having to update the FunctionCallInfo fields on every call. You're saving\n> > neglegible amounts of memory at a substantial runtime cost.\n> \n> The number of updated fields of your approach and [1] are\n> same:\n> \n> Your approach: 6 (I think that \"fcinfo->isnull = false\" is\n> needed though.)\n> \n> +\t\t\t\tfcinfo->args[0].value = CStringGetDatum(string);\n> +\t\t\t\tfcinfo->args[0].isnull = false;\n> +\t\t\t\tfcinfo->args[1].value = ObjectIdGetDatum(attr->typioparam);\n> +\t\t\t\tfcinfo->args[1].isnull = false;\n> +\t\t\t\tfcinfo->args[2].value = Int32GetDatum(attr->typmod);\n> +\t\t\t\tfcinfo->args[2].isnull = false;\n> \n> [1]: 6 (including \"fcinfo->isnull = false\")\n> \n> +\tfcinfo->flinfo = flinfo;\n> +\tfcinfo->context = escontext;\n> +\tfcinfo->isnull = false;\n> +\tfcinfo->args[0].value = CStringGetDatum(str);\n> +\tfcinfo->args[1].value = ObjectIdGetDatum(typioparam);\n> +\tfcinfo->args[2].value = Int32GetDatum(typmod);\n\nIf you want to do so you can elide the isnull assignments in my approach just\nas well as yours. Assigning not just the value but also flinfo and context is\noverhead. 
But you can't elide assigning flinfo and context, which is why\nreusing one FunctionCallInfo isn't going to win\n\nI don't think you necessarily need to assign fcinfo->isnull on every call,\nthese functions aren't allowed to return NULL IIRC. And if they do we'd error\nout, so it could only happen once.\n\n\n> I don't have a strong opinion how to implement this\n> optimization as I said above. It seems that you like your\n> approach. So I withdraw [1]. Could you complete this\n> optimization? Can we proceed making COPY format extensible\n> without this optimization? It seems that it'll take a little\n> time to complete this optimization because your patch is\n> still WIP. And it seems that you can work on it after making\n> COPY format extensible.\n\nI don't think optimizing this aspect needs to block making copy extensible.\n\nI don't know how much time/energy I'll have to focus on this in the near\nterm. I really just reposted this because the earlier patches were relevant\nfor the discussion in another thread. If you want to pick the COPY part up,\nfeel free.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Feb 2024 11:53:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
},
{
"msg_contents": "Hi,\n\nIn <20240219195351.5vy7cdl3wxia66kg@awork3.anarazel.de>\n \"Re: Why is pq_begintypsend so slow?\" on Mon, 19 Feb 2024 11:53:51 -0800,\n Andres Freund <andres@anarazel.de> wrote:\n\n>> I don't have a strong opinion how to implement this\n>> optimization as I said above. It seems that you like your\n>> approach. So I withdraw [1]. Could you complete this\n>> optimization? Can we proceed making COPY format extensible\n>> without this optimization? It seems that it'll take a little\n>> time to complete this optimization because your patch is\n>> still WIP. And it seems that you can work on it after making\n>> COPY format extensible.\n> \n> I don't think optimizing this aspect needs to block making copy extensible.\n\nOK. I'll work on making copy extensible without this\noptimization.\n\n> I don't know how much time/energy I'll have to focus on this in the near\n> term. I really just reposted this because the earlier patches were relevant\n> for the discussion in another thread. If you want to pick the COPY part up,\n> feel free.\n\nOK. I may work on this after I complete making copy\nextensible if you haven't completed this yet.\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Thu, 22 Feb 2024 15:40:55 +0900 (JST)",
"msg_from": "Sutou Kouhei <kou@clear-code.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is pq_begintypsend so slow?"
}
] |
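The `restrict` optimization that Andres and Robert discuss in the thread above can be illustrated with a small standalone sketch. The `StringInfoData` layout and `append_char` function below are simplified stand-ins for PostgreSQL's real `stringinfo.h` code (the real struct and `pg_restrict` macro differ in detail), and C99's `restrict` keyword is used directly in place of `pg_restrict`. This is a sketch of the aliasing idea under those assumptions, not the actual patch:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical stand-in for PostgreSQL's StringInfoData; the real struct
 * is declared in lib/stringinfo.h and carries more fields.
 */
typedef struct StringInfoData
{
	char   *data;		/* buffer holding the string */
	int		len;		/* current string length, excluding the NUL */
	int		maxlen;		/* allocated size of data */
} StringInfoData;

/*
 * Append one character, using the single-cast form Robert proposed:
 * marking both the struct pointer and the derived data pointer with
 * restrict tells the compiler they cannot alias, so str->len and
 * str->data may stay in registers across consecutive inlined calls.
 * (Growing the buffer is omitted; the caller must ensure capacity.)
 */
static inline void
append_char(StringInfoData *restrict str, char ch)
{
	char *restrict ep = str->data + str->len;

	ep[0] = ch;
	ep[1] = '\0';
	str->len++;
}
```

A caller that appends two characters into a preallocated buffer ends up with a NUL-terminated string and an updated `len`, exactly as the unoptimized `appendStringInfoChar` would produce; only the compiler's aliasing assumptions change.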
[
{
"msg_contents": "While reading the code for heapam.c:heap_multi_insert I happened upon this\ncomment which I'm either too thick for, or it lacks a word or two:\n\n * ..\n * A check here does not definitively prevent a serialization anomaly;\n * that check MUST be done at least past the point of acquiring an\n * exclusive buffer content lock on every buffer that will be affected,\n * and MAY be done after all inserts are reflected in the buffers and\n * those locks are released; otherwise there race condition. Since\n * multiple buffers can be locked and unlocked in the loop below, and it\n * would not be feasible to identify and lock all of those buffers before\n * the loop, we must do a final check at the end.\n * ..\n\nThe part I don't understand is \"otherwise there race condition\", it doesn't\nsound complete to me as a non-native english speaker. Should that really be\n\"otherwise there *is a (potential)* race condition\" or something similar?\n\ncheers ./daniel\n\n\n",
"msg_date": "Mon, 13 Jan 2020 00:03:37 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Question regarding heap_multi_insert documentation"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> The part I don't understand is \"otherwise there race condition\", it doesn't\n> sound complete to me as a non-native english speaker. Should that really be\n> \"otherwise there *is a (potential)* race condition\" or something similar?\n\nI agree, it's missing \"is a\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Jan 2020 18:25:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding heap_multi_insert documentation"
},
{
"msg_contents": "> On 13 Jan 2020, at 00:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> The part I don't understand is \"otherwise there race condition\", it doesn't\n>> sound complete to me as a non-native english speaker. Should that really be\n>> \"otherwise there *is a (potential)* race condition\" or something similar?\n> \n> I agree, it's missing \"is a\".\n\nThanks for clarifying. PFA tiny patch for this.\n\ncheers ./daniel",
"msg_date": "Mon, 13 Jan 2020 00:40:20 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Question regarding heap_multi_insert documentation"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 12:40:20AM +0100, Daniel Gustafsson wrote:\n> Thanks for clarifying. PFA tiny patch for this.\n\nThanks, pushed.\n--\nMichael",
"msg_date": "Mon, 13 Jan 2020 17:59:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding heap_multi_insert documentation"
}
] |
[
{
"msg_contents": "This diff fixes what I consider a typo.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 13 Jan 2020 08:22:13 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Comment fix in session.h"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 12:51 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> This diff fixes what I consider a typo.\n>\n\nLGTM. I'll push this in some time.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Jan 2020 14:08:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment fix in session.h"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 2:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 13, 2020 at 12:51 PM Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > This diff fixes what I consider a typo.\n> >\n>\n> LGTM. I'll push this in some time.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Jan 2020 15:43:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment fix in session.h"
}
] |
[
{
"msg_contents": "During one of my works for logical rewrite, I want to check if the expr is\na given Expr.\n\nso the simplest way is:\nif (expr->opno == 418 && nodeTag(linitial(expr->args)) == T_xxx &&\nnodeTag(lsecond(expr->args)) == T_yyyy )\n{\n ..\n}\n\nif we write code like above, we may have issues if the oid changed in the\nfuture version.\nso what would be your suggestion?\n\nThanks\n\nDuring one of my works for logical rewrite, I want to check if the expr is a given Expr. so the simplest way is:if (expr->opno == 418 && nodeTag(linitial(expr->args)) == T_xxx && nodeTag(lsecond(expr->args)) == T_yyyy ) { .. }if we write code like above, we may have issues if the oid changed in the future version. so what would be your suggestion? Thanks",
"msg_date": "Mon, 13 Jan 2020 15:29:27 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to make a OpExpr check compatible among different versions"
},
{
"msg_contents": "On 2020-01-13 08:29, Andy Fan wrote:\n> During one of my works for logical rewrite, I want to check if the expr \n> is a given Expr.\n> \n> so the simplest way is:\n> if (expr->opno == 418 && nodeTag(linitial(expr->args)) == T_xxx && \n> nodeTag(lsecond(expr->args)) == T_yyyy )\n> {\n> ..\n> }\n> \n> if we write code like above, we may have issues if the oid changed in \n> the future version.\n\nGenerally, you would do this by using a preprocessor symbol. For \nexample, instead of hardcoding the OID of the text type, you would use \nthe symbol TEXTOID instead. Symbols like that exist for many catalog \nobjects that one might reasonably need to hardcode.\n\nHowever, hardcoding an OID reference to an operator looks like a design \nmistake to me. Operators should normally be looked up via operator \nclasses or similar structures that convey the meaning of the operator.\n\nAlso, instead of nodeTag() == T_xxx you should use the IsA() macro.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Jan 2020 09:09:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How to make a OpExpr check compatible among different versions"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 4:09 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-01-13 08:29, Andy Fan wrote:\n> > During one of my works for logical rewrite, I want to check if the expr\n> > is a given Expr.\n> >\n> > so the simplest way is:\n> > if (expr->opno == 418 && nodeTag(linitial(expr->args)) == T_xxx &&\n> > nodeTag(lsecond(expr->args)) == T_yyyy )\n> > {\n> > ..\n> > }\n> >\n> > if we write code like above, we may have issues if the oid changed in\n> > the future version.\n>\n> Generally, you would do this by using a preprocessor symbol. For\n> example, instead of hardcoding the OID of the text type, you would use\n> the symbol TEXTOID instead. Symbols like that exist for many catalog\n> objects that one might reasonably need to hardcode.\n>\n> However, hardcoding an OID reference to an operator looks like a design\n> mistake to me. Operators should normally be looked up via operator\n> classes or similar structures that convey the meaning of the operator.\n>\n\nYes, I just realized this. Thanks for your point!\n\n\n> Also, instead of nodeTag() == T_xxx you should use the IsA() macro.\n>\n> Thank you for this as well.\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nOn Mon, Jan 13, 2020 at 4:09 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2020-01-13 08:29, Andy Fan wrote:\n> During one of my works for logical rewrite, I want to check if the expr \n> is a given Expr.\n> \n> so the simplest way is:\n> if (expr->opno == 418 && nodeTag(linitial(expr->args)) == T_xxx && \n> nodeTag(lsecond(expr->args)) == T_yyyy )\n> {\n> ..\n> }\n> \n> if we write code like above, we may have issues if the oid changed in \n> the future version.\n\nGenerally, you would do this by using a preprocessor symbol. For \nexample, instead of hardcoding the OID of the text type, you would use \nthe symbol TEXTOID instead. 
Symbols like that exist for many catalog \nobjects that one might reasonably need to hardcode.\n\nHowever, hardcoding an OID reference to an operator looks like a design \nmistake to me. Operators should normally be looked up via operator \nclasses or similar structures that convey the meaning of the operator. Yes, I just realized this. Thanks for your point! \nAlso, instead of nodeTag() == T_xxx you should use the IsA() macro.\nThank you for this as well. \n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 13 Jan 2020 17:25:05 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to make a OpExpr check compatible among different versions"
}
] |
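The pattern Peter recommends in the thread above — a symbolic constant instead of a magic OID, and `IsA()` instead of raw `nodeTag()` comparisons — can be sketched outside the backend with minimal mock definitions. The `NodeTag` enum, `Node`, `IsA()`, and `OpExpr` below are hypothetical stand-ins for what `nodes/nodes.h` and `primnodes.h` really declare, `TargetOperatorOid` merely names the example's literal 418 (real code would use a symbol generated from `pg_operator.dat` or, better, an operator-class lookup), and `Var`/`Const` stand in for the original mail's `T_xxx`/`T_yyyy`:

```c
#include <assert.h>

/* Minimal mock of the node-tag machinery, for illustration only. */
typedef enum NodeTag { T_Invalid = 0, T_Var, T_Const, T_OpExpr } NodeTag;

typedef struct Node { NodeTag type; } Node;

#define nodeTag(n)	(((const Node *) (n))->type)
#define IsA(n, t)	(nodeTag(n) == T_##t)

/* Symbolic name for the example's magic number 418. */
#define TargetOperatorOid 418

/* Simplified OpExpr: larg/rarg stand in for linitial/lsecond of args. */
typedef struct OpExpr
{
	NodeTag		type;
	unsigned	opno;		/* the operator's OID */
	Node	   *larg;
	Node	   *rarg;
} OpExpr;

/* The check from the first mail, rewritten with a symbol and IsA(). */
static int
matches_target_expr(const OpExpr *expr)
{
	return expr->opno == TargetOperatorOid &&
		   IsA(expr->larg, Var) &&
		   IsA(expr->rarg, Const);
}
```

The shape of the check is unchanged; only the literal OID and the raw `nodeTag() ==` comparisons are replaced, which keeps the code readable and resilient if a hardcoded value is later renumbered.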
[
{
"msg_contents": "Hi hackers,\n\nI want to propose to you an old patch for Postgres 11, off-site developed\nby Oliver Ford,\nbut I have permission from him to publish it and to continue it's\ndevelopment,\nthat allow distinct aggregates, like select sum(distinct nums) within a\nwindow function.\n\nI have rebased it for current git master branch and have made necessary\nchanges to it to work with Postgres 13devel.\n\nIt's a WIP, because it doesn't have tests yet (I will add them later) and\nalso, it works for a int, float, and numeric types,\nbut probably distinct check can be rewritten for possible performance\nimprovement,\nwith storing the distinct elements in a hash table which should give a\nperformance improvement.\n\nIf you find the implementation of patch acceptable from committers\nperspective,\nI will answer to all yours design and review notes and will try to go ahead\nwith it,\nalso, I will add this patch to the March commit fest.\n\nFor example usage of a patch, if you have time series data, with current\nPostgres you will get an error:\n\npostgres=# CREATE TABLE t_demo AS\npostgres-# SELECT ordinality, day, date_part('week', day) AS week\npostgres-# FROM generate_series('2020-01-02', '2020-01-15', '1\nday'::interval)\npostgres-# WITH ORDINALITY AS day;\nSELECT 14\npostgres=# SELECT * FROM t_demo;\n ordinality | day | week\n------------+------------------------+------\n 1 | 2020-01-02 00:00:00+02 | 1\n 2 | 2020-01-03 00:00:00+02 | 1\n 3 | 2020-01-04 00:00:00+02 | 1\n 4 | 2020-01-05 00:00:00+02 | 1\n 5 | 2020-01-06 00:00:00+02 | 2\n 6 | 2020-01-07 00:00:00+02 | 2\n 7 | 2020-01-08 00:00:00+02 | 2\n 8 | 2020-01-09 00:00:00+02 | 2\n 9 | 2020-01-10 00:00:00+02 | 2\n 10 | 2020-01-11 00:00:00+02 | 2\n 11 | 2020-01-12 00:00:00+02 | 2\n 12 | 2020-01-13 00:00:00+02 | 3\n 13 | 2020-01-14 00:00:00+02 | 3\n 14 | 2020-01-15 00:00:00+02 | 3\n(14 rows)\n\npostgres=# SELECT *,\npostgres-# array_agg(DISTINCT week) OVER (ORDER BY day ROWS\npostgres(# BETWEEN 2 PRECEDING 
AND 2\nFOLLOWING)\npostgres-# FROM t_demo;\nERROR: DISTINCT is not implemented for window functions\nLINE 2: array_agg(DISTINCT week) OVER (ORDER BY day ROWS\n ^\n\nSo you will need to write something like this:\n\npostgres=# SELECT *, (SELECT array_agg(DISTINCT unnest) FROM unnest(x)) AS\nb\npostgres-# FROM\npostgres-# (\npostgres(# SELECT *,\npostgres(# array_agg(week) OVER (ORDER BY day ROWS\npostgres(# BETWEEN 2 PRECEDING AND 2 FOLLOWING) AS x\npostgres(# FROM t_demo\npostgres(# ) AS a;\n ordinality | day | week | x | b\n------------+------------------------+------+-------------+-------\n 1 | 2020-01-02 00:00:00+02 | 1 | {1,1,1} | {1}\n 2 | 2020-01-03 00:00:00+02 | 1 | {1,1,1,1} | {1}\n 3 | 2020-01-04 00:00:00+02 | 1 | {1,1,1,1,2} | {1,2}\n 4 | 2020-01-05 00:00:00+02 | 1 | {1,1,1,2,2} | {1,2}\n 5 | 2020-01-06 00:00:00+02 | 2 | {1,1,2,2,2} | {1,2}\n 6 | 2020-01-07 00:00:00+02 | 2 | {1,2,2,2,2} | {1,2}\n 7 | 2020-01-08 00:00:00+02 | 2 | {2,2,2,2,2} | {2}\n 8 | 2020-01-09 00:00:00+02 | 2 | {2,2,2,2,2} | {2}\n 9 | 2020-01-10 00:00:00+02 | 2 | {2,2,2,2,2} | {2}\n 10 | 2020-01-11 00:00:00+02 | 2 | {2,2,2,2,3} | {2,3}\n 11 | 2020-01-12 00:00:00+02 | 2 | {2,2,2,3,3} | {2,3}\n 12 | 2020-01-13 00:00:00+02 | 3 | {2,2,3,3,3} | {2,3}\n 13 | 2020-01-14 00:00:00+02 | 3 | {2,3,3,3} | {2,3}\n 14 | 2020-01-15 00:00:00+02 | 3 | {3,3,3} | {3}\n(14 rows)\n\nWith attached version, you will get the desired results:\n\npostgres=# SELECT *,\npostgres-# array_agg(DISTINCT week) OVER (ORDER BY day ROWS\npostgres(# BETWEEN 2 PRECEDING AND 2\nFOLLOWING)\npostgres-# FROM t_demo;\n ordinality | day | week | array_agg\n------------+------------------------+------+-----------\n 1 | 2020-01-02 00:00:00+02 | 1 | {1}\n 2 | 2020-01-03 00:00:00+02 | 1 | {1}\n 3 | 2020-01-04 00:00:00+02 | 1 | {1,2}\n 4 | 2020-01-05 00:00:00+02 | 1 | {1,2}\n 5 | 2020-01-06 00:00:00+02 | 2 | {1,2}\n 6 | 2020-01-07 00:00:00+02 | 2 | {1,2}\n 7 | 2020-01-08 00:00:00+02 | 2 | {2}\n 8 | 2020-01-09 00:00:00+02 | 2 | {2}\n 9 
| 2020-01-10 00:00:00+02 | 2 | {2}\n 10 | 2020-01-11 00:00:00+02 | 2 | {2,3}\n 11 | 2020-01-12 00:00:00+02 | 2 | {2,3}\n 12 | 2020-01-13 00:00:00+02 | 3 | {2,3}\n 13 | 2020-01-14 00:00:00+02 | 3 | {2,3}\n 14 | 2020-01-15 00:00:00+02 | 3 | {3}\n(14 rows)",
"msg_date": "Mon, 13 Jan 2020 11:17:02 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] distinct aggregates within a window function WIP"
},
{
"msg_contents": "Krasiyan Andreev <krasiyan@gmail.com> writes:\n> I want to propose to you an old patch for Postgres 11, off-site developed\n> by Oliver Ford,\n> but I have permission from him to publish it and to continue it's\n> development,\n> that allow distinct aggregates, like select sum(distinct nums) within a\n> window function.\n\nI started to respond by asking whether that's well-defined, but\nreading down further I see that that's not actually what the feature\nis: what it is is attaching DISTINCT to a window function itself.\nI'd still ask whether it's well-defined though, or even minimally\nsensible. Window functions are generally supposed to produce one\nrow per input row --- how does that square with the implicit row\nmerging of DISTINCT? They're also typically row-order-sensitive\n--- how does that work with DISTINCT? Also, to the extent that\nthis is sensible, can't you get the same results already today\nwith appropriate use of window framing options?\n\n> It's a WIP, because it doesn't have tests yet (I will add them later) and\n> also, it works for a int, float, and numeric types,\n\nAs a rule of thumb, operations like this should not be coded to be\ndatatype-specific. We threw out some features in the original window\nfunction patch until they could be rewritten to not be limited to a\nhard-coded set of data types (cf commit 0a459cec9), and I don't see\nwhy we'd apply a lesser standard here. Certainly DISTINCT for\naggregates has no such limitation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jan 2020 09:19:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
},
{
"msg_contents": "Tom Lane schrieb am 13.01.2020 um 15:19:\n\n> what it is is attaching DISTINCT to a window function itself.\n> I'd still ask whether it's well-defined though, or even minimally\n> sensible. Window functions are generally supposed to produce one\n> row per input row --- how does that square with the implicit row\n> merging of DISTINCT? They're also typically row-order-sensitive\n> --- how does that work with DISTINCT? Also, to the extent that\n> this is sensible, can't you get the same results already today\n> with appropriate use of window framing options?\n\nI find the example using array_agg() and cumulative window functions a\nbit confusing as well, but I think there are situations where having this\nis really helpful, e.g.:\n\n count(distinct some_column) over (partition by something)\n\nI know it's not an argument, but Oracle supports this and porting\nqueries like that from Oracle to Postgres isn't really fun.\n\nThomas\n\n\n\n\n\n",
"msg_date": "Mon, 13 Jan 2020 15:49:51 +0100",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
},
{
"msg_contents": "I understand your note about datatype-specific operations, so I need to\nthink more generically about it.\nAbout your additional note, I think that it is not possible to easily get\nthe same result with appropriate use of window framing options,\nbecause \"exclude ties\" will not exclude the \"current row\" itself, only its\npeers. So that is the only difference, and the reason for the DISTINCT aggregate.\nMaybe, if we could specify \"exclude ties\" and \"exclude current row\" at the\nsame time, there would be no need for DISTINCT, but right now\nI think that neither \"exclude ties\" nor \"exclude groups\" nor \"exclude current\nrow\" can specify it, because they can't be nested or used at the same time.\n\nOn Mon, 13.01.2020 at 16:19 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Krasiyan Andreev <krasiyan@gmail.com> writes:\n> > I want to propose to you an old patch for Postgres 11, off-site developed\n> > by Oliver Ford,\n> > but I have permission from him to publish it and to continue it's\n> > development,\n> > that allow distinct aggregates, like select sum(distinct nums) within a\n> > window function.\n>\n> I started to respond by asking whether that's well-defined, but\n> reading down further I see that that's not actually what the feature\n> is: what it is is attaching DISTINCT to a window function itself.\n> I'd still ask whether it's well-defined though, or even minimally\n> sensible. Window functions are generally supposed to produce one\n> row per input row --- how does that square with the implicit row\n> merging of DISTINCT? They're also typically row-order-sensitive\n> --- how does that work with DISTINCT? 
Also, to the extent that\n> this is sensible, can't you get the same results already today\n> with appropriate use of window framing options?\n>\n> > It's a WIP, because it doesn't have tests yet (I will add them later) and\n> > also, it works for a int, float, and numeric types,\n>\n> As a rule of thumb, operations like this should not be coded to be\n> datatype-specific. We threw out some features in the original window\n> function patch until they could be rewritten to not be limited to a\n> hard-coded set of data types (cf commit 0a459cec9), and I don't see\n> why we'd apply a lesser standard here. Certainly DISTINCT for\n> aggregates has no such limitation.\n>\n> regards, tom lane\n>",
"msg_date": "Mon, 13 Jan 2020 17:22:29 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
},
{
"msg_contents": "On 13/01/2020 15:19, Tom Lane wrote:\n> Krasiyan Andreev <krasiyan@gmail.com> writes:\n>> I want to propose to you an old patch for Postgres 11, off-site developed\n>> by Oliver Ford,\n>> but I have permission from him to publish it and to continue it's\n>> development,\n>> that allow distinct aggregates, like select sum(distinct nums) within a\n>> window function.\n> I started to respond by asking whether that's well-defined, but\n> reading down further I see that that's not actually what the feature\n> is: what it is is attaching DISTINCT to a window function itself.\n> I'd still ask whether it's well-defined though, or even minimally\n> sensible. Window functions are generally supposed to produce one\n> row per input row --- how does that square with the implicit row\n> merging of DISTINCT? They're also typically row-order-sensitive\n> --- how does that work with DISTINCT? \n\n\nIt's a little strange because the spec says:\n\n\n<q>\nIf the window ordering clause or the window framing clause of the window\nstructure descriptor that describes the <window name or specification>\nis present, then no <aggregate function> simply contained in <window\nfunction> shall specify DISTINCT or <ordered set function>.\n</q>\n\n\nSo it seems to be well defined if all you have is a partition.\n\n\nBut then it also says:\n\n\n<q>\nDENSE_RANK() OVER WNS is equivalent to the <window function>:\n COUNT (DISTINCT ROW ( VE 1 , ..., VE N ) )\n OVER (WNS1 RANGE UNBOUNDED PRECEDING)\n</q>\n\n\nAnd that kind of looks like a framing clause there.\n\n\n> Also, to the extent that\n> this is sensible, can't you get the same results already today\n> with appropriate use of window framing options?\n\n\nI don't see how.\n\n\nI have sometimes wanted this feature so I am +1 on us getting at least a\nminimal form of it.\n\n-- \n\nVik\n\n\n\n",
"msg_date": "Mon, 13 Jan 2020 17:19:54 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
},
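The SQL-spec equivalence Vik quotes (DENSE_RANK as a COUNT(DISTINCT ...) over a RANGE UNBOUNDED PRECEDING frame) can be checked with a small model. This is an illustrative Python sketch, not code from any patch; the function name is ours:

```python
# Model of the spec equivalence: DENSE_RANK() OVER (ORDER BY v) equals
# COUNT(DISTINCT v) OVER (ORDER BY v RANGE UNBOUNDED PRECEDING), i.e. the
# number of distinct sort-key values seen up to and including the current
# row's peer group.

def dense_rank(values_sorted):
    """Return DENSE_RANK for each row of an already-sorted key list."""
    ranks, seen = [], set()
    for v in values_sorted:
        seen.add(v)              # the RANGE frame includes the current peers
        ranks.append(len(seen))  # COUNT(DISTINCT ...) over the frame so far
    return ranks

print(dense_rank([10, 10, 20, 30, 30]))  # [1, 1, 2, 3, 3]
```

Peers share a sort-key value, so adding only the current value to the set gives the same distinct count as adding the whole peer group, which is why the equivalence holds.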
{
"msg_contents": "I have currently suspended development of this patch, based on its review,\nbut I will continue development of the other work by Oliver Ford, which\nadds support for respect/ignore nulls\nfor lag(), lead(), first_value(), last_value() and nth_value(), and from\nfirst/last for nth_value(),\nbut I am not sure how to proceed with its implementation and any feedback\nwill be very helpful.\n\nI have dropped support of from first/last for nth_value(), but I also\nreimplemented it in a different way,\nby using a negative number for the position argument, to be able to get the\nsame frame in exact reverse order.\nAfter that the patch becomes much simpler and the major concerns about the\nprecedence hack are gone,\nbut maybe it can be simplified further.\n\nI have not renamed the special bool type \"ignorenulls\", because it is probably\nnot an acceptable way of calling the extra version\nof window functions (but it makes things very easy and it can reuse\nframes), but I removed the other special bool type \"fromlast\".\n\nThe attached file is for PostgreSQL 13 (master git branch) and I will add it\nnow to the March commitfest, to be able to track changes.\nEverything works and the patch is in very good shape, all tests pass, and\nI have also used it for some time for SQL analysis purposes\n(because ignore nulls is one of the most needed features in the OLAP/BI area, and\nOracle, Amazon Redshift, even Informix have it).\n\nAfter patch review and suggestions about what to do with the special bool type\nand unreserved keywords, I will reimplement it, if needed.\n\nНа пн, 13.01.2020 г. 
в 18:19 Vik Fearing <vik.fearing@2ndquadrant.com>\nнаписа:\n\n> On 13/01/2020 15:19, Tom Lane wrote:\n> > Krasiyan Andreev <krasiyan@gmail.com> writes:\n> >> I want to propose to you an old patch for Postgres 11, off-site\n> developed\n> >> by Oliver Ford,\n> >> but I have permission from him to publish it and to continue it's\n> >> development,\n> >> that allow distinct aggregates, like select sum(distinct nums) within a\n> >> window function.\n> > I started to respond by asking whether that's well-defined, but\n> > reading down further I see that that's not actually what the feature\n> > is: what it is is attaching DISTINCT to a window function itself.\n> > I'd still ask whether it's well-defined though, or even minimally\n> > sensible. Window functions are generally supposed to produce one\n> > row per input row --- how does that square with the implicit row\n> > merging of DISTINCT? They're also typically row-order-sensitive\n> > --- how does that work with DISTINCT?\n>\n>\n> It's a little strange because the spec says:\n>\n>\n> <q>\n> If the window ordering clause or the window framing clause of the window\n> structure descriptor that describes the <window name or specification>\n> is present, then no <aggregate function> simply contained in <window\n> function> shall specify DISTINCT or <ordered set function>.\n> </q>\n>\n>\n> So it seems to be well defined if all you have is a partition.\n>\n>\n> But then it also says:\n>\n>\n> <q>\n> DENSE_RANK() OVER WNS is equivalent to the <window function>:\n> COUNT (DISTINCT ROW ( VE 1 , ..., VE N ) )\n> OVER (WNS1 RANGE UNBOUNDED PRECEDING)\n> </q>\n>\n>\n> And that kind of looks like a framing clause there.\n>\n>\n> > Also, to the extent that\n> > this is sensible, can't you get the same results already today\n> > with appropriate use of window framing options?\n>\n>\n> I don't see how.\n>\n>\n> I have sometimes wanted this feature so I am +1 on us getting at least a\n> minimal form of it.\n>\n> --\n>\n> Vik\n>\n>",
"msg_date": "Sun, 1 Mar 2020 08:32:17 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
},
{
"msg_contents": "On Thu, Mar 5, 2020 at 4:17 AM Krasiyan Andreev <krasiyan@gmail.com> wrote:\n\n> I have currently suspended development of this patch, based on it's\n> review,\n> but I will continue development of the other Oliver Ford's work about\n> adding support of respect/ignore nulls\n> for lag(),lead(),first_value(),last_value() and nth_value() and from\n> first/last for nth_value() patch,\n> but I am not sure how to proceed with it's implementation and any feedback\n> will be very helpful.\n>\n>\n* I applied your patch on top of 58c47ccfff20b8c125903 . It applied cleanly,\ncompiled, and make check passes, but it has whitespace errors.\n\n* The functions added in windowfuncs.c have no comments, so they are not easily\nunderstandable.\n\n* The regression test addition seems huge to me. Can you reduce it? You can\nuse existing tables and fewer records.\n\n* I don’t understand why this patch has to change makeBoolAConst; it\nalready makes a “bool” constant node.\n\n\nregards\n\nSurafel",
"msg_date": "Wed, 16 Sep 2020 11:19:06 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
},
{
"msg_contents": "Thank you very much.\n\nI think that Vik Fearing's patch \"Implement <null treatment> for\nwindow functions\" is much clearer, better, and has a chance to be committed.\nFor me it is not important which patch goes into PostgreSQL, because it is\na much needed feature.\n\nIn my patch there is also a feature of using negative indexes, to be\nable to reverse the order within the exact same window frame for \"FROM FIRST/FROM LAST\",\nbut I am not sure whether such non-standard usage is acceptable (it is the same\nas some array functions in programming languages); if it is acceptable, it\ncan be easily ported to Vik's patch.\n\nI am also thinking of concentrating on Vik's patch; if it has a clear\ndesign, I can withdraw my patch.\n\n\n\nOn Wed, 16.09.2020 at 11:19 Surafel Temesgen <surafel3000@gmail.com>\nwrote:\n\n>\n> On Thu, Mar 5, 2020 at 4:17 AM Krasiyan Andreev <krasiyan@gmail.com>\n> wrote:\n>\n>> I have currently suspended development of this patch, based on it's\n>> review,\n>> but I will continue development of the other Oliver Ford's work about\n>> adding support of respect/ignore nulls\n>> for lag(),lead(),first_value(),last_value() and nth_value() and from\n>> first/last for nth_value() patch,\n>> but I am not sure how to proceed with it's implementation and any\n>> feedback will be very helpful.\n>>\n>>\n> * I applied your patch on top of 58c47ccfff20b8c125903 . It applied\n> cleanly , compiled, make check pass, but it have white space errors:\n>\n> *Added functions on windowfuncs.c have no comments so it's not easily\n> understandable.\n>\n> * Regression test addition seems huge to me. Can you reduce that? You can\n> use existing tables and fewer records.\n>\n> * I don’t understand why this patch has to change makeBoolAConst? 
It\n> already make “bool” constant node\n>\n>\n> regards\n>\n> Surafel\n>\n>",
"msg_date": "Wed, 16 Sep 2020 11:35:22 +0300",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
},
{
"msg_contents": "On Wed, Sep 16, 2020 at 11:35:22AM +0300, Krasiyan Andreev wrote:\n> I am thinking also to concentrate on Vik's patch, if it has a clear design\n> point of view, clear design, I can withdraw mine patch.\n\nOkay, I have done that then.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 14:38:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] distinct aggregates within a window function WIP"
}
] |
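The frame semantics debated in the thread above can be modelled outside SQL. Below is a minimal Python sketch (illustrative names, not PostgreSQL internals) of `array_agg(DISTINCT week) OVER (ORDER BY day ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING)`, reproducing the 14-row `t_demo` result shown in the first message:

```python
# Minimal model of a distinct aggregate evaluated over a ROWS frame:
# for each row, collect the sorted distinct values inside the frame
# [i - preceding, i + following], clipped to the partition bounds.

def distinct_agg_over_frame(values, preceding=2, following=2):
    """Per-row sorted distinct values within a ROWS BETWEEN frame."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - preceding)
        hi = min(len(values), i + following + 1)  # frame end is inclusive
        out.append(sorted(set(values[lo:hi])))
    return out

# Week numbers of the 14 t_demo rows (2020-01-02 .. 2020-01-15):
weeks = [1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3]
result = distinct_agg_over_frame(weeks)
print(result[0])  # [1]     -- matches row 1 of the patched output
print(result[2])  # [1, 2]  -- frame already reaches into week 2
```

This is exactly the result the unnest/array_agg subquery workaround in the thread computes by hand, which is why the patched `DISTINCT` form is a convenience rather than new expressive power.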
[
{
"msg_contents": "Hi all,\n\nWhile reviewing some code in namespace.c, I have bumped into the\nfollowing issue introduced by 246a6c8:\ndiff --git a/src/backend/catalog/namespace.c\nb/src/backend/catalog/namespace.c\nindex c82f9fc4b5..e70243a008 100644\n--- a/src/backend/catalog/namespace.c\n+++ b/src/backend/catalog/namespace.c\n@@ -3235,8 +3235,8 @@ isTempNamespaceInUse(Oid namespaceId)\n\n backendId = GetTempNamespaceBackendId(namespaceId);\n\n- if (backendId == InvalidBackendId ||\n- backendId == MyBackendId)\n+ /* No such temporary namespace? */\n+ if (backendId == InvalidBackendId)\n return false;\n\nThe current logic of isTempNamespaceInUse() would cause a session\ncalling the routine to return always false if trying to check if its\nown temporary session is in use, but that's incorrect. It is actually\nsafe to remove the check on MyBackendId as the code would fall back on\na check equivalent to MyProc->tempNamespaceId a bit down as per the\nattached, so let's fix it.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 13 Jan 2020 18:37:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "isTempNamespaceInUse() is incorrect with its handling of MyBackendId"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 06:37:03PM +0900, Michael Paquier wrote:\n> Hi all,\n>\n> While reviewing some code in namespace.c, I have bumped into the\n> following issue introduced by 246a6c8:\n> diff --git a/src/backend/catalog/namespace.c\n> b/src/backend/catalog/namespace.c\n> index c82f9fc4b5..e70243a008 100644\n> --- a/src/backend/catalog/namespace.c\n> +++ b/src/backend/catalog/namespace.c\n> @@ -3235,8 +3235,8 @@ isTempNamespaceInUse(Oid namespaceId)\n>\n> backendId = GetTempNamespaceBackendId(namespaceId);\n>\n> - if (backendId == InvalidBackendId ||\n> - backendId == MyBackendId)\n> + /* No such temporary namespace? */\n> + if (backendId == InvalidBackendId)\n> return false;\n>\n> The current logic of isTempNamespaceInUse() would cause a session\n> calling the routine to return always false if trying to check if its\n> own temporary session is in use, but that's incorrect.\n\nIndeed.\n\n> It is actually\n> safe to remove the check on MyBackendId as the code would fall back on\n> a check equivalent to MyProc->tempNamespaceId a bit down as per the\n> attached, so let's fix it.\n>\n> Thoughts?\n\nBut that means an extraneous call to BackendIdGetProc() in that case, it seems\nbetter to avoid it if we already have the information.\n\n\n",
"msg_date": "Mon, 13 Jan 2020 13:09:01 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: isTempNamespaceInUse() is incorrect with its handling of\n MyBackendId"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 01:09:01PM +0100, Julien Rouhaud wrote:\n> But that means an extraneous call to BackendIdGetProc() in that\n> case, it seems better to avoid it if we already have the information.\n\nNote that you cannot make a direct comparison of the result from\nGetTempNamespaceBackendId() with MyBackendId, because it is critical\nto be able to handle the case of a session which has not created yet\nan object on its own temp namespace (this way any temp objects from\nprevious connections which used the same backend slot can be marked as\norphaned and discarded by autovacuum, the whole point of 246a6c8). So\nin order to get a fast-exit path we could do the following:\n- Add a routine GetTempNamespace which returns myTempNamespace (the\nresult can be InvalidOid).\n- Add an assertion at the beginning of isTempNamespaceInUse() to make\nsure that it never gets called with InvalidOid as input argument.\n- Return true as a first step of GetTempNamespaceBackendId() if\nnamespaceId matches GetTempNamespace().\n\nWhat do you think?\n--\nMichael",
"msg_date": "Mon, 13 Jan 2020 22:14:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: isTempNamespaceInUse() is incorrect with its handling of\n MyBackendId"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 10:14:52PM +0900, Michael Paquier wrote:\n> On Mon, Jan 13, 2020 at 01:09:01PM +0100, Julien Rouhaud wrote:\n> > But that means an extraneous call to BackendIdGetProc() in that\n> > case, it seems better to avoid it if we already have the information.\n>\n> Note that you cannot make a direct comparison of the result from\n> GetTempNamespaceBackendId() with MyBackendId, because it is critical\n> to be able to handle the case of a session which has not created yet\n> an object on its own temp namespace (this way any temp objects from\n> previous connections which used the same backend slot can be marked as\n> orphaned and discarded by autovacuum, the whole point of 246a6c8).\n\nOh right.\n\n> So in order to get a fast-exit path we could do the following:\n> - Add a routine GetTempNamespace which returns myTempNamespace (the\n> result can be InvalidOid).\n> - Add an assertion at the beginning of isTempNamespaceInUse() to make\n> sure that it never gets called with InvalidOid as input argument.\n> - Return true as a first step of GetTempNamespaceBackendId() if\n> namespaceId matches GetTempNamespace().\n>\n> What do you think?\n\nWell, since isTempNamespaceInUse is for now only called by autovacuum, and\nshouldn't change soon, this really feels like premature optimization, so your\noriginal approach looks better.\n\n\n",
"msg_date": "Mon, 13 Jan 2020 14:56:13 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: isTempNamespaceInUse() is incorrect with its handling of\n MyBackendId"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 02:56:13PM +0100, Julien Rouhaud wrote:\n> Well, since isTempNamespaceInUse is for now only called by autovacuum, and\n> shouldn't change soon, this really feels premature optimzation, so your\n> original approach looks better.\n\nYes, I'd rather keep this routine in its simplest shape for now. If\nthe optimization makes sense, though in most cases it won't because it\njust helps sessions detect their own temp schema faster, then let's\ndo it. I'll set this patch aside for a couple of days to let others\ncomment on it, and if there are no objections, I'll commit the fix.\nThanks for the lookup!\n--\nMichael",
"msg_date": "Tue, 14 Jan 2020 07:23:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: isTempNamespaceInUse() is incorrect with its handling of\n MyBackendId"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 07:23:19AM +0900, Michael Paquier wrote:\n> Yes, I'd rather keep this routine in its simplest shape for now. If\n> the optimization makes sense, though in most cases it won't because it\n> just helps sessions to detect faster their own temp schema, then let's\n> do it. I'll let this patch aside for a couple of days to let others\n> comment on it, and if there are no objections, I'll commit the fix.\n> Thanks for the lookup!\n\nFor the archive's sake: this has been fixed with ac5bdf6, down to 11.\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 12:51:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: isTempNamespaceInUse() is incorrect with its handling of\n MyBackendId"
}
] |
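The control-flow bug discussed in the thread above is easy to see in a simplified model. The sketch below is a Python stand-in for the logic of `isTempNamespaceInUse()` after the fix, not the actual `namespace.c` code; the parameter names mirror the thread but the data structures are ours:

```python
# Simplified model of the fixed isTempNamespaceInUse() logic.
# The bug (246a6c8): the old code returned False early when
# backend_id == my_backend_id, so a session asking about its OWN temp
# namespace always got False. The fix drops that early return and falls
# through to the tempNamespaceId comparison, which is correct for our
# own backend too.

INVALID_BACKEND_ID = -1

def is_temp_namespace_in_use(namespace_id, backend_id,
                             my_backend_id, proc_temp_namespace_id):
    """True if namespace_id is the active temp namespace of some backend."""
    if backend_id == INVALID_BACKEND_ID:   # no such temporary namespace
        return False
    # NOTE: no early `backend_id == my_backend_id` return here any more.
    if proc_temp_namespace_id is None:     # backend slot no longer active
        return False
    return proc_temp_namespace_id == namespace_id

# Backend 3 checking its own in-use temp namespace (OID 16384):
print(is_temp_namespace_in_use(16384, 3, 3, 16384))               # True
print(is_temp_namespace_in_use(16384, INVALID_BACKEND_ID, 3, None))  # False
```

The fall-through comparison also preserves the property the thread stresses: a session that has not yet created anything in its temp namespace still reports it as not in use, so autovacuum can drop orphaned temp objects left by a previous connection in the same backend slot.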
[
{
"msg_contents": "Hi,\n\nNow that I've committed [1] which allows us to use multiple extended\nstatistics per table, I'd like to start a thread discussing a couple of\nadditional improvements for extended statistics. I've considered\nstarting a separate patch for each, but that would be messy as those\nchanges will touch roughly the same places. So I've organized it into a\nsingle patch series, with the simpler parts at the beginning.\n\nThere are three main improvements:\n\n1) improve estimates of OR clauses\n\nUntil now, OR clauses pretty much ignored extended statistics, based on\nthe experience that they're less vulnerable to misestimates. But it's a\nbit weird that AND clauses are handled while OR clauses are not, so this\nextends the logic to OR clauses.\n\nStatus: I think this is fairly OK.\n\n\n2) support estimating clauses (Var op Var)\n\nCurrently, we only support clauses with a single Var, i.e. clauses like\n\n - Var op Const\n - Var IS [NOT] NULL\n - [NOT] Var\n - ...\n\nand AND/OR clauses built from those simple ones. This patch adds support\nfor clauses of the form (Var op Var), of course assuming both Vars come\nfrom the same relation.\n\nStatus: This works, but it feels a bit hackish. Needs more work.\n\n\n3) support extended statistics on expressions\n\nCurrently we only allow simple references to columns in extended stats,\nso we can do\n\n CREATE STATISTICS s ON a, b, c FROM t;\n\nbut not\n\n CREATE STATISTICS s ON (a+b), (c + 1) FROM t;\n\nThis patch aims to allow this. At the moment it's a WIP - it does most\nof the catalog changes and stats building, but with some hacks/bugs. And\nit does not even try to use those statistics during estimation.\n\nThe first question is how to extend the current pg_statistic_ext catalog\nto support expressions. I've been planning to do it the way we support\nexpressions for indexes, i.e. 
have two catalog fields - one for keys,\none for expressions.\n\nOne difference is that for statistics we don't care about order of the\nkeys, so that we don't need to bother with storing 0 keys in place for\nexpressions - we can simply assume keys are first, then expressions.\n\nAnd this is what the patch does now.\n\nI'm however wondering whether to keep this split - why not to just treat\neverything as expressions, and be done with it? A key just represents a\nVar expression, after all. And it would massively simplify a lot of code\nthat now has to care about both keys and expressions.\n\nOf course, expressions are a bit more expensive, but I wonder how\nnoticeable that would be.\n\nOpinions?\n\n\nregards\n\n[1] https://commitfest.postgresql.org/26/2320/\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 14 Jan 2020 00:00:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Additional improvements to extended statistics"
},
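Improvement (1) above rests on a simple identity: once extended statistics can estimate the AND of two clauses, the OR follows by inclusion-exclusion. A small Python sketch (our names, not planner code) of why the joint AND estimate matters for correlated columns:

```python
# Sketch of OR-clause selectivity via inclusion-exclusion:
#     sel(A OR B) = sel(A) + sel(B) - sel(A AND B)
# Using a joint (extended-statistics style) estimate for the AND term
# avoids the error of the per-column independence assumption.

def or_selectivity(sel_a, sel_b, sel_a_and_b):
    """Inclusion-exclusion, clamped to the valid [0, 1] range."""
    s = sel_a + sel_b - sel_a_and_b
    return max(0.0, min(1.0, s))

# Perfectly correlated columns, e.g. WHERE a = 1 OR b = 1 when a = b always:
sel_a, sel_b = 0.1, 0.1
print(round(or_selectivity(sel_a, sel_b, sel_a * sel_b), 4))  # 0.19 under independence
print(round(or_selectivity(sel_a, sel_b, 0.1), 4))            # 0.1 with the joint estimate
```

With fully correlated columns the true selectivity is 0.1, so the independence assumption overestimates by nearly 2x; the joint AND estimate from the extended statistics removes that error.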
{
"msg_contents": "On Tue, 14 Jan 2020 at 0:00, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> Now that I've committed [1] which allows us to use multiple extended\n> statistics per table, I'd like to start a thread discussing a couple of\n> additional improvements for extended statistics. I've considered\n> starting a separate patch for each, but that would be messy as those\n> changes will touch roughly the same places. So I've organized it into a\n> single patch series, with the simpler parts at the beginning.\n>\n> There are three main improvements:\n>\n> 1) improve estimates of OR clauses\n>\n> Until now, OR clauses pretty much ignored extended statistics, based on\n> the experience that they're less vulnerable to misestimates. But it's a\n> bit weird that AND clauses are handled while OR clauses are not, so this\n> extends the logic to OR clauses.\n>\n> Status: I think this is fairly OK.\n>\n>\n> 2) support estimating clauses (Var op Var)\n>\n> Currently, we only support clauses with a single Var, i.e. clauses like\n>\n> - Var op Const\n> - Var IS [NOT] NULL\n> - [NOT] Var\n> - ...\n>\n> and AND/OR clauses built from those simple ones. This patch adds support\n> for clauses of the form (Var op Var), of course assuming both Vars come\n> from the same relation.\n>\n> Status: This works, but it feels a bit hackish. Needs more work.\n>\n>\n> 3) support extended statistics on expressions\n>\n> Currently we only allow simple references to columns in extended stats,\n> so we can do\n>\n> CREATE STATISTICS s ON a, b, c FROM t;\n>\n> but not\n>\n> CREATE STATISTICS s ON (a+b), (c + 1) FROM t;\n>\n\n+1 for expression statistics - it can be a great feature.\n\nPavel\n\n\n> This patch aims to allow this. At the moment it's a WIP - it does most\n> of the catalog changes and stats building, but with some hacks/bugs. 
And\n> it does not even try to use those statistics during estimation.\n>\n> The first question is how to extend the current pg_statistic_ext catalog\n> to support expressions. I've been planning to do it the way we support\n> expressions for indexes, i.e. have two catalog fields - one for keys,\n> one for expressions.\n>\n> One difference is that for statistics we don't care about order of the\n> keys, so that we don't need to bother with storing 0 keys in place for\n> expressions - we can simply assume keys are first, then expressions.\n>\n> And this is what the patch does now.\n>\n> I'm however wondering whether to keep this split - why not to just treat\n> everything as expressions, and be done with it? A key just represents a\n> Var expression, after all. And it would massively simplify a lot of code\n> that now has to care about both keys and expressions.\n>\n> Of course, expressions are a bit more expensive, but I wonder how\n> noticeable that would be.\n>\n> Opinions?\n>\n>\n> ragards\n>\n> [1] https://commitfest.postgresql.org/26/2320/\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Tue, 14 Jan 2020 09:16:50 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "Hi,\n\nHere is a rebased version of this patch series. I've polished the first\ntwo parts a bit - estimation of OR clauses and (Var op Var) clauses, and\nadded a bunch of regression tests to exercise this code. It's not quite\nthere yet, but I think it's feasible to get this committed for PG13.\n\nThe last part (extended stats on expressions) is far from complete, and\nit's not feasible to get it into PG13. There's too much missing stuff.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 6 Mar 2020 01:15:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Fri, Mar 06, 2020 at 01:15:56AM +0100, Tomas Vondra wrote:\n>Hi,\n>\n>Here is a rebased version of this patch series. I've polished the first\n>two parts a bit - estimation of OR clauses and (Var op Var) clauses, and\n>added a bunch of regression tests to exercise this code. It's not quite\n>there yet, but I think it's feasible to get this committed for PG13.\n>\n>The last part (extended stats on expressions) is far from complete, and\n>it's not feasible to get it into PG13. There's too much missing stuff.\n>\n\nMeh, the last part with stats on expression is not quite right and it\nbreaks the cputube tester, so here are the first two parts only. I don't\nplan to pursue the 0003 part for PG13 anyway, as mentioned.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 6 Mar 2020 13:58:00 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Fri, 6 Mar 2020 at 12:58, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Here is a rebased version of this patch series. I've polished the first\n> two parts a bit - estimation of OR clauses and (Var op Var) clauses.\n>\n\nHi,\n\nI've been looking over the first patch (OR list support). It mostly\nlooks reasonable to me, except there's a problem with the way\nstatext_mcv_clauselist_selectivity() combines multiple stat_sel values\ninto the final result -- in the OR case, it needs to start with sel =\n0, and then apply the OR formula to factor in each new estimate. I.e.,\nthis isn't right for an OR list:\n\n /* Factor the estimate from this MCV to the oveall estimate. */\n sel *= stat_sel;\n\n(Oh and there's a typo in that comment: s/oveall/overall/).\n\nFor example, with the regression test data, this isn't estimated well:\n\n SELECT * FROM mcv_lists_multi WHERE a = 0 OR b = 0 OR c = 0 OR d = 0;\n\nSimilarly, if no extended stats can be applied it needs to return 0\nnot 1, for example this query on the test data:\n\n SELECT * FROM mcv_lists WHERE a = 1 OR a = 2 OR d IS NOT NULL;\n\nIt might also be worth adding a couple more regression test cases like these.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 8 Mar 2020 19:17:10 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
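A note on the fix Dean describes above: starting from sel = 0 and folding each per-statistics estimate in with the independent-OR formula, rather than multiplying. The combination rule can be sketched in a few lines. This is an illustrative Python sketch of the rule only, not the code in statext_mcv_clauselist_selectivity() itself; the function name `combine_or` is invented for the example.

```python
def combine_or(selectivities):
    """Fold per-statistics-object selectivities for an OR list.

    Start from 0.0 (an empty OR matches no rows) and apply
    s = s + s_new - s * s_new for each estimate -- the usual OR
    formula under an independence assumption.  Multiplying (the
    AND-style combination) would be wrong here: it shrinks the
    estimate with every clause instead of growing it.
    """
    sel = 0.0
    for s_new in selectivities:
        sel = sel + s_new - sel * s_new
    return sel
```

With this rule, `combine_or([])` is 0.0, which also matches the second point in the review: when no extended stats can be applied to an OR list, the starting estimate must be 0, not 1.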
{
"msg_contents": "On Sun, Mar 08, 2020 at 07:17:10PM +0000, Dean Rasheed wrote:\n>On Fri, 6 Mar 2020 at 12:58, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Here is a rebased version of this patch series. I've polished the first\n>> two parts a bit - estimation of OR clauses and (Var op Var) clauses.\n>>\n>\n>Hi,\n>\n>I've been looking over the first patch (OR list support). It mostly\n>looks reasonable to me, except there's a problem with the way\n>statext_mcv_clauselist_selectivity() combines multiple stat_sel values\n>into the final result -- in the OR case, it needs to start with sel =\n>0, and then apply the OR formula to factor in each new estimate. I.e.,\n>this isn't right for an OR list:\n>\n> /* Factor the estimate from this MCV to the oveall estimate. */\n> sel *= stat_sel;\n>\n>(Oh and there's a typo in that comment: s/oveall/overall/).\n>\n>For example, with the regression test data, this isn't estimated well:\n>\n> SELECT * FROM mcv_lists_multi WHERE a = 0 OR b = 0 OR c = 0 OR d = 0;\n>\n>Similarly, if no extended stats can be applied it needs to return 0\n>not 1, for example this query on the test data:\n>\n> SELECT * FROM mcv_lists WHERE a = 1 OR a = 2 OR d IS NOT NULL;\n>\n\nAh, right. Thanks for noticing this. Attached is an updated patch series\nwith parts 0002 and 0003 adding tests demonstrating the issue and then\nfixing it (both shall be merged to 0001).\n\n>It might also be worth adding a couple more regression test cases like these.\n\nAgreed, 0002 adds a couple of relevant tests.\n\nIncidentally, I've been working on improving test coverage for extended\nstats over the past few days (it has ~80% lines covered, which is neither\nbad nor great). I haven't submitted that to hackers yet, because it's\nmostly mechanical and it would interfere with the two existing threads\nabout extended stats ...\n\nSpeaking of which, would you take a look at [1]? 
I think supporting SAOP\nis fine, but I wonder if you agree with my conclusion we can't really\nsupport inclusion @> as explained in [2].\n\n[1] https://www.postgresql.org/message-id/flat/13902317.Eha0YfKkKy@pierred-pdoc\n[2] https://www.postgresql.org/message-id/20200202184134.swoqkqlqorqolrqv%40development\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 9 Mar 2020 01:01:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Mon, Mar 09, 2020 at 01:01:57AM +0100, Tomas Vondra wrote:\n>On Sun, Mar 08, 2020 at 07:17:10PM +0000, Dean Rasheed wrote:\n>>On Fri, 6 Mar 2020 at 12:58, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>>Here is a rebased version of this patch series. I've polished the first\n>>>two parts a bit - estimation of OR clauses and (Var op Var) clauses.\n>>>\n>>\n>>Hi,\n>>\n>>I've been looking over the first patch (OR list support). It mostly\n>>looks reasonable to me, except there's a problem with the way\n>>statext_mcv_clauselist_selectivity() combines multiple stat_sel values\n>>into the final result -- in the OR case, it needs to start with sel =\n>>0, and then apply the OR formula to factor in each new estimate. I.e.,\n>>this isn't right for an OR list:\n>>\n>> /* Factor the estimate from this MCV to the oveall estimate. */\n>> sel *= stat_sel;\n>>\n>>(Oh and there's a typo in that comment: s/oveall/overall/).\n>>\n>>For example, with the regression test data, this isn't estimated well:\n>>\n>> SELECT * FROM mcv_lists_multi WHERE a = 0 OR b = 0 OR c = 0 OR d = 0;\n>>\n>>Similarly, if no extended stats can be applied it needs to return 0\n>>not 1, for example this query on the test data:\n>>\n>> SELECT * FROM mcv_lists WHERE a = 1 OR a = 2 OR d IS NOT NULL;\n>>\n>\n>Ah, right. Thanks for noticing this. Attaches is an updated patch series\n>with parts 0002 and 0003 adding tests demonstrating the issue and then\n>fixing it (both shall be merged to 0001).\n>\n\nOne day I won't forget to actually attach the files ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 9 Mar 2020 01:06:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Mon, 9 Mar 2020 at 00:02, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Speaking of which, would you take a look at [1]? I think supporting SAOP\n> is fine, but I wonder if you agree with my conclusion we can't really\n> support inclusion @> as explained in [2].\n>\n\nHmm, I'm not sure. However, thinking about your example in [2] reminds\nme of a thought I had a while ago, but then forgot about --- there is\na flaw in the formula used for computing probabilities with functional\ndependencies:\n\n P(a,b) = P(a) * [f + (1-f)*P(b)]\n\nbecause it might return a value that is larger than P(b), which\nobviously should not be possible. We should amend that formula to\nprevent a result larger than P(b). The obvious way to do that would be\nto use:\n\n P(a,b) = Min(P(a) * [f + (1-f)*P(b)], P(b))\n\nbut actually I think it would be better and more principled to use:\n\n P(a,b) = f*Min(P(a),P(b)) + (1-f)*P(a)*P(b)\n\nI.e., for those rows believed to be functionally dependent, we use the\nminimum probability, and for the rows believed to be independent, we\nuse the product.\n\nI think that would solve the problem with the example you gave at the\nend of [2], but I'm not sure if it helps with the general case.\n\nRegards,\nDean\n\n\n> [1] https://www.postgresql.org/message-id/flat/13902317.Eha0YfKkKy@pierred-pdoc\n> [2] https://www.postgresql.org/message-id/20200202184134.swoqkqlqorqolrqv%40development\n\n\n",
"msg_date": "Mon, 9 Mar 2020 08:35:48 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
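To make the flaw Dean points out concrete: the two helpers below simply transcribe the current and proposed formulas from the message above, and the numbers are hypothetical selectivities (not taken from any test case in the thread). With P(a) > P(b) and a strong dependency, the current formula can exceed P(b), while the proposed one is bounded by Min(P(a), P(b)) by construction.

```python
def p_and_original(p_a, p_b, f):
    # current formula: P(a,b) = P(a) * [f + (1-f)*P(b)]
    # can exceed P(b) whenever P(a) is sufficiently larger than P(b)
    return p_a * (f + (1 - f) * p_b)

def p_and_proposed(p_a, p_b, f):
    # proposed: the dependent fraction f takes the minimum of the two
    # probabilities, the independent fraction (1-f) takes the product
    return f * min(p_a, p_b) + (1 - f) * p_a * p_b

# hypothetical numbers: P(a) = 0.06, P(b) = 0.04, degree f = 0.9
print(p_and_original(0.06, 0.04, 0.9))  # exceeds P(b) = 0.04
print(p_and_proposed(0.06, 0.04, 0.9))  # stays at or below P(b)
```

Note that both formulas agree at the extremes: at f = 0 each reduces to the independence product P(a)*P(b), and at f = 1 the proposed one reduces to Min(P(a), P(b)).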
{
"msg_contents": "On Mon, Mar 09, 2020 at 08:35:48AM +0000, Dean Rasheed wrote:\n>On Mon, 9 Mar 2020 at 00:02, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Speaking of which, would you take a look at [1]? I think supporting SAOP\n>> is fine, but I wonder if you agree with my conclusion we can't really\n>> support inclusion @> as explained in [2].\n>>\n>\n>Hmm, I'm not sure. However, thinking about your example in [2] reminds\n>me of a thought I had a while ago, but then forgot about --- there is\n>a flaw in the formula used for computing probabilities with functional\n>dependencies:\n>\n> P(a,b) = P(a) * [f + (1-f)*P(b)]\n>\n>because it might return a value that is larger that P(b), which\n>obviously should not be possible.\n\nHmmm, yeah. It took me a while to reproduce this - the trick is in\n\"inverting\" the dependency on a subset of data, e.g. like this:\n\n create table t (a int, b int);\n insert into t select mod(i,50), mod(i,25)\n from generate_series(1,5000) s(i);\n update t set a = 0 where a < 3;\n create statistics s (dependencies) on a,b from t;\n\nwhich then does this:\n\n test=# explain select * from t where a = 0;\n QUERY PLAN\n ----------------------------------------------------\n Seq Scan on t (cost=0.00..86.50 rows=300 width=8)\n Filter: (a = 0)\n (2 rows)\n\n test=# explain select * from t where b = 0;\n QUERY PLAN\n ----------------------------------------------------\n Seq Scan on t (cost=0.00..86.50 rows=200 width=8)\n Filter: (b = 0)\n (2 rows)\n\n test=# explain select * from t where a = 0 and b = 0;\n QUERY PLAN\n ----------------------------------------------------\n Seq Scan on t (cost=0.00..99.00 rows=283 width=8)\n Filter: ((a = 0) AND (b = 0))\n (2 rows)\n\nWhich I think is the issue you've described ...\n\n>We should amend that formula to prevent a result larger than P(b). 
The\n>obvious way to do that would be to use:\n>\n> P(a,b) = Min(P(a) * [f + (1-f)*P(b)], P(b))\n>\n>but actually I think it would be better and more principled to use:\n>\n> P(a,b) = f*Min(P(a),P(b)) + (1-f)*P(a)*P(b)\n>\n>I.e., for those rows believed to be functionally dependent, we use the\n>minimum probability, and for the rows believed to be independent, we\n>use the product.\n>\n\nHmmm, yeah. The trouble is that we currently don't really have both\nselectivities in dependencies_clauselist_selectivity :-(\n\nWe get both clauses, but we only compute selectivity of the \"implied\"\nclause, and we leave the other one as not estimated (possibly up to\nclauselist_selectivity).\n\nIt's also not clear to me how would this work for more than two clauses,\nthat are all functionally dependent. Like (a => b => c), for example.\nBut I haven't thought about this very much yet.\n\n>I think that would solve the problem with the example you gave at the\n>end of [2], but I'm not sure if it helps with the general case.\n>\n\nI don't follow. I think the issue with [2] is that we can't really apply\nstats about the array values to queries on individual array elements.\nCan you explain how would the proposed changes deal with this?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 9 Mar 2020 19:19:15 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Mon, 9 Mar 2020 at 18:19, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:>\n> On Mon, Mar 09, 2020 at 08:35:48AM +0000, Dean Rasheed wrote:\n> >\n> > P(a,b) = P(a) * [f + (1-f)*P(b)]\n> >\n> >because it might return a value that is larger that P(b), which\n> >obviously should not be possible.\n>\n> Hmmm, yeah. It took me a while to reproduce this - the trick is in\n> \"inverting\" the dependency on a subset of data, e.g. like this:\n>\n> create table t (a int, b int);\n> insert into t select mod(i,50), mod(i,25)\n> from generate_series(1,5000) s(i);\n> update t set a = 0 where a < 3;\n> create statistics s (dependencies) on a,b from t;\n>\n> which then does this:\n>\n> test=# explain select * from t where a = 0;\n> QUERY PLAN\n> ----------------------------------------------------\n> Seq Scan on t (cost=0.00..86.50 rows=300 width=8)\n> Filter: (a = 0)\n> (2 rows)\n>\n> test=# explain select * from t where b = 0;\n> QUERY PLAN\n> ----------------------------------------------------\n> Seq Scan on t (cost=0.00..86.50 rows=200 width=8)\n> Filter: (b = 0)\n> (2 rows)\n>\n> test=# explain select * from t where a = 0 and b = 0;\n> QUERY PLAN\n> ----------------------------------------------------\n> Seq Scan on t (cost=0.00..99.00 rows=283 width=8)\n> Filter: ((a = 0) AND (b = 0))\n> (2 rows)\n>\n> Which I think is the issue you've described ...\n>\n\nI think this is also related to the problem that functional dependency\nstats don't take into account the fact that the user clauses may not\nbe compatible with one another. 
For example:\n\nCREATE TABLE t (a int, b int);\nINSERT INTO t\n SELECT x/10,x/10 FROM (SELECT generate_series(1,x)\n FROM generate_series(1,100) g(x)) AS t(x);\nCREATE STATISTICS s (dependencies) ON a,b FROM t;\nANALYSE t;\n\nEXPLAIN SELECT * FROM t WHERE a = 10;\n\n QUERY PLAN\n--------------------------------------------------\n Seq Scan on t (cost=0.00..86.12 rows=1 width=8)\n Filter: (a = 10)\n(2 rows)\n\nEXPLAIN SELECT * FROM t WHERE b = 1;\n\n QUERY PLAN\n----------------------------------------------------\n Seq Scan on t (cost=0.00..86.12 rows=865 width=8)\n Filter: (b = 1)\n(2 rows)\n\nEXPLAIN SELECT * FROM t WHERE a = 10 AND b = 1;\n\n QUERY PLAN\n----------------------------------------------------\n Seq Scan on t (cost=0.00..98.75 rows=865 width=8)\n Filter: ((a = 10) AND (b = 1))\n(2 rows)\n\nwhereas without stats it would estimate 1 row. That kind of\nover-estimate could get very bad, so it would be good to find a way to\nfix it.\n\n\n> >We should amend that formula to prevent a result larger than P(b). The\n> >obvious way to do that would be to use:\n> >\n> > P(a,b) = Min(P(a) * [f + (1-f)*P(b)], P(b))\n> >\n> >but actually I think it would be better and more principled to use:\n> >\n> > P(a,b) = f*Min(P(a),P(b)) + (1-f)*P(a)*P(b)\n> >\n> >I.e., for those rows believed to be functionally dependent, we use the\n> >minimum probability, and for the rows believed to be independent, we\n> >use the product.\n> >\n>\n> Hmmm, yeah. 
The trouble is that we currently don't really have both\n> selectivities in dependencies_clauselist_selectivity :-(\n>\n\nI hacked on this a bit, and I think it's possible to apply dependency\nstats in a more general way (not necessarily assuming equality\nclauses), something like the attached very rough patch.\n\nThis approach guarantees that the result of combining a pair of\nselectivities with a functional dependency between them gives a\ncombined selectivity that is never greater than either individual\nselectivity.\n\nOne regression test fails, but looking at it, that's to be expected --\nthe test alters the type of a column, causing its univariate stats to\nbe dropped, so the single-column estimate is reduced, and the new code\nrefuses to give a higher estimate than the single clause's new\nestimate.\n\n\n> It's also not clear to me how would this work for more than two clauses,\n> that are all functionally dependent. Like (a => b => c), for example.\n> But I haven't thought about this very much yet.\n>\n\nI attempted to solve that by computing a chain of conditional\nprobabilities. The maths needs checking over (as I said, this is a\nvery rough patch). In particular, I think it's wrong for cases like (\na->b, a->c ), but I think it's along the right lines.\n\n\n> >I think that would solve the problem with the example you gave at the\n> >end of [2], but I'm not sure if it helps with the general case.\n> >\n>\n> I don't follow. I think the issue with [2] is that we can't really apply\n> stats about the array values to queries on individual array elements.\n> Can you explain how would the proposed changes deal with this?\n>\n\nWith this patch, the original estimate of ~900 rows in that example is\nrestored with functional dependencies, because of the way it utilises\nthe minimum selectivity of the 2 clauses.\n\nI've not fully thought this through, but I think it might allow\nfunctional dependencies to be applied to a wider range of operators.\n\nRegards,\nDean",
"msg_date": "Wed, 11 Mar 2020 14:21:02 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
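For the (a => b => c) chain question raised earlier, one way to picture the "never greater than either input" property is to fold the corrected pairwise formula left to right. This is only a toy illustration of that property, not the conditional-probability logic in Dean's rough patch; the function name and the degree values are invented for the example.

```python
def chain_and(sels, degrees):
    """Combine clause selectivities along a dependency chain.

    sels is a list of per-clause selectivities; degrees[i] is an
    assumed degree of the dependency linking clause i to clause i+1.
    Each step applies f*min(p, s) + (1-f)*p*s, and since both terms
    are bounded by min(p, s), the running estimate can never exceed
    any individual clause's selectivity.
    """
    p = sels[0]
    for s, f in zip(sels[1:], degrees):
        p = f * min(p, s) + (1 - f) * p * s
    return p
```

At f = 1 everywhere this degenerates to min(sels), and at f = 0 to the plain product -- the two extremes an AND estimate should stay between. As Dean notes above, a tree of dependencies like (a->b, a->c) likely needs different handling than a simple chain.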
{
"msg_contents": "On Mon, 9 Mar 2020 at 00:06, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Mon, Mar 09, 2020 at 01:01:57AM +0100, Tomas Vondra wrote:\n> >\n> >Attaches is an updated patch series\n> >with parts 0002 and 0003 adding tests demonstrating the issue and then\n> >fixing it (both shall be merged to 0001).\n> >\n>\n> One day I won't forget to actually attach the files ...\n>\n\n0001-0003 look reasonable to me.\n\nOne minor point -- there are now 2 code blocks that are basically the\nsame, looping over a list of clauses, calling clause_selectivity() and\nthen applying the \"s1 = s1 + s2 - s1 * s2\" formula. Perhaps they could\nbe combined into a new function (clauselist_selectivity_simple_or(),\nsay). I guess it would need to be passed the initial starting\nselectivity s1, but it ought to help reduce code duplication.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 13 Mar 2020 16:54:51 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 04:54:51PM +0000, Dean Rasheed wrote:\n>On Mon, 9 Mar 2020 at 00:06, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Mon, Mar 09, 2020 at 01:01:57AM +0100, Tomas Vondra wrote:\n>> >\n>> >Attaches is an updated patch series\n>> >with parts 0002 and 0003 adding tests demonstrating the issue and then\n>> >fixing it (both shall be merged to 0001).\n>> >\n>>\n>> One day I won't forget to actually attach the files ...\n>>\n>\n>0001-0003 look reasonable to me.\n>\n>One minor point -- there are now 2 code blocks that are basically the\n>same, looping over a list of clauses, calling clause_selectivity() and\n>then applying the \"s1 = s1 + s2 - s1 * s2\" formula. Perhaps they could\n>be combined into a new function (clauselist_selectivity_simple_or(),\n>say). I guess it would need to be passed the initial starting\n>selectivity s1, but it ought to help reduce code duplication.\n>\n\nAttached is a patch series rebased on top of the current master, after\ncommitting the ScalarArrayOpExpr enhancements. I've updated the OR patch\nto get rid of the code duplication, and barring objections I'll get it\ncommitted shortly together with the two parts improving test coverage.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 14 Mar 2020 17:56:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sat, Mar 14, 2020 at 05:56:10PM +0100, Tomas Vondra wrote:\n>\n> ...\n>\n>Attached is a patch series rebased on top of the current master, after\n>committing the ScalarArrayOpExpr enhancements. I've updated the OR patch\n>to get rid of the code duplication, and barring objections I'll get it\n>committed shortly together with the two parts improving test coverage.\n>\n\nI've pushed the two patches improving test coverage for functional\ndependencies and MCV lists, which seems mostly non-controversial. I'll\nwait a bit more with the two patches actually changing behavior (rebased\nversion attached, to keep cputube happy). \n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 15 Mar 2020 01:08:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sun, Mar 15, 2020 at 1:08 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Sat, Mar 14, 2020 at 05:56:10PM +0100, Tomas Vondra wrote:\n> >Attached is a patch series rebased on top of the current master, after\n> >committing the ScalarArrayOpExpr enhancements. I've updated the OR patch\n> >to get rid of the code duplication, and barring objections I'll get it\n> >committed shortly together with the two parts improving test coverage.\n> >\n>\n> I've pushed the two patches improving test coverage for functional\n> dependencies and MCV lists, which seems mostly non-controversial. I'll\n> wait a bit more with the two patches actually changing behavior (rebased\n> version attached, to keep cputube happy).\n\nSome comment fixes:\n\n- /* Check if the expression the right shape (one Var,\none Const) */\n- if (!examine_clause_args(expr->args, &var, NULL, NULL))\n+ /*\n+ * Check if the expression the right shape (one Var\nand one Const,\n+ * or two Vars).\n+ */\n\nCheck if the expression \"has\" or \"is of\" the right shape.\n\n- * Attempts to match the arguments to either (Var op Const) or (Const op Var),\n- * possibly with a RelabelType on top. When the expression matches this form,\n- * returns true, otherwise returns false.\n+ * Attempts to match the arguments to either (Var op Const) or (Const op Var)\n+ * or (Var op Var), possibly with a RelabelType on top. When the expression\n+ * matches this form, returns true, otherwise returns false.\n\n... match the arguments to (Var op Const), (Const op Var) or (Var op Var), ...\n\n+ /*\n+ * Both variables have to be for the same relation\n(otherwise it's\n+ * a join clause, and we don't deal with those yet.\n+ */\n\nMissing close parenthesis.\n\nStimulated by some bad plans involving JSON, I found my way to your\nWIP stats-on-expressions patch in this thread. 
Do I understand\ncorrectly that it will eventually also support single expressions,\nlike CREATE STATISTICS t_distinct_abc (ndistinct) ON\n(my_jsonb_column->>'abc') FROM t? It looks like that would solve\nproblems that otherwise require a generated column or an expression\nindex just to get ndistinct.\n\n\n",
"msg_date": "Sun, 15 Mar 2020 14:48:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sun, Mar 15, 2020 at 02:48:02PM +1300, Thomas Munro wrote:\n>On Sun, Mar 15, 2020 at 1:08 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Sat, Mar 14, 2020 at 05:56:10PM +0100, Tomas Vondra wrote:\n>> >Attached is a patch series rebased on top of the current master, after\n>> >committing the ScalarArrayOpExpr enhancements. I've updated the OR patch\n>> >to get rid of the code duplication, and barring objections I'll get it\n>> >committed shortly together with the two parts improving test coverage.\n>> >\n>>\n>> I've pushed the two patches improving test coverage for functional\n>> dependencies and MCV lists, which seems mostly non-controversial. I'll\n>> wait a bit more with the two patches actually changing behavior (rebased\n>> version attached, to keep cputube happy).\n>\n>Some comment fixes:\n>\n>- /* Check if the expression the right shape (one Var,\n>one Const) */\n>- if (!examine_clause_args(expr->args, &var, NULL, NULL))\n>+ /*\n>+ * Check if the expression the right shape (one Var\n>and one Const,\n>+ * or two Vars).\n>+ */\n>\n>Check if the expression \"has\" or \"is of\" the right shape.\n>\n>- * Attempts to match the arguments to either (Var op Const) or (Const op Var),\n>- * possibly with a RelabelType on top. When the expression matches this form,\n>- * returns true, otherwise returns false.\n>+ * Attempts to match the arguments to either (Var op Const) or (Const op Var)\n>+ * or (Var op Var), possibly with a RelabelType on top. When the expression\n>+ * matches this form, returns true, otherwise returns false.\n>\n>... 
match the arguments to (Var op Const), (Const op Var) or (Var op Var), ...\n>\n>+ /*\n>+ * Both variables have to be for the same relation\n>(otherwise it's\n>+ * a join clause, and we don't deal with those yet.\n>+ */\n>\n>Missing close parenthesis.\n>\n\nThanks, I'll get this fixed.\n\n>Stimulated by some bad plans involving JSON, I found my way to your\n>WIP stats-on-expressions patch in this thread. Do I understand\n>correctly that it will eventually also support single expressions,\n>like CREATE STATISTICS t_distinct_abc (ndistinct) ON\n>(my_jsonb_column->>'abc') FROM t? It looks like that would solve\n>problems that otherwise require a generated column or an expression\n>index just to get ndistinct.\n\nYes, I think that's generally the plan. I was also thinking about\ninventing some sort of special JSON statistics (e.g. extracting paths\nfrom the JSONB and computing frequencies, or something like that). But\nstats on expressions are one of the things I'd like to do in PG14.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 15 Mar 2020 03:23:12 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sun, 15 Mar 2020 at 00:08, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sat, Mar 14, 2020 at 05:56:10PM +0100, Tomas Vondra wrote:\n> >\n> >Attached is a patch series rebased on top of the current master, after\n> >committing the ScalarArrayOpExpr enhancements. I've updated the OR patch\n> >to get rid of the code duplication, and barring objections I'll get it\n> >committed shortly together with the two parts improving test coverage.\n>\n> I've pushed the two patches improving test coverage for functional\n> dependencies and MCV lists, which seems mostly non-controversial. I'll\n> wait a bit more with the two patches actually changing behavior (rebased\n> version attached, to keep cputube happy).\n>\n\nPatch 0001 looks to be mostly ready. Just a couple of final comments:\n\n+ if (is_or)\n+ simple_sel = clauselist_selectivity_simple_or(root,\nstat_clauses, varRelid,\n+ jointype,\nsjinfo, NULL, 1.0);\n+ else\n\nSurely that should be passing 0.0 as the final argument, otherwise it\nwill always just return simple_sel = 1.0.\n\n\n+ *\n+ * XXX We can't multiply with current value, because for OR clauses\n+ * we start with 0.0, so we simply assign to s1 directly.\n+ */\n+ s = statext_clauselist_selectivity(root, clauses, varRelid,\n+ jointype, sjinfo, rel,\n+ &estimatedclauses, true);\n\nThat final part of the comment is no longer relevant (variable s1 no\nlonger exists). Probably it could now just be deleted, since I think\nthere are sufficient comments elsewhere to explain what's going on.\n\nOtherwise it looks good, and I think this will lead to some very\nworthwhile improvements.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 15 Mar 2020 12:37:37 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sun, Mar 15, 2020 at 12:37:37PM +0000, Dean Rasheed wrote:\n>On Sun, 15 Mar 2020 at 00:08, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sat, Mar 14, 2020 at 05:56:10PM +0100, Tomas Vondra wrote:\n>> >\n>> >Attached is a patch series rebased on top of the current master, after\n>> >committing the ScalarArrayOpExpr enhancements. I've updated the OR patch\n>> >to get rid of the code duplication, and barring objections I'll get it\n>> >committed shortly together with the two parts improving test coverage.\n>>\n>> I've pushed the two patches improving test coverage for functional\n>> dependencies and MCV lists, which seems mostly non-controversial. I'll\n>> wait a bit more with the two patches actually changing behavior (rebased\n>> version attached, to keep cputube happy).\n>>\n>\n>Patch 0001 looks to be mostly ready. Just a couple of final comments:\n>\n>+ if (is_or)\n>+ simple_sel = clauselist_selectivity_simple_or(root,\n>stat_clauses, varRelid,\n>+ jointype,\n>sjinfo, NULL, 1.0);\n>+ else\n>\n>Surely that should be passing 0.0 as the final argument, otherwise it\n>will always just return simple_sel = 1.0.\n>\n>\n>+ *\n>+ * XXX We can't multiply with current value, because for OR clauses\n>+ * we start with 0.0, so we simply assign to s1 directly.\n>+ */\n>+ s = statext_clauselist_selectivity(root, clauses, varRelid,\n>+ jointype, sjinfo, rel,\n>+ &estimatedclauses, true);\n>\n>That final part of the comment is no longer relevant (variable s1 no\n>longer exists). Probably it could now just be deleted, since I think\n>there are sufficient comments elsewhere to explain what's going on.\n>\n>Otherwise it looks good, and I think this will lead to some very\n>worthwhile improvements.\n>\n\nAttached is a rebased patch series, addressing both those issues.\n\nI've been wondering why none of the regression tests failed because of\nthe 0.0 vs. 
1.0 issue, but I think the explanation is pretty simple - to\nmake the tests stable, all the MCV lists we use are \"perfect\" i.e. they\nrepresent 100% of the data. But this selectivity is used to compute\nselectivity only for the part not represented by the MCV list, i.e. it's\nnot really used. I suppose we could add a test that would use a larger\nMCV item, but I'm afraid that'd be inherently unstable :-(\n\nAnother thing I was thinking about is the changes to the API. We need to\npass information about whether the clauses are connected by AND or OR to a\nnumber of places, and 0001 does that in two ways. For some functions it\nadds a new parameter (called is_or), and for other functions it creates\na new copy of a function. So for example\n\n - statext_mcv_clauselist_selectivity\n - statext_clauselist_selectivity\n\ngot the new flag, while e.g. clauselist_selectivity gets a new \"copy\"\nsibling called clauselist_selectivity_or.\n\nThere were two reasons for not using a flag. First, clauselist_selectivity\nand similar functions have to do very different stuff for these two\ncases, so it'd be just one huge if/else block. Second, minimizing\nbreakage of third-party code - pretty much all the extensions I've seen\nonly work with AND clauses, and call clauselist_selectivity. Adding a\nflag would break that code. (Also, there's a bit of laziness, because\nthis was the simplest thing to do during development.)\n\nBut I wonder if that's sufficient reason - maybe we should just add the\nflag in all cases. It might break some code, but the fix is trivial (add\na false there).\n\nOpinions?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 18 Mar 2020 20:31:28 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Wed, 18 Mar 2020 at 19:31, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Attached is a rebased patch series, addressing both those issues.\n>\n> I've been wondering why none of the regression tests failed because of\n> the 0.0 vs. 1.0 issue, but I think the explanation is pretty simple - to\n> make the tests stable, all the MCV lists we use are \"perfect\" i.e. it\n> represents 100% of the data. But this selectivity is used to compute\n> selectivity only for the part not represented by the MCV list, i.e. it's\n> not really used. I suppose we could add a test that would use larger\n> MCV item, but I'm afraid that'd be inherently unstable :-(\n>\n\nI think it ought to be possible to write stable tests for this code\nbranch -- I think all you need is for the number of rows to remain\nsmall, so that the stats sample every row and are predictable, while\nthe MCVs don't cover all values, which can be achieved with a mix of\nsome common values, and some that only occur once.\n\nI haven't tried it, but it seems like it would be possible in principle.\n\n> Another thing I was thinking about is the changes to the API. We need to\n> pass information whether the clauses are connected by AND or OR to a\n> number of places, and 0001 does that in two ways. For some functions it\n> adds a new parameter (called is_or), and for other functiosn it creates\n> a new copy of a function. So for example\n>\n> - statext_mcv_clauselist_selectivity\n> - statext_clauselist_selectivity\n>\n> got the new flag, while e.g. clauselist_selectivity gets a new \"copy\"\n> sibling called clauselist_selectivity_or.\n>\n> There were two reasons for not using flag. First, clauselist_selectivity\n> and similar functions have to do very different stuff for these two\n> cases, so it'd be just one huge if/else block. Second, minimizing\n> breakage of third-party code - pretty much all the extensions I've seen\n> only work with AND clauses, and call clauselist_selectivity. 
Adding a\n> flag would break that code. (Also, there's a bit of laziness, because\n> this was the simplest thing to do during development.)\n>\n> But I wonder if that's sufficient reason - maybe we should just add the\n> flag in all cases. It might break some code, but the fix is trivial (add\n> a false there).\n>\n> Opinions?\n>\n\n-1\n\nI think of clause_selectivity() and clauselist_selectivity() as the\npublic API that everyone is using, whilst the functions that support\nlists of clauses to be combined using OR are internal (to the planner)\nimplementation details. I think callers of public API tend to either\nhave implicitly AND'ed list of clauses, or a single OR clause.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 19 Mar 2020 19:08:07 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 07:08:07PM +0000, Dean Rasheed wrote:\n>On Wed, 18 Mar 2020 at 19:31, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Attached is a rebased patch series, addressing both those issues.\n>>\n>> I've been wondering why none of the regression tests failed because of\n>> the 0.0 vs. 1.0 issue, but I think the explanation is pretty simple - to\n>> make the tests stable, all the MCV lists we use are \"perfect\" i.e. it\n>> represents 100% of the data. But this selectivity is used to compute\n>> selectivity only for the part not represented by the MCV list, i.e. it's\n>> not really used. I suppose we could add a test that would use larger\n>> MCV item, but I'm afraid that'd be inherently unstable :-(\n>>\n>\n>I think it ought to be possible to write stable tests for this code\n>branch -- I think all you need is for the number of rows to remain\n>small, so that the stats sample every row and are predictable, while\n>the MCVs don't cover all values, which can be achieved with a mix of\n>some common values, and some that only occur once.\n>\n>I haven't tried it, but it seems like it would be possible in principle.\n>\n\nAh, right. Yeah, I think that should work. I thought there would be some\nvolatility due to groups randomly not making it into the MCV list, but\nyou're right it's possible to construct the data in a way to make it\nperfectly deterministic. So that's what I've done in the attached patch.\n\n\n>> Another thing I was thinking about is the changes to the API. We need to\n>> pass information whether the clauses are connected by AND or OR to a\n>> number of places, and 0001 does that in two ways. For some functions it\n>> adds a new parameter (called is_or), and for other functiosn it creates\n>> a new copy of a function. So for example\n>>\n>> - statext_mcv_clauselist_selectivity\n>> - statext_clauselist_selectivity\n>>\n>> got the new flag, while e.g. 
clauselist_selectivity gets a new \"copy\"\n>> sibling called clauselist_selectivity_or.\n>>\n>> There were two reasons for not using flag. First, clauselist_selectivity\n>> and similar functions have to do very different stuff for these two\n>> cases, so it'd be just one huge if/else block. Second, minimizing\n>> breakage of third-party code - pretty much all the extensions I've seen\n>> only work with AND clauses, and call clauselist_selectivity. Adding a\n>> flag would break that code. (Also, there's a bit of laziness, because\n>> this was the simplest thing to do during development.)\n>>\n>> But I wonder if that's sufficient reason - maybe we should just add the\n>> flag in all cases. It might break some code, but the fix is trivial (add\n>> a false there).\n>>\n>> Opinions?\n>>\n>\n>-1\n>\n>I think of clause_selectivity() and clauselist_selectivity() as the\n>public API that everyone is using, whilst the functions that support\n>lists of clauses to be combined using OR are internal (to the planner)\n>implementation details. I think callers of public API tend to either\n>have implicitly AND'ed list of clauses, or a single OR clause.\n>\n\nOK, thanks. That was mostly my reasoning too - not wanting to cause\nunnecessary breakage. And yes, I agree most people just call\nclauselist_selectivity with a list of clauses combined using AND.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 21 Mar 2020 22:59:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sat, 21 Mar 2020 at 21:59, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Ah, right. Yeah, I think that should work. I thought there would be some\n> volatility due to groups randomly not making it into the MCV list, but\n> you're right it's possible to construct the data in a way to make it\n> perfectly deterministic. So that's what I've done in the attached patch.\n>\n\nLooking through those new tests, another issue occurred to me -- under\nsome circumstances this patch can lead to extended stats being applied\ntwice to the same clause, which is not great, because it involves\nquite a lot of extra work, and also because it can lead to\noverestimates because of the way that MCV stats are applied as a delta\ncorrection to simple_sel.\n\nThe way this comes about is as follows -- if we start with an OR\nclause, that will be passed as a single-item implicitly AND'ed list to\nclauselist_selectivity(), and from there to\nstatext_clauselist_selectivity() and then to\nstatext_mcv_clauselist_selectivity(). This will call\nclauselist_selectivity_simple() to get simple_sel, before calling\nmcv_clauselist_selectivity(), which will recursively compute all the\nMCV corrections. However, the call to clauselist_selectivity_simple()\nwill call clause_selectivity() for each clause (just a single OR\nclause in this example), which will now call\nclauselist_selectivity_or(), which will go back into\nstatext_clauselist_selectivity() with \"is_or = true\", which will apply\nthe MCV corrections to the same set of clauses that the outer call was\nabout to process.\n\nI'm not sure what's the best way to resolve that. Perhaps\nstatext_mcv_clauselist_selectivity() / statext_is_compatible_clause()\nshould ignore OR clauses from an AND-list, on the basis that they will\nget processed recursively later. Or perhaps estimatedclauses can\nsomehow be used to prevent this, though I'm not sure exactly how that\nwould work.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 23 Mar 2020 08:21:42 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 08:21:42AM +0000, Dean Rasheed wrote:\n>On Sat, 21 Mar 2020 at 21:59, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Ah, right. Yeah, I think that should work. I thought there would be some\n>> volatility due to groups randomly not making it into the MCV list, but\n>> you're right it's possible to construct the data in a way to make it\n>> perfectly deterministic. So that's what I've done in the attached patch.\n>>\n>\n>Looking through those new tests, another issue occurred to me -- under\n>some circumstances this patch can lead to extended stats being applied\n>twice to the same clause, which is not great, because it involves\n>quite a lot of extra work, and also because it can lead to\n>overestimates because of the way that MCV stats are applied as a delta\n>correction to simple_sel.\n>\n>The way this comes about is as follows -- if we start with an OR\n>clause, that will be passed as a single-item implicitly AND'ed list to\n>clauselist_selectivity(), and from there to\n>statext_clauselist_selectivity() and then to\n>statext_mcv_clauselist_selectivity(). This will call\n>clauselist_selectivity_simple() to get simple_sel, before calling\n>mcv_clauselist_selectivity(), which will recursively compute all the\n>MCV corrections. However, the call to clauselist_selectivity_simple()\n>will call clause_selectivity() for each clause (just a single OR\n>clause in this example), which will now call\n>clauselist_selectivity_or(), which will go back into\n>statext_clauselist_selectivity() with \"is_or = true\", which will apply\n>the MCV corrections to the same set of clauses that the outer call was\n>about to process.\n>\n\nHmmm. So let's consider a simple OR clause with two arguments, both\ncovered by single statistics object. 
Something like this:\n\n CREATE TABLE t (a int, b int);\n INSERT INTO t SELECT mod(i, 10), mod(i, 10)\n FROM generate_series(1,100000);\n CREATE STATISTICS s (mcv) ON a,b FROM t;\n\n SELECT * FROM t WHERE a = 0 OR b = 0;\n\nWhich is estimated correctly, but the call graph looks like this:\n\n clauselist_selectivity\n statext_clauselist_selectivity\n statext_mcv_clauselist_selectivity <---\n clauselist_selectivity_simple\n clause_selectivity\n clauselist_selectivity_or\n statext_clauselist_selectivity\n statext_mcv_clauselist_selectivity <---\n clauselist_selectivity_simple_or\n clause_selectivity\n clause_selectivity\n mcv_clauselist_selectivity\n clauselist_selectivity_simple_or\n mcv_clauselist_selectivity\n clauselist_selectivity_simple\n (already estimated)\n\nIIUC the problem you have in mind is that we end up calling\nstatext_mcv_clauselist_selectivity twice, once for the top-level AND\nclause with a single argument, and then recursively for the expanded OR\nclause. Indeed, that seems to be applying the correction twice.\n\n\n>I'm not sure what's the best way to resolve that. Perhaps\n>statext_mcv_clauselist_selectivity() / statext_is_compatible_clause()\n>should ignore OR clauses from an AND-list, on the basis that they will\n>get processed recursively later. Or perhaps estimatedclauses can\n>somehow be used to prevent this, though I'm not sure exactly how that\n>would work.\n\nI don't know. I feel uneasy about just ignoring some of the clauses,\nbecause what happens for complex clauses, where the OR is not directly\nin the AND clause, but is negated or something like that?\n\nIsn't it the case that clauselist_selectivity_simple (and the OR\nvariant) should ignore extended stats entirely? That is, we'd need to\nadd a flag (or _simple variant) to clause_selectivity, so that it calls\nclauselist_selectivity_simple_or. 
So the calls would look like this:\n\n clauselist_selectivity\n statext_clauselist_selectivity\n statext_mcv_clauselist_selectivity\n clauselist_selectivity_simple <--- disable extended stats\n clause_selectivity\n clauselist_selectivity_simple_or\n clause_selectivity\n clause_selectivity\n mcv_clauselist_selectivity\n clauselist_selectivity_simple\n already estimated\n\nI've only quickly hacked clause_selectivity, but it does not seems very\ninvasive (of course, it means disruption to clause_selectivity callers,\nbut I suppose most call clauselist_selectivity).\n\nBTW do you have an example where this would actually cause an issue?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Mar 2020 02:08:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Tue, 24 Mar 2020 at 01:08, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Hmmm. So let's consider a simple OR clause with two arguments, both\n> covered by single statistics object. Something like this:\n>\n> CREATE TABLE t (a int, b int);\n> INSERT INTO t SELECT mod(i, 10), mod(i, 10)\n> FROM generate_series(1,100000);\n> CREATE STATISTICS s (mcv) ON a,b FROM t;\n>\n> SELECT * FROM t WHERE a = 0 OR b = 0;\n>\n> Which is estimated correctly...\n>\n\nHmm, the reason that is estimated correctly is that the MCV values\ncover all values in the table, so mcv_totalsel is 1 (or pretty close\nto 1), and other_sel is capped to nearly 0, and so the result is\nbasically just mcv_sel -- i.e., in this case the MCV estimates are\nreturned more-or-less as-is, rather than being applied as a\ncorrection, and so the result is independent of how many times\nextended stats are applied.\n\nThe more interesting (and probably more realistic) case is where the\nMCV values do not cover all the values in the table, as in the new\nmcv_lists_partial examples in the regression tests, for example this\ntest case, which produces a significant overestimate:\n\n SELECT * FROM mcv_lists_partial WHERE a = 0 OR b = 0 OR c = 0\n\nalthough actually, I think there's another reason for that (in\naddition to the extended stats correction being applied twice) -- the\nway the MCV code makes use of base selectivity doesn't seem really\nappropriate for OR clauses, because the way base_frequency is computed\nis really based on the assumption that every column would be matched.\nI'm not sure that there's any easy answer to that though. It feels like\nwhat's needed when computing the match bitmaps in mcv.c, is to produce\na bitmap (would it fit in 1 byte?) per value, to indicate which\ncolumns match, and base_frequency values per column. 
That would be\nsignificantly more work though, so almost certainly isn't feasible for\nPG13.\n\n> IIUC the problem you have in mind is that we end up calling\n> statext_mcv_clauselist_selectivity twice, once for the top-level AND\n> clause with a single argument, and then recursively for the expanded OR\n> clause. Indeed, that seems to be applying the correction twice.\n>\n>\n> >I'm not sure what's the best way to resolve that. Perhaps\n> >statext_mcv_clauselist_selectivity() / statext_is_compatible_clause()\n> >should ignore OR clauses from an AND-list, on the basis that they will\n> >get processed recursively later. Or perhaps estimatedclauses can\n> >somehow be used to prevent this, though I'm not sure exactly how that\n> >would work.\n>\n> I don't know. I feel uneasy about just ignoring some of the clauses,\n> because what happens for complex clauses, where the OR is not directly\n> in the AND clause, but is negated or something like that?\n>\n> Isn't it the case that clauselist_selectivity_simple (and the OR\n> variant) should ignore extended stats entirely? That is, we'd need to\n> add a flag (or _simple variant) to clause_selectivity, so that it calls\n> causelist_selectivity_simple_or. 
So the calls would look like this:\n>\n> clauselist_selectivity\n> statext_clauselist_selectivity\n> statext_mcv_clauselist_selectivity\n> clauselist_selectivity_simple <--- disable extended stats\n> clause_selectivity\n> clauselist_selectivity_simple_or\n> clause_selectivity\n> clause_selectivity\n> mcv_clauselist_selectivity\n> clauselist_selectivity_simple\n> already estimated\n>\n> I've only quickly hacked clause_selectivity, but it does not seems very\n> invasive (of course, it means disruption to clause_selectivity callers,\n> but I suppose most call clauselist_selectivity).\n>\n\nSounds like a reasonable approach, but I think it would be better to\npreserve the current public API by having clauselist_selectivity()\nbecome a thin wrapper around a new function that optionally applies\nextended stats.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 24 Mar 2020 13:20:07 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 01:20:07PM +0000, Dean Rasheed wrote:\n>On Tue, 24 Mar 2020 at 01:08, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Hmmm. So let's consider a simple OR clause with two arguments, both\n>> covered by single statistics object. Something like this:\n>>\n>> CREATE TABLE t (a int, b int);\n>> INSERT INTO t SELECT mod(i, 10), mod(i, 10)\n>> FROM generate_series(1,100000);\n>> CREATE STATISTICS s (mcv) ON a,b FROM t;\n>>\n>> SELECT * FROM t WHERE a = 0 OR b = 0;\n>>\n>> Which is estimated correctly...\n>>\n>\n>Hmm, the reason that is estimated correctly is that the MCV values\n>cover all values in the table, so mcv_totalsel is 1 (or pretty close\n>to 1), and other_sel is capped to nearly 0, and so the result is\n>basically just mcv_sel -- i.e., in this case the MCV estimates are\n>returned more-or-less as-is, rather than being applied as a\n>correction, and so the result is independent of how many times\n>extended stats are applied.\n>\n>The more interesting (and probably more realistic) case is where the\n>MCV values do not cover the all values in the table, as in the new\n>mcv_lists_partial examples in the regression tests, for example this\n>test case, which produces a significant overestimate:\n>\n> SELECT * FROM mcv_lists_partial WHERE a = 0 OR b = 0 OR c = 0\n>\n>although actually, I think there's another reason for that (in\n>addition to the extended stats correction being applied twice) -- the\n>way the MCV code makes use of base selectivity doesn't seem really\n>appropriate for OR clauses, because the way base_frequency is computed\n>is really based on the assumption that every column would be matched.\n>I'm not sure that there's any easy answer to that though. I feels like\n>what's needed when computing the match bitmaps in mcv.c, is to produce\n>a bitmap (would it fit in 1 byte?) per value, to indicate which\n>columns match, and base_frequency values per column. 
That would be\n>significantly more work though, so almost certainly isn't feasible for\n>PG13.\n>\n\nGood point. I haven't thought about the base frequencies. I think 1 byte\nshould be enough, as we limit the number of columns to 8.\n\n>> IIUC the problem you have in mind is that we end up calling\n>> statext_mcv_clauselist_selectivity twice, once for the top-level AND\n>> clause with a single argument, and then recursively for the expanded OR\n>> clause. Indeed, that seems to be applying the correction twice.\n>>\n>>\n>> >I'm not sure what's the best way to resolve that. Perhaps\n>> >statext_mcv_clauselist_selectivity() / statext_is_compatible_clause()\n>> >should ignore OR clauses from an AND-list, on the basis that they will\n>> >get processed recursively later. Or perhaps estimatedclauses can\n>> >somehow be used to prevent this, though I'm not sure exactly how that\n>> >would work.\n>>\n>> I don't know. I feel uneasy about just ignoring some of the clauses,\n>> because what happens for complex clauses, where the OR is not directly\n>> in the AND clause, but is negated or something like that?\n>>\n>> Isn't it the case that clauselist_selectivity_simple (and the OR\n>> variant) should ignore extended stats entirely? That is, we'd need to\n>> add a flag (or _simple variant) to clause_selectivity, so that it calls\n>> causelist_selectivity_simple_or. 
So the calls would look like this:\n>>\n>> clauselist_selectivity\n>> statext_clauselist_selectivity\n>> statext_mcv_clauselist_selectivity\n>> clauselist_selectivity_simple <--- disable extended stats\n>> clause_selectivity\n>> clauselist_selectivity_simple_or\n>> clause_selectivity\n>> clause_selectivity\n>> mcv_clauselist_selectivity\n>> clauselist_selectivity_simple\n>> already estimated\n>>\n>> I've only quickly hacked clause_selectivity, but it does not seems very\n>> invasive (of course, it means disruption to clause_selectivity callers,\n>> but I suppose most call clauselist_selectivity).\n>>\n>\n>Sounds like a reasonable approach, but I think it would be better to\n>preserve the current public API by having clauselist_selectivity()\n>become a thin wrapper around a new function that optionally applies\n>extended stats.\n>\n\nOK, makes sense. I'll take a stab at it.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Mar 2020 15:33:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sun, Mar 15, 2020 at 3:23 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Sun, Mar 15, 2020 at 02:48:02PM +1300, Thomas Munro wrote:\n> >Stimulated by some bad plans involving JSON, I found my way to your\n> >WIP stats-on-expressions patch in this thread. Do I understand\n> >correctly that it will eventually also support single expressions,\n> >like CREATE STATISTICS t_distinct_abc (ndistinct) ON\n> >(my_jsonb_column->>'abc') FROM t? It looks like that would solve\n> >problems that otherwise require a generated column or an expression\n> >index just to get ndistinct.\n>\n> Yes, I think that's generally the plan. I was also thinking about\n> inventing some sort of special JSON statistics (e.g. extracting paths\n> from the JSONB and computing frequencies, or something like that). But\n> stats on expressions are one of the things I'd like to do in PG14.\n\nInteresting idea. If you had simple single-expression statistics, I\nsuppose a cave-person version of this would be to write a\nscript/stored procedure that extracts the distinct set of JSON paths\nand does CREATE STATISTICS for expressions to access each path. That\nsaid, I suspect that in many cases there's a small set of a paths and\na human designer would know what to do. I didn't manage to try your\nWIP stats-on-expressions patch due to bitrot and unfinished parts, but\nI am hoping it just needs to remove the \"if (numcols < 2)\nereport(ERROR ...)\" check to get a very very useful thing.\n\n\n",
"msg_date": "Wed, 25 Mar 2020 10:05:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "> On 24 Mar 2020, at 15:33, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> \n> On Tue, Mar 24, 2020 at 01:20:07PM +0000, Dean Rasheed wrote:\n\n>> Sounds like a reasonable approach, but I think it would be better to\n>> preserve the current public API by having clauselist_selectivity()\n>> become a thin wrapper around a new function that optionally applies\n>> extended stats.\n>> \n> \n> OK, makes sense. I'll take a stab at it.\n\nHave you had time to hack on this? The proposed patch no longer applies, so\nI've marked the entry Waiting on Author.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 1 Jul 2020 13:19:40 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Wed, Jul 01, 2020 at 01:19:40PM +0200, Daniel Gustafsson wrote:\n>> On 24 Mar 2020, at 15:33, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Tue, Mar 24, 2020 at 01:20:07PM +0000, Dean Rasheed wrote:\n>\n>>> Sounds like a reasonable approach, but I think it would be better to\n>>> preserve the current public API by having clauselist_selectivity()\n>>> become a thin wrapper around a new function that optionally applies\n>>> extended stats.\n>>>\n>>\n>> OK, makes sense. I'll take a stab at it.\n>\n>Have you had time to hack on this? The proposed patch no longer applies, so\n>I've marked the entry Waiting on Author.\n\nYep, here's a rebased patch. This does not include the changes we've\ndiscussed with Dean in March, but I plan to address that soon.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 3 Jul 2020 03:10:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "Hi,\n\nHere is an improved WIP version of the patch series, modified to address\nthe issue with repeatedly applying the extended statistics, as discussed\nwith Dean in this thread. It's a bit rough and not committable, but I\nneed some feedback so I'm posting it in this state.\n\n(Note: The WIP patch is expected to fail regression tests. A couple\nstats_ext regression tests fail due to changed estimate - I've left it\nlike that to make the changes more obvious for now.)\n\nEarlier in this thread I used this example:\n\n\n CREATE TABLE t (a int, b int);\n INSERT INTO t SELECT mod(i, 10), mod(i, 10)\n FROM generate_series(1,100000) s(i);\n CREATE STATISTICS s (mcv) ON a,b FROM t;\n ANALYZE t;\n\n EXPLAIN SELECT * FROM t WHERE a = 0 OR b = 0;\n\nwhich had this call graph with two statext_mcv_clauselist_selectivity\ncalls (which was kinda the issue):\n\n clauselist_selectivity\n statext_clauselist_selectivity\n statext_mcv_clauselist_selectivity <--- (1)\n clauselist_selectivity_simple\n clause_selectivity\n clauselist_selectivity_or\n statext_clauselist_selectivity\n statext_mcv_clauselist_selectivity <--- (2)\n clauselist_selectivity_simple_or\n clause_selectivity\n clause_selectivity\n mcv_clauselist_selectivity\n clauselist_selectivity_simple_or\n mcv_clauselist_selectivity\n clauselist_selectivity_simple\n (already estimated)\n\nwith the patches applied, the call looks like this:\n\n clauselist_selectivity_internal (use_extended_stats=1)\n statext_clauselist_selectivity\n statext_mcv_clauselist_selectivity (is_or=0)\n clauselist_selectivity_simple\n clause_selectivity_internal (use_extended_stats=0)\n clauselist_selectivity_or (use_extended_stats=0)\n clauselist_selectivity_simple_or\n clause_selectivity_internal (use_extended_stats=0)\n clause_selectivity_internal (use_extended_stats=0)\n mcv_clauselist_selectivity (is_or=0)\n clauselist_selectivity_simple\n\nThe nested call is removed, which I think addresses the issue. 
As for\nthe effect on estimates, there's a couple regression tests where the\nestimates change - not much though, an example is:\n\n SELECT * FROM check_estimated_rows('SELECT * FROM mcv_lists_partial\nWHERE a = 0 OR b = 0 OR c = 10');\n estimated | actual\n -----------+--------\n- 412 | 104\n+ 308 | 104\n (1 row)\n\nThis is on top of 0001, though. Interestingly enough, this ends up with\nthe same estimate as current master, but I consider that a coincidence.\n\n\nAs for the patches:\n\n0001 is the original patch improving estimates of OR clauses\n\n0002 adds thin wrappers for clause[list]_selectivity, with \"internal\"\nfunctions allowing to specify whether to keep considering extended stats\n\n0003 does the same for the \"simple\" functions\n\n\nI've kept it like this to demonstrate that 0002 is not sufficient. In my\nresponse from March 24 I wrote this:\n\n> Isn't it the case that clauselist_selectivity_simple (and the OR \n> variant) should ignore extended stats entirely? That is, we'd need\n> to add a flag (or _simple variant) to clause_selectivity, so that it\n> calls causelist_selectivity_simple_or.\nBut that's actually wrong, as 0002 shows (as it breaks a couple of\nregression tests), because of the way we handle OR clauses. At the top\nlevel, an OR-clause is actually just a single clause and it may get\npassed to clauselist_selectivity_simple. So entirely disabling extended\nstats for the \"simple\" functions would also mean disabling extended\nstats for a large number of OR clauses. Which is clearly wrong.\n\nSo 0003 addresses that, by adding a flag to the two \"simple\" functions.\nUltimately, this should probably do the same thing as 0002 and add thin\nwrappers, because the existing functions are part of the public API.\n\nDean, does this address the issue you had in mind? 
Can you come up with\nan example of that issue in the form of a regression test or something?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 12 Nov 2020 15:18:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Thu, 12 Nov 2020 at 14:18, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Here is an improved WIP version of the patch series, modified to address\n> the issue with repeatedly applying the extended statistics, as discussed\n> with Dean in this thread. It's a bit rough and not committable, but I\n> need some feedback so I'm posting it in this state.\n>\n\nCool. I haven't forgotten that I promised to look at this.\n\n> Dean, does this address the issue you had in mind? Can you come up with\n> an example of that issue in the form of a regression test or something?\n>\n\nI'm quite busy with my day job at the moment, but I expect to have\ntime to review this next week.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 12 Nov 2020 14:26:29 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Thu, 12 Nov 2020 at 14:18, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Here is an improved WIP version of the patch series, modified to address\n> the issue with repeatedly applying the extended statistics, as discussed\n> with Dean in this thread. It's a bit rough and not committable, but I\n> need some feedback so I'm posting it in this state.\n\nAs it stands, it doesn't compile if 0003 is applied, because it missed\none of the callers of clauselist_selectivity_simple(), but that's\neasily fixed.\n\n> 0001 is the original patch improving estimates of OR clauses\n>\n> 0002 adds thin wrappers for clause[list]_selectivity, with \"internal\"\n> functions allowing to specify whether to keep considering extended stats\n>\n> 0003 does the same for the \"simple\" functions\n>\n>\n> I've kept it like this to demonstrate that 0002 is not sufficient. In my\n> response from March 24 I wrote this:\n>\n> > Isn't it the case that clauselist_selectivity_simple (and the OR\n> > variant) should ignore extended stats entirely? That is, we'd need\n> > to add a flag (or _simple variant) to clause_selectivity, so that it\n> > calls causelist_selectivity_simple_or.\n> But that's actually wrong, as 0002 shows (as it breaks a couple of\n> regression tests), because of the way we handle OR clauses. At the top\n> level, an OR-clause is actually just a single clause and it may get\n> passed to clauselist_selectivity_simple. So entirely disabling extended\n> stats for the \"simple\" functions would also mean disabling extended\n> stats for a large number of OR clauses. Which is clearly wrong.\n>\n> So 0003 addresses that, by adding a flag to the two \"simple\" functions.\n> Ultimately, this should probably do the same thing as 0002 and add thin\n> wrappers, because the existing functions are part of the public API.\n\nI agree that, taken together, these patches fix the\nmultiple-extended-stats-evaluation issue. 
However:\n\nI think this has ended up with too many variants of these functions,\nsince we now have \"_internal\" and \"_simple\" variants, and you're\nproposing adding more. The original purpose of the \"_simple\" variants\nwas to compute selectivities without looking at extended stats, and\nnow the \"_internal\" variants compute selectivities with an additional\n\"use_extended_stats\" flag to control whether or not to look at\nextended stats. Thus they're basically the same, and could be rolled\ntogether.\n\nAdditionally, it's worth noting that the \"_simple\" variants expose the\n\"estimatedclauses\" bitmap as an argument, which IMO is a bit messy as\nan API. All callers of the \"_simple\" functions outside of clausesel.c\nactually pass in estimatedclauses=NULL, so it's possible to refactor\nand get rid of that, turning estimatedclauses into a purely internal\nvariable.\n\nAlso, it's quite messy that clauselist_selectivity_simple_or() needs\nto be passed a Selectivity input (the final argument) that is the\nselectivity of any already-estimated clauses, or the value to return\nif no not-already-estimated clauses are found, and must be 0.0 when\ncalled from the extended stats code.\n\nAttached is the kind of thing I had in mind (as a single patch, since\nI don't think it's worth splitting up). This replaces the \"_simple\"\nand \"_internal\" variants of these functions with \"_opt_ext_stats\"\nvariants whose signatures match the originals except for having the\nsingle extra \"use_extended_stats\" boolean parameter. Additionally, the\n\"_simple\" functions are merged into the originals (making them more\nlike they were in PG11) so that the \"estimatedclauses\" bitmap and\npartial-OR-list Selectivity become internal details, no longer exposed\nin the API.\n\nRegards,\nDean",
"msg_date": "Tue, 17 Nov 2020 15:35:25 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "\n\nOn 11/17/20 4:35 PM, Dean Rasheed wrote:\n> On Thu, 12 Nov 2020 at 14:18, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Here is an improved WIP version of the patch series, modified to address\n>> the issue with repeatedly applying the extended statistics, as discussed\n>> with Dean in this thread. It's a bit rough and not committable, but I\n>> need some feedback so I'm posting it in this state.\n> \n> As it stands, it doesn't compile if 0003 is applied, because it missed\n> one of the callers of clauselist_selectivity_simple(), but that's\n> easily fixed.\n> \n>> 0001 is the original patch improving estimates of OR clauses\n>>\n>> 0002 adds thin wrappers for clause[list]_selectivity, with \"internal\"\n>> functions allowing to specify whether to keep considering extended stats\n>>\n>> 0003 does the same for the \"simple\" functions\n>>\n>>\n>> I've kept it like this to demonstrate that 0002 is not sufficient. In my\n>> response from March 24 I wrote this:\n>>\n>>> Isn't it the case that clauselist_selectivity_simple (and the OR\n>>> variant) should ignore extended stats entirely? That is, we'd need\n>>> to add a flag (or _simple variant) to clause_selectivity, so that it\n>>> calls causelist_selectivity_simple_or.\n>> But that's actually wrong, as 0002 shows (as it breaks a couple of\n>> regression tests), because of the way we handle OR clauses. At the top\n>> level, an OR-clause is actually just a single clause and it may get\n>> passed to clauselist_selectivity_simple. So entirely disabling extended\n>> stats for the \"simple\" functions would also mean disabling extended\n>> stats for a large number of OR clauses. 
Which is clearly wrong.\n>>\n>> So 0003 addresses that, by adding a flag to the two \"simple\" functions.\n>> Ultimately, this should probably do the same thing as 0002 and add thin\n>> wrappers, because the existing functions are part of the public API.\n> \n> I agree that, taken together, these patches fix the\n> multiple-extended-stats-evaluation issue. However:\n> \n> I think this has ended up with too many variants of these functions,\n> since we now have \"_internal\" and \"_simple\" variants, and you're\n> proposing adding more. The original purpose of the \"_simple\" variants\n> was to compute selectivities without looking at extended stats, and\n> now the \"_internal\" variants compute selectivities with an additional\n> \"use_extended_stats\" flag to control whether or not to look at\n> extended stats. Thus they're basically the same, and could be rolled\n> together.\n> \n\nYeah, I agree there were far too many functions. Your patch looks much\ncleaner / saner than the one I shared last week.\n\n> Additionally, it's worth noting that the \"_simple\" variants expose the\n> \"estimatedclauses\" bitmap as an argument, which IMO is a bit messy as\n> an API. All callers of the \"_simple\" functions outside of clausesel.c\n> actually pass in estimatedclauses=NULL, so it's possible to refactor\n> and get rid of that, turning estimatedclauses into a purely internal\n> variable.\n> \n\nHmmm. I think there were two reasons for exposing the estimatedclauses\nbitmap like that: (a) we used the function internally and (b) we wanted\nto allow cases where the user code might do something with the bitmap.\nThe first item is not an issue - we can hide that. 
As for the second\nitem, my guess is it was unnecessary future-proofing - we don't know\nabout any use case that might need this, so +1 to get rid of it.\n\n> Also, it's quite messy that clauselist_selectivity_simple_or() needs\n> to be passed a Selectivity input (the final argument) that is the\n> selectivity of any already-estimated clauses, or the value to return\n> if no not-already-estimated clauses are found, and must be 0.0 when\n> called from the extended stats code.\n> \n\nTrue.\n\n> Attached is the kind of thing I had in mind (as a single patch, since\n> I don't think it's worth splitting up). This replaces the \"_simple\"\n> and \"_internal\" variants of these functions with \"_opt_ext_stats\"\n> variants whose signatures match the originals except for having the\n> single extra \"use_extended_stats\" boolean parameter. Additionally, the\n> \"_simple\" functions are merged into the originals (making them more\n> like they were in PG11) so that the \"estimatedclauses\" bitmap and\n> partial-OR-list Selectivity become internal details, no longer exposed\n> in the API.\n> \n\nSeems fine to me, although the \"_opt_ext_stats\" is rather cryptic.\nAFAICS we use \"_internal\" for similar functions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Nov 2020 23:37:15 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 22:37, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Seems fine to me, although the \"_opt_ext_stats\" is rather cryptic.\n> AFAICS we use \"_internal\" for similar functions.\n>\n\nThere's precedent for using \"_opt_xxx\" for function variants that add\nan option to existing functions, but I agree that in this case it's a\nbit of a mouthful. I don't think \"_internal\" is appropriate though,\nsince the clauselist function isn't internal. Perhaps using just\n\"_ext\" would be OK.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 19 Nov 2020 09:52:04 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "> On Wed, 18 Nov 2020 at 22:37, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Seems fine to me, although the \"_opt_ext_stats\" is rather cryptic.\n> > AFAICS we use \"_internal\" for similar functions.\n> >\n\nI have been thinking about this some more. The one part of this that I\nstill wasn't happy with was the way that base frequencies were used to\ncompute the selectivity correction to apply. As noted in [1], using\nbase frequencies in this way isn't really appropriate for clauses\ncombined using \"OR\". The reason is that an item's base frequency is\ncomputed as the product of the per-column selectivities, so that (freq\n- base_freq) is the right correction to apply for a set of clauses\ncombined with \"AND\", but it doesn't really work properly for clauses\ncombined with \"OR\". This is why a number of the estimates in the\nregression tests end up being significant over-estimates.\n\nI speculated in [1] that we might fix that by tracking which columns\nof the match bitmap actually matched the clauses being estimated, and\nthen only use those base frequencies. Unfortunately that would also\nmean changing the format of the stats that we store, and so would be a\nrather invasive change.\n\nIt occurred to me though, that there is another, much more\nstraightforward way to do it. We can rewrite the \"OR\" clauses, and\nturn them into \"AND\" clauses using the fact that\n\n P(A OR B) = P(A) + P(B) - P(A AND B)\n\nand then use the multivariate stats to estimate the P(A AND B) part in\nthe usual way.\n\nAttached is the resulting patch doing it that way. 
The main change is\nin the way that statext_mcv_clauselist_selectivity() works, combined\nwith a new function mcv_clause_selectivity_or() that does the\nnecessary MCV bitmap manipulations.\n\nDoing it this way also means that clausesel.c doesn't need to export\nclauselist_selectivity_or(), and the new set of exported functions\nseem a bit neater now.\n\nA handful of regression test results change, and in all cases except\none the new estimates are much better. One estimate is made worse, but\nin that case we only have 2 sets of partial stats:\n\n SELECT * FROM mcv_lists_multi WHERE a = 0 OR b = 0 OR c = 0 OR d = 0\n\nwith stats on (a,b) and (c,d) so it's not surprising that combining (a\n= 0 OR b = 0) with (c = 0 OR d = 0) mis-estimates a bit. I suspect the\nold MV stats estimate was more down to chance in this case.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/CAEZATCX8u9bZzcWEzqA_t7f_OQHu2oxeTUGnFHNEOXnJo35AQg%40mail.gmail.com",
"msg_date": "Sun, 29 Nov 2020 14:57:42 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "\nOn 11/29/20 3:57 PM, Dean Rasheed wrote:\n>> On Wed, 18 Nov 2020 at 22:37, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> Seems fine to me, although the \"_opt_ext_stats\" is rather cryptic.\n>>> AFAICS we use \"_internal\" for similar functions.\n>>>\n> \n> I have been thinking about this some more. The one part of this that I\n> still wasn't happy with was the way that base frequencies were used to\n> compute the selectivity correction to apply. As noted in [1], using\n> base frequencies in this way isn't really appropriate for clauses\n> combined using \"OR\". The reason is that an item's base frequency is\n> computed as the product of the per-column selectivities, so that (freq\n> - base_freq) is the right correction to apply for a set of clauses\n> combined with \"AND\", but it doesn't really work properly for clauses\n> combined with \"OR\". This is why a number of the estimates in the\n> regression tests end up being significant over-estimates.\n> \n> I speculated in [1] that we might fix that by tracking which columns\n> of the match bitmap actually matched the clauses being estimated, and\n> then only use those base frequencies. Unfortunately that would also\n> mean changing the format of the stats that we store, and so would be a\n> rather invasive change.\n> \n> It occurred to me though, that there is another, much more\n> straightforward way to do it. We can rewrite the \"OR\" clauses, and\n> turn them into \"AND\" clauses using the fact that\n> \n> P(A OR B) = P(A) + P(B) - P(A AND B)\n> \n> and then use the multivariate stats to estimate the P(A AND B) part in\n> the usual way.\n> \n\nOK, that seems quite reasonable.\n\n> Attached is the resulting patch doing it that way. 
The main change is\n> in the way that statext_mcv_clauselist_selectivity() works, combined\n> with a new function mcv_clause_selectivity_or() that does the\n> necessary MCV bitmap manipulations.\n> \n> Doing it this way also means that clausesel.c doesn't need to export\n> clauselist_selectivity_or(), and the new set of exported functions\n> seem a bit neater now.\n> \n\nNice. I agree this looks way better than the version I hacked together.\n\nI wonder how much of the comment before clauselist_selectivity should\nmove to clauselist_selectivity_ext - it does talk about range clauses\nand so on, but clauselist_selectivity does not really deal with that.\nBut maybe that's just an implementation detail and it's better to keep\nthe comment the way it is.\n\nI noticed this outdated comment:\n\n /* Always compute the selectivity using clause_selectivity */\n s2 = clause_selectivity_ext(root, clause, varRelid, jointype, sjinfo,\n\nAlso, the comment at clauselist_selectivity_or seems to not follow the\nusual pattern, which I think is\n\n/*\n * function name\n *\tshort one-sentence description\n *\n * ... longer description ...\n */\n\nThose are fairly minor issues. I don't have any deeper objections, and\nit seems committable. Do you plan to do that sometime soon?\n\n> A handful of regression test results change, and in all cases except\n> one the new estimates are much better. One estimate is made worse, but\n> in that case we only have 2 sets of partial stats:\n> \n> SELECT * FROM mcv_lists_multi WHERE a = 0 OR b = 0 OR c = 0 OR d = 0\n> \n> with stats on (a,b) and (c,d) so it's not surprising that combining (a\n> = 0 OR b = 0) with (c = 0 OR d = 0) mis-estimates a bit. I suspect the\n> old MV stats estimate was more down to chance in this case.\n> \n\nYeah, that's quite possible - we're multiplying two estimates, but\nthere's a clear correlation. 
So it was mostly luck we had over-estimated\nthe clauses before, which gave us higher product and thus accidentally\nbetter overall estimate.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Nov 2020 22:02:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sun, 29 Nov 2020 at 21:02, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Those are fairly minor issues. I don't have any deeper objections, and\n> it seems committable. Do you plan to do that sometime soon?\n>\n\nOK, I've updated the patch status in the CF app, and I should be able\nto push it in the next day or so.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 1 Dec 2020 08:15:15 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On 12/1/20 9:15 AM, Dean Rasheed wrote:\n> On Sun, 29 Nov 2020 at 21:02, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Those are fairly minor issues. I don't have any deeper objections, and\n>> it seems committable. Do you plan to do that sometime soon?\n>>\n> \n> OK, I've updated the patch status in the CF app, and I should be able\n> to push it in the next day or so.\n> \n\nCool, thanks.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 1 Dec 2020 13:50:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Sun, 29 Nov 2020 at 21:02, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I wonder how much of the comment before clauselist_selectivity should\n> move to clauselist_selectivity_ext - it does talk about range clauses\n> and so on, but clauselist_selectivity does not really deal with that.\n> But maybe that's just an implementation detail and it's better to keep\n> the comment the way it is.\n\nI think it's better to keep it the way it is, so that the entirety of\nwhat clauselist_selectivity() does (via clauselist_selectivity_ext())\ncan be read in one place, but I have added separate comments for the\nnew \"_ext\" functions to explain how they differ. That matches similar\nexamples elsewhere.\n\n\n> I noticed this outdated comment:\n>\n> /* Always compute the selectivity using clause_selectivity */\n> s2 = clause_selectivity_ext(root, clause, varRelid, jointype, sjinfo,\n\nUpdated.\n\n\n> Also, the comment at clauselist_selectivity_or seems to not follow the\n> usual pattern, which I think is\n>\n> /*\n> * function name\n> * short one-sentence description\n> *\n> * ... longer description ...\n> */\n\nHmm, it seems OK to me. The first part is basically copied from\nclauselist_selectivity(). The \"longer description\" doesn't really have\nmuch more to say because it's much simpler than\nclauselist_selectivity(), but it seems neater to keep the two roughly\nin sync.\n\n\nI've been hacking on this a bit more and attached is an updated\n(hopefully final) version with some other comment improvements and\nalso a couple of other tweaks:\n\nThe previous version had duplicated code blocks that implemented the\nsame MCV-correction algorithm using simple_sel, mcv_sel, base_sel,\nother_sel and total_sel, which was quite messy. So I refactored that\ninto a new function mcv_combine_selectivities(). About half the\ncomments from statext_mcv_clauselist_selectivity() then move over to\nmcv_combine_selectivities(). 
I also updated the comments for\nmcv_clauselist_selectivity() and mcv_clause_selectivity_or() to\nexplain how their outputs are expected to be used by\nmcv_combine_selectivities(). That hopefully makes for a clean\nseparation of concerns, and makes it easier to tweak the way MCV stats\nare applied on top of simple stats, if someone thinks of a better\napproach in the future.\n\nIn the previous version, for an ORed list of clauses, the MCV\ncorrection was only applied to the overlaps between clauses. That's OK\nas long as each clause only refers to a single column, since the\nper-column statistics ought to be the best way to estimate each\nindividual clause in that case. However, if the individual clauses\nrefer to more than one column, I think the MCV correction should be\napplied to each individual clause as well as to the overlaps. That\nturns out to be pretty straightforward, since we're already doing all\nthe hard work of computing the match bitmap for each clause. The sort\nof queries I had in mind were things like this:\n\n WHERE (a = 1 AND b = 1) OR (a = 2 AND b = 2)\n\nI added a new test case along those lines and the new estimates are\nmuch better than they are without this patch, but not for the reason I\nthought --- the old code consistently over-estimated queries like that\nbecause it actually applied the MCV correction twice (once while\nprocessing each AND list, via clause_selectivity(), called from\nclauselist_selectivity_simple(), and once for the top-level OR clause,\ncontained in a single-element implicitly-ANDed list). 
The way the new\ncode is structured avoids any kind of double application of extended\nstats, producing a much better estimate, which is good.\n\nHowever, the new code doesn't apply the extended stats directly using\nclauselist_selectivity_or() for this kind of query because there are\nno RestrictInfos for the nested AND clauses, so\nfind_single_rel_for_clauses() (and similarly\nstatext_is_compatible_clause()) regards those clauses as not\ncompatible with extended stats. So what ends up happening is that\nextended stats are used only when we descend down to the two AND\nclauses, and their results are combined using the original \"s1 + s2 -\ns1 * s2\" formula. That actually works OK in this case, because there\nis no overlap between the two AND clauses, but it wouldn't work so\nwell if there was.\n\nI'm pretty sure that can be fixed by teaching\nfind_single_rel_for_clauses() and statext_is_compatible_clause() to\nhandle BoolExpr clauses, looking for RestrictInfos underneath them,\nbut I think that should be left for a follow-on patch. I have left a\nregression test in place, whose estimates ought to be improved by such\na fix.\n\nThe upshot of all that is that the new code that applies the MCV\ncorrection to the individual clauses in an ORed list doesn't help with\nqueries like the one above at the moment, and it's not obvious whether\nit is currently reachable, but I think it's worth leaving in because\nit seems more principled, and makes that code more future-proof. I\nalso think it's neater because now the signature of\nmcv_clause_selectivity_or() is more natural --- its primary return\nvalue is now the clause's MCV selectivity, as suggested by the\nfunction's name, rather than the overlap selectivity that the previous\nversion was returning. 
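To illustrate the overlap problem with the "s1 + s2 - s1 * s2" combination
mentioned above, here is a standalone sketch (plain Python, not PostgreSQL
code) using the same perfectly correlated table as the earlier examples:

```python
# Table where a = b = mod(i, 10), so the sub-AND clauses below are exact.
rows = [(i % 10, i % 10) for i in range(100000)]
n = len(rows)

def sel(pred):
    # true selectivity of a predicate over the table
    return sum(1 for r in rows if pred(*r)) / n

# Disjoint sub-AND clauses: the independence formula is nearly right.
s1 = sel(lambda a, b: a == 1 and b == 1)   # 0.1
s2 = sel(lambda a, b: a == 2 and b == 2)   # 0.1
combined = s1 + s2 - s1 * s2               # 0.19 vs. true value 0.2

# Overlapping sub-AND clauses: the same formula goes badly wrong.
s3 = sel(lambda a, b: a == 1 and b == 1)   # 0.1
s4 = sel(lambda a, b: a == 1)              # 0.1, fully contains s3
combined2 = s3 + s4 - s3 * s4              # 0.19 vs. true value 0.1
```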
Also, after your \"Var Op Var\" patch is applied,\nI think it would be possible to construct queries that would benefit\nfrom this, so it would be good to get that committed too.\n\nBarring any further comments, I'll push this sometime soon.\n\nRegards,\nDean",
"msg_date": "Wed, 2 Dec 2020 15:51:29 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On 12/2/20 4:51 PM, Dean Rasheed wrote:\n> On Sun, 29 Nov 2020 at 21:02, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I wonder how much of the comment before clauselist_selectivity should\n>> move to clauselist_selectivity_ext - it does talk about range clauses\n>> and so on, but clauselist_selectivity does not really deal with that.\n>> But maybe that's just an implementation detail and it's better to keep\n>> the comment the way it is.\n> \n> I think it's better to keep it the way it is, so that the entirety of\n> what clauselist_selectivity() does (via clauselist_selectivity_ext())\n> can be read in one place, but I have added separate comments for the\n> new \"_ext\" functions to explain how they differ. That matches similar\n> examples elsewhere.\n> \n\n+1\n\n> \n>> I noticed this outdated comment:\n>>\n>> /* Always compute the selectivity using clause_selectivity */\n>> s2 = clause_selectivity_ext(root, clause, varRelid, jointype, sjinfo,\n> \n> Updated.\n> \n> \n>> Also, the comment at clauselist_selectivity_or seems to not follow the\n>> usual pattern, which I think is\n>>\n>> /*\n>> * function name\n>> * short one-sentence description\n>> *\n>> * ... longer description ...\n>> */\n> \n> Hmm, it seems OK to me. The first part is basically copied from\n> clauselist_selectivity(). The \"longer description\" doesn't really have\n> much more to say because it's much simpler than\n> clauselist_selectivity(), but it seems neater to keep the two roughly\n> in sync.\n> \n\nI see. In that case it's OK, I guess.\n\n> \n> I've been hacking on this a bit more and attached is an updated\n> (hopefully final) version with some other comment improvements and\n> also a couple of other tweaks:\n> \n> The previous version had duplicated code blocks that implemented the\n> same MCV-correction algorithm using simple_sel, mcv_sel, base_sel,\n> other_sel and total_sel, which was quite messy. 
So I refactored that\n> into a new function mcv_combine_selectivities(). About half the\n> comments from statext_mcv_clauselist_selectivity() then move over to\n> mcv_combine_selectivities(). I also updated the comments for\n> mcv_clauselist_selectivity() and mcv_clause_selectivity_or() to\n> explain how their outputs are expected to be used by\n> mcv_combine_selectivities(). That hopefully makes for a clean\n> separation of concerns, and makes it easier to tweak the way MCV stats\n> are applied on top of simple stats, if someone thinks of a better\n> approach in the future.\n> \n> In the previous version, for an ORed list of clauses, the MCV\n> correction was only applied to the overlaps between clauses. That's OK\n> as long as each clause only refers to a single column, since the\n> per-column statistics ought to be the best way to estimate each\n> individual clause in that case. However, if the individual clauses\n> refer to more than one column, I think the MCV correction should be\n> applied to each individual clause as well as to the overlaps. That\n> turns out to be pretty straightforward, since we're already doing all\n> the hard work of computing the match bitmap for each clause. The sort\n> of queries I had in mind were things like this:\n> \n> WHERE (a = 1 AND b = 1) OR (a = 2 AND b = 2)\n> \n> I added a new test case along those lines and the new estimates are\n> much better than they are without this patch, but not for the reason I\n> thought --- the old code consistently over-estimated queries like that\n> because it actually applied the MCV correction twice (once while\n> processing each AND list, via clause_selectivity(), called from\n> clauselist_selectivity_simple(), and once for the top-level OR clause,\n> contained in a single-element implicitly-ANDed list). 
The way the new\n> code is structured avoids any kind of double application of extended\n> stats, producing a much better estimate, which is good.\n> \n\nNice.\n\n> However, the new code doesn't apply the extended stats directly using\n> clauselist_selectivity_or() for this kind of query because there are\n> no RestrictInfos for the nested AND clauses, so\n> find_single_rel_for_clauses() (and similarly\n> statext_is_compatible_clause()) regards those clauses as not\n> compatible with extended stats. So what ends up happening is that\n> extended stats are used only when we descend down to the two AND\n> clauses, and their results are combined using the original \"s1 + s2 -\n> s1 * s2\" formula. That actually works OK in this case, because there\n> is no overlap between the two AND clauses, but it wouldn't work so\n> well if there was.\n> \n> I'm pretty sure that can be fixed by teaching\n> find_single_rel_for_clauses() and statext_is_compatible_clause() to\n> handle BoolExpr clauses, looking for RestrictInfos underneath them,\n> but I think that should be left for a follow-in patch. I have left a\n> regression test in place, whose estimates ought to be improved by such\n> a fix.\n> \n\nYeah, I agree with leaving this for a separate patch. We can't do\neverything at the same time.\n\n> The upshot of all that is that the new code that applies the MCV\n> correction to the individual clauses in an ORed list doesn't help with\n> queries like the one above at the moment, and it's not obvious whether\n> it is currently reachable, but I think it's worth leaving in because\n> it seems more principled, and makes that code more future-proof. I\n> also think it's neater because now the signature of\n> mcv_clause_selectivity_or() is more natural --- it's primary return\n> value is now the clause's MCV selectivity, as suggested by the\n> function's name, rather than the overlap selectivity that the previous\n> version was returning. 
Also, after your \"Var Op Var\" patch is applied,\n> I think it would be possible to construct queries that would benefit\n> from this, so it would be good to get that committed too.\n> \n> Barring any further comments, I'll push this sometime soon.\n> \n\n+1\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 2 Dec 2020 17:34:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Wed, 2 Dec 2020 at 16:34, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/2/20 4:51 PM, Dean Rasheed wrote:\n> >\n> > Barring any further comments, I'll push this sometime soon.\n>\n> +1\n>\n\nPushed.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 3 Dec 2020 12:56:34 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On Wed, 2 Dec 2020 at 15:51, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> The sort of queries I had in mind were things like this:\n>\n> WHERE (a = 1 AND b = 1) OR (a = 2 AND b = 2)\n>\n> However, the new code doesn't apply the extended stats directly using\n> clauselist_selectivity_or() for this kind of query because there are\n> no RestrictInfos for the nested AND clauses, so\n> find_single_rel_for_clauses() (and similarly\n> statext_is_compatible_clause()) regards those clauses as not\n> compatible with extended stats. So what ends up happening is that\n> extended stats are used only when we descend down to the two AND\n> clauses, and their results are combined using the original \"s1 + s2 -\n> s1 * s2\" formula. That actually works OK in this case, because there\n> is no overlap between the two AND clauses, but it wouldn't work so\n> well if there was.\n>\n> I'm pretty sure that can be fixed by teaching\n> find_single_rel_for_clauses() and statext_is_compatible_clause() to\n> handle BoolExpr clauses, looking for RestrictInfos underneath them,\n> but I think that should be left for a follow-in patch.\n\nAttached is a patch doing that, which improves a couple of the\nestimates for queries with AND clauses underneath OR clauses, as\nexpected.\n\nThis also revealed a minor bug in the way that the estimates for\nmultiple statistics objects were combined while processing an OR\nclause -- the estimates for the overlaps between clauses only apply\nfor the current statistics object, so we really have to combine the\nestimates for each set of clauses for each statistics object as if\nthey were independent of one another.\n\n0001 fixes the multiple-extended-stats issue for OR clauses, and 0002\nimproves the estimates for sub-AND clauses underneath OR clauses.\n\nThese are both quite small patches, that hopefully won't interfere\nwith any of the other extended stats patches.\n\nRegards,\nDean",
"msg_date": "Mon, 7 Dec 2020 16:15:56 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
},
{
"msg_contents": "On 12/7/20 5:15 PM, Dean Rasheed wrote:\n> On Wed, 2 Dec 2020 at 15:51, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> The sort of queries I had in mind were things like this:\n>>\n>> WHERE (a = 1 AND b = 1) OR (a = 2 AND b = 2)\n>>\n>> However, the new code doesn't apply the extended stats directly using\n>> clauselist_selectivity_or() for this kind of query because there are\n>> no RestrictInfos for the nested AND clauses, so\n>> find_single_rel_for_clauses() (and similarly\n>> statext_is_compatible_clause()) regards those clauses as not\n>> compatible with extended stats. So what ends up happening is that\n>> extended stats are used only when we descend down to the two AND\n>> clauses, and their results are combined using the original \"s1 + s2 -\n>> s1 * s2\" formula. That actually works OK in this case, because there\n>> is no overlap between the two AND clauses, but it wouldn't work so\n>> well if there was.\n>>\n>> I'm pretty sure that can be fixed by teaching\n>> find_single_rel_for_clauses() and statext_is_compatible_clause() to\n>> handle BoolExpr clauses, looking for RestrictInfos underneath them,\n>> but I think that should be left for a follow-on patch.\n> \n> Attached is a patch doing that, which improves a couple of the\n> estimates for queries with AND clauses underneath OR clauses, as\n> expected.\n> \n> This also revealed a minor bug in the way that the estimates for\n> multiple statistics objects were combined while processing an OR\n> clause -- the estimates for the overlaps between clauses only apply\n> for the current statistics object, so we really have to combine the\n> estimates for each set of clauses for each statistics object as if\n> they were independent of one another.\n> \n> 0001 fixes the multiple-extended-stats issue for OR clauses, and 0002\n> improves the estimates for sub-AND clauses underneath OR clauses.\n> \n\nCool! Thanks for taking time to investigate and fixing those. Both\npatches seem fine to me.\n\n> These are both quite small patches, that hopefully won't interfere\n> with any of the other extended stats patches.\n> \n\nI haven't tried, but it should not interfere with it too much.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Dec 2020 13:46:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Additional improvements to extended statistics"
}
] |
[
{
"msg_contents": "I threatened to do this in another thread [1], so here it is.\n\nThis patch removes the restriction that the server encoding must\nbe UTF-8 in order to write any Unicode escape with a value outside\nthe ASCII range. Instead, we'll allow the notation and convert to\nthe server encoding if that's possible. (If it isn't, of course\nyou get an encoding conversion failure.)\n\nIn the cases that were already supported, namely ASCII characters\nor UTF-8 server encoding, this should be only immeasurably slower\nthan before. Otherwise, it calls the appropriate encoding conversion\nprocedure, which of course will take a little time. But that's\nbetter than failing, surely.\n\nOne way in which this is slightly less good than before is that\nyou no longer get a syntax error cursor pointing at the problematic\nescape when conversion fails. If we were really excited about that,\nsomething could be done with setting up an errcontext stack entry.\nBut that would add a few cycles, so I wasn't sure whether to do it.\n\nGrepping for other direct uses of unicode_to_utf8(), I notice that\nthere are a couple of places in the JSON code where we have a similar\nrestriction that you can only write a Unicode escape in UTF8 server\nencoding. I'm not sure whether these same semantics could be\napplied there, so I didn't touch that.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CACPNZCvaoa3EgVWm5yZhcSTX6RAtaLgniCPcBVOCwm8h3xpWkw%40mail.gmail.com",
"msg_date": "Mon, 13 Jan 2020 18:31:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Unicode escapes with any backend encoding"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n\n>\n> Grepping for other direct uses of unicode_to_utf8(), I notice that\n> there are a couple of places in the JSON code where we have a similar\n> restriction that you can only write a Unicode escape in UTF8 server\n> encoding. I'm not sure whether these same semantics could be\n> applied there, so I didn't touch that.\n>\n\n\nOff the cuff I'd be inclined to say we should keep the text escape\nrules the same. We've already extended the JSON standard by allowing\nnon-UTF8 encodings.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Jan 2020 12:14:16 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On Tue, Jan 14, 2020 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Grepping for other direct uses of unicode_to_utf8(), I notice that\n>> there are a couple of places in the JSON code where we have a similar\n>> restriction that you can only write a Unicode escape in UTF8 server\n>> encoding. I'm not sure whether these same semantics could be\n>> applied there, so I didn't touch that.\n\n> Off the cuff I'd be inclined to say we should keep the text escape\n> rules the same. We've already extended the JSON standard by allowing\n> non-UTF8 encodings.\n\nRight. I'm just thinking though that if you can write \"é\" literally\nin a JSON string, even though you're using LATIN1 not UTF8, then why\nnot allow writing that as \"\\u00E9\" instead? The latter is arguably\ntruer to spec.\n\nHowever, if JSONB collapses \"\\u00E9\" to LATIN1 \"é\", that would be bad,\nunless we have a way to undo it on printout. So there might be\nsome more moving parts here than I thought.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jan 2020 21:05:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "I wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On Tue, Jan 14, 2020 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Grepping for other direct uses of unicode_to_utf8(), I notice that\n>>> there are a couple of places in the JSON code where we have a similar\n>>> restriction that you can only write a Unicode escape in UTF8 server\n>>> encoding. I'm not sure whether these same semantics could be\n>>> applied there, so I didn't touch that.\n\n>> Off the cuff I'd be inclined to say we should keep the text escape\n>> rules the same. We've already extended the JSON standard by allowing\n>> non-UTF8 encodings.\n\n> Right. I'm just thinking though that if you can write \"é\" literally\n> in a JSON string, even though you're using LATIN1 not UTF8, then why\n> not allow writing that as \"\\u00E9\" instead? The latter is arguably\n> truer to spec.\n> However, if JSONB collapses \"\\u00E9\" to LATIN1 \"é\", that would be bad,\n> unless we have a way to undo it on printout. So there might be\n> some more moving parts here than I thought.\n\nOn third thought, what would be so bad about that? Let's suppose\nI write:\n\n\tINSERT ... values('{\"x\": \"\\u00E9\"}'::jsonb);\n\nand the jsonb parsing logic chooses to collapse the backslash to\nthe represented character, i.e., \"é\". Why should it matter whether\nthe database encoding is UTF8 or LATIN1? If I am using UTF8\nclient encoding, I will see the \"é\" in UTF8 encoding either way,\nbecause of output encoding conversion. If I am using LATIN1\nclient encoding, I will see the \"é\" in LATIN1 either way --- or\nat least, I will if the database encoding is UTF8. Right now I get\nan error for that when the database encoding is LATIN1 ... but if\nI store the \"é\" as literal \"é\", it works, either way. So it seems\nto me that this error is just useless pedantry. As long as the DB\nencoding can represent the desired character, it should be transparent\nto users.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 10:10:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "On 1/14/20 10:10 AM, Tom Lane wrote:\n> to me that this error is just useless pedantry. As long as the DB\n> encoding can represent the desired character, it should be transparent\n> to users.\n\nThat's my position too.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 14 Jan 2020 12:55:24 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 4:25 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> On 1/14/20 10:10 AM, Tom Lane wrote:\n> > to me that this error is just useless pedantry. As long as the DB\n> > encoding can represent the desired character, it should be transparent\n> > to users.\n>\n> That's my position too.\n>\n\n\nand mine.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 07:47:58 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On Wed, Jan 15, 2020 at 4:25 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>> On 1/14/20 10:10 AM, Tom Lane wrote:\n>>> to me that this error is just useless pedantry. As long as the DB\n>>> encoding can represent the desired character, it should be transparent\n>>> to users.\n\n>> That's my position too.\n\n> and mine.\n\nI'm confused --- yesterday you seemed to be against this idea.\nHave you changed your mind?\n\nI'll gladly go change the patch if people are on board with this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 16:25:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "On 1/14/20 4:25 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On Wed, Jan 15, 2020 at 4:25 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>>> On 1/14/20 10:10 AM, Tom Lane wrote:\n>>>> to me that this error is just useless pedantry. As long as the DB\n>>>> encoding can represent the desired character, it should be transparent\n>>>> to users.\n> \n>>> That's my position too.\n> \n>> and mine.\n> \n> I'm confused --- yesterday you seemed to be against this idea.\n> Have you changed your mind?\n> \n> I'll gladly go change the patch if people are on board with this.\n\nHmm, well, let me clarify for my own part what I think I'm agreeing\nwith ... perhaps it's misaligned with something further upthread.\n\nIn an ideal world (which may be ideal in more ways than are in scope\nfor the present discussion) I would expect to see these principles:\n\n1. On input, whether a Unicode escape is or isn't allowed should\n not depend on any encoding settings. It should be lexically\n allowed always, and if it represents a character that exists\n in the server encoding, it should mean that character. If it's\n not representable in the storage format, it should produce an\n error that says that.\n\n2. If it happens that the character is representable in both the\n storage encoding and the client encoding, it shouldn't matter\n whether it arrives literally as an é or as an escape. Either\n should get stored on disk as the same bytes.\n\n3. On output, as long as the character is representable in the client\n encoding, there is nothing to worry about. It will be sent as its\n representation in the client encoding (which may be different bytes\n than its representation in the server encoding).\n\n4. If a character to be output isn't in the client encoding, it\n will be datatype-dependent whether there is any way to escape.\n For example, xml_out could produce &#x????; forms, and json_out\n could produce \\u???? forms.\n\n5. If the datatype being output has no escaping rules available\n (as would be the case for an ordinary text column, say), then\n the unrepresentable character has to be reported in an error.\n (Encoding conversions often have the option of substituting\n a replacement character like ? but I don't believe a DBMS has\n any business making such changes to data, unless by explicit\n opt-in. If it can't give you the data you wanted, it should\n say \"here's why I can't give you that.\")\n\n6. While 'text' in general provides no escaping mechanism, some\n functions that produce text may still have that option. For\n example, quote_literal and quote_ident could conceivably\n produce the U&'...' or U&\"...\" forms, respectively, if\n the argument contains characters that won't go in the client\n encoding.\n\nI understand that on the way from 1 to 6 I will have drifted\nfurther from what's discussed in this thread; for example, I bet\nthat quote_literal/quote_ident never produce U& forms now, and\nthat no one is proposing to change that, and I'm pretending not\nto notice the question of how astonishing such behavior could be.\n(Not to mention, how would they know whether they are returning\na value that's destined to go across the client encoding, rather\nthan to be used in a purely server-side expression? Maybe distinct\nversions of those functions could take an encoding argument, and\nproduce the U& forms when the content won't go in the specified\nencoding. That would avoid astonishing changes to existing functions.)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:03:33 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 7:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On Wed, Jan 15, 2020 at 4:25 AM Chapman Flack <chap@anastigmatix.net> wrote:\n> >> On 1/14/20 10:10 AM, Tom Lane wrote:\n> >>> to me that this error is just useless pedantry. As long as the DB\n> >>> encoding can represent the desired character, it should be transparent\n> >>> to users.\n>\n> >> That's my position too.\n>\n> > and mine.\n>\n> I'm confused --- yesterday you seemed to be against this idea.\n> Have you changed your mind?\n>\n> I'll gladly go change the patch if people are on board with this.\n>\n>\n\nPerhaps I expressed myself badly. What I meant was that we should keep\nthe json and text escape rules in sync, as they are now. Since we're\nchanging the text rules to allow resolvable non-ascii unicode escapes\nin non-utf8 locales, we should do the same for json.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 09:20:32 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Perhaps I expressed myself badly. What I meant was that we should keep\n> the json and text escape rules in sync, as they are now. Since we're\n> changing the text rules to allow resolvable non-ascii unicode escapes\n> in non-utf8 locales, we should do the same for json.\n\nGot it. I'll make the patch do that in a little bit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 18:04:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "I wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Perhaps I expressed myself badly. What I meant was that we should keep\n>> the json and text escape rules in sync, as they are now. Since we're\n>> changing the text rules to allow resolvable non-ascii unicode escapes\n>> in non-utf8 locales, we should do the same for json.\n\n> Got it. I'll make the patch do that in a little bit.\n\nOK, here's v2, which brings JSONB into the fold and also makes some\neffort to produce an accurate error cursor for invalid Unicode escapes.\nAs it's set up, we only pay the extra cost of setting up an error\ncontext callback when we're actually processing a Unicode escape,\nso I think that's an acceptable cost. (It's not much of a cost,\nanyway.)\n\nThe callback support added here is pretty much a straight copy-and-paste\nof the existing functions setup_parser_errposition_callback() and friends.\nThat's slightly annoying --- we could perhaps merge those into one.\nBut I didn't see a good common header to put such a thing into, so\nI just did it like this.\n\nAnother note is that we could use the additional scanner infrastructure\nto produce more accurate error pointers for other cases where we're\nwhining about a bad escape sequence, or some other sub-part of a lexical\ntoken. I think that'd likely be a good idea, since the existing cursor\nplacement at the start of the token isn't too helpful if e.g. you're\ndealing with a very long string constant. But to keep this focused,\nI only touched the behavior for Unicode escapes. The rest could be\ndone as a separate patch.\n\nThis also mops up after 7f380c59 by making use of the new pg_wchar.c\nexports is_utf16_surrogate_first() etc everyplace that they're relevant\n(which is just the JSON code I was touching anyway, as it happens).\nI also made a bit of an effort to ensure test coverage of all the\ncode touched in that patch and this one.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 15 Jan 2020 17:34:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "I wrote:\n> [ unicode-escapes-with-other-server-encodings-2.patch ]\n\nI see this patch got sideswiped by the recent refactoring of JSON\nlexing. Here's an attempt at fixing it up. Since the frontend\ncode isn't going to have access to encoding conversion facilities,\nthis creates a difference between frontend and backend handling\nof JSON Unicode escapes, which is mildly annoying but probably\nisn't going to bother anyone in the real world. Outside of\njsonapi.c, there are no changes from v2.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 24 Feb 2020 12:49:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "On Mon, Feb 24, 2020 at 11:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I see this patch got sideswiped by the recent refactoring of JSON\n> lexing. Here's an attempt at fixing it up. Since the frontend\n> code isn't going to have access to encoding conversion facilities,\n> this creates a difference between frontend and backend handling\n> of JSON Unicode escapes, which is mildly annoying but probably\n> isn't going to bother anyone in the real world. Outside of\n> jsonapi.c, there are no changes from v2.\n\nFor the record, as far as JSON goes, I think I'm responsible for the\ncurrent set of restrictions, and I'm not attached to them. I believe I\nwas uncertain of my ability to implement anything better than what we\nhave now and also slightly unclear on what the semantics ought to be.\nI'm happy to see it improved, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 25 Feb 2020 09:13:15 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "On Tue, Feb 25, 2020 at 1:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > [ unicode-escapes-with-other-server-encodings-2.patch ]\n>\n> I see this patch got sideswiped by the recent refactoring of JSON\n> lexing. Here's an attempt at fixing it up. Since the frontend\n> code isn't going to have access to encoding conversion facilities,\n> this creates a difference between frontend and backend handling\n> of JSON Unicode escapes, which is mildly annoying but probably\n> isn't going to bother anyone in the real world. Outside of\n> jsonapi.c, there are no changes from v2.\n\nWith v3, I successfully converted escapes using a database with EUC-KR\nencoding, from strings, json, and jsonpath expressions.\n\nThen I ran a raw parsing microbenchmark with ASCII unicode escapes in\nUTF-8 to verify no significant regression. I also tried the same with\nEUC-KR, even though that's not really apples-to-apples since it\ndoesn't work on HEAD. It seems to give the same numbers. (median of 3,\ndone 3 times with postmaster restart in between)\n\nmaster, UTF-8 ascii\n1.390s\n1.405s\n1.406s\n\nv3, UTF-8 ascii\n1.396s\n1.388s\n1.390s\n\nv3, EUC-KR non-ascii\n1.382s\n1.401s\n1.394s\n\nNot this patch's job perhaps, but now that check_unicode_value() only\ndepends on the input, maybe it can be put into pg_wchar.h with other\nstatic inline helper functions? That test is duplicated in\naddunicode() and pg_unicode_to_server(). Maybe:\n\nstatic inline bool\ncodepoint_is_valid(pg_wchar c)\n{\n return (c > 0 && c <= 0x10FFFF);\n}\n\nMaybe Chapman has a use case in mind he can test with? Barring that,\nthe patch seems ready for commit.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Mar 2020 13:32:59 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> Not this patch's job perhaps, but now that check_unicode_value() only\n> depends on the input, maybe it can be put into pg_wchar.h with other\n> static inline helper functions? That test is duplicated in\n> addunicode() and pg_unicode_to_server(). Maybe:\n\n> static inline bool\n> codepoint_is_valid(pg_wchar c)\n> {\n> return (c > 0 && c <= 0x10FFFF);\n> }\n\nSeems reasonable, done.\n\n> Maybe Chapman has a use case in mind he can test with? Barring that,\n> the patch seems ready for commit.\n\nI went ahead and pushed this, just to get it out of my queue.\nChapman's certainly welcome to kibitz some more of course.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Mar 2020 14:19:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unicode escapes with any backend encoding"
},
{
"msg_contents": "On 3/6/20 2:19 PM, Tom Lane wrote:\n>> Maybe Chapman has a use case in mind he can test with? Barring that,\n>> the patch seems ready for commit.\n> \n> I went ahead and pushed this, just to get it out of my queue.\n> Chapman's certainly welcome to kibitz some more of course.\n\nSorry, yeah, I don't think I had any kibitzing to do. My use case\nwas for an automated SQL generator to confidently emit Unicode-\nescaped forms with few required assumptions about the database they'll\nbe loaded in, subject of course to the natural limitation that its\nencoding contain the characters being used, but not to arbitrary\nother limits. And unless I misunderstand the patch, it accomplishes\nthat, thereby depriving me of stuff to kibitz about.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 6 Mar 2020 14:53:23 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Unicode escapes with any backend encoding"
}
] |
[
{
"msg_contents": "Hi all,\n(Daniel G. in CC.)\n\nAs discussed on the thread to be able to set the min/max SSL protocols\nwith libpq, when mixing incorrect bounds the user experience is not\nthat good: \nhttps://www.postgresql.org/message-id/9CFA34EE-F670-419D-B92C-CB7943A27573@yesql.se\n\nIt happens that the error generated with incorrect combinations\ndepends solely on what OpenSSL thinks is fine, and that's the\nfollowing:\npsql: error: could not connect to server: SSL error: tlsv1 alert\ninternal error\n\nIt is hard for users to understand what such an error means and how to\nact on it. \n\nPlease note that OpenSSL 1.1.0 has added two routines to be able to\nget the min/max protocols set in a context, called\nSSL_CTX_get_min/max_proto_version. Thinking about older versions of\nOpenSSL I think that it is better to use\nssl_protocol_version_to_openssl to do the parsing work. I also found\nthat it is easier to check for compatible versions after setting both\nbounds in the SSL context, so as there is no need to worry about\ninvalid values depending on the build of OpenSSL used.\n\nSo attached is a patch to improve the detection of incorrect\ncombinations. Once applied, we get a complain about an incorrect\nversion at server startup (FATAL) or reload (LOG). The patch includes\nnew regression tests.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 14 Jan 2020 12:54:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Improve errors when setting incorrect bounds for SSL protocols "
},
{
"msg_contents": "> On 14 Jan 2020, at 04:54, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> Hi all,\n> (Daniel G. in CC.)\n> \n> As discussed on the thread to be able to set the min/max SSL protocols\n> with libpq, when mixing incorrect bounds the user experience is not\n> that good: \n> https://www.postgresql.org/message-id/9CFA34EE-F670-419D-B92C-CB7943A27573@yesql.se\n> \n> It happens that the error generated with incorrect combinations\n> depends solely on what OpenSSL thinks is fine, and that's the\n> following:\n> psql: error: could not connect to server: SSL error: tlsv1 alert\n> internal error\n> \n> It is hard for users to understand what such an error means and how to\n> act on it.\n\nCorrect, it's an easy mistake to make but based on the error it might take some\ntime to figure it out.\n\n> Please note that OpenSSL 1.1.0 has added two routines to be able to\n> get the min/max protocols set in a context, called\n> SSL_CTX_get_min/max_proto_version. Thinking about older versions of\n> OpenSSL I think that it is better to use\n> ssl_protocol_version_to_openssl to do the parsing work. I also found\n> that it is easier to check for compatible versions after setting both\n> bounds in the SSL context, so as there is no need to worry about\n> invalid values depending on the build of OpenSSL used.\n\nI'm not convinced that it's a good idea to check for incompatible protocol\nrange in the OpenSSL backend. We've spent a lot of energy to make the TLS code\nlibrary agnostic and pluggable, and since identifying a basic configuration\nerror isn't OpenSSL specific I think it should be in the guc code. That would\nkeep the layering as well as ensure that we don't mistakenly treat this\ndifferently should we get a second TLS backend.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 14 Jan 2020 11:21:53 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols "
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 11:21:53AM +0100, Daniel Gustafsson wrote:\n> On 14 Jan 2020, at 04:54, Michael Paquier <michael@paquier.xyz> wrote:\n>> Please note that OpenSSL 1.1.0 has added two routines to be able to\n>> get the min/max protocols set in a context, called\n>> SSL_CTX_get_min/max_proto_version. Thinking about older versions of\n>> OpenSSL I think that it is better to use\n>> ssl_protocol_version_to_openssl to do the parsing work. I also found\n>> that it is easier to check for compatible versions after setting both\n>> bounds in the SSL context, so as there is no need to worry about\n>> invalid values depending on the build of OpenSSL used.\n> \n> I'm not convinced that it's a good idea to check for incompatible protocol\n> range in the OpenSSL backend. We've spent a lot of energy to make the TLS code\n> library agnostic and pluggable, and since identifying a basic configuration\n> error isn't OpenSSL specific I think it should be in the guc code. That would\n> keep the layering as well as ensure that we don't mistakenly treat this\n> differently should we get a second TLS backend.\n\nGood points. And the get routines are not that portable in OpenSSL\neither even if HEAD supports 1.0.1 and newer versions... Attached is\nan updated patch which uses a GUC check for both parameters, and\nprovides a hint on top of the original error message. The SSL context\ndoes not get reloaded if there is an error, so the errors from OpenSSL\ncannot be triggered as far as I checked (after mixing a couple of\ncorrect and incorrect combinations manually).\n--\nMichael",
"msg_date": "Wed, 15 Jan 2020 11:28:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "> On 15 Jan 2020, at 03:28, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Jan 14, 2020 at 11:21:53AM +0100, Daniel Gustafsson wrote:\n>> On 14 Jan 2020, at 04:54, Michael Paquier <michael@paquier.xyz> wrote:\n>>> Please note that OpenSSL 1.1.0 has added two routines to be able to\n>>> get the min/max protocols set in a context, called\n>>> SSL_CTX_get_min/max_proto_version. Thinking about older versions of\n>>> OpenSSL I think that it is better to use\n>>> ssl_protocol_version_to_openssl to do the parsing work. I also found\n>>> that it is easier to check for compatible versions after setting both\n>>> bounds in the SSL context, so as there is no need to worry about\n>>> invalid values depending on the build of OpenSSL used.\n>> \n>> I'm not convinced that it's a good idea to check for incompatible protocol\n>> range in the OpenSSL backend. We've spent a lot of energy to make the TLS code\n>> library agnostic and pluggable, and since identifying a basic configuration\n>> error isn't OpenSSL specific I think it should be in the guc code. That would\n>> keep the layering as well as ensure that we don't mistakenly treat this\n>> differently should we get a second TLS backend.\n> \n> Good points. And the get routines are not that portable in OpenSSL\n> either even if HEAD supports 1.0.1 and newer versions... Attached is\n> an updated patch which uses a GUC check for both parameters, and\n> provides a hint on top of the original error message. The SSL context\n> does not get reloaded if there is an error, so the errors from OpenSSL\n> cannot be triggered as far as I checked (after mixing a couple of\n> correct and incorrect combinations manually).\n\nThis is pretty much exactly the patch I was intending to write for this, so +1\nfrom me.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 15 Jan 2020 18:34:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 06:34:39PM +0100, Daniel Gustafsson wrote:\n> This is pretty much exactly the patch I was intending to write for this, so +1\n> from me.\n\nThanks for the review. Let's wait a couple of days to see if others\nhave objections or more comments about this patch, but I'd like to\nfix the issue and backpatch down to 12 where the parameters have been\nintroduced.\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 10:00:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 10:00:52AM +0900, Michael Paquier wrote:\n> Thanks for the review. Let's wait a couple of days to see if others\n> have objections or more comments about this patch, but I'd like to\n> fix the issue and backpatch down to 12 where the parameters have been\n> introduced.\n\nAnd committed.\n--\nMichael",
"msg_date": "Sat, 18 Jan 2020 12:36:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "On 2020-01-15 03:28, Michael Paquier wrote:\n> Good points. And the get routines are not that portable in OpenSSL\n> either even if HEAD supports 1.0.1 and newer versions... Attached is\n> an updated patch which uses a GUC check for both parameters, and\n> provides a hint on top of the original error message. The SSL context\n> does not get reloaded if there is an error, so the errors from OpenSSL\n> cannot be triggered as far as I checked (after mixing a couple of\n> corrent and incorrect combinations manually).\n\nThe reason this wasn't done originally is that it is not correct to have \nGUC check hooks that refer to other GUC variables, because otherwise you \nget inconsistent behavior depending on the order of processing of the \nassignments. In this case, I think it would work because you have \nsymmetric checks for both variables, but in general it is a problematic \nstrategy.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 Jan 2020 09:11:30 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Jan 16, 2020 at 10:00:52AM +0900, Michael Paquier wrote:\n>> Thanks for the review. Let's wait a couple of days to see if others\n>> have objections or more comments about this patch, but I'd like to\n>> fix the issue and backpatch down to 12 where the parameters have been\n>> introduced.\n\n> And committed.\n\nI just happened to look at this patch while working on the release notes.\nI think this is a bad idea and very probably creates worse problems than\nit fixes. As we have learned painfully in the past, you can't have GUC\ncheck or assign hooks that look at other GUC variables, because that\ncreates order-of-operations problems. If a postgresql.conf update is\ntrying to change both values (hardly an unlikely scenario, for this\npair of variables) then the checks are going to be comparing against the\nold values of the other variables, leading to either incorrect rejections\nof valid states or incorrect acceptances of invalid states. It's pure\naccident that the particular cases tested in the regression tests behave\nsanely.\n\nI think this should be reverted. Perhaps there's a way to do it without\nthese problems, but we failed to find one in the past.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Feb 2020 14:04:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "> On 6 Feb 2020, at 20:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I think this should be reverted. Perhaps there's a way to do it without\n> these problems, but we failed to find one in the past.\n\nOr change to the v1 patch in this thread, which avoids the problem by doing it\nin the OpenSSL code. It's a shame to have generic TLS functionality be OpenSSL\nspecific when everything else TLS has been abstracted, but not working is\nclearly a worse option.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 6 Feb 2020 23:30:40 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols "
},
{
"msg_contents": "On Thu, Feb 06, 2020 at 11:30:40PM +0100, Daniel Gustafsson wrote:\n> Or change to the v1 patch in this thread, which avoids the problem by doing it\n> in the OpenSSL code. It's a shame to have generic TLS functionality be OpenSSL\n> specific when everything else TLS has been abstracted, but not working is\n> clearly a worse option.\n\nThe v1 would work just fine considering that, as the code would be\ninvoked in a context where all GUCs are already loaded. That's too\nlate before the release though, so I have reverted 41aadee, and\nattached is a new patch to consider with improvements compared to v1\nmainly in the error messages.\n--\nMichael",
"msg_date": "Fri, 7 Feb 2020 09:33:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "> On 7 Feb 2020, at 01:33, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Feb 06, 2020 at 11:30:40PM +0100, Daniel Gustafsson wrote:\n>> Or change to the v1 patch in this thread, which avoids the problem by doing it\n>> in the OpenSSL code. It's a shame to have generic TLS functionality be OpenSSL\n>> specific when everything else TLS has been abstracted, but not working is\n>> clearly a worse option.\n> \n> The v1 would work just fine considering that, as the code would be\n> invoked in a context where all GUCs are already loaded. That's too\n> late before the release though, so I have reverted 41aadee, and\n> attached is a new patch to consider with improvements compared to v1\n> mainly in the error messages.\n\nHaving gone back to look at this, I can't think of a better way to implement\nthis and I think we should go ahead with the proposed patch.\n\nIn this message we aren't quoting the TLS protocol setting:\n+ (errmsg(\"%s setting %s not supported by this build\",\n..but in this detail we are:\n+ errdetail(\"\\\"%s\\\" cannot be higher than \\\"%s\\\"\",\nPerhaps we should be consistent across all ereports?\n\nMarking as ready for committer.\n\ncheers ./daniel\n\n\n\n",
"msg_date": "Thu, 19 Mar 2020 22:54:35 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols "
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 10:54:35PM +0100, Daniel Gustafsson wrote:\n> In this message we aren't quoting the TLS protocol setting:\n> + (errmsg(\"%s setting %s not supported by this build\",\n> ..but in this detail we are:\n> + errdetail(\"\\\"%s\\\" cannot be higher than \\\"%s\\\"\",\n> Perhaps we should be consistent across all ereports?\n\nRight. Using quotes is a more popular style when it comes to GUC\nparameters and their values, so switched to use that, and committed\nthe patch. Thanks for the review.\n--\nMichael",
"msg_date": "Mon, 23 Mar 2020 11:06:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "Working in the TLS corners of the backend, I found while re-reviewing and\nre-testing for the release that this patch actually was a small, but vital,\nbrick shy of a load. The error handling is always invoked due to a set of\nmissing braces. Going into the check will cause the context to be freed and\nbe_tls_open_server error out. The tests added narrowly escapes it by not\nsetting the max version in the final test, but I'm not sure it's worth changing\nthat now as not setting a value is an interesting testcase too. Sorry for\nmissing that at the time of reviewing.\n\ncheers ./daniel",
"msg_date": "Wed, 29 Apr 2020 13:57:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols "
},
{
"msg_contents": "On Wed, Apr 29, 2020 at 01:57:49PM +0200, Daniel Gustafsson wrote:\n> Working in the TLS corners of the backend, I found while re-reviewing and\n> re-testing for the release that this patch actually was a small, but vital,\n> brick shy of a load. The error handling is always invoked due to a set of\n> missing braces. Going into the check will cause the context to be freed and\n> be_tls_open_server error out. The tests added narrowly escapes it by not\n> setting the max version in the final test, but I'm not sure it's worth changing\n> that now as not setting a value is an interesting testcase too. Sorry for\n> missing that at the time of reviewing.\n\nGood catch, fixed. We would still have keep around the SSL old\ncontext if both bounds were set. Testing this case would mean one\nextra full restart of the server, and I am not sure either if that's\nworth the extra cost here.\n--\nMichael",
"msg_date": "Thu, 30 Apr 2020 08:14:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols"
},
{
"msg_contents": "> On 30 Apr 2020, at 01:14, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Apr 29, 2020 at 01:57:49PM +0200, Daniel Gustafsson wrote:\n>> Working in the TLS corners of the backend, I found while re-reviewing and\n>> re-testing for the release that this patch actually was a small, but vital,\n>> brick shy of a load. The error handling is always invoked due to a set of\n>> missing braces. Going into the check will cause the context to be freed and\n>> be_tls_open_server error out. The tests added narrowly escapes it by not\n>> setting the max version in the final test, but I'm not sure it's worth changing\n>> that now as not setting a value is an interesting testcase too. Sorry for\n>> missing that at the time of reviewing.\n> \n> Good catch, fixed. We would still have keep around the SSL old\n> context if both bounds were set. Testing this case would mean one\n> extra full restart of the server, and I am not sure either if that's\n> worth the extra cost here.\n\nAgreed. I don't think the cost is warranted given the low probability of new\nerrors around here, so I think the commit as it stands is sufficient. Thanks.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 30 Apr 2020 09:51:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improve errors when setting incorrect bounds for SSL protocols "
}
] |
[
{
"msg_contents": "Hello\r\n\r\nI have tested the patch with a partition table with several foreign\r\npartitions living on seperate data nodes. The initial testing was done\r\nwith a partition table having 3 foreign partitions, test was done with\r\nvariety of scale facters. The seonnd test was with fixed data per data\r\nnode but number of data nodes were increased incrementally to see\r\nthe peformance impact as more nodes are added to the cluster. The\r\ntest three is similar to the initial test but with much huge data and\r\n4 nodes.\r\n\r\nThe results are summary is given below and test script attached:\r\n\r\nTest ENV\r\nParent node:2Core 8G\r\nChild Nodes:2Core 4G\r\n\r\n\r\nTest one:\r\n\r\n1.1 The partition struct as below:\r\n\r\n [ ptf:(a int, b int, c varchar)]\r\n (Parent node)\r\n | | |\r\n [ptf1] [ptf2] [ptf3]\r\n (Node1) (Node2) (Node3)\r\n\r\nThe table data is partitioned across nodes, the test is done using a\r\nsimple select query and a count aggregate as shown below. The result\r\nis an average of executing each query multiple times to ensure reliable\r\nand consistent results.\r\n\r\n①select * from ptf where b = 100;\r\n②select count(*) from ptf;\r\n\r\n1.2. Test Results\r\n\r\n For ① result:\r\n scalepernode master patched performance\r\n 2G 7s 2s 350%\r\n 5G 173s 63s 275%\r\n 10G 462s 156s 296%\r\n 20G 968s 327s 296%\r\n 30G 1472s 494s 297%\r\n \r\n For ② result:\r\n scalepernode master patched performance\r\n 2G 1079s 291s 370%\r\n 5G 2688s 741s 362%\r\n 10G 4473s 1493s 299%\r\n\r\nIt takes too long time to test a aggregate so the test was done with a\r\nsmaller data size.\r\n\r\n\r\n1.3. summary\r\n\r\nWith the table partitioned over 3 nodes, the average performance gain\r\nacross variety of scale factors is almost 300%\r\n\r\n\r\nTest Two\r\n2.1 The partition struct as below:\r\n\r\n [ ptf:(a int, b int, c varchar)]\r\n (Parent node)\r\n | | |\r\n [ptf1] ... [ptfN]\r\n (Node1) (...) 
(NodeN)\r\n\r\n①select * from ptf\r\n②select * from ptf where b = 100;\r\n\r\nThis test is done with same size of data per node but table is partitioned\r\nacross N number of nodes. Each varation (master or patches) is tested\r\nat-least 3 times to get reliable and consistent results. The purpose of the\r\ntest is to see impact on performance as number of data nodes are increased.\r\n\r\n2.2 The results\r\n\r\nFor ① result(scalepernode=2G):\r\n nodenumber master patched performance\r\n 2 432s 180s 240%\r\n 3 636s 223s 285%\r\n 4 830s 283s 293%\r\n 5 1065s 361s 295%\r\nFor ② result(scalepernode=10G):\r\n nodenumber master patched performance\r\n 2 281s 140s 201%\r\n 3 421s 140s 300%\r\n 4 562s 141s 398%\r\n 5 702s 141s 497%\r\n 6 833s 139s 599%\r\n 7 986s 141s 699%\r\n 8 1125s 140s 803%\r\n\r\n\r\nTest Three\r\n\r\nThis test is similar to the [test one] but with much huge data and \r\n4 nodes.\r\n\r\nFor ① result:\r\n scalepernode master patched performance\r\n 100G 6592s 1649s 399%\r\nFor ② result:\r\n scalepernode master patched performance\r\n 100G 35383 12363 286%\r\nThe result show it work well in much huge data.\r\n\r\n\r\nSummary\r\nThe patch is pretty good, it works well when there were little data back to\r\nthe parent node. The patch doesn’t provide parallel FDW scan, it ensures\r\nthat child nodes can send data to parent in parallel but the parent can only\r\nsequennly process the data from data nodes.\r\n\r\nProviding there is no performance degrdation for non FDW append queries,\r\nI would recomend to consider this patch as an interim soluton while we are\r\nwaiting for parallel FDW scan.\r\n\r\n\r\n\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Tue, 14 Jan 2020 17:12:21 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: Re: Append with naive multiplexing of FDWs"
}
] |
[
{
"msg_contents": "walreceiver uses a temporary replication slot by default\n\nIf no permanent replication slot is configured using\nprimary_slot_name, the walreceiver now creates and uses a temporary\nreplication slot. A new setting wal_receiver_create_temp_slot can be\nused to disable this behavior, for example, if the remote instance is\nout of replication slots.\n\nReviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\nDiscussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/329730827848f61eb8d353d5addcbd885fa823da\n\nModified Files\n--------------\ndoc/src/sgml/config.sgml | 20 +++++++++++\n.../libpqwalreceiver/libpqwalreceiver.c | 4 +++\nsrc/backend/replication/walreceiver.c | 41 ++++++++++++++++++++++\nsrc/backend/utils/misc/guc.c | 9 +++++\nsrc/backend/utils/misc/postgresql.conf.sample | 1 +\nsrc/include/replication/walreceiver.h | 7 ++++\n6 files changed, 82 insertions(+)",
"msg_date": "Tue, 14 Jan 2020 13:57:34 +0000",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "Hi Peter,\n(Adding Andres and Sergei in CC.)\n\nOn Tue, Jan 14, 2020 at 01:57:34PM +0000, Peter Eisentraut wrote:\n> walreceiver uses a temporary replication slot by default\n> \n> If no permanent replication slot is configured using\n> primary_slot_name, the walreceiver now creates and uses a temporary\n> replication slot. A new setting wal_receiver_create_temp_slot can be\n> used to disable this behavior, for example, if the remote instance is\n> out of replication slots.\n\nA recent message from Seigei Kornilov has attracted my attention to\nthis commit:\nhttps://www.postgresql.org/message-id/370331579618998@vla3-6a5326aeb4ee.qloud-c.yandex.net\n\nIn the thread about switching primary_conninfo to be reloadable, we \nhave argued at great lengths that we should never have the WAL\nreceiver fetch by itself the GUC parameters used for the connection\nwith its primary. Here is the main area of the discussion:\nhttps://www.postgresql.org/message-id/20190217192720.qphwrraj66rht5lj@alap3.anarazel.de\n\nThe previous thread was long enough so it can easily be missed.\nHowever, it seems to me that we may need to revisit a couple of things\nfor this commit? In short, the following things:\n- wal_receiver_create_temp_slot should be made PGC_POSTMASTER,\nsimilarly to primary_slot_name and primary_conninfo.\n- WalReceiverMain() should not load the parameter from the GUC context\nby itself.\n- RequestXLogStreaming(), called by the startup process, should be in\ncharge of defining if a temp slot should be used or not.\n--\nMichael",
"msg_date": "Wed, 22 Jan 2020 14:55:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "Hello\n\n> In short, the following things:\n> - wal_receiver_create_temp_slot should be made PGC_POSTMASTER,\n> similarly to primary_slot_name and primary_conninfo.\n> - WalReceiverMain() should not load the parameter from the GUC context\n> by itself.\n> - RequestXLogStreaming(), called by the startup process, should be in\n> charge of defining if a temp slot should be used or not.\n\nI would like to cross-post here a patch with such changes that I posted in \"allow online change primary_conninfo\" thread.\nThis thread is more appropriate for discussion about wal_receiver_create_temp_slot.\n\nPS: I posted this patch in both threads mostly to make cfbot happy.\n\nregards, Sergei",
"msg_date": "Wed, 22 Jan 2020 18:58:46 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 8:57 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> walreceiver uses a temporary replication slot by default\n>\n> If no permanent replication slot is configured using\n> primary_slot_name, the walreceiver now creates and uses a temporary\n> replication slot. A new setting wal_receiver_create_temp_slot can be\n> used to disable this behavior, for example, if the remote instance is\n> out of replication slots.\n>\n> Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> Discussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com\n\nNeither the commit message for this patch nor any of the comments in\nthe patch seem to explain why this is a desirable change.\n\nI assume that's probably discussed on the thread that is linked here,\nbut you shouldn't have to dig through the discussion thread to figure\nout what the benefits of a change like this are.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Jan 2020 15:49:37 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "On 2020-01-23 21:49, Robert Haas wrote:\n> On Tue, Jan 14, 2020 at 8:57 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> walreceiver uses a temporary replication slot by default\n>>\n>> If no permanent replication slot is configured using\n>> primary_slot_name, the walreceiver now creates and uses a temporary\n>> replication slot. A new setting wal_receiver_create_temp_slot can be\n>> used to disable this behavior, for example, if the remote instance is\n>> out of replication slots.\n>>\n>> Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n>> Discussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com\n> \n> Neither the commit message for this patch nor any of the comments in\n> the patch seem to explain why this is a desirable change.\n> \n> I assume that's probably discussed on the thread that is linked here,\n> but you shouldn't have to dig through the discussion thread to figure\n> out what the benefits of a change like this are.\n\nYou are right, this has gotten a bit lost in the big thread.\n\nThe rationale is basically the same as why client-side tools like \npg_basebackup use a temporary slot: So that the WAL data that they are \ninterested in doesn't disappear while they are connected.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Feb 2020 16:37:53 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "On 2020-01-22 06:55, Michael Paquier wrote:\n> In the thread about switching primary_conninfo to be reloadable, we\n> have argued at great lengths that we should never have the WAL\n> receiver fetch by itself the GUC parameters used for the connection\n> with its primary. Here is the main area of the discussion:\n> https://www.postgresql.org/message-id/20190217192720.qphwrraj66rht5lj@alap3.anarazel.de\n\nThe way I understood that discussion was that the issue is having both \nthe startup process and the WAL receiver having possibly inconsistent \nknowledge about the current configuration. That doesn't apply in this \ncase, because the setting is only used by the WAL receiver. Maybe I \nmisunderstood.\n\n> The previous thread was long enough so it can easily be missed.\n> However, it seems to me that we may need to revisit a couple of things\n> for this commit? In short, the following things:\n> - wal_receiver_create_temp_slot should be made PGC_POSTMASTER,\n> similarly to primary_slot_name and primary_conninfo.\n> - WalReceiverMain() should not load the parameter from the GUC context\n> by itself.\n> - RequestXLogStreaming(), called by the startup process, should be in\n> charge of defining if a temp slot should be used or not.\n\nThat would be a reasonable fix if we think the above is really an issue.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Feb 2020 16:46:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-10 16:46:04 +0100, Peter Eisentraut wrote:\n> On 2020-01-22 06:55, Michael Paquier wrote:\n> > In the thread about switching primary_conninfo to be reloadable, we\n> > have argued at great lengths that we should never have the WAL\n> > receiver fetch by itself the GUC parameters used for the connection\n> > with its primary. Here is the main area of the discussion:\n> > https://www.postgresql.org/message-id/20190217192720.qphwrraj66rht5lj@alap3.anarazel.de\n> \n> The way I understood that discussion was that the issue is having both the\n> startup process and the WAL receiver having possibly inconsistent knowledge\n> about the current configuration. That doesn't apply in this case, because\n> the setting is only used by the WAL receiver. Maybe I misunderstood.\n\nYes, that was my concern there. I do agree there's much less of an issue\nhere.\n\nI still architecturally don't find it attractive that the active\nconfiguration between walreceiver and startup process can diverge\nthough. Imagine if we e.g. added the ability to receive WAL over\nmultiple connections from one host, or from multiple hosts (e.g. to be\nable to get the bulk of the WAL from a cascading node, but also to\nprovide syncrep acknowledgements directly to the primary), or to allow\nfor logical replication without needing all WAL locally on a standby\ndoing decoding. It seems not great if there's potentially diverging\nconfiguration (hot standby feedback, temporary slots, ... ) between\nthose walreceivers, just depending on when they started. Here the model\ne.g. paralell workers use, which explicitly ensure that the GUC state is\nthe same in workers and the leader, is considerably better, imo.\n\nSo I think adding more of these parameters affecting walreceivers\nwithout coordination is not going quite in the right direction.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Feb 2020 13:46:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "Hello,\n\nOn Mon, 10 Feb 2020 16:37:53 +0100\nPeter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-01-23 21:49, Robert Haas wrote:\n> > On Tue, Jan 14, 2020 at 8:57 AM Peter Eisentraut <peter@eisentraut.org>\n> > wrote: \n> >> walreceiver uses a temporary replication slot by default\n> >>\n> >> If no permanent replication slot is configured using\n> >> primary_slot_name, the walreceiver now creates and uses a temporary\n> >> replication slot. A new setting wal_receiver_create_temp_slot can be\n> >> used to disable this behavior, for example, if the remote instance is\n> >> out of replication slots.\n> >>\n> >> Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n> >> Discussion:\n> >> https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com \n> > \n> > Neither the commit message for this patch nor any of the comments in\n> > the patch seem to explain why this is a desirable change.\n> > \n> > I assume that's probably discussed on the thread that is linked here,\n> > but you shouldn't have to dig through the discussion thread to figure\n> > out what the benefits of a change like this are. \n> \n> You are right, this has gotten a bit lost in the big thread.\n> \n> The rationale is basically the same as why client-side tools like \n> pg_basebackup use a temporary slot: So that the WAL data that they are \n> interested in doesn't disappear while they are connected.\n\nIn my humble opinion, I prefer the previous behavior, streaming without\ntemporary slot, for one reason: primary availability. \n\nShould the standby lag far behind the primary (no matter the root cause),\nthe standby was disconnected because of missing WAL. Worst case scenario, we\nmust rebuild it, hopefully from backups. Best case scenario, it fetches WALs\nfrom PITR backup. As soon as the later is possible in the stack, I consider slot\nlike a burden from the operability point of view. 
If standbys can not fetch\narchived WAL from PITR, then we can consider slots.\n\nWith temp slot created by default, if one standby lag far behind, it can make\nthe primary unavailable. We have nothing yet to forbid a slot to fill the\npg_wal partition. How new users creating their first cluster would react in such\nsituation? I suppose the original discussion was mostly targeting them?\nRecovering from this is way more scary than building a standby.\n\nSo the default behavior might not be desirable and maybe\nwal_receiver_create_temp_slot might be off by default?\n\nNote that Kyotaro HORIGUCHI is working on a patch to restricting maximum keep\nsegments by repslots:\n\nhttps://www.postgresql.org/message-id/flat/20190627162256.4f4872b8%40firost#6cba1177f766e7ffa5237789e748da38\n\nRegards,\n\n\n",
"msg_date": "Tue, 11 Feb 2020 23:53:26 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "On Mon, Feb 10, 2020 at 01:46:04PM -0800, Andres Freund wrote:\n> I still architecturally don't find it attractive that the active\n> configuration between walreceiver and startup process can diverge\n> though. Imagine if we e.g. added the ability to receive WAL over\n> multiple connections from one host, or from multiple hosts (e.g. to be\n> able to get the bulk of the WAL from a cascading node, but also to\n> provide syncrep acknowledgements directly to the primary), or to allow\n> for logical replication without needing all WAL locally on a standby\n> doing decoding. It seems not great if there's potentially diverging\n> configuration (hot standby feedback, temporary slots, ... ) between\n> those walreceivers, just depending on when they started. Here the model\n> e.g. parallel workers use, which explicitly ensure that the GUC state is\n> the same in workers and the leader, is considerably better, imo.\n\nYes, I still think that we should fix that inconsistency, mark the new\nGUC wal_receiver_create_temp_slot as PGC_POSTMASTER, and add a note at\nthe top of RequestXLogStreaming() and walreceiver.c about the\nassumptions we'd prefer rely to for the GUCs starting a WAL receiver.\n\n> So I think adding more of these parameters affecting walreceivers\n> without coordination is not going quite in the right direction.\n\nIndeed. Adding more comments would be one way to prevent the\nsituation to happen here, I fear that others may forget this stuff in\nthe future.\n--\nMichael",
"msg_date": "Wed, 12 Feb 2020 14:13:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "\n\nOn 2020/02/12 7:53, Jehan-Guillaume de Rorthais wrote:\n> Hello,\n> \n> On Mon, 10 Feb 2020 16:37:53 +0100\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n>> On 2020-01-23 21:49, Robert Haas wrote:\n>>> On Tue, Jan 14, 2020 at 8:57 AM Peter Eisentraut <peter@eisentraut.org>\n>>> wrote:\n>>>> walreceiver uses a temporary replication slot by default\n>>>>\n>>>> If no permanent replication slot is configured using\n>>>> primary_slot_name, the walreceiver now creates and uses a temporary\n>>>> replication slot. A new setting wal_receiver_create_temp_slot can be\n>>>> used to disable this behavior, for example, if the remote instance is\n>>>> out of replication slots.\n>>>>\n>>>> Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\n>>>> Discussion:\n>>>> https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com\n>>>\n>>> Neither the commit message for this patch nor any of the comments in\n>>> the patch seem to explain why this is a desirable change.\n>>>\n>>> I assume that's probably discussed on the thread that is linked here,\n>>> but you shouldn't have to dig through the discussion thread to figure\n>>> out what the benefits of a change like this are.\n>>\n>> You are right, this has gotten a bit lost in the big thread.\n>>\n>> The rationale is basically the same as why client-side tools like\n>> pg_basebackup use a temporary slot: So that the WAL data that they are\n>> interested in doesn't disappear while they are connected.\n> \n> In my humble opinion, I prefer the previous behavior, streaming without\n> temporary slot, for one reason: primary availability.\n\n+1\n \n> Should the standby lag far behind the primary (no matter the root cause),\n> the standby was disconnected because of missing WAL. Worst case scenario, we\n> must rebuild it, hopefully from backups. Best case scenario, it fetches WALs\n> from PITR backup. 
As soon as the latter is possible in the stack, I consider slots\n> like a burden from the operability point of view. If standbys can not fetch\n> archived WAL from PITR, then we can consider slots.\n> \n> With temp slot created by default, if one standby lag far behind, it can make\n> the primary unavailable. We have nothing yet to forbid a slot to fill the\n> pg_wal partition. How new users creating their first cluster would react in such\n> situation? I suppose the original discussion was mostly targeting them?\n> Recovering from this is way more scary than building a standby.\n> \n> So the default behavior might not be desirable and maybe\n> wal_receiver_create_temp_slot might be off by default?\n> \n> Note that Kyotaro HORIGUCHI is working on a patch to restricting maximum keep\n> segments by repslots:\n> \n> https://www.postgresql.org/message-id/flat/20190627162256.4f4872b8%40firost#6cba1177f766e7ffa5237789e748da38\n\nYeah, I think it's better to disable this option until something like\nHoriguchi-san's proposal has been committed, i.e., until\nthe upper limit on the number (or size) of WAL files that remain\nfor slots becomes configurable.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 12 Feb 2020 18:11:06 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "On Wed, Feb 12, 2020 at 06:11:06PM +0900, Fujii Masao wrote:\n> On 2020/02/12 7:53, Jehan-Guillaume de Rorthais wrote:\n>> In my humble opinion, I prefer the previous behavior, streaming without\n>> temporary slot, for one reason: primary availability.\n> \n> +1\n>\n>> With temp slot created by default, if one standby lag far behind, it can make\n>> the primary unavailable. We have nothing yet to forbid a slot to fill the\n>> pg_wal partition. How new users creating their first cluster would react in such\n>> situation? I suppose the original discussion was mostly targeting them?\n>> Recovering from this is way more scary than building a standby.\n>> \n>> So the default behavior might not be desirable and maybe\n>> wal_receiver_create_temp_slot might be off by default?\n>> \n>> Note that Kyotaro HORIGUCHI is working on a patch to restricting maximum keep\n>> segments by repslots:\n>> \n>> https://www.postgresql.org/message-id/flat/20190627162256.4f4872b8%40firost#6cba1177f766e7ffa5237789e748da38\n> \n> Yeah, I think it's better to disable this option until something like\n> Horiguchi-san's proposal will have been committed, i.e., until\n> the upper limit on the number (or size) of WAL files that remain\n> for slots become configurable.\n\nEven with that, are we sure this extra feature would be a reason\nsufficient to change the default value of this option to be enabled?\nI am not sure about that either. My opinion is that this option is\nuseful to have and that it is not really a problem if you have slot\nmonitoring on the primary (or a standby for cascading). And I'd like\nto believe that it is a common practice lately for base backups,\narchivers based on pg_receivewal or even logical decoding, but it\ncould be surprising for some users who do not do that yet. So\nJehan-Guillaume's arguments sound also sensible to me (he also\nmaintains an automatic failover solution called PAF). 
\n\nFrom what I can see nobody really likes the current state of things\nfor this option, and that does not come down only to its default\nvalue. The default GUC value and the way the parameter is loaded by\nthe WAL sender are problematic, still easy enough to fix. How do we\nmove on from here? I could post a patch based on what Sergei Kornilov\nhas sent around [1], but that's Peter's feature. Any opinions?\n\n[1]: https://www.postgresql.org/message-id/20200122055510.GH174860@paquier.xyz\n--\nMichael",
"msg_date": "Thu, 13 Feb 2020 16:48:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
    "msg_contents": "At Thu, 13 Feb 2020 16:48:21 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Feb 12, 2020 at 06:11:06PM +0900, Fujii Masao wrote:\n> > On 2020/02/12 7:53, Jehan-Guillaume de Rorthais wrote:\n> >> In my humble opinion, I prefer the previous behavior, streaming without\n> >> temporary slot, for one reason: primary availability.\n> > \n> > +1\n> >\n> >> With temp slot created by default, if one standby lag far behind, it can make\n> >> the primary unavailable. We have nothing yet to forbid a slot to fill the\n> >> pg_wal partition. How new users creating their first cluster would react in such\n> >> situation? I suppose the original discussion was mostly targeting them?\n> >> Recovering from this is way more scary than building a standby.\n> >> \n> >> So the default behavior might not be desirable and maybe\n> >> wal_receiver_create_temp_slot might be off by default?\n> >> \n> >> Note that Kyotaro HORIGUCHI is working on a patch to restricting maximum keep\n> >> segments by repslots:\n> >> \n> >> https://www.postgresql.org/message-id/flat/20190627162256.4f4872b8%40firost#6cba1177f766e7ffa5237789e748da38\n> > \n> > Yeah, I think it's better to disable this option until something like\n> > Horiguchi-san's proposal will have been committed, i.e., until\n> > the upper limit on the number (or size) of WAL files that remain\n> > for slots become configurable.\n> \n> Even with that, are we sure this extra feature would be a reason\n> sufficient to change the default value of this option to be enabled?\n\nI think the feature (slot limit) is not going to be a reason to\nenable it (tmp slot). In the first place, I think we cannot determine\na generally workable default value.\n\n> I am not sure about that either. My opinion is that this option is\n> useful to have and that it is not really a problem if you have slot\n> monitoring on the primary (or a standby for cascading). 
And I'd like\n> to believe that it is a common practice lately for base backups,\n> archivers based on pg_receivewal or even logical decoding, but it\n> could be surprising for some users who do not do that yet. So\n> Jehan-Guillaume's arguments sound also sensible to me (he also\n> maintains an automatic failover solution called PAF). \n> \n> From what I can see nobody really likes the current state of things\n> for this option, and that does not come down only to its default\n> value. The default GUC value and the way the parameter is loaded by\n> the WAL sender are problematic, still easy enough to fix. How do we\n> move on from here? I could post a patch based on what Sergei Kornilov\n> has sent around [1], but that's Peter's feature. Any opinions?\n> \n> [1]: https://www.postgresql.org/message-id/20200122055510.GH174860@paquier.xyz\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 14 Feb 2020 17:29:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 06:58:46PM +0300, Sergei Kornilov wrote:\n> I would like to cross-post here a patch with such changes that I posted in \"allow online change primary_conninfo\" thread.\n> This thread is more appropriate for discussion about wal_receiver_create_temp_slot.\n> \n> PS: I posted this patch in both threads mostly to make cfbot happy.\n\nThanks for posting this patch, Sergei. Here is a review to make\nthings move on.\n\n- * Create temporary replication slot if no slot name is configured or\n- * the slot from the previous run was temporary, unless\n- * wal_receiver_create_temp_slot is disabled. We also need to handle\n- * the case where the previous run used a temporary slot but\n- * wal_receiver_create_temp_slot was changed in the meantime. In that\n- * case, we delete the old slot name in shared memory. (This would\n+ * Create temporary replication slot if requested. In that\n+ * case, we update slot name in shared memory. (This would\n\nThe set of comments you are removing from walreceiver.c to decide if a\ntemporary slot needs to be created or not should be moved to\nwalreceiverfuncs.c as you move the logic from the WAL receiver startup\nphase to the moment the WAL receiver spawn is requested.\n\nI agree with the simplifications in WalReceiverMain() as you have\nswitched wal_receiver_create_temp_slot to be PGC_POSTMASTER, so\nmodifications are no longer a state that matter.\n\nIt would be more consistent with primary_conn_info and\nprimary_slot_name if wal_receiver_create_temp_slot is passed down as\nan argument of RequestXLogStreaming().\n\nAs per the discussion done on this thread, let's also switch the\nparameter default to be disabled. Peter, as the committer of 3297308,\nit would be good if you could chime in.\n--\nMichael",
"msg_date": "Sun, 16 Feb 2020 15:03:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
    "msg_contents": "Hello\n\n> Thanks for posting this patch, Sergei. Here is a review to make\n> things move on.\n\nThank you, here is an updated patch.\n\n> The set of comments you are removing from walreceiver.c to decide if a\n> temporary slot needs to be created or not should be moved to\n> walreceiverfuncs.c as you move the logic from the WAL receiver startup\n> phase to the moment the WAL receiver spawn is requested.\n\nI changed these comments because they describe the behavior when the value of wal_receiver_create_temp_slot changes.\nBut yes, I need to add some comments to RequestXLogStreaming.\n\n> It would be more consistent with primary_conn_info and\n> primary_slot_name if wal_receiver_create_temp_slot is passed down as\n> an argument of RequestXLogStreaming().\n\nYep, I thought about that. Changed.\n\n> As per the discussion done on this thread, let's also switch the\n> parameter default to be disabled.\n\nDone (my vote is also for disabling this option by default).\n\nregards, Sergei",
"msg_date": "Mon, 17 Feb 2020 16:57:04 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
    "msg_contents": "On Mon, Feb 17, 2020 at 04:57:04PM +0300, Sergei Kornilov wrote:\n> Thank you, here is updated patch\n\nThanks\n\n> I changed this comments because they describes behavior during\n> change value of wal_receiver_create_temp_slot. But yes, I need to\n> add some comments to RequestXLogStreaming.\n\nI have reworked that part, adding more comments about the use of GUC\nparameters when establishing the connection to the primary for a WAL\nreceiver. And also I have added an extra comment to walreceiver.c\nabout the use of GUCs in general, to avoid this stuff again in the\nfuture. There were some extra nits with the format of\npostgresql.conf.sample.\n\n>> As per the discussion done on this thread, let's also switch the\n>> parameter default to be disabled.\n> \n> Done (my vote is also for disabling this option by default).\n\nWe visibly tend to move in this direction, at least based on our\ndiscussion. Let's see where this leads. For now, I have registered\nthis patch to next CF (https://commitfest.postgresql.org/27/2456/),\nwith yourself as author and myself as reviewer, and then let's wait\nfor mainly Peter E. and others for more input. \n--\nMichael",
"msg_date": "Tue, 18 Feb 2020 11:43:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "Hello\n\n> I have reworked that part, adding more comments about the use of GUC\n> parameters when establishing the connection to the primary for a WAL\n> receiver. And also I have added an extra comment to walreceiver.c\n> about the use of GUcs in general, to avoid this stuff again in the\n> future. There were some extra nits with the format of\n> postgresql.conf.sample.\n\nThank you! I just noticed that you removed my proposed change to this condition in RequestXLogStreaming\n\n-\tif (slotname != NULL)\n+\tif (slotname != NULL && slotname[0] != '\\0')\n\nWe need this change to set is_temp_slot properly. PrimarySlotName GUC can usually be an empty string, so just \"slotname != NULL\" is not enough.\n\nI attached patch with this change.\n\nregards, Sergei",
"msg_date": "Tue, 17 Mar 2020 23:39:11 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 11:39:11PM +0300, Sergei Kornilov wrote:\n> We need this change to set is_temp_slot properly. PrimarySlotName\n> GUC can usually be an empty string, so just \"slotname != NULL\" is\n> not enough.\n\nYep, or a temporary slot would never be created even if there is no\nslot defined, and the priority goes to primary_slot_name if set.\n\n> I attached patch with this change.\n\nThanks, I have added a new open item for v13 to track this effort:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n--\nMichael",
"msg_date": "Thu, 19 Mar 2020 11:26:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: walreceiver uses a temporary replication slot by default"
}
] |
[
{
    "msg_contents": "Hi hackers,\n\nRight now changing policies (create/alter policy statements) requires an\nexclusive lock on the target table:\n\n /* Get id of table. Also handles permissions checks. */\n table_id = RangeVarGetRelidExtended(stmt->table, AccessExclusiveLock,\n 0,\n RangeVarCallbackForPolicy,\n (void *) stmt);\n\nUnfortunately there are use cases where policies are changed quite \nfrequently and this exclusive lock becomes a bottleneck.\nI wonder why we really need an exclusive lock here?\nPolicies are stored in the pg_policy table and we get a RowExclusiveLock on it.\n\nMaybe I missed something, but why can we not rely on standard MVCC\nvisibility rules for the pg_policy table?\nUntil the transaction executing CREATE/ALTER POLICY is committed, other \ntransactions will not see its changes in pg_policy table and perform\nRLS checks according to the old policies. Once the transaction is committed, \neverybody will switch to the new policies.\n\nI wonder if it is possible to replace AccessExclusiveLock with\nAccessShareLock in RangeVarGetRelidExtended in CreatePolicy and \nAlterPolicy?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:18:31 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Create/alter policy and exclusive table lock"
},
{
"msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> Right now changing policies (create/alter policy statements) requires \n> exclusive lock of target table:\n\nYup.\n\n> I wonder why do we really need exclusive lock here?\n\nBecause it affects the behavior of a SELECT.\n\n> May be I missed something, but why we can not rely on standard MVCC \n> visibility rules for pg_policy table?\n\nWe cannot have a situation where the schema details of a table might\nchange midway through planning/execution of a statement. The results\nare unlikely to be as clean as \"you get either the old behavior or the\nnew one\", because that sequence might examine the details more than\nonce. Also, even if you cleanly get the old behavior, that's hardly\nsatisfactory. Consider\n\nSession 1 Session 2\n\nbegin;\nalter policy ... on t1 ...;\ninsert new data into t1;\n\n begin planning SELECT on t1;\n\ncommit;\n\n begin executing SELECT on t1;\n\nWith your proposal, session 2 would see the new data in t1\n(because the executor takes a fresh snapshot) but it would not\nbe affected by the new policy. That's a security failure,\nand it's one that does not happen today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 09:40:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create/alter policy and exclusive table lock"
},
{
    "msg_contents": "\n\nOn 14.01.2020 17:40, Tom Lane wrote:\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n>> Right now changing policies (create/alter policy statements) requires\n>> exclusive lock of target table:\n> Yup.\n>\n>> I wonder why do we really need exclusive lock here?\n> Because it affects the behavior of a SELECT.\n>\n>> May be I missed something, but why we can not rely on standard MVCC\n>> visibility rules for pg_policy table?\n> We cannot have a situation where the schema details of a table might\n> change midway through planning/execution of a statement. The results\n> are unlikely to be as clean as \"you get either the old behavior or the\n> new one\", because that sequence might examine the details more than\n> once. Also, even if you cleanly get the old behavior, that's hardly\n> satisfactory. Consider\n>\n> Session 1 Session 2\n>\n> begin;\n> alter policy ... on t1 ...;\n> insert new data into t1;\n>\n> begin planning SELECT on t1;\n>\n> commit;\n>\n> begin executing SELECT on t1;\n>\n> With your proposal, session 2 would see the new data in t1\n> (because the executor takes a fresh snapshot) but it would not\n> be affected by the new policy. That's a security failure,\n> and it's one that does not happen today.\n\nThank you for the explanation.\nBut let me ask you one more question: why do we obtain the snapshot twice\nin exec_simple_query:\nonce for analysis (pg_analyze_and_rewrite) and once for execution\n(PortalStart)?\nGetSnapshotData is quite an expensive operation, and the fact that we are\ncalling it twice for each query execution (with read committed isolation\nlevel)\nseems strange. Also, the problem in the scenario you have described\nabove is caused by the planner and executor using different snapshots. If\nthey share the same snapshot,\nthen there should be no security violation, right?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 14 Jan 2020 18:49:21 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Create/alter policy and exclusive table lock"
},
{
"msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> But let me ask you one more question: why do we obtaining snapshot twice \n> in exec_simple_query:\n> first for analyze (pg_analyze_and_rewrite) and one for execution \n> (PortalStart)?\n\nThat would happen anyway if the plan is cached. If we were to throw away\nall plan caching and swear a mighty oath that we'll never put it back,\nmaybe we could build in a design assumption that planning and execution\nuse identical snapshots. I doubt that would lead to a net win though.\n\nAlso note that our whole approach to cache invalidation is based on the\nassumption that if session A needs to see the effects of session B,\nthey will be taking conflicting locks. Otherwise sinval signaling\nis not guaranteed to be detected at the necessary times.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 12:40:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create/alter policy and exclusive table lock"
}
] |
[
{
"msg_contents": "Folks,\n\nThe recent patch for distinct windowing aggregates contained a partial\nfix of the FIXME that didn't seem entirely right, so I extracted that\npart, changed it to use compiler intrinsics, and submit it here.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 14 Jan 2020 18:35:53 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "Hi David,\n\nOn Tue, Jan 14, 2020 at 9:36 AM David Fetter <david@fetter.org> wrote:\n>\n> Folks,\n>\n> The recent patch for distinct windowing aggregates contained a partial\n> fix of the FIXME that didn't seem entirely right, so I extracted that\n> part, changed it to use compiler intrinsics, and submit it here.\n\nThe changes in hash AM and SIMPLEHASH do look like a net positive\nimprovement. My biggest cringe might be in pg_bitutils:\n\n> diff --git a/src/include/port/pg_bitutils.h b/src/include/port/pg_bitutils.h\n> index 498e532308..cc9338da25 100644\n> --- a/src/include/port/pg_bitutils.h\n> +++ b/src/include/port/pg_bitutils.h\n> @@ -145,4 +145,32 @@ pg_rotate_right32(uint32 word, int n)\n> return (word >> n) | (word << (sizeof(word) * BITS_PER_BYTE - n));\n> }\n>\n> +/* ceil(lg2(num)) */\n> +static inline uint32\n> +ceil_log2_32(uint32 num)\n> +{\n> + return pg_leftmost_one_pos32(num-1) + 1;\n> +}\n> +\n> +static inline uint64\n> +ceil_log2_64(uint64 num)\n> +{\n> + return pg_leftmost_one_pos64(num-1) + 1;\n> +}\n> +\n> +/* calculate first power of 2 >= num\n> + * per https://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2\n> + * using BSR where available */\n> +static inline uint32\n> +next_power_of_2_32(uint32 num)\n> +{\n> + return ((uint32) 1) << (pg_leftmost_one_pos32(num-1) + 1);\n> +}\n> +\n> +static inline uint64\n> +next_power_of_2_64(uint64 num)\n> +{\n> + return ((uint64) 1) << (pg_leftmost_one_pos64(num-1) + 1);\n> +}\n> +\n> #endif /* PG_BITUTILS_H */\n>\n\n1. Is ceil_log2_64 dead code?\n\n2. The new utilities added here (ceil_log2_32 and company,\nnext_power_of_2_32 and company) all require num > 1, but don't clearly\nAssert (or at the very least document) so.\n\n3. A couple of the callers can actively pass in an argument of 1, e.g.\nfrom _hash_spareindex in hashutil.c, while some other callers are iffy\nat best (simplehash.h maybe?)\n\n4. 
It seems like you *really* would like an operation like LZCNT in x86\n(first appearing in Haswell) that is well defined on zero input. ISTM\nthe alternatives are:\n\n a) Special case 1. That seems straightforward, but the branching cost\n on a seemingly unlikely condition seems to be a lot of performance\n loss\n\n b) Use architecture specific intrinsic (and possibly with CPUID\n shenanigans) like __builtin_ia32_lzcnt_u64 on x86 and use the CLZ\n intrinsic elsewhere. The CLZ GCC intrinsic seems to map to\n instructions that are well defined on zero in most ISA's other than\n x86, so maybe we can get away with special-casing x86?\n\nCheers,\nJesse\n\n\n",
"msg_date": "Tue, 14 Jan 2020 12:21:41 -0800",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
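The num > 1 restriction flagged in points 2 and 3 of the review above can be made concrete with a short, self-contained C sketch. This is an approximation, not the patch itself: leftmost_one_pos32 stands in for pg_leftmost_one_pos32 from pg_bitutils.h, ceil_log2_32_guarded is a hypothetical illustration of option (a), and __builtin_clz assumes a GCC/Clang-style compiler.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Stand-in for pg_leftmost_one_pos32() from pg_bitutils.h.  As with the
 * real function, the result is undefined for word == 0, because
 * __builtin_clz(0) is undefined.
 */
static inline int
leftmost_one_pos32(uint32_t word)
{
	assert(word != 0);
	return 31 - __builtin_clz(word);
}

/* ceil(log2(num)) as in the proposed patch: only valid for num > 1. */
static inline uint32_t
ceil_log2_32(uint32_t num)
{
	assert(num > 1);			/* num - 1 == 0 would be undefined */
	return (uint32_t) (leftmost_one_pos32(num - 1) + 1);
}

/* First power of 2 >= num, with the same num > 1 restriction. */
static inline uint32_t
next_power_of_2_32(uint32_t num)
{
	assert(num > 1);
	return ((uint32_t) 1) << (leftmost_one_pos32(num - 1) + 1);
}

/*
 * Hypothetical guarded variant, i.e. option (a): special-case small
 * inputs so that callers such as _hash_spareindex can pass 1 safely.
 */
static inline uint32_t
ceil_log2_32_guarded(uint32_t num)
{
	return (num <= 1) ? 0 : ceil_log2_32(num);
}
```

The branch in the guarded variant is exactly the cost that option (b), an instruction well defined on zero input such as LZCNT, would avoid.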
{
"msg_contents": "On Tue, Jan 14, 2020 at 12:21:41PM -0800, Jesse Zhang wrote:\n> Hi David,\n> \n> On Tue, Jan 14, 2020 at 9:36 AM David Fetter <david@fetter.org> wrote:\n> >\n> > Folks,\n> >\n> > The recent patch for distinct windowing aggregates contained a partial\n> > fix of the FIXME that didn't seem entirely right, so I extracted that\n> > part, changed it to use compiler intrinsics, and submit it here.\n> \n> The changes in hash AM and SIMPLEHASH do look like a net positive\n> improvement. My biggest cringe might be in pg_bitutils:\n\nThanks for looking at this!\n\n> > diff --git a/src/include/port/pg_bitutils.h b/src/include/port/pg_bitutils.h\n> > index 498e532308..cc9338da25 100644\n> > --- a/src/include/port/pg_bitutils.h\n> > +++ b/src/include/port/pg_bitutils.h\n> > @@ -145,4 +145,32 @@ pg_rotate_right32(uint32 word, int n)\n> > return (word >> n) | (word << (sizeof(word) * BITS_PER_BYTE - n));\n> > }\n> >\n> > +/* ceil(lg2(num)) */\n> > +static inline uint32\n> > +ceil_log2_32(uint32 num)\n> > +{\n> > + return pg_leftmost_one_pos32(num-1) + 1;\n> > +}\n> > +\n> > +static inline uint64\n> > +ceil_log2_64(uint64 num)\n> > +{\n> > + return pg_leftmost_one_pos64(num-1) + 1;\n> > +}\n> > +\n> > +/* calculate first power of 2 >= num\n> > + * per https://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2\n> > + * using BSR where available */\n> > +static inline uint32\n> > +next_power_of_2_32(uint32 num)\n> > +{\n> > + return ((uint32) 1) << (pg_leftmost_one_pos32(num-1) + 1);\n> > +}\n> > +\n> > +static inline uint64\n> > +next_power_of_2_64(uint64 num)\n> > +{\n> > + return ((uint64) 1) << (pg_leftmost_one_pos64(num-1) + 1);\n> > +}\n> > +\n> > #endif /* PG_BITUTILS_H */\n> >\n> \n> 1. Is ceil_log2_64 dead code?\n\nLet's call it nascent code. I suspect there are places it could go, if\nI look for them. Also, it seemed silly to have one without the other.\n\n> 2. 
The new utilities added here (ceil_log2_32 and company,\n> next_power_of_2_32 and company) all require num > 1, but don't clearly\n> Assert (or at the very least document) so.\n\nAssert()ed.\n\n> 3. A couple of the callers can actively pass in an argument of 1, e.g.\n> from _hash_spareindex in hashutil.c, while some other callers are iffy\n> at best (simplehash.h maybe?)\n\nWhat would you recommend be done about this?\n\n> 4. It seems like you *really* would like an operation like LZCNT in x86\n> (first appearing in Haswell) that is well defined on zero input. ISTM\n> the alternatives are:\n> \n> a) Special case 1. That seems straightforward, but the branching cost\n> on a seemingly unlikely condition seems to be a lot of performance\n> loss\n> \n> b) Use architecture specific intrinsic (and possibly with CPUID\n> shenanigans) like __builtin_ia32_lzcnt_u64 on x86 and use the CLZ\n> intrinsic elsewhere. The CLZ GCC intrinsic seems to map to\n> instructions that are well defined on zero in most ISA's other than\n> x86, so maybe we can get away with special-casing x86?\n\nb) seems much more attractive. Is there some way to tilt the tools so\nthat this happens? What should I be reading up on?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 14 Jan 2020 23:09:18 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 2:09 PM David Fetter <david@fetter.org> wrote:\n> > The changes in hash AM and SIMPLEHASH do look like a net positive\n> > improvement. My biggest cringe might be in pg_bitutils:\n> >\n> > 1. Is ceil_log2_64 dead code?\n>\n> Let's call it nascent code. I suspect there are places it could go, if\n> I look for them. Also, it seemed silly to have one without the other.\n>\n\nWhile not absolutely required, I'd like us to find at least one\nplace and start using it. (Clang also nags at me when we have\nunused functions).\n\n> On Tue, Jan 14, 2020 at 12:21:41PM -0800, Jesse Zhang wrote:\n> > 4. It seems like you *really* would like an operation like LZCNT in x86\n> > (first appearing in Haswell) that is well defined on zero input. ISTM\n> > the alternatives are:\n> >\n> > a) Special case 1. That seems straightforward, but the branching cost\n> > on a seemingly unlikely condition seems to be a lot of performance\n> > loss\n> >\n> > b) Use architecture specific intrinsic (and possibly with CPUID\n> > shenanigans) like __builtin_ia32_lzcnt_u64 on x86 and use the CLZ\n> > intrinsic elsewhere. The CLZ GCC intrinsic seems to map to\n> > instructions that are well defined on zero in most ISA's other than\n> > x86, so maybe we can get away with special-casing x86?\n\ni. We can detect LZCNT instruction by checking one of the\n\"extended feature\" (EAX=80000001) bits using CPUID. Unlike the\n\"basic features\" (EAX=1), extended feature flags have been more\nvendor-specific, but fortunately it seems that the feature bit\nfor LZCNT is the same [1][2].\n\nii. We'll most likely still need to provide a fallback\nimplementation for processors that don't have LZCNT (either\nbecause they are from a different vendor, or an older Intel/AMD\nprocessor). I wonder if simply checking for 1 is \"good enough\".\nMaybe a micro benchmark is in order?\n\n> Is there some way to tilt the tools so that this happens?\nWe have a couple options here:\n\n1. 
Use a separate object (a la our SSE 4.2 implementation of\nCRC). On Clang and GCC (I don't have MSVC at hand), -mabm or\n-mlzcnt should cause __builtin_clz to generate the LZCNT\ninstruction, which is well defined on zero input. The default\nconfiguration would translate __builtin_clz to code that\nsubtracts BSR from the width of the input, but BSR leaves the\ndestination undefined on zero input.\n\n2. (My least favorite) use inline asm (a la our popcount\nimplementation).\n\n> b) seems much more attractive. Is there some way to tilt the tools so\n> that this happens? What should I be reading up on?\n\nThe enclosed references hopefully are good places to start. Let\nme know if you have more ideas.\n\nCheers,\nJesse\n\n\nReferences:\n\n[1] \"How to detect New Instruction support in the 4th generation Intel®\nCore™ processor family\"\nhttps://software.intel.com/en-us/articles/how-to-detect-new-instruction-support-in-the-4th-generation-intel-core-processor-family\n[2] \"Bit Manipulation Instruction Sets\"\nhttps://en.wikipedia.org/wiki/Bit_Manipulation_Instruction_Sets\n\n\n",
"msg_date": "Wed, 15 Jan 2020 15:45:12 -0800",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
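The runtime detection sketched in point i. of the message above could look like the following C fragment. It is a sketch under stated assumptions: it relies on GCC/Clang's <cpuid.h>, and on both vendors reporting the capability in bit 5 of ECX for extended leaf 0x80000001 (Intel calls the instruction LZCNT, AMD groups it under ABM) as the cited references describe; have_lzcnt is a hypothetical helper name.

```c
#include <stdbool.h>

#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
#include <cpuid.h>
#endif

/*
 * Return true when LZCNT (ABM on AMD) is available.  Both vendors are
 * assumed to use bit 5 of ECX in CPUID extended leaf 0x80000001, so a
 * single check covers them.
 */
static bool
have_lzcnt(void)
{
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
	unsigned int eax,
				ebx,
				ecx,
				edx;

	/* __get_cpuid() returns 0 if the requested leaf is unsupported. */
	if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
		return (ecx & (1U << 5)) != 0;
#endif
	return false;				/* non-x86, or leaf not supported */
}
```

A build could then dispatch at startup between an LZCNT-based ceil_log2 and a branching fallback, in the spirit of the existing runtime CRC32C selection.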
{
"msg_contents": "On Wed, Jan 15, 2020 at 6:09 AM David Fetter <david@fetter.org> wrote:\n> [v2 patch]\n\nHi David,\n\nI have a stylistic comment on this snippet:\n\n- for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)\n- {\n- if ((1 << i) <= metap->hashm_bsize)\n- break;\n- }\n+ i = pg_leftmost_one_pos32(metap->hashm_bsize);\n Assert(i > 0);\n metap->hashm_bmsize = 1 << i;\n metap->hashm_bmshift = i + BYTE_TO_BIT;\n\nNaming the variable \"i\" made sense when it was a loop counter, but it\nseems out of place now. Same with the Assert.\n\nAlso, this\n\n+ * using BSR where available */\n\nis not directly tied to anything in this function, or even in the\nfunction it calls, and could get out of date easily.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 18 Jan 2020 11:46:24 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 11:46:24AM +0800, John Naylor wrote:\n> On Wed, Jan 15, 2020 at 6:09 AM David Fetter <david@fetter.org> wrote:\n> > [v2 patch]\n> \n> Hi David,\n> \n> I have a stylistic comment on this snippet:\n> \n> - for (i = _hash_log2(metap->hashm_bsize); i > 0; --i)\n> - {\n> - if ((1 << i) <= metap->hashm_bsize)\n> - break;\n> - }\n> + i = pg_leftmost_one_pos32(metap->hashm_bsize);\n> Assert(i > 0);\n> metap->hashm_bmsize = 1 << i;\n> metap->hashm_bmshift = i + BYTE_TO_BIT;\n> \n> Naming the variable \"i\" made sense when it was a loop counter, but it\n> seems out of place now. Same with the Assert.\n\nFixed by removing the variable entirely.\n\n> Also, this\n> \n> + * using BSR where available */\n> \n> is not directly tied to anything in this function, or even in the\n> function it calls, and could get out of date easily.\n\nRemoved.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 19 Jan 2020 01:00:52 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
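The snippet discussed in the two messages above replaces a descending-shift loop with pg_leftmost_one_pos32. That the two compute the same exponent can be checked with a small C sketch (floor_log2_loop and floor_log2_clz are hypothetical names; the loop mirrors the shape of the removed code rather than copying it, and __builtin_clz assumes a GCC/Clang-style compiler):

```c
#include <assert.h>
#include <stdint.h>

/* Largest i with (1 << i) <= bsize, in the style of the removed loop. */
static int
floor_log2_loop(uint32_t bsize)
{
	int			i;

	for (i = 31; i > 0; --i)
	{
		if (((uint32_t) 1 << i) <= bsize)
			break;
	}
	return i;
}

/* The replacement: equivalent to pg_leftmost_one_pos32(bsize). */
static int
floor_log2_clz(uint32_t bsize)
{
	assert(bsize != 0);
	return 31 - __builtin_clz(bsize);
}
```

For a positive quantity like hashm_bsize both functions return the same exponent, which is why dropping the loop (and with it the loop counter i) preserves behavior.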
{
"msg_contents": "On Wed, Jan 15, 2020 at 03:45:12PM -0800, Jesse Zhang wrote:\n> On Tue, Jan 14, 2020 at 2:09 PM David Fetter <david@fetter.org> wrote:\n> > > The changes in hash AM and SIMPLEHASH do look like a net positive\n> > > improvement. My biggest cringe might be in pg_bitutils:\n> > >\n> > > 1. Is ceil_log2_64 dead code?\n> >\n> > Let's call it nascent code. I suspect there are places it could go, if\n> > I look for them. Also, it seemed silly to have one without the other.\n> >\n> \n> While not absolutely required, I'd like us to find at least one\n> place and start using it. (Clang also nags at me when we have\n> unused functions).\n\nDone in the expanded patches attached.\n\n> > On Tue, Jan 14, 2020 at 12:21:41PM -0800, Jesse Zhang wrote:\n> > > 4. It seems like you *really* would like an operation like LZCNT in x86\n> > > (first appearing in Haswell) that is well defined on zero input. ISTM\n> > > the alternatives are:\n> > >\n> > > a) Special case 1. That seems straightforward, but the branching cost\n> > > on a seemingly unlikely condition seems to be a lot of performance\n> > > loss\n> > >\n> > > b) Use architecture specific intrinsic (and possibly with CPUID\n> > > shenanigans) like __builtin_ia32_lzcnt_u64 on x86 and use the CLZ\n> > > intrinsic elsewhere. The CLZ GCC intrinsic seems to map to\n> > > instructions that are well defined on zero in most ISA's other than\n> > > x86, so maybe we can get away with special-casing x86?\n> \n> i. We can detect LZCNT instruction by checking one of the\n> \"extended feature\" (EAX=80000001) bits using CPUID. Unlike the\n> \"basic features\" (EAX=1), extended feature flags have been more\n> vendor-specific, but fortunately it seems that the feature bit\n> for LZCNT is the same [1][2].\n> \n> ii. We'll most likely still need to provide a fallback\n> implementation for processors that don't have LZCNT (either\n> because they are from a different vendor, or an older Intel/AMD\n> processor). I wonder if simply checking for 1 is \"good enough\".\n> Maybe a micro benchmark is in order?\n\nI'm not sure how I'd run one on the architectures we support. What\nI've done here is generalize our implementation to be basically like\nLZCNT and TZCNT at the cost of a brief branch that might go away at\nruntime.\n\n> 2. (My least favorite) use inline asm (a la our popcount\n> implementation).\n\nYeah, I'd like to fix that, but I kept the scope of this one\nrelatively narrow.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 31 Jan 2020 16:59:18 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
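The zero-input hazard debated in the message above — GCC's `__builtin_clz` is undefined for a zero argument, while x86's LZCNT returns the operand width — can be sketched as the kind of guarded fallback David describes as "a brief branch that might go away at runtime". This is a minimal illustration under that assumption, not PostgreSQL's actual `pg_bitutils.h` code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Leading-zero count for a 32-bit word, defined for all inputs.
 * __builtin_clz(0) is undefined behavior in GCC/Clang, so zero is
 * special-cased with a branch; on a predictable input distribution
 * the branch predictor makes this nearly free.
 */
static inline int
clz32_safe(uint32_t word)
{
	if (word == 0)
		return 32;				/* matches LZCNT's result on zero */
	return __builtin_clz(word);
}
```

A compiler that can prove `word != 0` at a call site (e.g. inside `if (word)`) will typically delete the branch entirely after inlining, which is the argument made later in the thread for relying on the compiler rather than runtime dispatch.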
{
"msg_contents": "On Fri, Jan 31, 2020 at 04:59:18PM +0100, David Fetter wrote:\n> On Wed, Jan 15, 2020 at 03:45:12PM -0800, Jesse Zhang wrote:\n> > On Tue, Jan 14, 2020 at 2:09 PM David Fetter <david@fetter.org> wrote:\n> > > > The changes in hash AM and SIMPLEHASH do look like a net positive\n> > > > improvement. My biggest cringe might be in pg_bitutils:\n> > > >\n> > > > 1. Is ceil_log2_64 dead code?\n> > >\n> > > Let's call it nascent code. I suspect there are places it could go, if\n> > > I look for them. Also, it seemed silly to have one without the other.\n> > >\n> > \n> > While not absolutely required, I'd like us to find at least one\n> > place and start using it. (Clang also nags at me when we have\n> > unused functions).\n> \n> Done in the expanded patches attached.\n\nThese bit-rotted a little, so I've updated them.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 26 Feb 2020 09:12:24 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Wed, Feb 26, 2020 at 09:12:24AM +0100, David Fetter wrote:\n> On Fri, Jan 31, 2020 at 04:59:18PM +0100, David Fetter wrote:\n> > On Wed, Jan 15, 2020 at 03:45:12PM -0800, Jesse Zhang wrote:\n> > > On Tue, Jan 14, 2020 at 2:09 PM David Fetter <david@fetter.org> wrote:\n> > > > > The changes in hash AM and SIMPLEHASH do look like a net positive\n> > > > > improvement. My biggest cringe might be in pg_bitutils:\n> > > > >\n> > > > > 1. Is ceil_log2_64 dead code?\n> > > >\n> > > > Let's call it nascent code. I suspect there are places it could go, if\n> > > > I look for them. Also, it seemed silly to have one without the other.\n> > > >\n> > > \n> > > While not absolutely required, I'd like us to find at least one\n> > > place and start using it. (Clang also nags at me when we have\n> > > unused functions).\n> > \n> > Done in the expanded patches attached.\n> \n> These bit-rotted a little, so I've updated them.\n\n05d8449e73694585b59f8b03aaa087f04cc4679a broke this patch set, so fix.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Thu, 27 Feb 2020 06:56:40 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 1:56 PM David Fetter <david@fetter.org> wrote:\n> [v6 set]\n\nHi David,\n\nIn 0002, the pg_bitutils functions have a test (input > 0), and the\nnew callers ceil_log2_* and next_power_of_2_* have asserts. That seems\nbackward to me. I imagine some callers of bitutils will already know\nthe value > 0, and it's probably good to keep that branch out of the\nlowest level functions. What do you think?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Feb 2020 14:41:49 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 02:41:49PM +0800, John Naylor wrote:\n> On Thu, Feb 27, 2020 at 1:56 PM David Fetter <david@fetter.org> wrote:\n> > [v6 set]\n> \n> Hi David,\n> \n> In 0002, the pg_bitutils functions have a test (input > 0), and the\n> new callers ceil_log2_* and next_power_of_2_* have asserts. That seems\n> backward to me.\n\nTo me, too, now that you mention it. My thinking was a little fuzzed\nby trying to accommodate platforms with intrinsics where clz is\ndefined for 0 inputs.\n\n> I imagine some callers of bitutils will already know the value > 0,\n> and it's probably good to keep that branch out of the lowest level\n> functions. What do you think?\n\nI don't know quite how smart compilers and CPUs are these days, so\nit's unclear to me how often that branch would actually happen.\n\nAnyhow, I'll get a revised patch set out later today.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Fri, 28 Feb 2020 16:13:00 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "Hi David,\n\nOn Wed, Feb 26, 2020 at 9:56 PM David Fetter <david@fetter.org> wrote:\n>\n> On Wed, Feb 26, 2020 at 09:12:24AM +0100, David Fetter wrote:\n> > On Fri, Jan 31, 2020 at 04:59:18PM +0100, David Fetter wrote:\n> > > On Wed, Jan 15, 2020 at 03:45:12PM -0800, Jesse Zhang wrote:\n> > > > On Tue, Jan 14, 2020 at 2:09 PM David Fetter <david@fetter.org> wrote:\n> > > > > > The changes in hash AM and SIMPLEHASH do look like a net positive\n> > > > > > improvement. My biggest cringe might be in pg_bitutils:\n> > > > > >\n> > > > > > 1. Is ceil_log2_64 dead code?\n> > > > >\n> > > > > Let's call it nascent code. I suspect there are places it could go, if\n> > > > > I look for them. Also, it seemed silly to have one without the other.\n> > > > >\n> > > >\n> > > > While not absolutely required, I'd like us to find at least one\n> > > > place and start using it. (Clang also nags at me when we have\n> > > > unused functions).\n> > >\n> > > Done in the expanded patches attached.\nI see that you've found use of it in dynahash, thanks!\n\nThe math in the new (from v4 to v6) patch is wrong: it yields\nceil_log2(1) = 1 or next_power_of_2(1) = 2. I can see that you lifted\nthe restriction of \"num greater than one\" for ceil_log2() in this patch\nset, but it's now _more_ problematic to base those functions on\npg_leftmost_one_pos().\n\nI'm not comfortable with your changes to pg_leftmost_one_pos() to remove\nthe restriction on word being non-zero. Specifically\npg_leftmost_one_pos() is made to return 0 on 0 input. While none of its\ncurrent callers (in HEAD) is harmed, this introduces muddy semantics:\n\n1. pg_leftmost_one_pos is semantically undefined on 0 input: scanning\nfor a set bit in a zero word won't find it anywhere.\n\n2. we can _try_ generalizing it to accommodate ceil_log2 by\nextrapolating based on the invariant that BSR + LZCNT = 31 (or 63). In\nthat case, the extrapolation yields -1 for pg_leftmost_one_pos(0).\n\nI'm not convinced that others on the list will be comfortable with the\ngeneralization suggested in 2 above.\n\nI've quickly put together a PoC patch on top of yours, which\nre-implements ceil_log2 using LZCNT coupled with a CPUID check.\nThoughts?\n\nCheers,\nJesse",
"msg_date": "Mon, 2 Mar 2020 12:45:21 -0800",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 1:56 PM David Fetter <david@fetter.org> wrote:\n>\n> [v6 patch set]\n\nHere I'm only looking at 0001. It needs rebasing, but it's trivial to\nsee what it does. I noticed in some places, you've replaced \"long\"\nwith uint64, but many are int64. I started making a list, but it got\ntoo long, and I had to stop and ask: Is there a reason to change from\nsigned to unsigned for any of the ones that aren't directly related to\nhashing code? Is there some larger pattern I'm missing?\n\n-static long gistBuffersGetFreeBlock(GISTBuildBuffers *gfbb);\n-static void gistBuffersReleaseBlock(GISTBuildBuffers *gfbb, long blocknum);\n+static uint64 gistBuffersGetFreeBlock(GISTBuildBuffers *gfbb);\n+static void gistBuffersReleaseBlock(GISTBuildBuffers *gfbb, uint64 blocknum);\n\nI believe these should actually use BlockNumber, if these refer to\nrelation blocks as opposed to temp file blocks (I haven't read the\ncode).\n\n-exec_execute_message(const char *portal_name, long max_rows)\n+exec_execute_message(const char *portal_name, uint64 max_rows)\n\nThe only call site of this function uses an int32, which gets its\nvalue from pq_getmsgint, which returns uint32.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Mar 2020 16:59:32 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Tue, Mar 3, 2020 at 4:46 AM Jesse Zhang <sbjesse@gmail.com> wrote:\n> The math in the new (from v4 to v6) patch is wrong: it yields\n> ceil_log2(1) = 1 or next_power_of_2(1) = 2.\n\nI think you're right.\n\n> I can see that you lifted\n> the restriction of \"num greater than one\" for ceil_log2() in this patch\n> set, but it's now _more_ problematic to base those functions on\n> pg_leftmost_one_pos().\n\n> I'm not comfortable with your changes to pg_leftmost_one_pos() to remove\n> the restriction on word being non-zero. Specifically\n> pg_leftmost_one_pos() is made to return 0 on 0 input. While none of its\n> current callers (in HEAD) is harmed, this introduces muddy semantics:\n>\n> 1. pg_leftmost_one_pos is semantically undefined on 0 input: scanning\n> for a set bit in a zero word won't find it anywhere.\n\nRight.\n\n> I've quickly put together a PoC patch on top of yours, which\n> re-implements ceil_log2 using LZCNT coupled with a CPUID check.\n> Thoughts?\n\nThis patch seems to be making an assumption that an indirect function\ncall is faster than taking a branch (in inlined code) that the CPU\nwill almost always predict correctly. It would be nice to have some\nnumbers to compare. (against pg_count_leading_zeros_* using the \"slow\"\nversions but statically inlined).\n\nStylistically, \"8 * sizeof(num)\" is a bit overly formal, since the\nhard-coded number we want is in the name of the function.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Mar 2020 17:46:48 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Mon, Mar 02, 2020 at 12:45:21PM -0800, Jesse Zhang wrote:\n> Hi David,\n> \n> On Wed, Feb 26, 2020 at 9:56 PM David Fetter <david@fetter.org> wrote:\n> >\n> > On Wed, Feb 26, 2020 at 09:12:24AM +0100, David Fetter wrote:\n> > > On Fri, Jan 31, 2020 at 04:59:18PM +0100, David Fetter wrote:\n> > > > On Wed, Jan 15, 2020 at 03:45:12PM -0800, Jesse Zhang wrote:\n> > > > > On Tue, Jan 14, 2020 at 2:09 PM David Fetter <david@fetter.org> wrote:\n> > > > > > > The changes in hash AM and SIMPLEHASH do look like a net positive\n> > > > > > > improvement. My biggest cringe might be in pg_bitutils:\n> > > > > > >\n> > > > > > > 1. Is ceil_log2_64 dead code?\n> > > > > >\n> > > > > > Let's call it nascent code. I suspect there are places it could go, if\n> > > > > > I look for them. Also, it seemed silly to have one without the other.\n> > > > > >\n> > > > >\n> > > > > While not absolutely required, I'd like us to find at least one\n> > > > > place and start using it. (Clang also nags at me when we have\n> > > > > unused functions).\n> > > >\n> > > > Done in the expanded patches attached.\n> I see that you've found use of it in dynahash, thanks!\n> \n> The math in the new (from v4 to v6) patch is wrong: it yields\n> ceil_log2(1) = 1 or next_power_of_2(1) = 2. I can see that you lifted\n> the restriction of \"num greater than one\" for ceil_log2() in this patch\n> set, but it's now _more_ problematic to base those functions on\n> pg_leftmost_one_pos().\n> \n> I'm not comfortable with your changes to pg_leftmost_one_pos() to remove\n> the restriction on word being non-zero. Specifically\n> pg_leftmost_one_pos() is made to return 0 on 0 input. While none of its\n> current callers (in HEAD) is harmed, this introduces muddy semantics:\n> \n> 1. pg_leftmost_one_pos is semantically undefined on 0 input: scanning\n> for a set bit in a zero word won't find it anywhere.\n> \n> 2. we can _try_ generalizing it to accommodate ceil_log2 by\n> extrapolating based on the invariant that BSR + LZCNT = 31 (or 63). In\n> that case, the extrapolation yields -1 for pg_leftmost_one_pos(0).\n> \n> I'm not convinced that others on the list will be comfortable with the\n> generalization suggested in 2 above.\n> \n> I've quickly put together a PoC patch on top of yours, which\n> re-implements ceil_log2 using LZCNT coupled with a CPUID check.\n> Thoughts?\n\nPer discussion on IRC with Andrew (RhodiumToad) Gierth:\n\nThe runtime detection means there's always an indirect call overhead\nand no way to inline. This is counter to what using compiler\nintrinsics is supposed to do.\n\nIt's better to rely on the compiler, because:\n(a) The compiler often knows whether the value can or can't be 0 and\n can therefore skip a conditional jump.\n(b) If you're targeting a recent microarchitecture, the compiler can\n just use the right instruction.\n(c) Even if the conditional branch is left in, it's not a big overhead.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 8 Mar 2020 19:34:06 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "Hi John,\nOops this email has been sitting in my outbox for 3 days...\n\nOn Wed, Mar 4, 2020 at 1:46 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> On Tue, Mar 3, 2020 at 4:46 AM Jesse Zhang <sbjesse@gmail.com> wrote:\n> > I've quickly put together a PoC patch on top of yours, which\n> > re-implements ceil_log2 using LZCNT coupled with a CPUID check.\n> > Thoughts?\n>\n> This patch seems to be making an assumption that an indirect function\n> call is faster than taking a branch (in inlined code) that the CPU\n> will almost always predict correctly. It would be nice to have some\n> numbers to compare. (against pg_count_leading_zeros_* using the \"slow\"\n> versions but statically inlined).\n>\n\nAh, how could I forget that... I ran a quick benchmark on my laptop, and\nindeed, even though the GCC-generated code takes a hit on zero input\n(Clang generates slightly different code that gives indistinguishable\nruntime for zero and non-zero inputs), the inlined code (the function\ninput in my benchmark is never a constant literal so the branch does get\nexercised at runtime) is still more than twice as fast as the function\ncall.\n\n------------------------------------------------------\nBenchmark Time CPU Iterations\n------------------------------------------------------\nBM_pfunc/0 1.57 ns 1.56 ns 447127265\nBM_pfunc/1 1.56 ns 1.56 ns 449618696\nBM_pfunc/8 1.57 ns 1.57 ns 443013856\nBM_pfunc/64 1.57 ns 1.57 ns 448784369\nBM_slow/0 0.602 ns 0.600 ns 1000000000\nBM_slow/1 0.391 ns 0.390 ns 1000000000\nBM_slow/8 0.392 ns 0.391 ns 1000000000\nBM_slow/64 0.391 ns 0.390 ns 1000000000\nBM_fast/0 1.47 ns 1.46 ns 477513921\nBM_fast/1 1.47 ns 1.46 ns 473992040\nBM_fast/8 1.46 ns 1.46 ns 474895755\nBM_fast/64 1.47 ns 1.46 ns 477215268\n\n\nFor your amusement, I've attached the meat of the benchmark. To build\nthe code you can grab the repository at\nhttps://github.com/d/glowing-chainsaw/tree/pfunc\n\n> Stylistically, \"8 * sizeof(num)\" is a bit overly formal, since the\n> hard-coded number we want is in the name of the function.\n\nOh yeah, overly generic code is indicative of the remnants of my C++\nbrain, will fix.\n\nCheers,\nJesse",
"msg_date": "Sun, 8 Mar 2020 16:44:21 -0700",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "Hi David,\nOn Sun, Mar 8, 2020 at 11:34 AM David Fetter <david@fetter.org> wrote:\n>\n> On Mon, Mar 02, 2020 at 12:45:21PM -0800, Jesse Zhang wrote:\n> > Hi David,\n>\n> Per discussion on IRC with Andrew (RhodiumToad) Gierth:\n>\n> The runtime detection means there's always an indirect call overhead\n> and no way to inline. This is counter to what using compiler\n> intrinsics is supposed to do.\n>\n> It's better to rely on the compiler, because:\n> (a) The compiler often knows whether the value can or can't be 0 and\n> can therefore skip a conditional jump.\n\nYes, the compiler would know to eliminate the branch if the inlined\nfunction is called with a literal argument, or it infers an invariant\nfrom the context (like nesting inside a conditional block, or a previous\nconditional \"noreturn\" path).\n\n> (b) If you're targeting a recent microarchitecture, the compiler can\n> just use the right instruction.\n\nI might be more conservative than you are on (b). The thought of\nbuilding a binary that cannot run \"somewhere\" where the compiler\nsupports by default still mortifies me.\n\n> (c) Even if the conditional branch is left in, it's not a big overhead.\n>\n\nI 100% agree with (c), see benchmarking results upthread.\n\nCheers,\nJesse\n\n\n",
"msg_date": "Sun, 8 Mar 2020 17:29:25 -0700",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Sat, 29 Feb 2020 at 04:13, David Fetter <david@fetter.org> wrote:\n>\n> On Thu, Feb 27, 2020 at 02:41:49PM +0800, John Naylor wrote:\n> > In 0002, the pg_bitutils functions have a test (input > 0), and the\n> > new callers ceil_log2_* and next_power_of_2_* have asserts. That seems\n> > backward to me.\n>\n> To me, too, now that you mention it. My thinking was a little fuzzed\n> by trying to accommodate platforms with intrinsics where clz is\n> defined for 0 inputs.\n\nWouldn't it be better just to leave the existing definitions of the\npg_leftmost_one_pos* function alone? It seems to me you're hacking\naway at those just so you can support passing 1 to the new functions,\nand that's giving you trouble now because you're doing num-1 to handle\nthe case where the number is already a power of 2. Which is\ntroublesome because 1-1 is 0, which you're trying to code around.\n\nIsn't it better just to put in a run-time check for numbers that are\nalready a power of 2 and then get rid of the num - 1? Something like:\n\n/*\n * pg_nextpow2_32\n * Returns the next highest power of 2 of 'num', or 'num', if\nit's already a\n * power of 2. 'num' mustn't be 0 or be above UINT_MAX / 2.\n */\nstatic inline uint32\npg_nextpow2_32(uint32 num)\n{\n Assert(num > 0 && num <= UINT_MAX / 2);\n /* use some bitmasking tricks to see if only 1 bit is on */\n return (num & (num - 1)) == 0 ? num : ((uint32) 1) <<\n(pg_leftmost_one_pos32(num) + 1);\n}\n\nI think you'll also want to mention the issue about numbers greater\nthan UINT_MAX / 2, as I've done above and also align your naming\nconversion to what else is in that file.\n\nI don't think Jesse's proposed solution is that great due to the\nadditional function call overhead for pg_count_leading_zeros_32(). The\n(num & (num - 1)) == 0 I imagine will perform better, but I didn't\ntest it.\n\nAlso, wondering if you've looked at any of the other places where we\ndo \"*= 2;\" or \"<<= 1;\" inside a loop? There's quite a number that look\nlike candidates for using the new function.\n\n\n",
"msg_date": "Thu, 12 Mar 2020 12:42:25 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
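David's `pg_nextpow2_32` sketch above can be checked as a self-contained program. In this sketch, `pg_leftmost_one_pos32` is stood in by a compiler builtin (the real one lives in PostgreSQL's `pg_bitutils.h`), and the same `UINT_MAX / 2` bound from the comment is asserted:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Stand-in for PostgreSQL's pg_leftmost_one_pos32(): 0-based position of
 * the most significant set bit.  The argument must be non-zero, since
 * __builtin_clz(0) is undefined.
 */
static inline int
leftmost_one_pos32(uint32_t word)
{
	assert(word != 0);
	return 31 - __builtin_clz(word);
}

/*
 * Shape of the proposed pg_nextpow2_32(): return num unchanged when it is
 * already a power of 2, else the next higher power of 2.  The
 * (num & (num - 1)) == 0 mask test is true exactly for powers of two
 * (and for zero, which the assertion excludes), so the num - 1 trick and
 * its troublesome num == 1 case are avoided entirely.
 */
static inline uint32_t
nextpow2_32(uint32_t num)
{
	assert(num > 0 && num <= UINT32_MAX / 2);
	if ((num & (num - 1)) == 0)
		return num;
	return ((uint32_t) 1) << (leftmost_one_pos32(num) + 1);
}
```

The upper-bound assertion exists because for any `num` above `UINT32_MAX / 2` that is not itself a power of two, the next power of two is not representable in 32 bits.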
{
"msg_contents": "On Thu, Mar 12, 2020 at 7:42 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I don't think Jesse's proposed solution is that great due to the\n> additional function call overhead for pg_count_leading_zeros_32(). The\n> (num & (num - 1)) == 0 I imagine will perform better, but I didn't\n> test it.\n\nRight, I believe we've all landed on the same page about that. I see\ntwo ways of doing next_power_of_2_32 without an indirect function\ncall, and leaving pg_leftmost_one_pos32 the same as it is now. I\nhaven't measured either yet (or tested for that matter):\n\nstatic inline uint32\nnext_power_of_2_32(uint32 num)\n{\n Assert(num > 0 && num <= UINT_MAX / 2);\n /* use some bitmasking tricks to see if only 1 bit is on */\n if (num & (num - 1)) == 0)\n return num;\n return ((uint32) 1) << (pg_leftmost_one_pos32(num) + 1)\n}\nOR\n{\n Assert(num > 0 && num <= UINT_MAX / 2);\n return ((uint32) 1) << ceil_log2_32(num);\n}\n\nstatic inline uint32\nceil_log2_32(uint32 num)\n{\n Assert(num > 0);\n if (num == 1)\n return 0;\n return pg_leftmost_one_pos32(num-1) + 1;\n}\n\nOne naming thing I noticed: the name \"next power of two\" implies to me\nnum *= 2 for a power of two, not the same as the input. The latter\nbehavior is better called \"ceil power of 2\".\n\n> Also, wondering if you've looked at any of the other places where we\n> do \"*= 2;\" or \"<<= 1;\" inside a loop? There's quite a number that look\n> like candidates for using the new function.\n\nA brief look shows a few functions where this is done in a tight loop:\n\nnodes/list.c:new_list\nLWLockRegisterTranche\nensure_record_cache_typmod_slot_exists\npqCheckOutBufferSpace\nExecChooseHashTableSize\nExecHashBuildSkewHash\nchoose_nelem_alloc\ninit_htab\nhash_estimate_size\nhash_select_dirsize\nAllocSetAlloc\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Mar 2020 17:59:25 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
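One small note on the sketches above, which John flags as untested: as typed, `if (num & (num - 1)) == 0)` has unbalanced parentheses and would not compile. The `ceil_log2_32` variant, written out so it compiles (with `pg_leftmost_one_pos32` again stood in by a builtin for illustration), might look like:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for pg_leftmost_one_pos32(); the argument must be non-zero. */
static inline int
leftmost_one_pos32(uint32_t word)
{
	assert(word != 0);
	return 31 - __builtin_clz(word);
}

/*
 * Smallest k such that 2^k >= num.  Scanning num - 1 folds in the
 * "already a power of 2" case: for num = 2^k the scan runs on 2^k - 1,
 * whose leftmost set bit sits one position lower, so adding 1 lands
 * back on k.  num == 1 must be special-cased because num - 1 is then 0.
 */
static inline uint32_t
ceil_log2_32(uint32_t num)
{
	assert(num > 0);
	if (num == 1)
		return 0;				/* 2^0 = 1 */
	return leftmost_one_pos32(num - 1) + 1;
}
```

This also shows why the thread's v4-v6 math gave `ceil_log2(1) = 1`: without the `num == 1` branch (or the power-of-two pre-check David Rowley proposes), the `num - 1` trick has no valid input to scan.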
{
"msg_contents": "On Thu, 12 Mar 2020 at 22:59, John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Thu, Mar 12, 2020 at 7:42 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > I don't think Jesse's proposed solution is that great due to the\n> > additional function call overhead for pg_count_leading_zeros_32(). The\n> > (num & (num - 1)) == 0 I imagine will perform better, but I didn't\n> > test it.\n>\n> Right, I believe we've all landed on the same page about that. I see\n> two ways of doing next_power_of_2_32 without an indirect function\n> call, and leaving pg_leftmost_one_pos32 the same as it is now. I\n> haven't measured either yet (or tested for that matter):\n\nI've attached an updated patch. It includes the modifications\nmentioned above to pre-check for a power of 2 number with the bit\nmasking hack mentioned above. I also renamed the functions to be more\naligned to the other functions in pg_bitutils.h I'm not convinced\npg_ceil_log2_* needs the word \"ceil\" in there.\n\nI dropped the part of the patch that was changing longs to ints of a\nknown size. I went on and did some additional conversion in the 0003\npatch. There are more laying around the code base, but I ended up\nfinding a bit to fix up than i had thought I would. e.g. various\nplaces that repalloc() inside a loop that is multiplying the\nallocation size by 2 each time. The repalloc should be done at the\nend, not during the loop. I thought I might come back to those some\ntime in the future.\n\nIs anyone able to have a look at this?\n\nDavid",
"msg_date": "Tue, 7 Apr 2020 23:40:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
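The repalloc-in-a-loop pattern David mentions — reallocating on every doubling step instead of computing the final size first and reallocating once — can be sketched generically. This is illustrative code with `malloc`/`realloc`, not any of the actual PostgreSQL call sites:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Grow 'buf' (current capacity *cap, in elements) so it can hold at
 * least 'needed' elements.  The doubling loop only computes the new
 * capacity; realloc is called once at the end, which is the fix David
 * describes for call sites that reallocate inside the loop.
 */
static int *
grow_buffer(int *buf, uint32_t *cap, uint32_t needed)
{
	uint32_t	newcap = *cap;

	while (newcap < needed)
		newcap *= 2;			/* size computation only, no allocation */

	if (newcap != *cap)
	{
		int		   *tmp = realloc(buf, newcap * sizeof(int));

		if (tmp == NULL)
			return NULL;		/* caller still owns the old buffer */
		buf = tmp;
		*cap = newcap;
	}
	return buf;
}
```

With a `nextpow2`-style helper the loop disappears entirely: the new capacity becomes a single expression, which is the motivation for hunting down the remaining `*= 2;` loops.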
{
"msg_contents": "On Tue, Apr 7, 2020 at 7:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I've attached an updated patch. It includes the modifications\n> mentioned above to pre-check for a power of 2 number with the bit\n> masking hack mentioned above. I also renamed the functions to be more\n> aligned to the other functions in pg_bitutils.h I'm not convinced\n> pg_ceil_log2_* needs the word \"ceil\" in there.\n>\n> I dropped the part of the patch that was changing longs to ints of a\n> known size. I went on and did some additional conversion in the 0003\n> patch. There are more laying around the code base, but I ended up\n> finding a bit to fix up than i had thought I would. e.g. various\n> places that repalloc() inside a loop that is multiplying the\n> allocation size by 2 each time. The repalloc should be done at the\n> end, not during the loop. I thought I might come back to those some\n> time in the future.\n>\n> Is anyone able to have a look at this?\n>\n> David\n\nHi David,\n\nOverall looks good to me. Just a couple things I see:\n\nIt seems _hash_log2 is still in the tree, but has no callers?\n\n- max_size = 8; /* semi-arbitrary small power of 2 */\n- while (max_size < min_size + LIST_HEADER_OVERHEAD)\n- max_size *= 2;\n+ max_size = pg_nextpower2_32(Max(8, min_size + LIST_HEADER_OVERHEAD));\n\nMinor nit: We might want to keep the comment that the number is\n\"semi-arbitrary\" here as well.\n\n- 'pg_waldump', 'scripts');\n+ 'pg_validatebackup', 'pg_waldump', 'scripts');\n\nThis seems like a separate concern?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Apr 2020 20:16:47 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "Hi John,\n\nThanks for having a look at this.\n\nOn Wed, 8 Apr 2020 at 00:16, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> Overall looks good to me. Just a couple things I see:\n>\n> It seems _hash_log2 is still in the tree, but has no callers?\n\nYeah, I left it in there since it was an external function. Perhaps\nwe could rip it out and write something in the commit message that it\nshould be replaced with the newer functions. Thinking of extension\nauthors here.\n\n> - max_size = 8; /* semi-arbitrary small power of 2 */\n> - while (max_size < min_size + LIST_HEADER_OVERHEAD)\n> - max_size *= 2;\n> + max_size = pg_nextpower2_32(Max(8, min_size + LIST_HEADER_OVERHEAD));\n>\n> Minor nit: We might want to keep the comment that the number is\n> \"semi-arbitrary\" here as well.\n\nI had dropped that as the 8 part was mentioned in the comment above:\n\"The minimum allocation is 8 ListCell units\". I can put it back, I had\njust thought it was overkill.\n\n> - 'pg_waldump', 'scripts');\n> + 'pg_validatebackup', 'pg_waldump', 'scripts');\n>\n> This seems like a separate concern?\n\nThat's required due to the #include \"lib/simplehash.h\" in\npg_validatebackup.c. I have to say, I didn't really take the time to\nunderstand all the Perl code there, but without that change, I was\ngetting a link error when testing on Windows, and after I added\npg_validatebackup to that array, it worked.\n\nDavid\n\n\n",
"msg_date": "Wed, 8 Apr 2020 00:26:32 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Tue, Apr 7, 2020 at 8:26 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Hi John,\n>\n> Thanks for having a look at this.\n>\n> On Wed, 8 Apr 2020 at 00:16, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > Overall looks good to me. Just a couple things I see:\n> >\n> > It seems _hash_log2 is still in the tree, but has no callers?\n>\n> Yeah, I left it in there since it was an external function. Perhaps\n> we could rip it out and write something in the commit message that it\n> should be replaced with the newer functions. Thinking of extension\n> authors here.\n\nI'm not the best judge of where to draw the line for extensions, but\nthis function does have a name beginning with an underscore, which to\nme is a red flag that it's internal in nature.\n\n> > Minor nit: We might want to keep the comment that the number is\n> > \"semi-arbitrary\" here as well.\n>\n> I had dropped that as the 8 part was mentioned in the comment above:\n> \"The minimum allocation is 8 ListCell units\". I can put it back, I had\n> just thought it was overkill.\n\nOh I see now, nevermind.\n\n> > - 'pg_waldump', 'scripts');\n> > + 'pg_validatebackup', 'pg_waldump', 'scripts');\n> >\n> > This seems like a separate concern?\n>\n> That's required due to the #include \"lib/simplehash.h\" in\n> pg_validatebackup.c. I have to say, I didn't really take the time to\n> understand all the Perl code there, but without that change, I was\n> getting a link error when testing on Windows, and after I added\n> pg_validatebackup to that array, it worked.\n\nHmm. Does pg_bitutils.h need something like this?\n\n#ifndef FRONTEND\nextern PGDLLIMPORT const uint8 pg_leftmost_one_pos[256];\nextern PGDLLIMPORT const uint8 pg_rightmost_one_pos[256];\nextern PGDLLIMPORT const uint8 pg_number_of_ones[256];\n#else\nextern const uint8 pg_leftmost_one_pos[256];\nextern const uint8 pg_rightmost_one_pos[256];\nextern const uint8 pg_number_of_ones[256];\n#endif\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Apr 2020 21:16:17 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Wed, 8 Apr 2020 at 01:16, John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 8:26 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > Hi John,\n> >\n> > Thanks for having a look at this.\n> >\n> > On Wed, 8 Apr 2020 at 00:16, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > > Overall looks good to me. Just a couple things I see:\n> > >\n> > > It seems _hash_log2 is still in the tree, but has no callers?\n> >\n> > Yeah, I left it in there since it was an external function. Perhaps\n> > we could rip it out and write something in the commit message that it\n> > should be replaced with the newer functions. Thinking of extension\n> > authors here.\n>\n> I'm not the best judge of where to draw the line for extensions, but\n> this function does have a name beginning with an underscore, which to\n> me is a red flag that it's internal in nature.\n\nOK. I've removed that function now and stuck a note in the commit\nmessage to mention an alternative.\n\n> Hmm. Does pg_bitutils.h need something like this?\n>\n> #ifndef FRONTEND\n> extern PGDLLIMPORT const uint8 pg_leftmost_one_pos[256];\n> extern PGDLLIMPORT const uint8 pg_rightmost_one_pos[256];\n> extern PGDLLIMPORT const uint8 pg_number_of_ones[256];\n> #else\n> extern const uint8 pg_leftmost_one_pos[256];\n> extern const uint8 pg_rightmost_one_pos[256];\n> extern const uint8 pg_number_of_ones[256];\n> #endif\n\nYeah, looking at keywords.h, we hit this before in c2d1eea9e75. Your\nproposed fix works and is the same as in keywords.h, so I've gone with\nthat.\n\nI've attached v8 of the patchset.\n\nDavid",
"msg_date": "Wed, 8 Apr 2020 13:04:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 9:04 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> [v8]\n\nLooks good to me, marked RFC.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Apr 2020 11:06:21 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
},
{
"msg_contents": "On Wed, 8 Apr 2020 at 15:06, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> Looks good to me, marked RFC.\n\nThanks a lot for reviewing those changes. I've now pushed all 3 of the patches.\n\nDavid\n\n\n",
"msg_date": "Wed, 8 Apr 2020 18:35:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use compiler intrinsics for bit ops in hash"
}
] |
[
{
"msg_contents": "I just ran pgindent over some patch, and noticed that this hunk ended up\nin my working tree:\n\ndiff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c\nindex 861a9148ed..fff54062b0 100644\n--- a/src/backend/statistics/extended_stats.c\n+++ b/src/backend/statistics/extended_stats.c\n@@ -1405,13 +1405,13 @@ examine_opclause_expression(OpExpr *expr, Var **varp, Const **cstp, bool *varonl\n \tif (IsA(rightop, RelabelType))\n \t\trightop = (Node *) ((RelabelType *) rightop)->arg;\n \n-\tif (IsA(leftop, Var) && IsA(rightop, Const))\n+\tif (IsA(leftop, Var) &&IsA(rightop, Const))\n \t{\n \t\tvar = (Var *) leftop;\n \t\tcst = (Const *) rightop;\n \t\tvaronleft = true;\n \t}\n-\telse if (IsA(leftop, Const) && IsA(rightop, Var))\n+\telse if (IsA(leftop, Const) &&IsA(rightop, Var))\n \t{\n \t\tvar = (Var *) rightop;\n \t\tcst = (Const *) leftop;\n\nThis seems a really strange change; this\ngit grep '&&[^([:space:]]' -- *.c\nshows that we already have a dozen or so occurrences. (That's\nignoring execExprInterp.c's use of computed gotos.)\n\nI don't care all that much, but wanted to throw it out in case somebody\nis specifically interested in studying pgindent's logic, since the last\nround of changes has yielded excellent results.\n\nThanks,\n\n-- \nÁlvaro Herrera PostgreSQL Expert, https://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Tue, 14 Jan 2020 19:18:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "pgindent && weirdness"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I just ran pgindent over some patch, and noticed that this hunk ended up\n> in my working tree:\n \n> -\tif (IsA(leftop, Var) && IsA(rightop, Const))\n> +\tif (IsA(leftop, Var) &&IsA(rightop, Const))\n\nYeah, it's been doing that for decades. I think the triggering\nfactor is the typedef name (Var, here) preceding the &&.\n\nIt'd be nice to fix properly, but I've tended to take the path\nof least resistance by breaking such lines to avoid the ugliness:\n\n\tif (IsA(leftop, Var) &&\n\t IsA(rightop, Const))\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:30:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 05:30:21PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I just ran pgindent over some patch, and noticed that this hunk ended up\n> > in my working tree:\n> \n> > -\tif (IsA(leftop, Var) && IsA(rightop, Const))\n> > +\tif (IsA(leftop, Var) &&IsA(rightop, Const))\n> \n> Yeah, it's been doing that for decades. I think the triggering\n> factor is the typedef name (Var, here) preceding the &&.\n> \n> It'd be nice to fix properly, but I've tended to take the path\n> of least resistance by breaking such lines to avoid the ugliness:\n> \n> \tif (IsA(leftop, Var) &&\n> \t IsA(rightop, Const))\n\nIn the past I would use a post-processing step after BSD indent to fix\nup these problems.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 15 Jan 2020 11:46:09 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 11:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I just ran pgindent over some patch, and noticed that this hunk ended up\n> > in my working tree:\n>\n> > - if (IsA(leftop, Var) && IsA(rightop, Const))\n> > + if (IsA(leftop, Var) &&IsA(rightop, Const))\n>\n> Yeah, it's been doing that for decades. I think the triggering\n> factor is the typedef name (Var, here) preceding the &&.\n>\n> It'd be nice to fix properly, but I've tended to take the path\n> of least resistance by breaking such lines to avoid the ugliness:\n>\n> if (IsA(leftop, Var) &&\n> IsA(rightop, Const))\n\nI am on vacation away from the Internet this week but somehow saw this\non my phone and couldn't stop myself from peeking at pg_bsd_indent\nagain. Yeah, \"(Var)\" (where Var is a known typename) causes it to\nthink that any following operator must be unary.\n\nOne way to fix that in the cases Alvaro is referring to is to\noverride the setting so that && (and likewise ||) are never considered\nto be unary, though I haven't tested this much and there are surely\nother ways to achieve this:\n\ndiff --git a/lexi.c b/lexi.c\nindex d43723c..6de3227 100644\n--- a/lexi.c\n+++ b/lexi.c\n@@ -655,6 +655,12 @@ stop_lit:\n unary_delim = state->last_u_d;\n break;\n }\n+\n+ /* && and || are never unary */\n+ if ((token[0] == '&' && *buf_ptr == '&') ||\n+ (token[0] == '|' && *buf_ptr == '|'))\n+ state->last_u_d = false;\n+\n while (*(e_token - 1) == *buf_ptr || *buf_ptr == '=') {\n /*\n * handle ||, &&, etc, and also things as in int *****i\n\nThe problem with that is that && sometimes *should* be formatted like\na unary operator: when it's part of the nonstandard GCC computed goto\nsyntax.\n\n\n",
"msg_date": "Thu, 16 Jan 2020 15:59:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 3:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jan 15, 2020 at 11:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Yeah, it's been doing that for decades. I think the triggering\n> > factor is the typedef name (Var, here) preceding the &&.\n\nHere's a better fix:\n\ndiff --git a/indent.c b/indent.c\nindex 9faf57a..51a60a6 100644\n--- a/indent.c\n+++ b/indent.c\n@@ -570,8 +570,11 @@ check_type:\n ps.in_or_st = false; /* turn off flag for structure decl or\n * initialization */\n }\n- /* parenthesized type following sizeof or offsetof is not a cast */\n- if (ps.keyword == 1 || ps.keyword == 2)\n+ /*\n+ * parenthesized type following sizeof or offsetof is\nnot a cast;\n+ * likewise for function-like macros that take a type\n+ */\n+ if (ps.keyword == 1 || ps.keyword == 2 || ps.last_token == ident)\n ps.not_cast_mask |= 1 << ps.p_l_follow;\n break;\n\n\n",
"msg_date": "Fri, 17 Jan 2020 09:54:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On 2020-Jan-17, Thomas Munro wrote:\n\n> On Thu, Jan 16, 2020 at 3:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, Jan 15, 2020 at 11:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Yeah, it's been doing that for decades. I think the triggering\n> > > factor is the typedef name (Var, here) preceding the &&.\n> \n> Here's a better fix:\n\nThis is indeed a very good fix! Several badly formatted sites in our\ncode are improved with this change.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jan 2020 18:13:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 06:13:36PM -0300, Alvaro Herrera wrote:\n> This is indeed a very good fix! Several badly formatted sites in our\n> code are improved with this change.\n\nNice find! Could you commit that? I can see many places improved as\nwell, among explain.c, tablecmds.c, typecmds.c, and much more. \n--\nMichael",
"msg_date": "Fri, 17 Jan 2020 10:21:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On 2020-Jan-17, Michael Paquier wrote:\n\n> On Thu, Jan 16, 2020 at 06:13:36PM -0300, Alvaro Herrera wrote:\n> > This is indeed a very good fix! Several badly formatted sites in our\n> > code are improved with this change.\n> \n> Nice find! Could you commit that? I can see many places improved as\n> well, among explain.c, tablecmds.c, typecmds.c, and much more. \n\nI think Tom is the only one who can commit that,\nhttps://git.postgresql.org/gitweb/?p=pg_bsd_indent.git\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jan 2020 23:37:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-17, Michael Paquier wrote:\n>> Nice find! Could you commit that? I can see many places improved as\n>> well, among explain.c, tablecmds.c, typecmds.c, and much more. \n\n> I think Tom is the only one who can commit that,\n> https://git.postgresql.org/gitweb/?p=pg_bsd_indent.git\n\nI don't *think* that repo is locked down that hard --- IIRC,\nPG committers should have access to it. But I was hoping to\nhear Piotr's opinion before moving forward.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 21:41:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On 2020-Jan-16, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jan-17, Michael Paquier wrote:\n> >> Nice find! Could you commit that? I can see many places improved as\n> >> well, among explain.c, tablecmds.c, typecmds.c, and much more. \n> \n> > I think Tom is the only one who can commit that,\n> > https://git.postgresql.org/gitweb/?p=pg_bsd_indent.git\n> \n> I don't *think* that repo is locked down that hard --- IIRC,\n> PG committers should have access to it. But I was hoping to\n> hear Piotr's opinion before moving forward.\n\nFWIW this code predates Piotr's involvement, I think; at least,\nit was already there in the FreeBSD code he imported:\nhttps://github.com/pstef/freebsd_indent/commit/55c29a8774923f2d40fef7919b9490f61e57e7bb#diff-85c94ae15198235e2363f96216b9a1b2R565\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jan 2020 23:50:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-16, Tom Lane wrote:\n>> ... But I was hoping to\n>> hear Piotr's opinion before moving forward.\n\n> FWIW I think this code predates Piotr's involvement, I think; at least,\n> it was already there in the FreeBSD code he imported:\n> https://github.com/pstef/freebsd_indent/commit/55c29a8774923f2d40fef7919b9490f61e57e7bb#diff-85c94ae15198235e2363f96216b9a1b2R565\n\nThe roots of that code are even older than Postgres, I believe,\nand there may not be anybody left who understands it completely.\nBut Piotr has certainly spent more time looking at it than I have,\nso I'd still like to hear what he thinks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 21:58:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 3:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jan-16, Tom Lane wrote:\n> >> ... But I was hoping to\n> >> hear Piotr's opinion before moving forward.\n\nMe too.\n\nThinking about this again: It's obviously not true that everything\nthat looks like a function call is not a cast. You could have\n\"my_cast(Type)\" that expands to \"(Type)\" or some slightly more useful\nvariant of that, and then \"my_cast(Type) -1\" would, with this patch\napplied, be reformatted as \"my_cast(Type) - 1\" because it'd err on the\nside of thinking that the expression produces a value and therefore\nthe minus sign must be a binary operator that needs whitespace on both\nsides, and that'd be wrong. However, it seems to me that macros that\nexpand to raw cast syntax (and I mean just \"(Type)\", not a complete\ncast including the value to be cast, like \"((Type) (x))\") must be rare\nand unusual things, and I think it's better to err on the side of\nthinking that function-like macros are values, not casts. That's all\nthe change does, and fortunately the authors of indent showed how to\ndo that with their existing special cases for offsetof and sizeof; I'm\njust extending that treatment to any identifier.\n\nIs there some other case I'm not thinking of that is confused by the\nchange? I'm sure you could contrive something it screws up, but my\nquestion is about real code that people would actually write. Piotr,\nis there an easy way to reindent some large non-PostgreSQL body of\ncode that uses a cousin of this code to see if it gets better or worse\nwith the change?\n\n\n",
"msg_date": "Mon, 17 Feb 2020 12:42:37 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On 2020-Feb-17, Thomas Munro wrote:\n\n> Thinking about this again: It's obviously not true that everything\n> that looks like a function call is not a cast. You could have\n> \"my_cast(Type)\" that expands to \"(Type)\" or some slightly more useful\n> variant of that, and then \"my_cast(Type) -1\" would, with this patch\n> applied, be reformatted as \"my_cast(Type) - 1\" because it'd err on the\n> side of thinking that the expression produces a value and therefore\n> the minus sign must be a binary operator that needs whitespace on both\n> sides, and that'd be wrong. However, it seems to me that macros that\n> expand to raw cast syntax (and I mean just \"(Type)\", not a complete\n> cast including the value to be cast, like \"((Type) (x))\") must be rare\n> and unusual things, and I think it's better to err on the side of\n> thinking that function-like macros are values, not casts. That's all\n> the change does, and fortunately the authors of indent showed how to\n> do that with their existing special cases for offsetof and sizeof; I'm\n> just extending that treatment to any identifier.\n\nHmm ... this suggests to me that if you remove these alleged special\ncases for offsetof and sizeof, the new code handles them correctly\nanyway. Do you think it's worth giving that a try? Not because\nremoving the special cases would have any value, but rather to see if\nanything interesting pops up.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Feb 2020 12:35:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 4:35 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Hmm ... this suggests to me that if you remove these alleged special\n> cases for offsetof and sizeof, the new code handles them correctly\n> anyway. Do you think it's worth giving that a try? Not because\n> removing the special cases would have any value, but rather to see if\n> anything interesting pops up.\n\nGood thought, since keywords also have last_token == ident so it's\nredundant to check the keyword. But while testing that I realised\nthat either way we get this wrong:\n\n- return (int) *s1 - (int) *s2;\n+ return (int) * s1 - (int) *s2;\n\nSo I think the right formulation is one that allows offsetof and\nsizeof to receive not-a-cast treatment, but not any other known\nkeyword:\n\ndiff --git a/indent.c b/indent.c\nindex 9faf57a..ed6dce2 100644\n--- a/indent.c\n+++ b/indent.c\n@@ -570,8 +570,15 @@ check_type:\n ps.in_or_st = false; /* turn off flag for structure decl or\n * initialization */\n }\n- /* parenthesized type following sizeof or offsetof is not a cast */\n- if (ps.keyword == 1 || ps.keyword == 2)\n+ /*\n+ * parenthesized type following sizeof or offsetof is not a\n+ * cast; we also assume the same about similar macros,\n+ * so if there is any other non-keyword identifier immediately\n+ * preceding a type name in parens we won't consider it to be\n+ * a cast\n+ */\n+ if (ps.last_token == ident &&\n+ (ps.keyword == 0 || ps.keyword == 1 || ps.keyword == 2))\n ps.not_cast_mask |= 1 << ps.p_l_follow;\n break;\n\nAnother problem is that there is one thing in our tree that looks like\na non-cast under the new rule, but it actually expands to a type name,\nso now we get that wrong! 
(I mean, unpatched indent doesn't really\nunderstand it either, it thinks it's a cast, but at least it knows the\nfollowing * is not a binary operator):\n\n- STACK_OF(X509_NAME) *root_cert_list = NULL;\n+ STACK_OF(X509_NAME) * root_cert_list = NULL;\n\nThat's a macro from an OpenSSL header. Not sure what to do about that.\n\n\n",
"msg_date": "Tue, 18 Feb 2020 11:46:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Another problem is that there is one thing in our tree that looks like\n> a non-cast under the new rule, but it actually expands to a type name,\n> so now we get that wrong! (I mean, unpatched indent doesn't really\n> understand it either, it thinks it's a cast, but at least it knows the\n> following * is not a binary operator):\n\n> - STACK_OF(X509_NAME) *root_cert_list = NULL;\n> + STACK_OF(X509_NAME) * root_cert_list = NULL;\n\n> That's a macro from an OpenSSL header. Not sure what to do about that.\n\nIf we get that wrong, but a hundred other places look better,\nI'm not too fussed about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Feb 2020 18:42:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Another problem is that there is one thing in our tree that looks like\n> > a non-cast under the new rule, but it actually expands to a type name,\n> > so now we get that wrong! (I mean, unpatched indent doesn't really\n> > understand it either, it thinks it's a cast, but at least it knows the\n> > following * is not a binary operator):\n>\n> > - STACK_OF(X509_NAME) *root_cert_list = NULL;\n> > + STACK_OF(X509_NAME) * root_cert_list = NULL;\n>\n> > That's a macro from an OpenSSL header. Not sure what to do about that.\n>\n> If we get that wrong, but a hundred other places look better,\n> I'm not too fussed about it.\n\nHere's the patch I propose to commit to pg_bsd_indent, if the repo\nlets me, and here's the result of running it on the PG tree today.\n\nI suppose the principled way to fix that problem with STACK_OF(x)\nwould be to have a user-supplied list of function-like-macros that\nexpand to a type name, but I'm not planning to waste time on that.",
"msg_date": "Sat, 16 May 2020 10:05:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Here's the patch I propose to commit to pg_bsd_indent, if the repo\n> lets me, and here's the result of running it on the PG tree today.\n\n+1. I think the repo will let you in, but if not, I can do it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 May 2020 18:15:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On 2020-May-16, Thomas Munro wrote:\n\n> Here's the patch I propose to commit to pg_bsd_indent, if the repo\n> lets me, and here's the result of running it on the PG tree today.\n\nLooks good. Of all these changes in PG, only two are of the STACK_OF()\nnature, and there are 38 places that get better.\n\n> I suppose the principled way to fix that problem with STACK_OF(x)\n> would be to have a user-supplied list of function-like-macros that\n> expand to a type name, but I'm not planning to waste time on that.\n\n+1 on just ignoring that problem as insignificant.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 15 May 2020 18:15:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-May-16, Thomas Munro wrote:\n>> Here's the patch I propose to commit to pg_bsd_indent, if the repo\n>> lets me, and here's the result of running it on the PG tree today.\n\n> Looks good. Of all these changes in PG, only two are of the STACK_OK()\n> nature, and there are 38 places that get better.\n\nIt should also be noted that there are a lot of places where we've\nprogrammed around this silliness by artificially breaking conditions\nusing IsA() into multiple lines. So the \"38 places\" is a lowball\nestimate of how much of a problem this has been.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 May 2020 18:18:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On Sat, May 16, 2020 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Here's the patch I propose to commit to pg_bsd_indent, if the repo\n> > lets me, and here's the result of running it on the PG tree today.\n>\n> +1. I think the repo will let you in, but if not, I can do it.\n\nIt seems I cannot. Please go ahead.\n\nI'll eventually see if I can get this into FreeBSD's usr.bin/indent.\nIt's possible that that process results in a request to make it\noptional (some project with a lot of STACK_OF- and no IsA-style macros\nwouldn't like it), but I don't think it hurts to commit it to our copy\nlike this in the meantime to fix our weird formatting problem.\n\n\n",
"msg_date": "Sat, 16 May 2020 15:51:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, May 16, 2020 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +1. I think the repo will let you in, but if not, I can do it.\n\n> It seems I cannot. Please go ahead.\n\n[ yawn... ] It's about bedtime here, but I'll take care of it in the\nmorning.\n\nOff the critical path, we oughta figure out why the repo wouldn't\nlet you commit. What I was told was it was set up to be writable\nby all PG committers.\n\n> I'll eventually see if I can get this into FreeBSD's usr.bin/indent.\n\n+1 to that, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 16 May 2020 00:33:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, May 16, 2020 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +1. I think the repo will let you in, but if not, I can do it.\n\n> It seems I cannot. Please go ahead.\n\nPushed, and I bumped pg_bsd_indent's version to 2.1.1, and synced\nour core repo with that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 16 May 2020 11:58:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > It seems I cannot. Please go ahead.\n> \n> [ yawn... ] It's about bedtime here, but I'll take care of it in the\n> morning.\n> \n> Off the critical path, we oughta figure out why the repo wouldn't\n> let you commit. What I was told was it was set up to be writable\n> by all PG committers.\n\nJust happened to see this. Might be I'm not looking at the right thing,\nbut from what I can tell, the repo is set up with only you as having\nwrite access. We also don't currently have updating the pg_bsd_indent\nrepo on git.postgresql.org as part of our SOP for adding/removing\ncommitters.\n\nAll of this is fixable, of course. I've CC'd this to sysadmins@ to\nhighlight this issue and possible change to that repo and our SOP.\n\nBarring complaints or concerns, based on Tom's comments above, I'll\nadjust that repo to be 'owned' by pginfra, with all committers having\nread/write access, and add it to our committer add/remove SOP to\nupdate that repo's access list whenever there are changes.\n\nI'll plan to do that in a couple of days to allow for any concerns or\nquestions, as this isn't a critical issue at present based on the above\ncomments.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 18 May 2020 14:48:44 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "On 16/01/2020 03.59, Thomas Munro wrote:\n> On Wed, Jan 15, 2020 at 11:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> I just ran pgindent over some patch, and noticed that this hunk ended up\n>>> in my working tree:\n>>\n>>> - if (IsA(leftop, Var) && IsA(rightop, Const))\n>>> + if (IsA(leftop, Var) &&IsA(rightop, Const))\n>>\n>> Yeah, it's been doing that for decades. I think the triggering\n>> factor is the typedef name (Var, here) preceding the &&.\n>>\n>> It'd be nice to fix properly, but I've tended to take the path\n>> of least resistance by breaking such lines to avoid the ugliness:\n>>\n>> if (IsA(leftop, Var) &&\n>> IsA(rightop, Const))\n> \n> I am on vacation away from the Internet this week but somehow saw this\n> on my phone and couldn't stop myself from peeking at pg_bsd_ident\n> again. Yeah, \"(Var)\" (where Var is a known typename) causes it to\n> think that any following operator must be unary.\n> \n> One way to fix that in the cases Alvaro is referring to is to tell\n> override the setting so that && (and likewise ||) are never considered\n> to be unary, though I haven't tested this much and there are surely\n> other ways to achieve this:\n> \n> diff --git a/lexi.c b/lexi.c\n> index d43723c..6de3227 100644\n> --- a/lexi.c\n> +++ b/lexi.c\n> @@ -655,6 +655,12 @@ stop_lit:\n> unary_delim = state->last_u_d;\n> break;\n> }\n> +\n> + /* && and || are never unary */\n> + if ((token[0] == '&' && *buf_ptr == '&') ||\n> + (token[0] == '|' && *buf_ptr == '|'))\n> + state->last_u_d = false;\n> +\n> while (*(e_token - 1) == *buf_ptr || *buf_ptr == '=') {\n> /*\n> * handle ||, &&, etc, and also things as in int *****i\n> \n> The problem with that is that && sometimes *should* be formatted like\n> a unary operator: when it's part of the nonstandard GCC computed goto\n> syntax.\n\nThese comments are made in the context of pushing this change or \nequivalent to FreeBSD repository.\n\nI think 
this is a better approach than the one committed to \npg_bsd_indent. It's ubiquitous that the operators are binary, except - \nas you mentioned - in a nonstandard GCC syntax. The alternative given \nhas more disadvantages, with potential impact on FreeBSD code \nformatting, which it should support as well as everything else -- to a \nreasonable extent. sys/kern/ is usually a decent litmus test, but I \ndon't claim it should show anything interesting in this particular case.\n\nThis change may seem hacky, but it would be far from the worst hack in \nthis program's history or even in its present form. It's actually very \nmuch in indent's spirit, which is an attribute I neither support nor \ncondemn.\n\nIn any case, this change, or equivalent, should be committed to FreeBSD \nrepository together with a test case or two.\n\n\n",
"msg_date": "Thu, 21 May 2020 23:33:16 +0200",
"msg_from": "Piotr Stefaniak <postgres@piotr-stefaniak.me>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
},
{
"msg_contents": "Piotr Stefaniak <postgres@piotr-stefaniak.me> writes:\n> On 16/01/2020 03.59, Thomas Munro wrote:\n>> One way to fix that in the cases Alvaro is referring to is to tell\n>> override the setting so that && (and likewise ||) are never considered\n>> to be unary, though I haven't tested this much and there are surely\n>> other ways to achieve this:\n\n> I think this is a better approach then the one committed to \n> pg_bsd_indent. It's ubiquitous that the operators are binary, except - \n> as you mentioned - in a nonstandard GCC syntax. The alternative given \n> has more disadvantages, with potential impact on FreeBSD code \n> formatting, which it should support as well as everything else -- to a \n> reasonable extent. sys/kern/ is usually a decent litmus test, but I \n> don't claim it should show anything interesting in this particular case.\n\nI think that the fix we chose is better, at least for our purposes.\nYou can see its effects on our source tree at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fa27dd40d5c5f56a1ee837a75c97549e992e32a4\n\nand while certainly most of the diffs are around && or || operators,\nthere are a fair number that are not, such as\n\n- dummy_lex.input = unconstify(char *, str) +1;\n+ dummy_lex.input = unconstify(char *, str) + 1;\n\nor more interestingly\n\n- strncmp(text, \"pg_\", 3) !=0)\n+ strncmp(text, \"pg_\", 3) != 0)\n\nwhere the previous misformatting is because \"text\" is a known typedef\nname. So it appears to me that this change reduces the misformatting\ncost of typedef names that chance to match field or variable names,\nand that's actually quite a big issue for us. 
We have, ATM, 3574 known\ntypedefs in typedefs.list, a fair number of which are not under our\ncontrol (since they come from system headers on various platforms).\nSo it's inevitable that there are going to be collisions.\n\nIn short, I'm inclined to stick with the hack we've got. I'm sad that\nit will result in further divergence from FreeBSD indent; but it does\nuseful stuff for us, and I don't want to give it up.\n\n(That said, I don't see any penalty to carrying both changes; so we'll\nprobably also absorb the &&/|| change at some convenient time.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 May 2020 18:40:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent && weirdness"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI want to add the feature to erase data so that it cannot be restored \nbecause it prevents attackers from stealing data from released data area.\n\n- Background\nInternational security policies require that above threat is taken measures.\nIt is \"Base Protection Profile for Database Management Systems Version 2.12 (DBMS PP)\" [1] based on iso 15408.\nIf the security is improved, it will be more likely to be adopted by security-conscious procurers such as public agencies.\n\n- Feature\nThis feature erases data area just before it is returned to the OS (“erase” means that overwrite data area to hide its contents here) \nbecause there is a risk that the data will be restored by attackers if it is returned to the OS without being overwritten.\nThe erase timing is when DROP, VACUUM, TRUNCATE, etc. are executed.\nI want users to be able to customize the erasure method for their security policies.\n\n- Implementation\nMy idea is adding a new parameter erase_command to postgresql.conf.\nThe command that users set in this parameter is executed just before unlink(path) or ftruncate(fd, 0) is called.\nFor example, the command is shred on Linux and SDelete on Windows.\n\nWhen erase_command is set, VACUUM does not truncate a file size to non-zero \nbecause it's safer for users to return the entire file to the OS than to return part of it.\nAlso, there is no standard tool that overwrites part of a file.\nWith the above specifications, users can easily and safely use this feature using standard tool that overwrites entire file like shred.\n\nHope to hear your feedback and comments.\n\n[1] https://www.commoncriteriaportal.org/files/ppfiles/pp0088V2b_pdf.pdf\nP44 8.1.2\n\n- Threat/Policy\nA threat agent may use or manage TSF, bypassing the protection mechanisms of the TSF.\n\n- TOE Security Objectives Addressing the Threat/Policy \nThe TOE will ensure that any information contained in a protected resource within its Scope of Control \nis not 
inappropriately disclosed when the resource is reallocated.\n\n- Rationale\ndiminishes this threat by ensuring that TSF data and user data is not persistent\nwhen resources are released by one user/process and allocated to another user/process.\n\nTOE: Target of Evaluation\nTSF: TOE Security Functionality\n\n\nRegards\n\n--\nTakanori Asaba\n\n\n\n",
"msg_date": "Wed, 15 Jan 2020 01:31:44 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Complete data erasure "
},
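The overwrite-before-release flow proposed above (run an erase step just before unlink(path)) can be sketched as below. This is an illustrative sketch only, not PostgreSQL code; the function name `erase_then_unlink` is hypothetical, and, as the replies in this thread note, in-place overwriting only helps on filesystems that actually overwrite blocks in place (e.g. ext4 in its default mode), not on copy-on-write filesystems.

```python
import os

def erase_then_unlink(path: str, passes: int = 1, chunk: int = 1 << 20) -> None:
    """Overwrite a file's contents in place, flush to stable storage,
    then unlink it, so the blocks returned to the OS no longer hold
    the old data (hypothetical sketch of the proposal's idea)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(remaining, chunk)
                f.write(b"\x00" * n)  # overwrite in bounded chunks
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force the overwrite down to the device
    os.unlink(path)
```

A real erase_command would replace the zero-fill loop with a user-chosen tool such as shred, which is exactly where the performance and journaling concerns raised below come in.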
{
"msg_contents": "Hello, Asaba-san.\n\nAt Wed, 15 Jan 2020 01:31:44 +0000, \"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com> wrote in \n> Hello hackers,\n> \n> I want to add the feature to erase data so that it cannot be restored \n> because it prevents attackers from stealing data from released data area.\n> \n> - Background\n> International security policies require that above threat is taken measures.\n> It is \"Base Protection Profile for Database Management Systems Version 2.12 (DBMS PP)\" [1] based on iso 15408.\n> If the security is improved, it will be more likely to be adopted by security-conscious procurers such as public agencies.\n> \n> - Feature\n> This feature erases data area just before it is returned to the OS (“erase” means that overwrite data area to hide its contents here) \n> because there is a risk that the data will be restored by attackers if it is returned to the OS without being overwritten.\n> The erase timing is when DROP, VACUUM, TRUNCATE, etc. are executed.\n> I want users to be able to customize the erasure method for their security policies.\n\nshred(1) or wipe(1) doesn't seem to contribute to the objective on\njournaled or copy-on-write file systems. I'm not sure, but maybe the\nsame can be true for read-modify-write devices like SSD. I'm not sure\nabout SDelete, but anyway replacing unlink() with something like\n'system(\"shred\")' leads to significant performance degradation.\n\nman 1 wipe says (https://linux.die.net/man/1/wipe) : (shred has a\nsimilar note.)\n\n> NOTE ABOUT JOURNALING FILESYSTEMS AND SOME RECOMMENDATIONS (JUNE 2004)\n> Journaling filesystems (such as Ext3 or ReiserFS) are now being used\n> by default by most Linux distributions. No secure deletion program\n> that does filesystem-level calls can sanitize files on such\n> filesystems, because sensitive data and metadata can be written to the\n> journal, which cannot be readily accessed. 
Per-file secure deletion is\n> better implemented in the operating system.\n\n\nWAL files contain copies of such sensitive information, which is not\ncovered by the proposal. Temporary files are not covered either. Unless the system\nis willing to be unrecoverable after corruption, it must copy such\nWAL files to an archive.\n\nCurrently there's a discussion on transparent data encryption\ncovering all of the above cases, on and off this mailing list.\nIt is different from the device-level encryption mentioned in the man\npage. Doesn't that fit the requirement?\n\nhttps://www.postgresql.org/message-id/CALS%2BJ3-57cL%3Djz_eT9uxiLa8CAh5BE3-HcQvXQBz0ScMjag4Zg%40mail.gmail.com\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Jan 2020 12:45:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure "
},
{
"msg_contents": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com> writes:\n> I want to add the feature to erase data so that it cannot be restored \n> because it prevents attackers from stealing data from released data area.\n\nI think this is fairly pointless, unfortunately.\n\nDropping or truncating tables is as much as we can do without making\nunwarranted assumptions about the filesystem's behavior. You say\nyou want to zero out the files first, but what will that accomplish\non copy-on-write filesystems?\n\nEven granting that zeroing our storage files is worth something,\nit's not worth much if there are more copies of the data elsewhere.\nAnd any well-run database is going to have backups, or standby servers,\nor both. There's no way for the primary node to force standbys to erase\nthemselves (and it'd be a different sort of security hazard if there\nwere).\n\nAlso to the point: if your assumption is that an attacker has access\nto the storage medium at a sufficiently low level that they can examine\npreviously-deleted files, what's stopping them from reading the data\n*before* it's deleted?\n\nSo I think doing anything useful in this area is a bit outside\nPostgres' remit. You'd need to be thinking at an operating-system\nor hardware level.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 23:01:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> \"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com> writes:\n> > I want to add the feature to erase data so that it cannot be restored \n> > because it prevents attackers from stealing data from released data area.\n> \n> I think this is fairly pointless, unfortunately.\n\nI disagree- it's a feature that's been asked for multiple times and does\nhave value in some situations.\n\n> Dropping or truncating tables is as much as we can do without making\n> unwarranted assumptions about the filesystem's behavior. You say\n> you want to zero out the files first, but what will that accomplish\n> on copy-on-write filesystems?\n\nWhat about filesystems which are not copy-on-write though?\n\n> Even granting that zeroing our storage files is worth something,\n> it's not worth much if there are more copies of the data elsewhere.\n\nBackups are not our responsibility to worry about, or, at least, it'd be\nan independent feature if we wanted to add something like this to\npg_basebackup, and not something the initial feature would need to worry\nabout.\n\n> And any well-run database is going to have backups, or standby servers,\n> or both. There's no way for the primary node to force standbys to erase\n> themselves (and it'd be a different sort of security hazard if there\n> were).\n\nA user can't \"force\" PG to do anything more than we can \"force\" a\nreplica to do something, but a user can issue a request to a primary and\nthat primary can then pass that request along to the replica as part of\nthe WAL stream.\n\n> Also to the point: if your assumption is that an attacker has access\n> to the storage medium at a sufficiently low level that they can examine\n> previously-deleted files, what's stopping them from reading the data\n> *before* it's deleted?\n\nThis argument certainly doesn't make any sense- who said they had access\nto the storage medium at a time before the files were deleted? 
What if\nthey only had access after the files were zero'd? When you consider the\nlifetime of a storage medium, it's certainly a great deal longer than\nthe length of time that a given piece of sensitive data might reside on\nit.\n\n> So I think doing anything useful in this area is a bit outside\n> Postgres' remit. You'd need to be thinking at an operating-system\n> or hardware level.\n\nI disagree entirely. If the operating system and hardware level provide\na way for this to work, which is actually rather common when you\nconsider that ext4 is an awfully popular filesystem where this would work\njust fine with nearly all traditional hardware underneath it, then we're\njust blocking enabling this capability for no justifiable reason.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 15 Jan 2020 10:23:22 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 10:23:22AM -0500, Stephen Frost wrote:\n>Greetings,\n>\n>* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> \"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com> writes:\n>> > I want to add the feature to erase data so that it cannot be restored\n>> > because it prevents attackers from stealing data from released data area.\n>>\n>> I think this is fairly pointless, unfortunately.\n>\n>I disagree- it's a feature that's been asked for multiple times and does\n>have value in some situations.\n>\n>> Dropping or truncating tables is as much as we can do without making\n>> unwarranted assumptions about the filesystem's behavior. You say\n>> you want to zero out the files first, but what will that accomplish\n>> on copy-on-write filesystems?\n>\n>What about filesystems which are not copy-on-write though?\n>\n>> Even granting that zeroing our storage files is worth something,\n>> it's not worth much if there are more copies of the data elsewhere.\n>\n>Backups are not our responsibility to worry about, or, at least, it'd be\n>an independent feature if we wanted to add something like this to\n>pg_basebackup, and not something the initial feature would need to worry\n>about.\n>\n>> And any well-run database is going to have backups, or standby servers,\n>> or both. 
There's no way for the primary node to force standbys to erase\n>> themselves (and it'd be a different sort of security hazard if there\n>> were).\n>\n>A user can't \"force\" PG to do anything more than we can \"force\" a\n>replica to do something, but a user can issue a request to a primary and\n>that primary can then pass that request along to the replica as part of\n>the WAL stream.\n>\n>> Also to the point: if your assumption is that an attacker has access\n>> to the storage medium at a sufficiently low level that they can examine\n>> previously-deleted files, what's stopping them from reading the data\n>> *before* it's deleted?\n>\n>This argument certainly doesn't make any sense- who said they had access\n>to the storage medium at a time before the files were deleted? What if\n>they only had access after the files were zero'd? When you consider the\n>lifetime of a storage medium, it's certainly a great deal longer than\n>the length of time that a given piece of sensitive data might reside on\n>it.\n>\n>> So I think doing anything useful in this area is a bit outside\n>> Postgres' remit. You'd need to be thinking at an operating-system\n>> or hardware level.\n>\n>I disagree entirely. If the operating system and hardware level provide\n>a way for this to work, which is actually rather common when you\n>consider that ext4 is an awful popular filesystem where this would work\n>just fine with nearly all traditional hardware underneath it, then we're\n>just blocking enabling this capability for no justifiably reason.\n>\n\nNot sure. I agree the goal (securely discarding data) is certainly\nworthwhile, although I suspect it's just one of many things you'd need to\ncare about. 
That is, there's probably a million other things you'd need\nto worry about (logs, WAL, CSV files, temp files, ...), so with actually\nsensitive data I'd expect people to just dump/load the data into a clean\nsystem and rebuild the old one (zero drives, ...).\n\nBut let's assume it makes sense - is this really the right solution? I\nthink what I'd prefer is encryption + rotation of the keys. Which should\nwork properly even on COW filesystems, the performance impact is kinda\nlow and amortized etc. Of course, we're discussing built-in encryption\nfor quite a bit of time.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 17:22:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Wed, Jan 15, 2020 at 10:23:22AM -0500, Stephen Frost wrote:\n> >I disagree entirely. If the operating system and hardware level provide\n> >a way for this to work, which is actually rather common when you\n> >consider that ext4 is an awful popular filesystem where this would work\n> >just fine with nearly all traditional hardware underneath it, then we're\n> >just blocking enabling this capability for no justifiably reason.\n> \n> Not sure. I agree the goal (securely discarding data) is certainly\n> worthwile, although I suspect it's just of many things you'd need to\n> care about. That is, there's probably a million other things you'd need\n> to worry about (logs, WAL, CSV files, temp files, ...), so with actually\n> sensitive data I'd expect people to just dump/load the data into a clean\n> system and rebuild the old one (zero drives, ...).\n\nOf course there's other things that one would need to worry about, but\nsaying this isn't useful because PG isn't also scanning your entire\ninfrastructure for CSV files that have this sensitive information isn't\na sensible argument.\n\nI agree that there are different levels of sensitive data- and for many,\nmany cases what is being proposed here will work just fine, even if that\nlevel of sensitive data isn't considered \"actually sensitive data\" by\nother people.\n\n> But let's assume it makes sense - is this really the right solution? I\n> think what I'd prefer is encryption + rotation of the keys. Which should\n> work properly even on COW filesystems, the performance impact is kinda\n> low and amortized etc. Of course, we're discussing built-in encryption\n> for quite a bit of time.\n\nIn some cases that may make sense as an alternative, in other situations\nit doesn't. 
In other words, I disagree that what you're proposing is\njust a different implementation of the same thing (which I'd be happy to\ndiscuss), it's an entirely different feature with different trade-offs.\n\nI'm certainly on-board with the idea of having table level and column\nlevel encryption to facilitate an approach like this, but I disagree\nstrongly that we shouldn't have this simple \"just overwrite the data a\nfew times\" because we might, one day, have useful in-core encryption or\neven that, even if we had it, that we should tell everyone to use that\ninstead of providing this capability. I don't uninstall 'shred' when I\ninstall 'gpg' on my system.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 15 Jan 2020 11:32:07 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> But let's assume it makes sense - is this really the right solution? I\n> think what I'd prefer is encryption + rotation of the keys. Which should\n> work properly even on COW filesystems, the performance impact is kinda\n> low and amortized etc. Of course, we're discussing built-in encryption\n> for quite a bit of time.\n\nYeah, it seems to me that encrypted storage would solve strictly more\ncases than this proposal does, and it has fewer foot-guns.\n\nOne foot-gun that had been vaguely bothering me yesterday, but the point\ndidn't quite crystallize till just now, is that the only place that we\ncould implement such file zeroing is post-commit in a transaction that\nhas done DROP TABLE or TRUNCATE. Of course post-commit is an absolutely\nhorrid place to be doing anything that could fail, since there's no very\ngood way to recover from an error. It's an even worse place to be doing\nanything that could take a long time --- if the user loses patience and\nkills the session, now your problems are multiplied.\n\nRight now our risks in that area are confined to leaking files if\nunlink() fails, which isn't great but it isn't catastrophic either.\nWith this proposal, erroring out post-commit becomes a security\nfailure, if it happens anywhere before we've finished a possibly\nlarge amount of zero-writing. I don't want to go there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jan 2020 11:53:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > But let's assume it makes sense - is this really the right solution? I\n> > think what I'd prefer is encryption + rotation of the keys. Which should\n> > work properly even on COW filesystems, the performance impact is kinda\n> > low and amortized etc. Of course, we're discussing built-in encryption\n> > for quite a bit of time.\n> \n> Yeah, it seems to me that encrypted storage would solve strictly more\n> cases than this proposal does, and it has fewer foot-guns.\n\nI still view that as strictly a different solution and one that\ncertainly won't fit in all cases, not to mention that it's a great deal\nmore complicated and we're certainly no where near close to having it.\n\n> One foot-gun that had been vaguely bothering me yesterday, but the point\n> didn't quite crystallize till just now, is that the only place that we\n> could implement such file zeroing is post-commit in a transaction that\n> has done DROP TABLE or TRUNCATE. Of course post-commit is an absolutely\n> horrid place to be doing anything that could fail, since there's no very\n> good way to recover from an error. It's an even worse place to be doing\n> anything that could take a long time --- if the user loses patience and\n> kills the session, now your problems are multiplied.\n\nThis is presuming that we make this feature something that can be run in\na transaction and rolled back. I don't think there's been any specific\nexpression that there is such a requirement, and you bring up good\npoints that show why providing that functionality would be particularly\nchallenging, but that isn't really an argument against this feature in\ngeneral.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 15 Jan 2020 12:03:53 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Hello, Horiguchi-san\r\n\r\nThank you for your comment.\r\n\r\nAt Wed, 15 Jan 2020 03:46 +0000, \"Kyotaro Horiguchi \"<horikyota.ntt@gmail.com> wrote in\r\n> shred(1) or wipe(1) doesn't seem to contribute to the objective on\r\n> journaled or copy-on-write file systems. I'm not sure, but maybe the\r\n> same can be true for read-modify-write devices like SSD. I'm not sure\r\n> about SDelete, but anyway replacing unlink() with something like\r\n> 'system(\"shred\")' leads to siginificant performance degradation.\r\n> \r\n> man 1 wipe says (https://linux.die.net/man/1/wipe) : (shred has a\r\n> similar note.)\r\n> \r\n> > NOTE ABOUT JOURNALING FILESYSTEMS AND SOME RECOMMENDATIONS\r\n> (JUNE 2004)\r\n> > Journaling filesystems (such as Ext3 or ReiserFS) are now being used\r\n> > by default by most Linux distributions. No secure deletion program\r\n> > that does filesystem-level calls can sanitize files on such\r\n> > filesystems, because sensitive data and metadata can be written to the\r\n> > journal, which cannot be readily accessed. Per-file secure deletion is\r\n> > better implemented in the operating system.\r\n\r\nshred can be used in certain modes of journaled file systems.\r\nHow about telling users that they must set an appropriate mode\r\nif they set shred for erase_command in journaled file systems?\r\nman 1 shred goes on like this:\r\n\r\n> In the case of ext3 file systems, the above disclaimer applies (and shred is thus\r\n> of limited effectiveness) only in data=journal mode, which journals file data in\r\n> addition to just metadata. In both the data=ordered (default) and data=writeback\r\n> modes, shred works as usual. 
Ext3 journaling modes can be changed by adding the\r\n> data=something option to the mount options for a particular file system in the\r\n> /etc/fstab file, as documented in the mount man page (man mount).\r\n\r\nAs shown above, shred works as usual in both the data=ordered (default) and data=writeback modes.\r\nI think data=journal mode is not used in many cases because it degrades performance.\r\nTherefore, I think it is enough to indicate that shred cannot be used in data=journal mode.\r\n\r\nRegards,\r\n\r\n--\r\nTakanori Asaba\r\n\r\n",
"msg_date": "Fri, 17 Jan 2020 08:29:15 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Complete data erasure "
},
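The rule quoted above (shred works as usual except under ext3's data=journal mode, with data=ordered being the default) could be checked mechanically before allowing a shred-style erase_command. A small illustrative sketch, with hypothetical helper names, that inspects a mount-option string such as the fourth field of /proc/mounts:

```python
def ext_data_mode(mount_opts: str) -> str:
    """Return the ext3/ext4 data journaling mode implied by a
    comma-separated mount-option string; per mount(8), "ordered"
    is the default when no data= option is present."""
    for opt in mount_opts.split(","):
        if opt.startswith("data="):
            return opt[len("data="):]
    return "ordered"

def shred_is_effective(mount_opts: str) -> bool:
    """shred's man page caveat applies only in data=journal mode."""
    return ext_data_mode(mount_opts) != "journal"
```

A server could run such a check at startup and refuse (or warn about) a shred-based erase_command on a data=journal mount.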
{
"msg_contents": "Greetings,\n\n* asaba.takanori@fujitsu.com (asaba.takanori@fujitsu.com) wrote:\n> This feature erases data area just before it is returned to the OS (“erase” means that overwrite data area to hide its contents here) \n> because there is a risk that the data will be restored by attackers if it is returned to the OS without being overwritten.\n> The erase timing is when DROP, VACUUM, TRUNCATE, etc. are executed.\n\nLooking at this fresh, I wanted to point out that I think Tom's right-\nwe aren't going to be able to reasonably support this kind of data\nerasure on a simple DROP TABLE or TRUNCATE.\n\n> I want users to be able to customize the erasure method for their security policies.\n\nThere's also this- but I think what it means is that we'd probably have\na top-level command that basically is \"ERASE TABLE blah;\" or similar\nwhich doesn't operate during transaction commit but instead marks the\ntable as \"to be erased\" and then perhaps \"erasure in progress\" and then\n\"fully erased\" (or maybe just back to 'normal' at that point). Making\nthose updates will require the command to perform its own transaction\nmanagement which is why it can't be in a transaction itself but also\nmeans that the data erasure process doesn't need to be done during\ncommit.\n\n> My idea is adding a new parameter erase_command to postgresql.conf.\n\nYeah, I don't think that's really a sensible option or even approach.\n\n> When erase_command is set, VACUUM does not truncate a file size to non-zero \n> because it's safer for users to return the entire file to the OS than to return part of it.\n\nThere was discussion elsewhere about preventing VACUUM from doing a\ntruncate on a file because of the lock it requires and problems with\nreplicas.. 
I'm not sure where that ended up, but, in general, I don't\nthink this feature and VACUUM should really have anything to do with\neach other except for the possible case that a user might be told to\nconfigure their system to not allow VACUUM to truncate tables if they\ncare about this case.\n\nAs mentioned elsewhere, you do also have to consider that the sensitive\ndata will end up in the WAL and on replicas. I don't believe that means\nthis feature is without use, but it means that users of this feature\nwill also need to understand and be able to address WAL and replicas\n(along with backups and such too, of course).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 17 Jan 2020 09:37:17 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Hello Stephen,\r\n\r\nThank you for comment.\r\n\r\nFrom: Stephen Frost <sfrost@snowman.net>\r\n> Greetings,\r\n> \r\n> * asaba.takanori@fujitsu.com (asaba.takanori@fujitsu.com) wrote:\r\n> > This feature erases data area just before it is returned to the OS (“erase”\r\n> means that overwrite data area to hide its contents here)\r\n> > because there is a risk that the data will be restored by attackers if it is returned\r\n> to the OS without being overwritten.\r\n> > The erase timing is when DROP, VACUUM, TRUNCATE, etc. are executed.\r\n> \r\n> Looking at this fresh, I wanted to point out that I think Tom's right-\r\n> we aren't going to be able to reasonbly support this kind of data\r\n> erasure on a simple DROP TABLE or TRUNCATE.\r\n> \r\n> > I want users to be able to customize the erasure method for their security\r\n> policies.\r\n> \r\n> There's also this- but I think what it means is that we'd probably have\r\n> a top-level command that basically is \"ERASE TABLE blah;\" or similar\r\n> which doesn't operate during transaction commit but instead marks the\r\n> table as \"to be erased\" and then perhaps \"erasure in progress\" and then\r\n> \"fully erased\" (or maybe just back to 'normal' at that point). Making\r\n> those updates will require the command to perform its own transaction\r\n> management which is why it can't be in a transaction itself but also\r\n> means that the data erasure process doesn't need to be done during\r\n> commit.\r\n> \r\n> > My idea is adding a new parameter erase_command to postgresql.conf.\r\n> \r\n> Yeah, I don't think that's really a sensible option or even approach.\r\n\r\nI think erase_command can also manage the state of a table.\r\nThe exit status of a configured command shows it.( 0 is \"fully erased\" or \"normal\", 1 is \"erasure in progress\") \r\nerase_command is executed not during a transaction but when unlink() is executed. 
\r\n(for example, after a transaction that has done DROP TABLE)\r\nI think that this shows \" to be erased \".\r\n\r\n> > When erase_command is set, VACUUM does not truncate a file size to non-zero\r\n> > because it's safer for users to return the entire file to the OS than to return part\r\n> of it.\r\n> \r\n> There was discussion elsewhere about preventing VACUUM from doing a\r\n> truncate on a file because of the lock it requires and problems with\r\n> replicas.. I'm not sure where that ended up, but, in general, I don't\r\n> think this feature and VACUUM should really have anything to do with\r\n> each other except for the possible case that a user might be told to\r\n> configure their system to not allow VACUUM to truncate tables if they\r\n> care about this case.\r\n\r\nI think that if ftruncate(fd, 0) is executed in VACUUM, \r\ndata area allocated to a file is returned to the OS, so that area must be overwritten.\r\n\r\n> As mentioned elsewhere, you do also have to consider that the sensitive\r\n> data will end up in the WAL and on replicas. I don't believe that means\r\n> this feature is without use, but it means that users of this feature\r\n> will also need to understand and be able to address WAL and replicas\r\n> (along with backups and such too, of course).\r\n\r\nI see.\r\nI can't think of it right away, but I will deal with it.\r\n\r\nSorry for my late reply.\r\nIt takes time to understand email from you because I'm a beginner.\r\nPlease point out any mistakes.\r\n\r\nRegards,\r\n\r\n--\r\nTakanori Asaba\r\n\r\n\r\n",
"msg_date": "Wed, 22 Jan 2020 02:45:34 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Complete data erasure"
},
{
"msg_contents": "Greetings,\n\n* asaba.takanori@fujitsu.com (asaba.takanori@fujitsu.com) wrote:\n> From: Stephen Frost <sfrost@snowman.net>\n> > * asaba.takanori@fujitsu.com (asaba.takanori@fujitsu.com) wrote:\n> > > This feature erases data area just before it is returned to the OS (“erase”\n> > means that overwrite data area to hide its contents here)\n> > > because there is a risk that the data will be restored by attackers if it is returned\n> > to the OS without being overwritten.\n> > > The erase timing is when DROP, VACUUM, TRUNCATE, etc. are executed.\n> > \n> > Looking at this fresh, I wanted to point out that I think Tom's right-\n> > we aren't going to be able to reasonbly support this kind of data\n> > erasure on a simple DROP TABLE or TRUNCATE.\n> > \n> > > I want users to be able to customize the erasure method for their security\n> > policies.\n> > \n> > There's also this- but I think what it means is that we'd probably have\n> > a top-level command that basically is \"ERASE TABLE blah;\" or similar\n> > which doesn't operate during transaction commit but instead marks the\n> > table as \"to be erased\" and then perhaps \"erasure in progress\" and then\n> > \"fully erased\" (or maybe just back to 'normal' at that point). Making\n> > those updates will require the command to perform its own transaction\n> > management which is why it can't be in a transaction itself but also\n> > means that the data erasure process doesn't need to be done during\n> > commit.\n> > \n> > > My idea is adding a new parameter erase_command to postgresql.conf.\n> > \n> > Yeah, I don't think that's really a sensible option or even approach.\n> \n> I think erase_command can also manage the state of a table.\n> The exit status of a configured command shows it.( 0 is \"fully erased\" or \"normal\", 1 is \"erasure in progress\") \n> erase_command is executed not during a transaction but when unlink() is executed. 
\n\nI really don't see what the advantage of having this be configurable is.\nIn addition, an external command's actions wouldn't be put through the\nWAL meaning that replicas would have to be dealt with in some other way\nbeyind regular WAL and that seems like it'd just be ugly.\n\n> (for example, after a transaction that has done DROP TABLE)\n\nWe certainly can't run external commands during transaction COMMIT, so\nthis can't be part of a regular DROP TABLE.\n\n> > > When erase_command is set, VACUUM does not truncate a file size to non-zero\n> > > because it's safer for users to return the entire file to the OS than to return part\n> > of it.\n> > \n> > There was discussion elsewhere about preventing VACUUM from doing a\n> > truncate on a file because of the lock it requires and problems with\n> > replicas.. I'm not sure where that ended up, but, in general, I don't\n> > think this feature and VACUUM should really have anything to do with\n> > each other except for the possible case that a user might be told to\n> > configure their system to not allow VACUUM to truncate tables if they\n> > care about this case.\n> \n> I think that if ftruncate(fd, 0) is executed in VACUUM, \n> data area allocated to a file is returned to the OS, so that area must be overwritten.\n\nAs I mentioned, there was already talk of making that disallowed in\nVACUUM, so that would just need to be configured (and be able to be\nconfigured) when running in an environment which requires this.\n\n> > As mentioned elsewhere, you do also have to consider that the sensitive\n> > data will end up in the WAL and on replicas. I don't believe that means\n> > this feature is without use, but it means that users of this feature\n> > will also need to understand and be able to address WAL and replicas\n> > (along with backups and such too, of course).\n> \n> I see.\n> I can't think of it right away, but I will deal with it.\n\nIt's something that will need to be understood and dealt with. 
A simple\nanswer might be \"the user must also destroy all WAL and all backups in\nan approved manner, and ensure all replicas have replayed all WAL or\nbeen destroyed\".\n\nThanks,\n\nStephen",
"msg_date": "Tue, 28 Jan 2020 14:34:07 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 02:34:07PM -0500, Stephen Frost wrote:\n>Greetings,\n>\n>* asaba.takanori@fujitsu.com (asaba.takanori@fujitsu.com) wrote:\n>> From: Stephen Frost <sfrost@snowman.net>\n>> > * asaba.takanori@fujitsu.com (asaba.takanori@fujitsu.com) wrote:\n>> > > This feature erases data area just before it is returned to the OS (“erase”\n>> > means that overwrite data area to hide its contents here)\n>> > > because there is a risk that the data will be restored by attackers if it is returned\n>> > to the OS without being overwritten.\n>> > > The erase timing is when DROP, VACUUM, TRUNCATE, etc. are executed.\n>> >\n>> > Looking at this fresh, I wanted to point out that I think Tom's right-\n>> > we aren't going to be able to reasonbly support this kind of data\n>> > erasure on a simple DROP TABLE or TRUNCATE.\n>> >\n>> > > I want users to be able to customize the erasure method for their security\n>> > policies.\n>> >\n>> > There's also this- but I think what it means is that we'd probably have\n>> > a top-level command that basically is \"ERASE TABLE blah;\" or similar\n>> > which doesn't operate during transaction commit but instead marks the\n>> > table as \"to be erased\" and then perhaps \"erasure in progress\" and then\n>> > \"fully erased\" (or maybe just back to 'normal' at that point). 
Making\n>> > those updates will require the command to perform its own transaction\n>> > management which is why it can't be in a transaction itself but also\n>> > means that the data erasure process doesn't need to be done during\n>> > commit.\n>> >\n>> > > My idea is adding a new parameter erase_command to postgresql.conf.\n>> >\n>> > Yeah, I don't think that's really a sensible option or even approach.\n>>\n>> I think erase_command can also manage the state of a table.\n>> The exit status of a configured command shows it.( 0 is \"fully erased\" or \"normal\", 1 is \"erasure in progress\")\n>> erase_command is executed not during a transaction but when unlink() is executed.\n>\n>I really don't see what the advantage of having this be configurable is.\n>In addition, an external command's actions wouldn't be put through the\n>WAL meaning that replicas would have to be dealt with in some other way\n>beyind regular WAL and that seems like it'd just be ugly.\n>\n>> (for example, after a transaction that has done DROP TABLE)\n>\n>We certainly can't run external commands during transaction COMMIT, so\n>this can't be part of a regular DROP TABLE.\n>\n\nIMO the best solution would be that the DROP TABLE does everything as\nusual, but instead of deleting the relfilenode it moves it to some sort\nof queue. And then a background worker would \"erase\" these relfilenodes\noutside the COMMIT.\n\nAnd yes, we need to do this in a way that works with replicas, i.e. we\nneed to WAL-log it somehow. And it should be done in a way that works\nwhen the replica is on a different type of filesystem.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 29 Jan 2020 00:24:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Tue, Jan 28, 2020 at 02:34:07PM -0500, Stephen Frost wrote:\n> >We certainly can't run external commands during transaction COMMIT, so\n> >this can't be part of a regular DROP TABLE.\n> \n> IMO the best solution would be that the DROP TABLE does everything as\n> usual, but instead of deleting the relfilenode it moves it to some sort\n> of queue. And then a background worker would \"erase\" these relfilenodes\n> outside the COMMIT.\n\nThat sounds interesting, though I'm a bit worried that it's going to\nlead to the same kind of complications and difficulty that we have with\ndeleted columns- anything that's working with the system tables will\nneed to see this new \"dropped but pending delete\" flag. Would we also\nrename the table when this happens? Or change the schema it's in?\nOtherwise, your typical DROP IF EXISTS / CREATE could end up breaking.\n\n> And yes, we need to do this in a way that works with replicas, i.e. we\n> need to WAL-log it somehow. And it should to be done in a way that works\n> when the replica is on a different type of filesystem.\n\nI agree it should go through WAL somehow (ideally without needing an\nactual zero'd or whatever page for every page in the relation), but why\ndo we care about the filesystem on the replica? We don't have anything\nthat's really filesystem specific in WAL replay today and I don't see\nthis as needing to change that..\n\nThanks,\n\nStephen",
"msg_date": "Mon, 3 Feb 2020 09:07:09 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "On Mon, Feb 03, 2020 at 09:07:09AM -0500, Stephen Frost wrote:\n>Greetings,\n>\n>* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On Tue, Jan 28, 2020 at 02:34:07PM -0500, Stephen Frost wrote:\n>> >We certainly can't run external commands during transaction COMMIT, so\n>> >this can't be part of a regular DROP TABLE.\n>>\n>> IMO the best solution would be that the DROP TABLE does everything as\n>> usual, but instead of deleting the relfilenode it moves it to some sort\n>> of queue. And then a background worker would \"erase\" these relfilenodes\n>> outside the COMMIT.\n>\n>That sounds interesting, though I'm a bit worried that it's going to\n>lead to the same kind of complications and difficulty that we have with\n>deleted columns- anything that's working with the system tables will\n>need to see this new \"dropped but pending delete\" flag. Would we also\n>rename the table when this happens? Or change the schema it's in?\n>Otherwise, your typical DROP IF EXISTS / CREATE could end up breaking.\n>\n\nThat's not really what I meant - let me explain. When I said DROP TABLE\nshould do everything as usual, that includes catalog changes. I.e. after\nthe commit there would not be any remaining entries in system catalogs\nor anything like that.\n\nThe only thing we'd do differently is that instead of unlinking the\nrelfilenode segments, we'd move the relfilenode to a persistent queue\n(essentially a regular table used as a queue of relfilenodes). 
The\nbackground worker would watch the queue, and when it gets a new\nrelfilenode it'd \"delete\" the data and then remove the relfilenode from\nthe queue.\n\nSo essentially others would not be able to even see the (now dropped)\nobject, they could create a new object with the same name etc.\n\nI imagine we might provide a way to wait for the deletion to actually\ncomplete (can't do that as part of the DROP TABLE, though), so that\npeople can be sure when the data is actually gone (for scripts etc.).\nA simple function waiting for the queue to get empty might be enough, I\nguess, but maybe not.\n\n>> And yes, we need to do this in a way that works with replicas, i.e. we\n>> need to WAL-log it somehow. And it should to be done in a way that works\n>> when the replica is on a different type of filesystem.\n>\n>I agree it should go through WAL somehow (ideally without needing an\n>actual zero'd or whatever page for every page in the relation), but why\n>do we care about the filesystem on the replica? We don't have anything\n>that's really filesystem specific in WAL replay today and I don't see\n>this as needing to change that..\n>\n\nI think this depends on what our requirements are.\n\nMy assumption is that when you perform this \"secure data erasure\" on the\nprimary, you probably also want to erase the data on the replica. But if\nthe instances use different file systems (COW vs. non-COW, ...) the\nexact thing that needs to happen may be different. Or maybe the replica\ndoes not need to do anything, making it a noop?\n\nIn which case we don't need to WAL-log the exact change for each page,\nit might even be fine not to WAL-log anything except for the final\nremoval from the queue. I mean, the data is worthless and not used by\nanyone at this point, there's no point in replicating it ...\n\nI haven't thought about this very hard. 
It's not clear what should\nhappen if we complete the erasure on primary, remove the relfilenode\nfrom the queue, and then restart the replica before it finishes the\nlocal erasure. The queue (if represented by a simple table) will be\nreplicated, so the replica will forget it still has work to do.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 3 Feb 2020 18:30:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
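[Editor's illustration] The queue-plus-background-worker scheme in the message above can be mocked up in a few lines. In this sketch a thread stands in for the background worker and an in-memory queue for the persistent relfilenode queue (which in the real proposal would survive restarts and be WAL-logged somehow); `wait_for_erasure` plays the role of the "wait until the queue is empty" function mentioned. All names are illustrative, not PostgreSQL APIs.

```python
import os
import queue
import threading

class ErasureQueue:
    """Sketch of 'DROP TABLE enqueues the relfilenode, a worker erases it'."""

    def __init__(self):
        self._q = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def enqueue(self, path):
        # what COMMIT would do instead of unlinking the file
        self._q.put(path)

    def wait_for_erasure(self):
        # analogue of a "wait for the queue to get empty" function
        self._q.join()

    def _run(self):
        while True:
            path = self._q.get()
            try:
                size = os.path.getsize(path)
                with open(path, "r+b") as f:
                    f.write(b"\x00" * size)  # overwrite the data area
                    f.flush()
                    os.fsync(f.fileno())
                os.unlink(path)
            finally:
                self._q.task_done()
```

Note how the expensive, failure-prone overwrite happens entirely outside the committing backend; COMMIT itself only enqueues.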
{
"msg_contents": "From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> That's not really what I meant - let me explain. When I said DROP TABLE\n> should do everything as usual, that includes catalog changes. I.e. after\n> the commit there would not be any remaining entries in system catalogs\n> or anything like that.\n> \n> The only thing we'd do differently is that instead of unlinking the\n> relfilenode segments, we'd move the relfilenode to a persistent queue\n> (essentially a regular table used as a queue relfilenodes). The\n> background worker would watch the queue, and when it gets a new\n> relfilenode it'd \"delete\" the data and then remove the relfilenode from\n> the queue.\n> \n> So essentially others would not be able to even see the (now dropped)\n> object, they could create new object with the same name etc.\n\nThat sounds good. I think we can also follow the way the WAL archiver does its job, instead of using a regular table. That is, when the transaction that performed DROP TABLE commits, it puts the data files in the \"trash bin,\" which is actually a filesystem directory. Or, it just renames the data files in the original directory by appending some suffix such as \".del\". Then, the background worker scans the trash bin or the data directory to erase the file content and delete the file.\n\nThe trash bin mechanism may open the door to restoring mistakenly dropped tables, a feature like Oracle's Flashback Drop. 
The dropping transaction puts the table metadata (system catalog data or DDL) in the trash bin as well as the data file.\n\n\n> I imagine we might provide a way to wait for the deletion to actually\n> complete (can't do that as part of the DROP TABLE, though), so that\n> people can be sure when the data is actually gone (for scripts etc.).\n> A simple function waiting for the queue to get empty might be enough, I\n> guess, but maybe not.\n\nAgreed, because the user should expect the disk space to be available after DROP TABLE has been committed. Can't we really make the COMMIT wait for the erasure to complete? Do we have to use an asynchronous erasure method with a background worker? For example, COMMIT performs:\n\n1. Writes a commit WAL record, finalizing the system catalog change.\n2. Puts the data files in the trash bin or renames them.\n3. Erase the file content and delete the file. This could take a long time.\n4. COMMIT replies success to the client.\n\nWhat I am concerned about is that the need to erase and delete the data file would be forgotten if the server crashes during step 3. If so, postmaster can do the job at startup, just like it deletes temporary files (although it delays the startup.)\n\n\n> I think this depends on what our requirements are.\n> \n> My assumption is that when you perform this \"secure data erasure\" on the\n> primary, you probably also want to erase the data on the replica. But if\n> the instances use different file systems (COW vs. non-COW, ...) the\n> exact thing that needs to happen may be different. Or maybe the replica\n> does not need to do anything, making it noop?\n\nWe can guide the use of non-COW file systems on both the primary and standby in the manual.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 4 Feb 2020 00:53:44 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Complete data erasure"
},
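[Editor's illustration] The ".del suffix" variant above splits the work into a cheap, commit-time rename and a later scan-and-erase pass. A rough sketch, with the suffix and function names invented for illustration (the directory fsync mirrors the durable-rename requirement raised later in the thread):

```python
import os

DEL_SUFFIX = ".del"

def mark_for_erasure(path):
    """Commit-time step: durably rename the data file out of the way."""
    target = path + DEL_SUFFIX
    os.rename(path, target)
    # fsync the containing directory so the rename survives a crash
    dfd = os.open(os.path.dirname(os.path.abspath(target)), os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
    return target

def erase_marked(directory):
    """Background/startup step: overwrite and remove every *.del file."""
    for name in sorted(os.listdir(directory)):
        if not name.endswith(DEL_SUFFIX):
            continue
        path = os.path.join(directory, name)
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(b"\x00" * size)  # overwrite the data area
            f.flush()
            os.fsync(f.fileno())
        os.unlink(path)
```

Because the `.del` marker persists on disk, `erase_marked` can be re-run after a crash, which is exactly the property the trash-bin idea is after.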
{
"msg_contents": "",
"msg_date": "Tue, 4 Feb 2020 06:06:39 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Complete data erasure"
},
{
"msg_contents": "On Tue, Feb 04, 2020 at 12:53:44AM +0000, tsunakawa.takay@fujitsu.com\nwrote:\n>From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> That's not really what I meant - let me explain. When I said DROP\n>> TABLE should do everything as usual, that includes catalog changes.\n>> I.e. after the commit there would not be any remaining entries in\n>> system catalogs or anything like that.\n>>\n>> The only thing we'd do differently is that instead of unlinking the\n>> relfilenode segments, we'd move the relfilenode to a persistent queue\n>> (essentially a regular table used as a queue relfilenodes). The\n>> background worker would watch the queue, and when it gets a new\n>> relfilenode it'd \"delete\" the data and then remove the relfilenode\n>> from the queue.\n>>\n>> So essentially others would not be able to even see the (now dropped)\n>> object, they could create new object with the same name etc.\n>\n>That sounds good. I think we can also follow the way the WAL archiver\n>does its job, instead of using a regular table. That is, when the\n>transaction that performed DROP TABLE commits, it puts the data files\n>in the \"trash bin,\" which is actually a filesystem directory. Or, it\n>just renames the data files in the original directory by appending some\n>suffix such as \".del\". Then, the background worker scans the trash bin\n>or the data directory to erase the file content and delete the file.\n>\n\nYeah, that could work, I guess.\n\n>The trash bin mechanism may open up the application for restoring\n>mistakenly dropped tables, a feature like Oracle's Flash Drop. The\n>dropping transaction puts the table metadata (system catalog data or\n>DDL) in the trash bin as well as the data file.\n>\n\nThat seems like a very different feature, and I doubt this is the right\nway to implement that. 
That would require much more infrastructure than\njust moving the file to a separate dir.\n\n>\n>> I imagine we might provide a way to wait for the deletion to actually\n>> complete (can't do that as part of the DROP TABLE, though), so that\n>> people can be sure when the data is actually gone (for scripts etc.).\n>> A simple function waiting for the queue to get empty might be enough,\n>> I guess, but maybe not.\n>\n>Agreed, because the user should expect the disk space to be available\n>after DROP TABLE has been committed. Can't we really make the COMMIT\n>to wait for the erasure to complete? Do we have to use an asynchronous\n>erasure method with a background worker? For example, COMMIT performs:\n>\n\nI think it depends how exactly it's implemented. As Tom pointed out in\nhis message [1], the problem with doing the erasure itself in\npost-commit is not being able to handle errors. But if the files are\nrenamed durably, and the erasure happens in a separate process, that\ncould be OK. The COMMIT may wait for it or not, that's mostly\nirrelevant I think.\n\n[1] https://www.postgresql.org/message-id/9104.1579107235%40sss.pgh.pa.us\n\n>1. Writes a commit WAL record, finalizing the system catalog change.\n>2. Puts the data files in the trash bin or renames them.\n>3. Erase the file content and delete the file. This could take a long time.\n>4. COMMIT replies success to the client.\n>\n\nI don't think the COMMIT has to wait for (3) - it might, of course, but\nfor some use cases it may be better to just commit and let the\nbgworker do the work. And then allow checking if it completed.\n\n>What is concerned about is that the need to erase and delete the data\n>file would be forgotten if the server crashes during step 3. If so,\n>postmaster can do the job at startup, just like it deletes temporary\n>files (although it delays the startup.)\n>\n\nStartup seems like a pretty bad place to do this stuff. 
There may be a\nlot of data to erase, making recovery very long.\n\n>\n>> I think this depends on what our requirements are.\n>>\n>> My assumption is that when you perform this \"secure data erasure\" on\n>> the primary, you probably also want to erase the data on the replica.\n>> But if the instances use different file systems (COW vs. non-COW,\n>> ...) the exact thing that needs to happen may be different. Or maybe\n>> the replica does not need to do anything, making it noop?\n>\n>We can guide the use of non-COW file systems on both the primary and\n>standby in the manual.\n>\n\nI don't see how that solves the issue. I think it's quite useful to be\nable to use different filesystems for primary/replica. And we may even\nknow how to securely erase data on both, in which case I don't see a\npoint not to allow such configurations.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Feb 2020 22:29:00 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I think it depends how exactly it's implemented. As Tom pointed out in\n> his message [1], we can't do the erasure itself in the post-commit is\n> not being able to handle errors. But if the files are renamed durably,\n> and the erasure happens in a separate process, that could be OK. The\n> COMMIT may wayt for it or not, that's mostly irrelevant I think.\n\nHow is requiring a file rename to be completed post-commit any less\nproblematic than the other way? You still have a non-negligible\nchance of failure.\n\n>> 1. Writes a commit WAL record, finalizing the system catalog change.\n>> 2. Puts the data files in the trash bin or renames them.\n>> 3. Erase the file content and delete the file. This could take a long time.\n>> 4. COMMIT replies success to the client.\n\n> I don't think the COMMIT has to wait for (3) - it might, of course, but\n> for some use cases it may be better to just commit and leave the\n> bgworker do the work. And then allow checking if it completed.\n\nThis doesn't seem like a path that will lead to success. The fundamental\npoint here is that COMMIT has to be an atomic action --- or if it isn't,\nfailure partway through has to lead to a database crash & restart, which\nisn't very pleasant, especially if WAL replay of the commit after the\nrestart re-encounters the same error.\n\nUp to now, we've sort of looked the other way with respect to failures\nof file unlinks post-commit, reasoning that the worst that will happen\nis disk space leakage from no-longer-referenced files that we failed to\nunlink. (Which is bad, certainly, but not catastrophic; it's immaterial\nto database semantics.) This patch basically needs to raise the level of\nguarantee that exists in this area, or it won't do what it says on the\ntin. But I've not seen any indication that we know how to do that in a\nworkable way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Feb 2020 16:52:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> Up to now, we've sort of looked the other way with respect to failures\n> of file unlinks post-commit, reasoning that the worst that will happen\n> is disk space leakage from no-longer-referenced files that we failed to\n> unlink. (Which is bad, certainly, but not catastrophic; it's immaterial\n> to database semantics.) This patch basically needs to raise the level of\n> guarantee that exists in this area, or it won't do what it says on the\n> tin. But I've not seen any indication that we know how to do that in a\n> workable way.\n\nHmm, the error case is a headache. Even if the bgworker does the erasure, it could hit the same error repeatedly when the file system or disk is broken, causing repeated I/O that may hamper performance.\n\nDo we have no good choice but to leave it up to the user to erase the file content like the following?\n\nhttps://docs.oracle.com/en/database/oracle/oracle-database/19/asoag/general-considerations-of-using-transparent-data-encryption.html#GUID-F02C9CBF-0374-408B-8655-F7531B681D41\n--------------------------------------------------\nOracle Database\nAdvanced Security Guide\n7 General Considerations of Using Transparent Data Encryption \n\nManaging Security for Plaintext Fragments\n\n\nYou should remove old plaintext fragments that can appear over time.\n\n\nOld plaintext fragments may be present for some time until the database overwrites the blocks containing such values. If privileged operating system users bypass the access controls of the database, then they might be able to directly access these values in the data file holding the tablespace. \n\nTo minimize this risk:\n \n\n1.Create a new tablespace in a new data file. \n\nYou can use the CREATE TABLESPACE statement to create this tablespace. \n\n\n2.Move the table containing encrypted columns to the new tablespace. You can use the ALTER TABLE.....MOVE statement. 
\n\nRepeat this step for all of the objects in the original tablespace.\n\n\n3.Drop the original tablespace. \n\nYou can use the DROP TABLESPACE tablespace INCLUDING CONTENTS KEEP DATAFILES statement. Oracle recommends that you securely delete data files using platform-specific utilities. \n\n\n4.Use platform-specific and file system-specific utilities to securely delete the old data file. Examples of such utilities include shred (on Linux) and sdelete (on Windows). \n--------------------------------------------------\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Wed, 5 Feb 2020 00:34:19 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Complete data erasure"
},
{
"msg_contents": "On Tue, 4 Feb 2020 at 09:53, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> > That's not really what I meant - let me explain. When I said DROP TABLE\n> > should do everything as usual, that includes catalog changes. I.e. after\n> > the commit there would not be any remaining entries in system catalogs\n> > or anything like that.\n> >\n> > The only thing we'd do differently is that instead of unlinking the\n> > relfilenode segments, we'd move the relfilenode to a persistent queue\n> > (essentially a regular table used as a queue relfilenodes). The\n> > background worker would watch the queue, and when it gets a new\n> > relfilenode it'd \"delete\" the data and then remove the relfilenode from\n> > the queue.\n> >\n> > So essentially others would not be able to even see the (now dropped)\n> > object, they could create new object with the same name etc.\n>\n> That sounds good. I think we can also follow the way the WAL archiver does its job, instead of using a regular table. That is, when the transaction that performed DROP TABLE commits, it puts the data files in the \"trash bin,\" which is actually a filesystem directory. Or, it just renames the data files in the original directory by appending some suffix such as \".del\". Then, the background worker scans the trash bin or the data directory to erase the file content and delete the file.\n>\n> The trash bin mechanism may open up the application for restoring mistakenly dropped tables, a feature like Oracle's Flash Drop. 
The dropping transaction puts the table metadata (system catalog data or DDL) in the trash bin as well as the data file.\n>\n>\n> > I imagine we might provide a way to wait for the deletion to actually\n> > complete (can't do that as part of the DROP TABLE, though), so that\n> > people can be sure when the data is actually gone (for scripts etc.).\n> > A simple function waiting for the queue to get empty might be enough, I\n> > guess, but maybe not.\n>\n> Agreed, because the user should expect the disk space to be available after DROP TABLE has been committed. Can't we really make the COMMIT to wait for the erasure to complete? Do we have to use an asynchronous erasure method with a background worker? For example, COMMIT performs:\n>\n> 1. Writes a commit WAL record, finalizing the system catalog change.\n> 2. Puts the data files in the trash bin or renames them.\n> 3. Erase the file content and delete the file. This could take a long time.\n> 4. COMMIT replies success to the client.\n>\n> What is concerned about is that the need to erase and delete the data file would be forgotten if the server crashes during step 3. If so, postmaster can do the job at startup, just like it deletes temporary files (although it delays the startup.)\n\nPlease note that we need to erase files not only when dropping or\ntruncating tables but also when aborting the transaction that created\na new table. If users want to be sure the data is actually erased, they\nneed to wait for the rollback as well, which could be a ROLLBACK command\nissued by the user or an error during the transaction, etc.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Feb 2020 19:19:14 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Hello Stephen,\n\nFrom: Stephen Frost <sfrost@snowman.net>\n> I disagree- it's a feature that's been asked for multiple times and does\n> have value in some situations.\n\nI'm rethinking the need for this feature although I think that it improves security.\nYou said that this feature has value in some situations.\nCould you tell me about those situations?\n\nRegards,\n\n--\nTakanori Asaba\n\n\n",
"msg_date": "Mon, 10 Feb 2020 06:57:26 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Complete data erasure"
},
{
"msg_contents": "Greetings,\n\nFrom: asaba.takanori@fujitsu.com <asaba.takanori@fujitsu.com>\n> Hello Stephen,\n> \n> From: Stephen Frost <sfrost@snowman.net>\n> > I disagree- it's a feature that's been asked for multiple times and does\n> > have value in some situations.\n> \n> I'm rethinking the need for this feature although I think that it improves the\n> security.\n> You said that this feature has value in some situations.\n> Could you tell me about that situations?\n> \n> Regards,\n> \n> --\n> Takanori Asaba\n> \n\nI think that the use case is to ensure that no data remains.\nThis feature will give users peace of mind.\n\nThere is a risk of leakage as long as data remains.\nI think that there are some things that users are worried about.\nFor example, there is a possibility that even if it takes years, attackers could decrypt encrypted data.\nOr some users may be concerned about disk management in cloud environments.\nThese concerns will be resolved if they can erase data themselves.\n\nI think that this feature is valuable, so I would appreciate your continued cooperation.\n\nRegards,\n\n--\nTakanori Asaba\n\n\n\n\n",
"msg_date": "Thu, 20 Feb 2020 08:27:14 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Complete data erasure"
},
{
"msg_contents": "Hello Tom,\n\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > I think it depends how exactly it's implemented. As Tom pointed out in\n> > his message [1], we can't do the erasure itself in the post-commit is\n> > not being able to handle errors. But if the files are renamed durably,\n> > and the erasure happens in a separate process, that could be OK. The\n> > COMMIT may wayt for it or not, that's mostly irrelevant I think.\n> \n> How is requiring a file rename to be completed post-commit any less\n> problematic than the other way? You still have a non-negligible\n> chance of failure.\n\nI think that errors of rename(2) listed in [1] cannot occur or can be handled.\nWhat do you think?\n\n[1] http://man7.org/linux/man-pages/man2/rename.2.html\n\n\nRegards,\n\n--\nTakanori Asaba\n\n\n\n\n",
"msg_date": "Thu, 20 Feb 2020 08:29:35 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Complete data erasure"
},
{
"msg_contents": "Hello Tom,\n\nFrom: asaba.takanori@fujitsu.com <asaba.takanori@fujitsu.com>\n> Hello Tom,\n> \n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> > Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > > I think it depends how exactly it's implemented. As Tom pointed out in\n> > > his message [1], we can't do the erasure itself in the post-commit is\n> > > not being able to handle errors. But if the files are renamed durably,\n> > > and the erasure happens in a separate process, that could be OK. The\n> > > COMMIT may wayt for it or not, that's mostly irrelevant I think.\n> >\n> > How is requiring a file rename to be completed post-commit any less\n> > problematic than the other way? You still have a non-negligible\n> > chance of failure.\n> \n> I think that errors of rename(2) listed in [1] cannot occur or can be handled.\n> What do you think?\n> \n> [1] http://man7.org/linux/man-pages/man2/rename.2.html\n> \n\nI have another idea.\nHow about managing the status of data files like the WAL archiver does?\nFor example,\n\n1. Create a status file \"...ready\" in a transaction that has DROP TABLE. (not rename the data file)\n2. A background worker scans the directory that holds the status files.\n3. Rename the status file to \"...progress\" when the erase of the data file starts.\n4. Rename the status file to \"...done\" when the erase of the data file finishes.\n\nI think that it's OK because step 1 is not post-commit and the background worker can handle errors during the erase.\n\nRegards,\n\n--\nTakanori Asaba\n\n\n\n\n",
"msg_date": "Wed, 18 Mar 2020 06:16:05 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Complete data erasure"
},
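[Editor's illustration] The archiver-style status protocol proposed above (`.ready` → `.progress` → `.done`) is easy to prototype. A sketch with all names hypothetical; step numbers refer to the list in the message, and the background worker's directory scan (step 2) is left out for brevity:

```python
import os

def erase_with_status(datafile, statusdir):
    """Drive one data file through the proposed status-file states."""
    name = os.path.basename(datafile)
    ready = os.path.join(statusdir, name + ".ready")
    progress = os.path.join(statusdir, name + ".progress")
    done = os.path.join(statusdir, name + ".done")
    # step 1: the dropping transaction creates the .ready marker
    open(ready, "w").close()
    # step 3: flip to .progress when the erase starts
    os.rename(ready, progress)
    size = os.path.getsize(datafile)
    with open(datafile, "r+b") as f:
        f.write(b"\x00" * size)  # overwrite the data area
        f.flush()
        os.fsync(f.fileno())
    os.unlink(datafile)
    # step 4: flip to .done when the erase finishes
    os.rename(progress, done)
    return done
```

The appeal of this layout is the same as the archiver's: after a crash, a leftover `.ready` or `.progress` file tells the worker exactly where to resume.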
{
"msg_contents": "Hello,\n\nI was off the point.\nI want to organize the discussion and suggest a feature design.\n\nThere are two opinions.\n1. COMMIT should not take a long time because errors are more likely to occur.\n2. The data area should be released when COMMIT is completed because COMMIT has to be an atomic action.\n\nThese opinions are correct.\nBut it is difficult to satisfy them at the same time.\nSo I suggest that users have the option to choose.\nDROP TABLE works in one of the following two patterns:\n\n1. Rename the data file to \"...del\" instead of ftruncate(fd,0).\n After that, a bgworker scans the directory and runs erase_command.\n (erase_command is a command set by the user, like archive_command.\n For example, shred on Linux.)\n\n2. Run erase_command for the data file immediately before ftruncate(fd,0).\n Wait until it completes, then reply COMMIT to the client.\n After that, it is the same as normal processing.\n\nIf an error occurs in erase_command, it issues a WARNING and doesn't request the unlink from CheckPointer.\nIt’s not a security failure, because I think the risk only arises once the data area is returned to the OS.\n\nI will implement pattern 2 first because it is closer to the expected user experience than pattern 1.\nThis method has been pointed out as follows.\n\n From Stephen\n> We certainly can't run external commands during transaction COMMIT, so\n> this can't be part of a regular DROP TABLE.\n\nI think it means that errors from external commands can't be handled.\nIf so, it's no problem because I have determined the behavior after an error.\nAre there any other problems?\n\nRegards,\n\n--\nTakanori Asaba\n\n\n\n\n",
"msg_date": "Fri, 10 Apr 2020 08:23:32 +0000",
"msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Complete data erasure"
},
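[Editor's illustration] The erase_command hook in the message above would behave much like archive_command: a shell template expanded with the file path, and a WARNING on nonzero exit. A sketch of the driver side; the `%p` placeholder is borrowed from archive_command by analogy and is an assumption here, not the patch's actual syntax:

```python
import shlex
import subprocess

def run_erase_command(template, path):
    """Expand %p to the file path and run the configured command.

    On nonzero exit, print a warning and report failure so the caller
    can skip the unlink request, as proposed in the message above.
    """
    cmd = template.replace("%p", shlex.quote(path))
    rc = subprocess.call(cmd, shell=True)
    if rc != 0:
        print(f"WARNING: erase_command failed with exit status {rc}: {cmd}")
    return rc
```

With a setting like `erase_command = 'shred -u -z %p'`, the command both overwrites and removes the file; the sketch only shows the substitution and the warn-on-failure policy.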
{
"msg_contents": "On Fri, Apr 10, 2020 at 08:23:32AM +0000, asaba.takanori@fujitsu.com wrote:\n>Hello,\n>\n>I was off the point.\n>I want to organize the discussion and suggest feature design.\n>\n>There are two opinions.\n>1. COMMIT should not take a long time because errors are more likely to occur.\n\nI don't think it's a matter of commit duration but a question what to do\nin response to errors in the data erasure code - which is something we\ncan't really rule out if we allow custom scripts to perform the erasure.\nIf the erasure took very long but couldn't possibly fail, it'd be much\neasier to handle than fast erasure failing often.\n\nThe difficulty of error-handling is why adding new stuff to commit may\nbe tricky. Which is why I proposed not to do the failure-prone code in\ncommit itself, but move it to a separate process.\n\n>2. The data area should be released when COMMIT is completed because COMMIT has to be an atomic action.\n>\n\nI don't think \"commit is atomic\" really implies \"data should be released\nat commit\". This is precisely what makes the feature extremely hard to\nimplement, IMHO.\n\nWhy wouldn't it be acceptable to do something like this?\n\n BEGIN;\n ...\n DROP TABLE x ERASE;\n ...\n COMMIT; <-- Don't do data erasure, just add \"x\" to queue.\n\n -- wait for another process to complete the erasure\n SELECT pg_wait_for_erasure();\n\nThat means we're not running any custom commands / code during commit,\nwhich should (hopefully) make it easier to handle errors.\n\n>\n>These opinions are correct.\n>But it is difficult to satisfy them at the same time.\n>So I suggest that users have the option to choose.\n>DROP TABLE works as following two patterns:\n>\n>1. Rename data file to \"...del\" instead of ftruncate(fd,0).\n> After that, bgworker scan the directory and run erase_command.\n> (erase_command is command set by user like archive_command.\n> For example, shred on Linux.)\n>\n>2. 
Run erase_command for data file immediately before ftruncate(fd,0).\n> Wait until it completes, then reply COMMIT to the client.\n> After that, it is the same as normal processing.\n>\n>If error of erase_command occurs, it issues WARNING and don't request unlink to CheckPointer.\n>It’s not a security failure because I think that there is a risk when data area is returned to OS.\n>\n\nI think it was already discussed why doing file renames and other\nexpensive stuff that could fail is a bad idea. And I'm not sure just\nignoring erase_command failures (because that's what WARNING does) is\nreally appropriate for this feature.\n\n>I will implement from pattern 2 because it's more similar to user experience than pattern 1.\n>This method has been pointed out as follows.\n>\n>From Stephen\n>> We certainly can't run external commands during transaction COMMIT, so\n>> this can't be part of a regular DROP TABLE.\n>\n>I think it means that error of external commands can't be handled.\n>If so, it's no problem because I determined behavior after error.\n>Are there any other problems?\n\nI'm not sure what you mean by \"determined behavior after error\"? You\nessentially propose to just print a warning and be done with it. But\nthat means we can simply leave data files with sensitive data on the\ndisk, which seems ... not great.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 11 Apr 2020 19:33:06 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I don't think \"commit is atomic\" really implies \"data should be released\n> at commit\". This is precisely what makes the feature extremely hard to\n> implement, IMHO.\n\n> Why wouldn't it be acceptable to do something like this?\n\n> BEGIN;\n> ...\n> DROP TABLE x ERASE;\n> ...\n> COMMIT; <-- Don't do data erasure, just add \"x\" to queue.\n\n> -- wait for another process to complete the erasure\n> SELECT pg_wait_for_erasure();\n\n> That means we're not running any custom commands / code during commit,\n> which should (hopefully) make it easier to handle errors.\n\nYeah, adding actions-that-could-fail to commit is a very hard sell,\nso something like this API would probably have a better chance.\n\nHowever ... the whole concept of erasure being a committable action\nseems basically misguided from here. Consider this scenario:\n\n\tbegin;\n\n\tcreate table full_o_secrets (...);\n\n\t... manipulate secret data in full_o_secrets ...\n\n\tdrop table full_o_secrets erase;\n\n\t... do something that unintentionally fails, causing xact abort ...\n\n\tcommit;\n\nNow what? Your secret data is all over the disk and you have *no*\nrecourse to get rid of it; that's true even at a very low level,\nbecause we unlinked the file when rolling back the transaction.\nIf the error occurred before getting to \"drop table full_o_secrets\nerase\" then there isn't even any way in principle for the server\nto know that you might not be happy about leaving that data lying\naround.\n\nAnd I haven't even spoken of copies that may exist in WAL, or\nhave been propagated to standby servers by now.\n\nI have no idea what an actual solution that accounted for those\nproblems would look like. But as presented, this is a toy feature\noffering no real security gain, if you ask me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Apr 2020 13:56:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
},
{
"msg_contents": "On Sat, Apr 11, 2020 at 01:56:10PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I don't think \"commit is atomic\" really implies \"data should be released\n>> at commit\". This is precisely what makes the feature extremely hard to\n>> implement, IMHO.\n>\n>> Why wouldn't it be acceptable to do something like this?\n>\n>> BEGIN;\n>> ...\n>> DROP TABLE x ERASE;\n>> ...\n>> COMMIT; <-- Don't do data erasure, just add \"x\" to queue.\n>\n>> -- wait for another process to complete the erasure\n>> SELECT pg_wait_for_erasure();\n>\n>> That means we're not running any custom commands / code during commit,\n>> which should (hopefully) make it easier to handle errors.\n>\n>Yeah, adding actions-that-could-fail to commit is a very hard sell,\n>so something like this API would probably have a better chance.\n>\n>However ... the whole concept of erasure being a committable action\n>seems basically misguided from here. Consider this scenario:\n>\n>\tbegin;\n>\n>\tcreate table full_o_secrets (...);\n>\n>\t... manipulate secret data in full_o_secrets ...\n>\n>\tdrop table full_o_secrets erase;\n>\n>\t... do something that unintentionally fails, causing xact abort ...\n>\n>\tcommit;\n>\n>Now what? Your secret data is all over the disk and you have *no*\n>recourse to get rid of it; that's true even at a very low level,\n>because we unlinked the file when rolling back the transaction.\n>If the error occurred before getting to \"drop table full_o_secrets\n>erase\" then there isn't even any way in principle for the server\n>to know that you might not be happy about leaving that data lying\n>around.\n>\n>And I haven't even spoken of copies that may exist in WAL, or\n>have been propagated to standby servers by now.\n>\n>I have no idea what an actual solution that accounted for those\n>problems would look like. 
But as presented, this is a toy feature\n>offering no real security gain, if you ask me.\n>\n\nYeah, unfortunately the feature as proposed has these weaknesses.\n\nThis is why I proposed that a solution based on encryption and throwing\naway a key might be more reliable - if you don't have a key, who cares\nif the encrypted data file (or parts of it) is still on disk?\n\nIt has issues too, though - a query might need a temporary file to do a\nsort, hash join spills to disk, or something like that. And those won't\nbe encrypted without some executor changes (e.g. we might propagate\n\"needs erasure\" to temp files, and do erasure when necessary).\n\nI suspect a perfect solution would be so complex it's not feasible in\npractice, especially in v1. So maybe the best thing we can do is to\ndocument those limitations, but I'm not sure where to draw the line\nbetween acceptable and unacceptable limitations.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 11 Apr 2020 20:20:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Complete data erasure"
}
] |
[
{
"msg_contents": "Hi all,\n\nI reviewed the latest version of the patch. Overall some good improvements I think. Please find my feedback below.\n\n- I think I mentioned this before - it's not that big of a deal, but it just looks weird and inconsistent to me:\ncreate table t2 as (select a, b, c, 10 d from generate_series(1,5) a, generate_series(1,100) b, generate_series(1,10000) c); create index on t2 (a,b,c desc);\n\npostgres=# explain select distinct on (a,b) a,b,c from t2 where a=2 and b>=5 and b<=5 order by a,b,c desc;\n QUERY PLAN \n---------------------------------------------------------------------------------\n Index Only Scan using t2_a_b_c_idx on t2 (cost=0.43..216.25 rows=500 width=12)\n Skip scan: true\n Index Cond: ((a = 2) AND (b >= 5) AND (b <= 5))\n(3 rows)\n\npostgres=# explain select distinct on (a,b) a,b,c from t2 where a=2 and b=5 order by a,b,c desc;\n QUERY PLAN \n-----------------------------------------------------------------------------------------\n Unique (cost=0.43..8361.56 rows=500 width=12)\n -> Index Only Scan using t2_a_b_c_idx on t2 (cost=0.43..8361.56 rows=9807 width=12)\n Index Cond: ((a = 2) AND (b = 5))\n(3 rows)\n\nWhen doing a distinct on (params) and having equality conditions for all params, it falls back to the regular index scan even though there's no reason not to use the skip scan here. It's much faster to write b between 5 and 5 now rather than writing b=5. I understand this was a limitation of the unique-keys patch at the moment which could be addressed in the future. I think for the sake of consistency it would make sense if this eventually gets addressed.\n\n- nodeIndexScan.c, line 126\nThis sets xs_want_itup to true in all cases (even for non skip-scans). 
I don't think this is acceptable given the side-effects this has (page will never be unpinned in between returned tuples in _bt_drop_lock_and_maybe_pin)\n\n- nbtsearch.c, _bt_skip, line 1440\n_bt_update_skip_scankeys(scan, indexRel); This function is called twice now - once in the else {} and immediately after that outside of the else. The second call can be removed I think.\n\n- nbtsearch.c _bt_skip line 1597\n\t\t\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n\t\t\t\tscan->xs_itup = (IndexTuple) PageGetItem(page, itemid);\n\nThis is an UNLOCK followed by a read of the unlocked page. That looks incorrect?\n\n- nbtsearch.c _bt_skip line 1440\nif (BTScanPosIsValid(so->currPos) &&\n\t\t_bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir))\n\nIs it allowed to look at the high key / low key of the page without having a read lock on it?\n\n- nbtsearch.c line 1634\nif (_bt_readpage(scan, indexdir, offnum)) ...\nelse\n error()\n\nIs it really guaranteed that a match can be found on the page itself? Isn't it possible that an extra index condition, not part of the scan key, makes none of the keys on the page match?\n\n- nbtsearch.c in general\nMost of the code seems to rely quite heavily on the fact that xs_want_itup forces _bt_drop_lock_and_maybe_pin to never release the buffer pin. Have you considered that compacting of a page may still happen even if you hold the pin? [1] I've been trying to come up with cases in which this may break the patch, but I haven't been able to produce such a scenario - so it may be fine. But it would be good to consider again. One thing I was thinking of was a scenario where page splits and/or compacting would happen in between returning tuples. Could this break the _bt_scankey_within_page check such that it thinks the scan key is within the current page, while it actually isn't? Mainly for backward and/or cursor scans. Forward scans shouldn't be a problem I think. 
Apologies for being a bit vague as I don't have a clear example ready when it would go wrong. It may well be fine, but it was one of the things on my mind.\n\n[1] https://postgrespro.com/list/id/1566683972147.11682@Optiver.com\n\n-Floris\n\n\n",
"msg_date": "Wed, 15 Jan 2020 13:33:41 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "Hi Floris,\n\nOn 1/15/20 8:33 AM, Floris Van Nee wrote:\n> I reviewed the latest version of the patch. Overall some good improvements I think. Please find my feedback below.\n>\n\nThanks for your review !\n\n> - I think I mentioned this before - it's not that big of a deal, but it just looks weird and inconsistent to me:\n> create table t2 as (select a, b, c, 10 d from generate_series(1,5) a, generate_series(1,100) b, generate_series(1,10000) c); create index on t2 (a,b,c desc);\n> \n> postgres=# explain select distinct on (a,b) a,b,c from t2 where a=2 and b>=5 and b<=5 order by a,b,c desc;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------\n> Index Only Scan using t2_a_b_c_idx on t2 (cost=0.43..216.25 rows=500 width=12)\n> Skip scan: true\n> Index Cond: ((a = 2) AND (b >= 5) AND (b <= 5))\n> (3 rows)\n> \n> postgres=# explain select distinct on (a,b) a,b,c from t2 where a=2 and b=5 order by a,b,c desc;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------\n> Unique (cost=0.43..8361.56 rows=500 width=12)\n> -> Index Only Scan using t2_a_b_c_idx on t2 (cost=0.43..8361.56 rows=9807 width=12)\n> Index Cond: ((a = 2) AND (b = 5))\n> (3 rows)\n> \n> When doing a distinct on (params) and having equality conditions for all params, it falls back to the regular index scan even though there's no reason not to use the skip scan here. It's much faster to write b between 5 and 5 now rather than writing b=5. I understand this was a limitation of the unique-keys patch at the moment which could be addressed in the future. I think for the sake of consistency it would make sense if this eventually gets addressed.\n> \n\nAgreed, that it is an improvement that should be made. I would like \nDavid's view on this since it relates to the UniqueKey patch.\n\n> - nodeIndexScan.c, line 126\n> This sets xs_want_itup to true in all cases (even for non skip-scans). 
I don't think this is acceptable given the side-effects this has (page will never be unpinned in between returned tuples in _bt_drop_lock_and_maybe_pin)\n>\n\nCorrect - fixed.\n\n> - nbsearch.c, _bt_skip, line 1440\n> _bt_update_skip_scankeys(scan, indexRel); This function is called twice now - once in the else {} and immediately after that outside of the else. The second call can be removed I think.\n> \n\nYes, removed the \"else\" call site.\n\n> - nbtsearch.c _bt_skip line 1597\n> \t\t\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n> \t\t\t\tscan->xs_itup = (IndexTuple) PageGetItem(page, itemid);\n> \n> This is an UNLOCK followed by a read of the unlocked page. That looks incorrect?\n> \n\nYes, that needed to be changed.\n\n> - nbtsearch.c _bt_skip line 1440\n> if (BTScanPosIsValid(so->currPos) &&\n> \t\t_bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir))\n> \n> Is it allowed to look at the high key / low key of the page without have a read lock on it?\n> \n\nIn case of a split the page will still contain a high key and a low key, \nso this should be ok.\n\n> - nbtsearch.c line 1634\n> if (_bt_readpage(scan, indexdir, offnum)) ...\n> else\n> error()\n> \n> Is it really guaranteed that a match can be found on the page itself? Isn't it possible that an extra index condition, not part of the scan key, makes none of the keys on the page match?\n> \n\nThe logic for this has been changed.\n\n> - nbtsearch.c in general\n> Most of the code seems to rely quite heavily on the fact that xs_want_itup forces _bt_drop_lock_and_maybe_pin to never release the buffer pin. Have you considered that compacting of a page may still happen even if you hold the pin? [1] I've been trying to come up with cases in which this may break the patch, but I haven't able to produce such a scenario - so it may be fine. But it would be good to consider again. 
One thing I was thinking of was a scenario where page splits and/or compacting would happen in between returning tuples. Could this break the _bt_scankey_within_page check such that it thinks the scan key is within the current page, while it actually isn't? Mainly for backward and/or cursor scans. Forward scans shouldn't be a problem I think. Apologies for being a bit vague as I don't have a clear example ready when it would go wrong. It may well be fine, but it was one of the things on my mind.\n> \n\nThere is a BT_READ lock in place when finding the correct leaf page, or \nsearching within the leaf page itself. _bt_vacuum_one_page deletes only \nLP_DEAD tuples, but those are already ignored in _bt_readpage. Peter, do \nyou have some feedback for this?\n\n\nPlease find the updated patches attached that Dmitry and I made.\n\nThanks again!\n\nBest regards,\n Jesper",
"msg_date": "Mon, 20 Jan 2020 14:01:20 -0500",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 11:01 AM Jesper Pedersen\n<jesper.pedersen@redhat.com> wrote:\n> > - nbtsearch.c _bt_skip line 1440\n> > if (BTScanPosIsValid(so->currPos) &&\n> > _bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir))\n> >\n> > Is it allowed to look at the high key / low key of the page without have a read lock on it?\n> >\n>\n> In case of a split the page will still contain a high key and a low key,\n> so this should be ok.\n\nThis is definitely not okay.\n\n> > - nbtsearch.c in general\n> > Most of the code seems to rely quite heavily on the fact that xs_want_itup forces _bt_drop_lock_and_maybe_pin to never release the buffer pin. Have you considered that compacting of a page may still happen even if you hold the pin? [1] I've been trying to come up with cases in which this may break the patch, but I haven't able to produce such a scenario - so it may be fine.\n\nTry making _bt_findinsertloc() call _bt_vacuum_one_page() whenever the\npage is P_HAS_GARBAGE(), regardless of whether or not the page is\nabout to split. That will still be correct, while having a much better\nchance of breaking the patch during stress-testing.\n\nRelying on a buffer pin to prevent the B-Tree structure itself from\nchanging in any important way seems likely to be broken already. Even\nif it isn't, it sounds fragile.\n\nA leaf page doesn't really have anything called a low key. It usually\nhas a current first \"data item\"/non-pivot tuple, which is an\ninherently unstable thing. Also, it has a very loose relationship with\nthe high key of the left sibling page, which the the closest thing to\na low key that exists (often they'll have almost the same key values,\nbut that is not guaranteed at all). While I haven't studied the patch,\nthe logic within _bt_scankey_within_page() seems fishy to me for that\nreason.\n\n> There is a BT_READ lock in place when finding the correct leaf page, or\n> searching within the leaf page itself. 
_bt_vacuum_one_page deletes only\n> LP_DEAD tuples, but those are already ignored in _bt_readpage. Peter, do\n> you have some feedback for this ?\n\nIt sounds like the design of the patch relies on doing something other\nthan stopping a scan \"between\" pages, in the sense that is outlined in\nthe commit message of commit 09cb5c0e. If so, then that's a serious\nflaw in its design.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 20 Jan 2020 13:19:30 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 1:19 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Jan 20, 2020 at 11:01 AM Jesper Pedersen\n> <jesper.pedersen@redhat.com> wrote:\n> > > - nbtsearch.c _bt_skip line 1440\n> > > if (BTScanPosIsValid(so->currPos) &&\n> > > _bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir))\n> > >\n> > > Is it allowed to look at the high key / low key of the page without have a read lock on it?\n> > >\n> >\n> > In case of a split the page will still contain a high key and a low key,\n> > so this should be ok.\n>\n> This is definitely not okay.\n\nI suggest that you find a way to add assertions to code like\n_bt_readpage() that verify that we do in fact have the buffer content\nlock. Actually, there is an existing assertion here that covers the\npin, but not the buffer content lock:\n\nstatic bool\n_bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)\n{\n <declare variables>\n ...\n\n /*\n * We must have the buffer pinned and locked, but the usual macro can't be\n * used here; this function is what makes it good for currPos.\n */\n Assert(BufferIsValid(so->currPos.buf));\n\nYou can add another assertion that calls a new utility function in\nbufmgr.c. That can use the same logic as this existing assertion in\nFlushOneBuffer():\n\nAssert(LWLockHeldByMe(BufferDescriptorGetContentLock(bufHdr)));\n\nWe haven't needed assertions like this so far because it's usually it\nis clear whether or not a buffer lock is held (plus the bufmgr.c\nassertions help on their own). The fact that it isn't clear whether or\nnot a buffer lock will be held by caller here suggests a problem. Even\nstill, having some guard rails in the form of these assertions could\nbe helpful. 
Also, it seems like _bt_scankey_within_page() should have\na similar set of assertions.\n\nBTW, there is a paper that describes optimizations like loose index\nscan and skip scan together, in fairly general terms: \"Efficient\nSearch of Multidimensional B-Trees\". Loose index scans are given the\nname \"MDAM duplicate elimination\" in the paper. See:\n\nhttp://vldb.org/conf/1995/P710.PDF\n\nGoetz Graefe told me about the paper. It seems like the closest thing\nthat exists to a taxonomy or conceptual framework for these\ntechniques.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 20 Jan 2020 17:05:33 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Mon, Jan 20, 2020 at 05:05:33PM -0800, Peter Geoghegan wrote:\n>\n> I suggest that you find a way to add assertions to code like\n> _bt_readpage() that verify that we do in fact have the buffer content\n> lock. Actually, there is an existing assertion here that covers the\n> pin, but not the buffer content lock:\n>\n> static bool\n> _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)\n> {\n> <declare variables>\n> ...\n>\n> /*\n> * We must have the buffer pinned and locked, but the usual macro can't be\n> * used here; this function is what makes it good for currPos.\n> */\n> Assert(BufferIsValid(so->currPos.buf));\n>\n> You can add another assertion that calls a new utility function in\n> bufmgr.c. That can use the same logic as this existing assertion in\n> FlushOneBuffer():\n>\n> Assert(LWLockHeldByMe(BufferDescriptorGetContentLock(bufHdr)));\n>\n> We haven't needed assertions like this so far because it's usually it\n> is clear whether or not a buffer lock is held (plus the bufmgr.c\n> assertions help on their own). The fact that it isn't clear whether or\n> not a buffer lock will be held by caller here suggests a problem. Even\n> still, having some guard rails in the form of these assertions could\n> be helpful. Also, it seems like _bt_scankey_within_page() should have\n> a similar set of assertions.\n\nThanks for suggestion. Agree, we will add such guards. It seems that in\ngeneral I need to go through the locking in the patch one more time,\nsince there are some gaps I din't notice/didn't know about before.\n\n> BTW, there is a paper that describes optimizations like loose index\n> scan and skip scan together, in fairly general terms: \"Efficient\n> Search of Multidimensional B-Trees\". Loose index scans are given the\n> name \"MDAM duplicate elimination\" in the paper. See:\n>\n> http://vldb.org/conf/1995/P710.PDF\n>\n> Goetz Graefe told me about the paper. 
It seems like the closest thing\n> that exists to a taxonomy or conceptual framework for these\n> techniques.\n\nYes, I've read this paper, as it's indeed the only reference I found\nabout this topic in the literature. But unfortunately it's not much and (at\nleast from the first read) gives only an overview of the idea.\n\n\n",
"msg_date": "Tue, 21 Jan 2020 11:00:08 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Mon, Jan 20, 2020 at 01:19:30PM -0800, Peter Geoghegan wrote:\n\nThanks for the commentaries. I'm trying to clarify your conclusions for\nmyself, so couple of questions.\n\n> > > - nbtsearch.c in general\n> > > Most of the code seems to rely quite heavily on the fact that xs_want_itup forces _bt_drop_lock_and_maybe_pin to never release the buffer pin. Have you considered that compacting of a page may still happen even if you hold the pin? [1] I've been trying to come up with cases in which this may break the patch, but I haven't able to produce such a scenario - so it may be fine.\n>\n> Try making _bt_findinsertloc() call _bt_vacuum_one_page() whenever the\n> page is P_HAS_GARBAGE(), regardless of whether or not the page is\n> about to split. That will still be correct, while having a much better\n> chance of breaking the patch during stress-testing.\n>\n> Relying on a buffer pin to prevent the B-Tree structure itself from\n> changing in any important way seems likely to be broken already. Even\n> if it isn't, it sounds fragile.\n\nExcept for checking low/high key (which should be done with a lock), I\nbelieve the current implementation follows the same pattern I see quite\noften, namely\n\n* get a lock on a page of interest and test it's values (if we can find\n next distinct value right on the next one without goind down the tree).\n\n* if not, unlock the current page, search within the tree with\n _bt_search (which locks a resuling new page) and examine values on a\n new page, when necessary do _bt_steppage\n\nIs there an obvious problem with this approach, when it comes to the\npage structure modification?\n\n> A leaf page doesn't really have anything called a low key. 
It usually\nhas a current first \"data item\"/non-pivot tuple, which is an\ninherently unstable thing.\n\nWould this inherent instability be resolved for this particular case by\nhaving a lock on a page while checking a first data item, or is there\nsomething else I need to take into account?\n\n> > There is a BT_READ lock in place when finding the correct leaf page, or\n> > searching within the leaf page itself. _bt_vacuum_one_page deletes only\n> > LP_DEAD tuples, but those are already ignored in _bt_readpage. Peter, do\n> > you have some feedback for this ?\n>\n> It sounds like the design of the patch relies on doing something other\n> than stopping a scan \"between\" pages, in the sense that is outlined in\n> the commit message of commit 09cb5c0e. If so, then that's a serious\n> flaw in its design.\n\nCould you please elaborate on why it sounds like that? If I understand\ncorrectly, to stop a scan only \"between\" pages one needs to use only\n_bt_readpage/_bt_steppage? Other than that there is no magic with scan\nposition in the patch, so I'm not sure if I'm missing something here.\n\n\n",
"msg_date": "Tue, 21 Jan 2020 17:09:42 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "Hi Peter,\n\nThanks for your feedback; Dmitry has followed-up with some additional \nquestions.\n\nOn 1/20/20 8:05 PM, Peter Geoghegan wrote:\n>> This is definitely not okay.\n> \n> I suggest that you find a way to add assertions to code like\n> _bt_readpage() that verify that we do in fact have the buffer content\n> lock. \n\nIf you apply the attached patch on master it will fail the test suite; \ndid you mean something else ?\n\nBest regards,\n Jesper",
"msg_date": "Tue, 21 Jan 2020 12:06:12 -0500",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> \n> Could you please elaborate why does it sound like that? If I understand\n> correctly, to stop a scan only \"between\" pages one need to use only\n> _bt_readpage/_bt_steppage? Other than that there is no magic with scan\n> position in the patch, so I'm not sure if I'm missing something here.\n\nAnyone please correct me if I'm wrong, but I think one case where the current patch relies on some data from the page it has locked before it in checking this hi/lo key. I think it's possible for the following sequence to happen. Suppose we have a very simple one leaf-page btree containing four elements: leaf page 1 = [2,4,6,8]\nWe do a backwards index skip scan on this and have just returned our first tuple (8). The buffer is left pinned but unlocked. Now, someone else comes in and inserts a tuple (value 5) into this page, but suppose the page happens to be full. So a page split occurs. As far as I know, a page split could happen at any random element in the page. One of the situations we could be left with is:\nLeaf page 1 = [2,4]\nLeaf page 2 = [5,6,8]\nHowever, our scan is still pointing to leaf page 1. For non-skip scans this is not a problem, as we already read all matching elements in our local buffer and we'll return those. But the skip scan currently:\na) checks the lo-key of the page to see if the next prefix can be found on the leaf page 1\nb) finds out that this is actually true\nc) does a search on the page and returns value=4 (while it should have returned value=6)\n\nPeter, is my understanding about the btree internals correct so far?\n\nNow that I look at the patch again, I fear there currently may also be such a dependency in the \"Advance forward but read backward\"-case. It saves the offset number of a tuple in a variable, then does a _bt_search (releasing the lock and pin on the page). 
At this point, anything can happen to the tuples on this page - the page may be compacted by vacuum such that the offset number you have in your variable does not match the actual offset number of the tuple on the page anymore. Then, at the check for (nextOffset == startOffset) later, there's a possibility the offsets are different even though they relate to the same tuple.\n\n\n-Floris\n\n\n\n",
"msg_date": "Wed, 22 Jan 2020 07:50:30 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "> On Wed, Jan 22, 2020 at 07:50:30AM +0000, Floris Van Nee wrote:\n>\n> Anyone please correct me if I'm wrong, but I think one case where the current patch relies on some data from the page it has locked before it in checking this hi/lo key. I think it's possible for the following sequence to happen. Suppose we have a very simple one leaf-page btree containing four elements: leaf page 1 = [2,4,6,8]\n> We do a backwards index skip scan on this and have just returned our first tuple (8). The buffer is left pinned but unlocked. Now, someone else comes in and inserts a tuple (value 5) into this page, but suppose the page happens to be full. So a page split occurs. As far as I know, a page split could happen at any random element in the page. One of the situations we could be left with is:\n> Leaf page 1 = [2,4]\n> Leaf page 2 = [5,6,8]\n> However, our scan is still pointing to leaf page 1.\n\nIn case if we just returned a tuple, the next action would be either\ncheck the next page for another key or search down to the tree. Maybe\nI'm missing something in your scenario, but the latter will land us on a\nrequired page (we do not point to any leaf here), and before the former\nthere is a check for high/low key. Is there anything else missing?\n\n> Now that I look at the patch again, I fear there currently may also be such a dependency in the \"Advance forward but read backward\"-case. It saves the offset number of a tuple in a variable, then does a _bt_search (releasing the lock and pin on the page). At this point, anything can happen to the tuples on this page - the page may be compacted by vacuum such that the offset number you have in your variable does not match the actual offset number of the tuple on the page anymore. Then, at the check for (nextOffset == startOffset) later, there's a possibility the offsets are different even though they relate to the same tuple.\n\nInteresting point. 
The original idea here was to check that we haven't\nreturned to the same position after jumping, so maybe instead of offsets\nwe can compare the tuple we found.\n\n\n",
"msg_date": "Wed, 22 Jan 2020 17:04:41 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "Hi Dmitry,\n\n\n> > On Wed, Jan 22, 2020 at 07:50:30AM +0000, Floris Van Nee wrote:\n> >\n> > Anyone please correct me if I'm wrong, but I think one case where the current patch relies on some data from the page it has locked before it in checking this hi/lo key. I think it's possible for the following sequence to happen. Suppose we have a very simple one leaf-page btree containing four elements: leaf page 1 = [2,4,6,8]\n> > We do a backwards index skip scan on this and have just returned our first tuple (8). The buffer is left pinned but unlocked. Now, someone else comes in and inserts a tuple (value 5) into this page, but suppose the page happens to be full. So a page split occurs. As far as I know, a page split could happen at any random element in the page. One of the situations we could be left with is:\n> > Leaf page 1 = [2,4]\n> > Leaf page 2 = [5,6,8]\n> > However, our scan is still pointing to leaf page 1.\n\n> In case if we just returned a tuple, the next action would be either\n>> check the next page for another key or search down to the tree. Maybe\n\nBut it won't look at the 'next page for another key', but rather at the 'same page or another key', right? In the _bt_scankey_within_page shortcut we're taking, there's no stepping to a next page involved. It just locks the page again that it previously also locked.\n\n> I'm missing something in your scenario, but the latter will land us on a\n> required page (we do not point to any leaf here), and before the former\n> there is a check for high/low key. Is there anything else missing?\n\nLet me try to clarify. After we return the first tuple, so->currPos.buf is pointing to page=1 in my example (it's the only page after all). We've returned item=8. Then the split happens and the items get rearranged as in my example. We're still pointing with so->currPos.buf to page=1, but the page now contains [2,4]. 
The split happened to the right, so there's a page=2 with [5,6,8], however the ongoing index scan is unaware of that.\nNow _bt_skip gets called to fetch the next tuple. It starts by checking _bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir), the result of which will be 'true': we're comparing the skip key to the low key of the page. So it thinks the next key can be found on the current page. It locks the page and does a _binsrch, finding item=4 to be returned.\n\nThe problem here is that _bt_scankey_within_page mistakenly returns true, thereby limiting the search to just the page that it's pointing to already.\nIt may be fine to just fix this function to return the proper value (I guess it'd also need to look at the high key in this example). It could also be fixed by not looking at the lo/hi key of the page, but to use the local tuple buffer instead. We already did a _read_page once, so if we have any matching tuples on that specific page, we have them locally in the buffer already. That way we never need to lock the same page twice.",
"msg_date": "Wed, 22 Jan 2020 17:24:43 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 9:06 AM Jesper Pedersen\n<jesper.pedersen@redhat.com> wrote:\n> If you apply the attached patch on master it will fail the test suite;\n> did you mean something else ?\n\nYeah, this is exactly what I had in mind for the _bt_readpage() assertion.\n\nAs I said, it isn't a great sign that this kind of assertion is even\nnecessary in index access method code (code like bufmgr.c is another\nmatter). Usually it's just obvious that a buffer lock is held. I can't\nreally blame this patch for that, though. You could say the same thing\nabout the existing \"buffer pin held\" _bt_readpage() assertion. It's\ngood that it verifies what is actually a fragile assumption, even\nthough I'd prefer to not make a fragile assumption.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:40:33 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 11:50 PM Floris Van Nee\n<florisvannee@optiver.com> wrote:\n> Anyone please correct me if I'm wrong, but I think one case where the current patch relies on some data from the page it has locked before it in checking this hi/lo key. I think it's possible for the following sequence to happen. Suppose we have a very simple one leaf-page btree containing four elements: leaf page 1 = [2,4,6,8]\n> We do a backwards index skip scan on this and have just returned our first tuple (8). The buffer is left pinned but unlocked. Now, someone else comes in and inserts a tuple (value 5) into this page, but suppose the page happens to be full. So a page split occurs. As far as I know, a page split could happen at any random element in the page. One of the situations we could be left with is:\n> Leaf page 1 = [2,4]\n> Leaf page 2 = [5,6,8]\n> However, our scan is still pointing to leaf page 1. For non-skip scans this is not a problem, as we already read all matching elements in our local buffer and we'll return those. But the skip scan currently:\n> a) checks the lo-key of the page to see if the next prefix can be found on the leaf page 1\n> b) finds out that this is actually true\n> c) does a search on the page and returns value=4 (while it should have returned value=6)\n>\n> Peter, is my understanding about the btree internals correct so far?\n\nThis is a good summary. This is the kind of scenario I had in mind\nwhen I expressed a general concern about \"stopping between pages\".\nProcessing a whole page at a time is a crucial part of how\n_bt_readpage() currently deals with concurrent page splits.\n\nHolding a buffer pin on a leaf page is only effective as an interlock\nagainst VACUUM completely removing a tuple, which could matter with\nnon-MVCC scans.\n\n> Now that I look at the patch again, I fear there currently may also be such a dependency in the \"Advance forward but read backward\"-case. 
It saves the offset number of a tuple in a variable, then does a _bt_search (releasing the lock and pin on the page). At this point, anything can happen to the tuples on this page - the page may be compacted by vacuum such that the offset number you have in your variable does not match the actual offset number of the tuple on the page anymore. Then, at the check for (nextOffset == startOffset) later, there's a possibility the offsets are different even though they relate to the same tuple.\n\nIf skip scan is restricted to heapkeyspace indexes (i.e. those created\non Postgres 12+), then it might be reasonable to save an index tuple,\nand relocate it within the same page using a fresh binary search that\nuses a scankey derived from the same index tuple -- without unsetting\nscantid/the heap TID scankey attribute. I suppose that you'll need to\n\"find your place again\" after releasing the buffer lock on a leaf page\nfor a time. Also, I think that this will only be safe with MVCC scans,\nbecause otherwise the page could be concurrently deleted by VACUUM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:55:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 10:55 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> This is a good summary. This is the kind of scenario I had in mind\n> when I expressed a general concern about \"stopping between pages\".\n> Processing a whole page at a time is a crucial part of how\n> _bt_readpage() currently deals with concurrent page splits.\n\nNote in particular that index scans cannot return the same index tuple\ntwice -- processing a page at a time ensures that that cannot happen.\n\nCan a loose index scan return the same tuple (i.e. a tuple with the\nsame heap TID) to the executor more than once?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jan 2020 11:09:35 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> Note in particular that index scans cannot return the same index tuple twice -\r\n> - processing a page at a time ensures that that cannot happen.\r\n> \r\n> Can a loose index scan return the same tuple (i.e. a tuple with the same heap\r\n> TID) to the executor more than once?\r\n> \r\n\r\nThe loose index scan shouldn't return a tuple twice. It should only be able to skip 'further', so that shouldn't be a problem. Out of curiosity, why can't index scans return the same tuple twice? Is there something in the executor that isn't able to handle this?\r\n\r\n-Floris\r\n\r\n",
"msg_date": "Wed, 22 Jan 2020 21:08:59 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "> On Wed, Jan 22, 2020 at 05:24:43PM +0000, Floris Van Nee wrote:\n>\n> > > Anyone please correct me if I'm wrong, but I think one case where the current patch relies on some data from the page it has locked before it in checking this hi/lo key. I think it's possible for the following sequence to happen. Suppose we have a very simple one leaf-page btree containing four elements: leaf page 1 = [2,4,6,8]\n> > > We do a backwards index skip scan on this and have just returned our first tuple (8). The buffer is left pinned but unlocked. Now, someone else comes in and inserts a tuple (value 5) into this page, but suppose the page happens to be full. So a page split occurs. As far as I know, a page split could happen at any random element in the page. One of the situations we could be left with is:\n> > > Leaf page 1 = [2,4]\n> > > Leaf page 2 = [5,6,8]\n> > > However, our scan is still pointing to leaf page 1.\n> \n> > In case if we just returned a tuple, the next action would be either\n> >> check the next page for another key or search down to the tree. Maybe\n> \n> But it won't look at the 'next page for another key', but rather at the 'same page or another key', right? In the _bt_scankey_within_page shortcut we're taking, there's no stepping to a next page involved. It just locks the page again that it previously also locked.\n\nYep, it would look only on the same page. Not sure what you mean by\n\"another key\"; if the current key is not found within the current page\nat the first stage, we restart from the root.\n\n> > I'm missing something in your scenario, but the latter will land us on a\n> > required page (we do not point to any leaf here), and before the former\n> > there is a check for high/low key. Is there anything else missing?\n> \n> Let me try to clarify. After we return the first tuple, so->currPos.buf is pointing to page=1 in my example (it's the only page after all). We've returned item=8. 
Then the split happens and the items get rearranged as in my example. We're still pointing with so->currPos.buf to page=1, but the page now contains [2,4]. The split happened to the right, so there's a page=2 with [5,6,8], however the ongoing index scan is unaware of that.\n> Now _bt_skip gets called to fetch the next tuple. It starts by checking _bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir), the result of which will be 'true': we're comparing the skip key to the low key of the page. So it thinks the next key can be found on the current page. It locks the page and does a _binsrch, finding item=4 to be returned.\n> \n> The problem here is that _bt_scankey_within_page mistakenly returns true, thereby limiting the search to just the page that it's pointing to already.\n> It may be fine to just fix this function to return the proper value (I guess it'd also need to look at the high key in this example). It could also be fixed by not looking at the lo/hi key of the page, but to use the local tuple buffer instead. We already did a _read_page once, so if we have any matching tuples on that specific page, we have them locally in the buffer already. That way we never need to lock the same page twice.\n\nOh, that's what you mean. Yes, I was somehow tricked by the name of this\nfunction and didn't notice that it checks only one boundary, so in case\nof a backward scan it returns the wrong result. I think in the situation\nyou've described it would actually not find any item on the current page\nand restart from the root, but nevertheless we need to check for both\nkeys in _bt_scankey_within_page.\n\n\n",
"msg_date": "Wed, 22 Jan 2020 22:36:03 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Wed, Jan 22, 2020 at 09:08:59PM +0000, Floris Van Nee wrote:\n> > Note in particular that index scans cannot return the same index tuple twice -\n> > - processing a page at a time ensures that that cannot happen.\n> >\n> > Can a loose index scan return the same tuple (i.e. a tuple with the same heap\n> > TID) to the executor more than once?\n> >\n>\n> The loose index scan shouldn't return a tuple twice. It should only be able to skip 'further', so that shouldn't be a problem.\n\nYes, it shouldn't happen.\n\n\n",
"msg_date": "Wed, 22 Jan 2020 22:37:16 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 1:09 PM Floris Van Nee <florisvannee@optiver.com> wrote:\n> The loose index scan shouldn't return a tuple twice. It should only be able to skip 'further', so that shouldn't be a problem. Out of curiosity, why can't index scans return the same tuple twice? Is there something in the executor that isn't able to handle this?\n\nI have no reason to believe that the executor has a problem with index\nscans that return a tuple more than once, aside from the very obvious:\nin general, that will often be wrong. It might not be wrong when the\nscan happens to be input to a unique node anyway, or something like\nthat.\n\nI'm not particularly concerned about it. Just wanted to be clear on\nour assumptions for loose index scans -- if loose index scans were\nallowed to return a tuple more than once, then that would at least\nhave to at least be considered in the wider context of the executor\n(but apparently they're not, so no need to worry about it). This may\nhave been mentioned somewhere already. If it is then I must have\nmissed it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jan 2020 14:23:36 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 1:35 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Oh, that's what you mean. Yes, I was somehow tricked by the name of this\n> function and didn't notice that it checks only one boundary, so in case\n> of backward scan it returns wrong result. I think in the situation\n> you've describe it would actually not find any item on the current page\n> and restart from the root, but nevertheless we need to check for both\n> keys in _bt_scankey_within_page.\n\nI suggest reading the nbtree README file's description of backwards\nscans. Read the paragraph that begins with 'We support the notion of\nan ordered \"scan\" of an index...'. I also suggest that you read a bit\nof the stuff in the large section on page deletion. Certainly read the\nparagraph that begins with 'Moving left in a backward scan is\ncomplicated because...'.\n\nIt's important to grok why it's okay that we don't \"couple\" or \"crab\"\nbuffer locks as we descend the tree with Lehman & Yao's design -- we\ncan get away with having *no* interlock against page splits (e.g.,\npin, buffer lock) when we are \"between\" levels of the tree. This is\nsafe, since the page that we land on must still be \"substantively the\nsame page\", no matter how much time passes. That is, it must at least\ncover the leftmost portion of the keyspace covered by the original\nversion of the page that we saw that we needed to descend to within\nthe parent page. The worst that can happen is that we have to recover\nfrom a concurrent page split by moving right one or more times.\n(Actually, page deletion can change the contents of a page entirely,\nbut that's not really an exception to the general rule -- page\ndeletion is careful about recycling pages that an in flight index scan\nmight land on.)\n\nLehman & Yao don't have backwards scans (or left links, or page\ndeletion). Unlike nbtree. This is why the usual Lehman & Yao\nguarantees don't quite work with backward scans. 
We must therefore\ncompensate as described by the README file (basically, we check and\nre-check for races, possibly returning to the original page when we\nthink that we might have overlooked something and need to make sure).\nIt's an exception to the general rule, you could say.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jan 2020 15:13:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 10:55 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Jan 21, 2020 at 11:50 PM Floris Van Nee\n> <florisvannee@optiver.com> wrote:\n> > As far as I know, a page split could happen at any random element in the page. One of the situations we could be left with is:\n> > Leaf page 1 = [2,4]\n> > Leaf page 2 = [5,6,8]\n> > However, our scan is still pointing to leaf page 1. For non-skip scans this is not a problem, as we already read all matching elements in our local buffer and we'll return those. But the skip scan currently:\n> > a) checks the lo-key of the page to see if the next prefix can be found on the leaf page 1\n> > b) finds out that this is actually true\n> > c) does a search on the page and returns value=4 (while it should have returned value=6)\n> >\n> > Peter, is my understanding about the btree internals correct so far?\n>\n> This is a good summary. This is the kind of scenario I had in mind\n> when I expressed a general concern about \"stopping between pages\".\n> Processing a whole page at a time is a crucial part of how\n> _bt_readpage() currently deals with concurrent page splits.\n\nI want to be clear about what it means that the page doesn't have a\n\"low key\". Let us once again start with a very simple one leaf-page\nbtree containing four elements: leaf page 1 = [2,4,6,8] -- just like\nin Floris' original page split scenario.\n\nLet us also say that page 1 has a left sibling page -- page 0. Page 0\nhappens to have a high key with the integer value 0. So you could\n*kind of* claim that the \"low key\" of page 1 is the integer value 0\n(page 1 values must be > 0) -- *not* the integer value 2 (the\nso-called \"low key\" here is neither > 2, nor >= 2). More formally, an\ninvariant exists that says that all values on page 1 must be\n*strictly* greater than the integer value 0. 
However, this formal\ninvariant thing is hard or impossible to rely on when we actually\nreach page 1 and want to know about its lower bound -- since there is\nno \"low key\" pivot tuple on page 1 (we can only speak of a \"low key\"\nas an abstract concept, or something that works transitively from the\nparent -- there is only a physical high key pivot tuple on page 1\nitself).\n\nSuppose further that Page 0 is now empty, apart from its \"all values\non page are <= 0\" high key (page 0 must have had a few negative\ninteger values in its tuples at some point, but not anymore). VACUUM\nwill delete the page, *changing the effective low key* of Page 1 in\nthe process. The lower bound from the shared parent page will move\nlower/left as a consequence of the deletion of page 0. nbtree page\ndeletion makes the \"keyspace move right, not left\". So the \"conceptual\nlow key\" of page 1 just went down from 0 to -5 (say), without there\nbeing any practical way of a skip scan reading page 1 noticing the\nchange (the left sibling of page 0, page -1, has a high key of <= -5,\nsay).\n\nNot only is it possible for somebody to insert the value 1 in page 1\n-- now they can insert the value -3 or -4!\n\nMore concretely, the pivot tuple in the parent that originally pointed\nto page 0 is still there -- all that page deletion changed about this\ntuple is its downlink, which now points to page 1 instead of page 0.\nConfusingly, page deletion removes the pivot tuple of the right\nsibling page from the parent -- *not* the pivot tuple of the empty\npage that gets deleted (in this case page 0) itself.\n\nNote: this example ignores things like negative infinity values in\ntruncated pivot tuples, and the heap TID tiebreaker column -- in\nreality this would look a bit different because of those factors.\n\nSee also: amcheck's bt_right_page_check_scankey() function, which has\na huge comment that reasons about a race involving page deletion. 
In\ngeneral, page deletion is by far the biggest source of complexity when\nreasoning about the key space.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jan 2020 16:30:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Wed, Jan 22, 2020 at 10:36:03PM +0100, Dmitry Dolgov wrote:\n>\n> > Let me try to clarify. After we return the first tuple, so->currPos.buf is pointing to page=1 in my example (it's the only page after all). We've returned item=8. Then the split happens and the items get rearranged as in my example. We're still pointing with so->currPos.buf to page=1, but the page now contains [2,4]. The split happened to the right, so there's a page=2 with [5,6,8], however the ongoing index scan is unaware of that.\n> > Now _bt_skip gets called to fetch the next tuple. It starts by checking _bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir), the result of which will be 'true': we're comparing the skip key to the low key of the page. So it thinks the next key can be found on the current page. It locks the page and does a _binsrch, finding item=4 to be returned.\n> >\n> > The problem here is that _bt_scankey_within_page mistakenly returns true, thereby limiting the search to just the page that it's pointing to already.\n> > It may be fine to just fix this function to return the proper value (I guess it'd also need to look at the high key in this example). It could also be fixed by not looking at the lo/hi key of the page, but to use the local tuple buffer instead. We already did a _read_page once, so if we have any matching tuples on that specific page, we have them locally in the buffer already. That way we never need to lock the same page twice.\n>\n> Oh, that's what you mean. Yes, I was somehow tricked by the name of this\n> function and didn't notice that it checks only one boundary, so in case\n> of backward scan it returns wrong result. I think in the situation\n> you've describe it would actually not find any item on the current page\n> and restart from the root, but nevertheless we need to check for both\n> keys in _bt_scankey_within_page.\n\nThanks again everyone for commentaries and clarification. 
Here is the\nversion where hopefully I've addressed all the mentioned issues.\n\nAs mentioned in the _bt_skip commentaries, before this we were moving left to\ncheck the next page, to avoid significant issues in case ndistinct was\nunderestimated and we need to skip too often. To make this work safely in the\npresence of splits we need to remember the original page and move right\nagain until we find a page with a right link pointing to it. It's not\nclear whether it's worth increasing the complexity for this sort of \"edge\ncase\" with ndistinct estimation while moving left, so at least for now\nwe ignore this in the implementation and just start from the root\nimmediately.\n\nThe offset-based code for moving forward/reading backward was replaced with\nremembering a start index tuple and attempting to find it on the new\npage. Also, a missing page lock before _bt_scankey_within_page was added,\nand _bt_scankey_within_page now checks both page boundaries.",
"msg_date": "Mon, 27 Jan 2020 11:30:16 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "Hi Dmitry,\n\nThanks for the new patch! I tested it and managed to find a case that causes some issues. Here's how to reproduce:\n\ndrop table if exists t; \ncreate table t as select a,b,b%2 as c,10 as d from generate_series(1,5) a, generate_series(1,1000) b;\ncreate index on t (a,b,c,d);\n\n-- correct\npostgres=# begin; declare c scroll cursor for select distinct on (a) a,b,c,d from t order by a desc, b desc; fetch forward all from c; fetch backward all from c; commit; \nBEGIN\nDECLARE CURSOR\n a | b | c | d \n---+------+---+----\n 5 | 1000 | 0 | 10\n 4 | 1000 | 0 | 10\n 3 | 1000 | 0 | 10\n 2 | 1000 | 0 | 10\n 1 | 1000 | 0 | 10\n(5 rows)\n\n a | b | c | d \n---+------+---+----\n 1 | 1000 | 0 | 10\n 2 | 1000 | 0 | 10\n 3 | 1000 | 0 | 10\n 4 | 1000 | 0 | 10\n 5 | 1000 | 0 | 10\n(5 rows)\n\n-- now delete some rows\npostgres=# delete from t where a=3;\nDELETE 1000\n\n-- and rerun: error is thrown\npostgres=# begin; declare c scroll cursor for select distinct on (a) a,b,c,d from t order by a desc, b desc; fetch forward all from c; fetch backward all from c; commit; \nBEGIN\nDECLARE CURSOR\n a | b | c | d \n---+------+---+----\n 5 | 1000 | 0 | 10\n 4 | 1000 | 0 | 10\n 2 | 1000 | 0 | 10\n 1 | 1000 | 0 | 10\n(4 rows)\n\nERROR: lock buffer_content is not held\nROLLBACK\n\n\nA slightly different situation arises when executing the cursor with an ORDER BY a, b instead of the ORDER BY a DESC, b DESC:\n-- recreate table again and execute the delete as above\n\npostgres=# begin; declare c scroll cursor for select distinct on (a) a,b,c,d from t order by a, b; fetch forward all from c; fetch backward all from c; commit; \nBEGIN\nDECLARE CURSOR\n a | b | c | d \n---+---+---+----\n 1 | 1 | 1 | 10\n 2 | 1 | 1 | 10\n 4 | 1 | 1 | 10\n 5 | 1 | 1 | 10\n(4 rows)\n\n a | b | c | d \n---+-----+---+----\n 5 | 1 | 1 | 10\n 4 | 1 | 1 | 10\n 2 | 827 | 1 | 10\n 1 | 1 | 1 | 10\n(4 rows)\n\nCOMMIT\n\nAnd lastly, you'll also get incorrect results if you do the delete slightly 
differently:\n-- leave one row where a=3 and b=1000\npostgres=# delete from t where a=3 and b<=999;\n-- the cursor query above won't show any of the a=3 rows even though they should\n\n\n-Floris\n\n\n\n",
"msg_date": "Mon, 27 Jan 2020 14:00:39 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "Oh, interesting, thank you. I believe I know what happened; there is\none unnecessary locking part that only causes problems, plus\none direct access to page items without _bt_readpage. Will post a\nnew version soon.\n\nOn Mon, Jan 27, 2020 at 3:00 PM Floris Van Nee <florisvannee@optiver.com> wrote:\n>\n> Hi Dmitry,\n>\n> Thanks for the new patch! I tested it and managed to find a case that causes some issues. Here's how to reproduce:\n>\n> drop table if exists t;\n> create table t as select a,b,b%2 as c,10 as d from generate_series(1,5) a, generate_series(1,1000) b;\n> create index on t (a,b,c,d);\n>\n> -- correct\n> postgres=# begin; declare c scroll cursor for select distinct on (a) a,b,c,d from t order by a desc, b desc; fetch forward all from c; fetch backward all from c; commit;\n> BEGIN\n> DECLARE CURSOR\n> a | b | c | d\n> ---+------+---+----\n> 5 | 1000 | 0 | 10\n> 4 | 1000 | 0 | 10\n> 3 | 1000 | 0 | 10\n> 2 | 1000 | 0 | 10\n> 1 | 1000 | 0 | 10\n> (5 rows)\n>\n> a | b | c | d\n> ---+------+---+----\n> 1 | 1000 | 0 | 10\n> 2 | 1000 | 0 | 10\n> 3 | 1000 | 0 | 10\n> 4 | 1000 | 0 | 10\n> 5 | 1000 | 0 | 10\n> (5 rows)\n>\n> -- now delete some rows\n> postgres=# delete from t where a=3;\n> DELETE 1000\n>\n> -- and rerun: error is thrown\n> postgres=# begin; declare c scroll cursor for select distinct on (a) a,b,c,d from t order by a desc, b desc; fetch forward all from c; fetch backward all from c; commit;\n> BEGIN\n> DECLARE CURSOR\n> a | b | c | d\n> ---+------+---+----\n> 5 | 1000 | 0 | 10\n> 4 | 1000 | 0 | 10\n> 2 | 1000 | 0 | 10\n> 1 | 1000 | 0 | 10\n> (4 rows)\n>\n> ERROR: lock buffer_content is not held\n> ROLLBACK\n>\n>\n> A slightly different situation arises when executing the cursor with an ORDER BY a, b instead of the ORDER BY a DESC, b DESC:\n> -- recreate table again and execute the delete as above\n>\n> postgres=# begin; declare c scroll cursor for select distinct on (a) a,b,c,d from t order by a, b; fetch forward all 
from c; fetch backward all from c; commit;\n> BEGIN\n> DECLARE CURSOR\n> a | b | c | d\n> ---+---+---+----\n> 1 | 1 | 1 | 10\n> 2 | 1 | 1 | 10\n> 4 | 1 | 1 | 10\n> 5 | 1 | 1 | 10\n> (4 rows)\n>\n> a | b | c | d\n> ---+-----+---+----\n> 5 | 1 | 1 | 10\n> 4 | 1 | 1 | 10\n> 2 | 827 | 1 | 10\n> 1 | 1 | 1 | 10\n> (4 rows)\n>\n> COMMIT\n>\n> And lastly, you'll also get incorrect results if you do the delete slightly differently:\n> -- leave one row where a=3 and b=1000\n> postgres=# delete from t where a=3 and b<=999;\n> -- the cursor query above won't show any of the a=3 rows even though they should\n>\n>\n> -Floris\n>\n\n\n",
"msg_date": "Tue, 28 Jan 2020 14:45:49 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Mon, Jan 27, 2020 at 02:00:39PM +0000, Floris Van Nee wrote:\n>\n> Thanks for the new patch! I tested it and managed to find a case that causes\n> some issues. Here's how to reproduce:\n\nSo, after a bit of investigation I found the issue (it was actually there\neven in the previous version). In this one case of moving forward and reading\nbackward, exactly the scenario you've described above, the current implementation was\nnot ignoring deleted tuples.\n\nMy first idea to fix this was to use _bt_readpage when necessary and put a\ncouple of _bt_killitems calls where we leave a page while jumping, so that\ndeleted tuples will be ignored. To demonstrate it visually, let's say we\nwant to go backward on a cursor over an ORDER BY a DESC, b DESC query,\ni.e. return:\n\n (1,100), (2, 100), (3, 100) etc.\n\nTo achieve that we jump from (1,1) to (1,100), from (2,1) to (2,100) and so on.\nIf some values are deleted, we need to read backward. E.g. if (3,100) is\ndeleted, we need to return (3,99).\n \n +---------------+---------------+---------------+---------------+ \n | | | | | \n | 1,1 ... 1,100 | 2,1 ... 2,100 | 3,1 ... 3,100 | 4,1 ... 4,100 | \n | | | | | \n +---------------+---------------+---------------+---------------+ \n \n | ^ | ^ | ^ | ^ \n | | | | | | | | \n +-------------+ +-------------+ +-------------+ +-------------+ \n \nIf it happens that a whole series of values is deleted, we return to the\nprevious value and need to detect such a situation. E.g. if all the values\nfrom (3,1) to (3,100) were deleted, we will return to (2,100).\n \n +---------------+---------------+ +---------------+ \n | | | | | \n | 1,1 ... 1,100 | 2,1 ... 2,100 |<--------------+ 4,1 ... 4,100 | \n | | | | | \n +---------------+---------------+ +---------------+ \n ^ \n | ^ | ^ | ^ | \n | | | | | | | \n +-------------+ +-------------+ +-------------+ | \n +-----------------------------+ \n \nThis all is implemented inside _bt_skip. 
Unfortunately, as I see it now, the idea\nof relying on the ability to skip dead index tuples without checking a heap tuple\nis not reliable, since an index tuple will be added into killedItems and can be\nmarked as dead only when not a single transaction can see it anymore.\n\nPotentially there are two possible solutions:\n\n* Adjust the code in nodeIndexOnlyscan to perform a proper visibility check and\n detect whether we have returned back. Obviously this will make the patch more invasive.\n\n* Reduce the scope of the patch, and simply do not apply jumping in this case. This\n means less functionality but hopefully still brings some value.\n\nAt this point Jesper and I are inclined to go with the second option. But maybe\nI'm missing something - are there any other suggestions?\n\n\n",
"msg_date": "Tue, 4 Feb 2020 21:02:05 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
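A toy model can make the failure mode above concrete. The following Python sketch is purely illustrative (it is not nbtree code; `skip_scan_desc`, the tuple lists and the `dead` set are all invented for this example): it mimics a cursor over an ORDER BY a DESC, b DESC query that jumps to the end of each group and then has to read backward past deleted tuples.

```python
from itertools import groupby

def skip_scan_desc(tuples, dead):
    """Yield, for each distinct 'a', the largest visible (a, b) pair.

    tuples: list of (a, b) in ascending order, standing in for the index.
    dead:   set of (a, b) pairs that are deleted but still in the index.
    """
    for a, group in groupby(tuples, key=lambda t: t[0]):
        # The "skip": jump straight to the end of the group, then walk
        # backward until a visible tuple is found.
        for t in reversed(list(group)):
            if t not in dead:
                yield t
                break
        # If the whole group is dead, nothing is yielded for it, which
        # models falling back to the previous distinct value.

tuples = [(a, b) for a in (1, 2, 3, 4) for b in range(1, 101)]
dead = {(3, 100)} | {(4, b) for b in range(1, 101)}  # (3,100) and all of a=4
result = list(skip_scan_desc(tuples, dead))
# result == [(1, 100), (2, 100), (3, 99)]
```

With (3,100) deleted the model steps back to (3,99), and with the whole a=4 series deleted it simply produces nothing for that group - the two situations shown in the diagrams above.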
{
"msg_contents": "\n> this point me and Jesper inclined to go with the second option. But maybe\n> I'm missing something, are there any other suggestions?\n\nUnfortunately I figured this would need a more invasive fix. I tend to agree that it'd be better not to skip in situations like this. I think it'd make most sense for any plan for these 'prepare/fetch' queries to use not a skip scan but rather a materialize node, right?\n\n-Floris\n\n",
"msg_date": "Tue, 4 Feb 2020 20:34:09 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "> On Tue, Feb 04, 2020 at 08:34:09PM +0000, Floris Van Nee wrote:\n>\n> > this point me and Jesper inclined to go with the second option. But maybe\n> > I'm missing something, are there any other suggestions?\n>\n> Unfortunately I figured this would need a more invasive fix. I tend to agree that it'd be better to not skip in situations like this. I think it'd make most sense to make any plan for these 'prepare/fetch' queries would not use skip, but rather a materialize node, right?\n\nYes, sort of, without a skip scan it would be just an index only scan\nwith unique on top. Actually it's not immediately clear how to achieve\nthis, since at the moment, when the planner is deciding to consider an index\nskip scan, there is no information about either the direction or whether\nwe're dealing with a cursor. Maybe we can somehow signal to the decision\nlogic that the root was a DeclareCursorStmt by e.g. introducing a new\nfield to the query structure (or abusing an existing one, since\nDeclareCursorStmt is being processed by standard_ProcessUtility; just\nfor a test I've tried to use the utilityStmt of a nested statement, hoping\nthat it's unused, and it didn't break tests yet).\n\n\n",
"msg_date": "Wed, 5 Feb 2020 17:37:30 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "At Wed, 5 Feb 2020 17:37:30 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in \n> > On Tue, Feb 04, 2020 at 08:34:09PM +0000, Floris Van Nee wrote:\n> >\n> > > this point me and Jesper inclined to go with the second option. But maybe\n> > > I'm missing something, are there any other suggestions?\n> >\n> > Unfortunately I figured this would need a more invasive fix. I tend to agree that it'd be better to not skip in situations like this. I think it'd make most sense to make any plan for these 'prepare/fetch' queries would not use skip, but rather a materialize node, right?\n> \n> Yes, sort of, without a skip scan it would be just an index only scan\n> with unique on top. Actually it's not immediately clean how to achieve\n> this, since at the moment, when planner is deciding to consider index\n> skip scan, there is no information about neither direction nor whether\n> we're dealing with a cursor. Maybe we can somehow signal to the decision\n> logic that the root was a DeclareCursorStmt by e.g. introducing a new\n> field to the query structure (or abusing an existing one, since\n> DeclareCursorStmt is being processed by standard_ProcessUtility, just\n> for a test I've tried to use utilityStmt of a nested statement hoping\n> that it's unused and it didn't break tests yet).\n\nUmm. I think that's the wrong direction. While defining a cursor,\nthe default scrollability is decided based on whether the query allows backward\nscan. That is, the definition of backward-scan'ability is not\njust whether it can scan from the end toward the beginning, but\nwhether it can go back and forth freely or not. By that definition,\nthe *current* skip scan does not support backward scan. 
If we want\nto allow descending order-by in a query, we should support scrollable\ncursor, too.\n\nWe could add an additional parameter \"in_cursor\" to\nExecSupportBackwardScan and let skip scan return false if in_cursor is\ntrue, but I'm not sure it's acceptable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Feb 2020 10:24:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
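For illustration, the proposal above can be modelled in a few lines of Python. This is a simplified sketch, not the real ExecSupportsBackwardScan logic: the plan dictionaries and the `in_cursor` flag (the hypothetical parameter suggested above) are invented for this example.

```python
def exec_supports_backward_scan(plan, in_cursor=False):
    # Proposed behaviour: a skip scan reports itself as not backward
    # capable when it would run inside a cursor.
    if plan.get("skip_scan") and in_cursor:
        return False
    return plan.get("backward_capable", True)

def implicit_scrollable(plan):
    # Default scrollability of DECLARE CURSOR (without SCROLL/NO SCROLL)
    # follows whether the underlying plan supports backward scan.
    return exec_supports_backward_scan(plan, in_cursor=True)

plain = {"backward_capable": True}
skipping = {"backward_capable": True, "skip_scan": True}
# implicit_scrollable(plain) is True; implicit_scrollable(skipping) is False
```

The point of the sketch is the distinction drawn above: backward-scan'ability here means being able to go back and forth freely, so a skip scan running inside a cursor would have to opt out.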
{
"msg_contents": "> On Thu, Feb 06, 2020 at 10:24:50AM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 5 Feb 2020 17:37:30 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in\n> > > On Tue, Feb 04, 2020 at 08:34:09PM +0000, Floris Van Nee wrote:\n> > >\n> > > > this point me and Jesper inclined to go with the second option. But maybe\n> > > > I'm missing something, are there any other suggestions?\n> > >\n> > > Unfortunately I figured this would need a more invasive fix. I tend to agree that it'd be better to not skip in situations like this. I think it'd make most sense to make any plan for these 'prepare/fetch' queries would not use skip, but rather a materialize node, right?\n> >\n> > Yes, sort of, without a skip scan it would be just an index only scan\n> > with unique on top. Actually it's not immediately clean how to achieve\n> > this, since at the moment, when planner is deciding to consider index\n> > skip scan, there is no information about neither direction nor whether\n> > we're dealing with a cursor. Maybe we can somehow signal to the decision\n> > logic that the root was a DeclareCursorStmt by e.g. introducing a new\n> > field to the query structure (or abusing an existing one, since\n> > DeclareCursorStmt is being processed by standard_ProcessUtility, just\n> > for a test I've tried to use utilityStmt of a nested statement hoping\n> > that it's unused and it didn't break tests yet).\n>\n> Umm. I think it's a wrong direction. While defining a cursor,\n> default scrollability is decided based on the query allows backward\n> scan or not. That is, the definition of backward-scan'ability is not\n> just whether it can scan from the end toward the beginning, but\n> whether it can go back and forth freely or not. In that definition,\n> the *current* skip scan does not supporting backward scan. 
If we want\n> to allow descending order-by in a query, we should support scrollable\n> cursor, too.\n>\n> We could add an additional parameter \"in_cursor\" to\n> ExecSupportBackwardScan and let skip scan return false if in_cursor is\n> true, but I'm not sure it's acceptable.\n\nI was also thinking about whether it's possible to use\nExecSupportBackwardScan here, but skip scan is just a mode of an\nindex/indexonly scan. Which means that ExecSupportBackwardScan also needs\nto know somehow if this mode is being used, and then, since this\nfunction is called after it's already decided to use skip scan in the\nresulting plan, somehow correct the plan (exclude skipping and try to\nfind the next best path?) - do I understand your suggestion correctly?\n\n\n",
"msg_date": "Thu, 6 Feb 2020 11:57:07 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "At Thu, 6 Feb 2020 11:57:07 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in \n> > On Thu, Feb 06, 2020 at 10:24:50AM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 5 Feb 2020 17:37:30 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in\n> > We could add an additional parameter \"in_cursor\" to\n> > ExecSupportBackwardScan and let skip scan return false if in_cursor is\n> > true, but I'm not sure it's acceptable.\n> \n> I also was thinking about whether it's possible to use\n> ExecSupportBackwardScan here, but skip scan is just a mode of an\n> index/indexonly scan. Which means that ExecSupportBackwardScan also need\n> to know somehow if this mode is being used, and then, since this\n> function is called after it's already decided to use skip scan in the\n> resulting plan, somehow correct the plan (exclude skipping and try to\n> find next best path?) - do I understand your suggestion correct?\n\nI hadn't thought it through that deeply, but a bit of checking told me that\nIndexSupportsBackwardScan returns a fixed flag for each AM. It seems that\nthings are not that simple.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Feb 2020 21:22:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "Sorry, I forgot to mention a more significant thing.\n\nOn 2020/02/06 21:22, Kyotaro Horiguchi wrote:\n> At Thu, 6 Feb 2020 11:57:07 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in\n>>> On Thu, Feb 06, 2020 at 10:24:50AM +0900, Kyotaro Horiguchi wrote:\n>>> At Wed, 5 Feb 2020 17:37:30 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in\n>>> We could add an additional parameter \"in_cursor\" to\n>>> ExecSupportBackwardScan and let skip scan return false if in_cursor is\n>>> true, but I'm not sure it's acceptable.\n>> I also was thinking about whether it's possible to use\n>> ExecSupportBackwardScan here, but skip scan is just a mode of an\n>> index/indexonly scan. Which means that ExecSupportBackwardScan also need\n>> to know somehow if this mode is being used, and then, since this\n>> function is called after it's already decided to use skip scan in the\n>> resulting plan, somehow correct the plan (exclude skipping and try to\n>> find next best path?) - do I understand your suggestion correct?\n\nNo. I thought of the opposite thing. I meant that\nIndexSupportsBackwardScan would return false if the Index(Only)Scan is\ngoing to do a skip scan. But I found that the function has access\nto neither the plan node nor the executor node. So I wrote the following.\n\n>> I didn't thought so hardly, but a bit of confirmation told me that\n>> IndexSupportsBackwardScan returns fixed flag for AM. It seems that\n>> things are not that simple.\n\nregards.\n\n-- \nKyotaro Horiguchi\n\n\n",
"msg_date": "Thu, 6 Feb 2020 22:56:40 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Thu, Feb 06, 2020 at 09:22:20PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 6 Feb 2020 11:57:07 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in \n> > > On Thu, Feb 06, 2020 at 10:24:50AM +0900, Kyotaro Horiguchi wrote:\n> > > At Wed, 5 Feb 2020 17:37:30 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in\n> > > We could add an additional parameter \"in_cursor\" to\n> > > ExecSupportBackwardScan and let skip scan return false if in_cursor is\n> > > true, but I'm not sure it's acceptable.\n> > \n> > I also was thinking about whether it's possible to use\n> > ExecSupportBackwardScan here, but skip scan is just a mode of an\n> > index/indexonly scan. Which means that ExecSupportBackwardScan also need\n> > to know somehow if this mode is being used, and then, since this\n> > function is called after it's already decided to use skip scan in the\n> > resulting plan, somehow correct the plan (exclude skipping and try to\n> > find next best path?) - do I understand your suggestion correct?\n> \n> I didn't thought so hardly, but a bit of confirmation told me that\n> IndexSupportsBackwardScan returns fixed flag for AM. It seems that\n> things are not that simple.\n\nYes, I've mentioned that already in one of the previous emails :) The\nsimplest way I see to achieve what we want is to do something like in\nattached modified version with a new hasDeclaredCursor field. It's not a\nfinal version though, but posted just for discussion, so feel free to\nsuggest any improvements or alternatives.",
"msg_date": "Fri, 7 Feb 2020 17:25:43 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Fri, Feb 07, 2020 at 05:25:43PM +0100, Dmitry Dolgov wrote:\n>> On Thu, Feb 06, 2020 at 09:22:20PM +0900, Kyotaro Horiguchi wrote:\n>> At Thu, 6 Feb 2020 11:57:07 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in\n>> > > On Thu, Feb 06, 2020 at 10:24:50AM +0900, Kyotaro Horiguchi wrote:\n>> > > At Wed, 5 Feb 2020 17:37:30 +0100, Dmitry Dolgov <9erthalion6@gmail.com> wrote in\n>> > > We could add an additional parameter \"in_cursor\" to\n>> > > ExecSupportBackwardScan and let skip scan return false if in_cursor is\n>> > > true, but I'm not sure it's acceptable.\n>> >\n>> > I also was thinking about whether it's possible to use\n>> > ExecSupportBackwardScan here, but skip scan is just a mode of an\n>> > index/indexonly scan. Which means that ExecSupportBackwardScan also need\n>> > to know somehow if this mode is being used, and then, since this\n>> > function is called after it's already decided to use skip scan in the\n>> > resulting plan, somehow correct the plan (exclude skipping and try to\n>> > find next best path?) - do I understand your suggestion correct?\n>>\n>> I didn't thought so hardly, but a bit of confirmation told me that\n>> IndexSupportsBackwardScan returns fixed flag for AM. It seems that\n>> things are not that simple.\n>\n>Yes, I've mentioned that already in one of the previous emails :) The\n>simplest way I see to achieve what we want is to do something like in\n>attached modified version with a new hasDeclaredCursor field. It's not a\n>final version though, but posted just for discussion, so feel free to\n>suggest any improvements or alternatives.\n\nIMO the proper fix for this case (moving forward, reading backwards) is\nsimply making it work by properly checking deleted tuples etc. Not sure\nwhy that would be so much complex (haven't tried implementing it)?\n\nI think making this depend on things like declared cursor etc. 
is going\nto be tricky, may easily be more complex than checking deleted tuples,\nand the behavior may be quite surprising.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 8 Feb 2020 14:11:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "Hi,\n\nI've done some testing and benchmarking of the v31 patch, looking for\nregressions, costing issues etc. Essentially, I've run a bunch of SELECT\nDISTINCT queries on data sets of various sizes, numbers of distinct values\netc. The results are fairly large, so I've uploaded them to github\n\n https://github.com/tvondra/skip-scan-test\n\nThere are four benchmark groups, depending on how the data are generated,\nthe availability of extended stats, and whether the columns are independent:\n\n1) skipscan - just indexes, columns are independent\n\n2) skipscan-with-stats - indexes and extended stats, independent columns\n\n3) skipscan-correlated - just indexes, correlated columns\n\n4) skipscan-correlated-with-stats - indexes and extended stats,\ncorrelated columns\n\nThe github repository contains an *.ods spreadsheet comparing the duration with\nthe regular query plan (no skip scan) and with the skip scan. In general, there\nare pretty massive speedups, often by about two orders of magnitude.\n\nThere are a couple of regressions, where the plan with skipscan enabled\nis ~10x slower. But this seems to happen only in high-cardinality cases\nwhere we misestimate the number of groups. Consider a table with two\nindependent columns\n\n CREATE TABLE t (a text, b text);\n INSERT INTO t SELECT\n md5((10000*random())::int::text),\n md5((10000*random())::int::text)\n FROM generate_series(1,1000000) s(i);\n\n CREATE INDEX ON t(a,b);\n\n ANALYZE;\n\nwhich then behaves like this:\n\n test=# select * from (select distinct a,b from t) foo offset 10000000;\n Time: 3138.222 ms (00:03.138)\n test=# set enable_indexskipscan = off;\n Time: 0.312 ms\n test=# select * from (select distinct a,b from t) foo offset 10000000;\n Time: 199.749 ms\n\nSo in this case the skip scan is ~15x slower than the usual plan (index\nonly scan + unique). 
The reason why this happens is pretty simple - to\nestimate the number of groups we multiply the ndistinct estimates for\nthe two columns (which both have n_distinct = 10000), but then we cap\nthe estimate to 10% of the table. But when the columns are independent\nwith high cardinalities that under-estimates the actual value, making\nthe cost for skip scan much lower than it should be.\n\nI don't think this is an issue the skipscan patch needs to fix, though.\nFirstly, the regressed cases are a tiny minority. Secondly, we already\nhave a way to improve the root cause - creating extended stats with\nndistinct coefficients generally makes the problem go away.\n\nOne interesting observation however is that this regression only\nhappened with text columns but not with int or bigint. My assumption is\nthat this is due to text comparisons being much more expensive. Not sure\nif there is something we could do to deal with this - reduce the number\nof comparisons or something?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 8 Feb 2020 15:22:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
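The misestimate above is easy to reproduce with a back-of-the-envelope model. This is a deliberate simplification of what the planner does (the real group-estimation logic is more involved); only the multiply-ndistinct-then-cap-at-10% behaviour described above is modelled, with the numbers from the md5 example:

```python
def estimate_groups(ndistinct_a, ndistinct_b, ntuples):
    raw = ndistinct_a * ndistinct_b   # assumes independent columns
    return min(raw, 0.1 * ntuples)    # capped at 10% of the table

# md5 example above: n_distinct = 10000 per column, 1M rows
est = estimate_groups(10_000, 10_000, 1_000_000)
# raw product is 100M, the cap brings it down to 100,000 groups,
# while the table actually holds close to 1M distinct (a,b) pairs,
# so the skip scan looks much cheaper than it really is.
```

Extended statistics with ndistinct coefficients, as suggested above, replace the raw product with a proper multi-column estimate and make the cap irrelevant for this case.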
{
"msg_contents": "OK,\n\nA couple more comments based on a quick review of the patch, particularly\nthe part related to planning:\n\n1) create_skipscan_unique_path has one assert commented out. Either it's\nsomething we want to enforce, or we should remove it.\n\n /*Assert(distinctPrefixKeys <= list_length(pathnode->path.pathkeys));*/\n\n\n2) I wonder if the current costing model is overly optimistic. We simply\ncopy the startup cost from the IndexPath, which seems fine. But for\ntotal cost we do this:\n\n pathnode->path.total_cost = basepath->startup_cost * numGroups;\n\nwhich seems a bit too simplistic. The startup cost is pretty much just\nthe cost to find the first item in the index, but surely we need to do\nmore to find the next group - we need to do comparisons to skip some of\nthe items, etc. If we think that's unnecessary, we need to explain it in\na comment or something.\n\n\n3) I don't think we should make planning dependent on hasDeclaredCursor.\n\n\n4) I'm not quite sure how sensible it is to create a new IndexPath in\ncreate_skipscan_unique_path. On the one hand it works, but I don't think\nany other path is constructed like this so I wonder if we're missing\nsomething. Perhaps it'd be better to just add a new path node on top of\nthe IndexPath, and then handle this in create_plan. We already do\nsomething similar for Bitmap Index Scans, where we create a different\nexecutor node from IndexPath depending on the parent node.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 8 Feb 2020 15:56:35 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
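Point 2 can be illustrated with a rough numeric sketch. It is not PostgreSQL's cost model - the log2 descent term and the constants are invented - but it shows why charging only numGroups times the startup cost may undercharge the per-group work of descending the tree and comparing keys:

```python
import math

def skip_cost_naive(startup_cost, num_groups):
    # The formula quoted above: total_cost = startup_cost * numGroups
    return startup_cost * num_groups

def skip_cost_with_comparisons(startup_cost, num_groups, ntuples,
                               cpu_operator_cost=0.0025):
    # Hypothetical refinement: each skip is roughly a root-to-leaf
    # descent, i.e. O(log2 N) key comparisons on top of the startup cost.
    per_group = startup_cost + cpu_operator_cost * math.log2(ntuples)
    return per_group * num_groups

naive = skip_cost_naive(0.5, 10_000)
fuller = skip_cost_with_comparisons(0.5, 10_000, 1_000_000)
# fuller > naive: the comparison term makes each group visibly non-free
```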
{
"msg_contents": "> On Sat, Feb 08, 2020 at 03:22:17PM +0100, Tomas Vondra wrote:\n>\n> I've done some testing and benchmarking of the v31 patch, looking for\n> regressions, costing issues etc. Essentially, I've ran a bunch of SELECT\n> DISTINCT queries on data sets of various size, number of distinct values\n> etc. The results are fairly large, so I've uploaded them to github\n>\n> https://github.com/tvondra/skip-scan-test\n\nThanks a lot for testing!\n\n> There are a couple of regressions, where the plan with skipscan enables\n> is ~10x slower. But this seems to happen only in high-cardinality cases\n> where we misestimate the number of groups. Consider a table with two\n> independent columns\n>\n> CREATE TABLE t (a text, b text);\n> INSERT INTO t SELECT\n> md5((10000*random())::int::text),\n> md5((10000*random())::int::text)\n> FROM generate_series(1,1000000) s(i);\n>\n> CREATE INDEX ON t(a,b);\n>\n> ANALYZE;\n>\n> which then behaves like this:\n>\n> test=# select * from (select distinct a,b from t) foo offset 10000000;\n> Time: 3138.222 ms (00:03.138)\n> test=# set enable_indexskipscan = off;\n> Time: 0.312 ms\n> test=# select * from (select distinct a,b from t) foo offset 10000000;\n> Time: 199.749 ms\n>\n> So in this case the skip scan is ~15x slower than the usual plan (index\n> only scan + unique). The reason why this happens is pretty simple - to\n> estimate the number of groups we multiply the ndistinct estimates for\n> the two columns (which both have n_distinct = 10000), but then we cap\n> the estimate to 10% of the table. 
But when the columns are independent\n> with high cardinalities that under-estimates the actual value, making\n> the cost for skip scan much lower than it should be.\n\nThe current implementation checks if we can find the next value on the\nsame page to do a shortcut instead of tree traversal and improve such\nkind of situations, but I can easily imagine that it's still not enough\nin some extreme situations.\n\n> I don't think this is an issue the skipscan patch needs to fix, though.\n> Firstly, the regressed cases are a tiny minority. Secondly, we already\n> have a way to improve the root cause - creating extended stats with\n> ndistinct coefficients generally makes the problem go away.\n\nYes, I agree.\n\n> One interesting observation however is that this regression only\n> happened with text columns but not with int or bigint. My assumption is\n> that this is due to text comparisons being much more expensive. Not sure\n> if there is something we could do to deal with this - reduce the number\n> of comparisons or something?\n\nHm, interesting. I need to check that we do not do any unnecessary\ncomparisons.\n\n> On Sat, Feb 08, 2020 at 02:11:59PM +0100, Tomas Vondra wrote:\n> > Yes, I've mentioned that already in one of the previous emails :) The\n> > simplest way I see to achieve what we want is to do something like in\n> > attached modified version with a new hasDeclaredCursor field. It's not a\n> > final version though, but posted just for discussion, so feel free to\n> > suggest any improvements or alternatives.\n>\n> IMO the proper fix for this case (moving forward, reading backwards) is\n> simply making it work by properly checking deleted tuples etc. Not sure\n> why that would be so much complex (haven't tried implementing it)?\n\nIt's probably not that complex by itself, but requires changing how the\nresponsibilities are isolated. At the moment the current implementation leaves\njumping over a tree fully to _bt_skip, and heap visibility checks only\nto IndexOnlyNext. 
To check deleted tuples properly we need to either\nverify the corresponding heap tuple's visibility inside _bt_skip (as I've\nmentioned in one of the previous emails, checking if an index tuple is\ndead is not enough), or teach the code in IndexOnlyNext to understand\nthat _bt_skip can lead to returning the same tuple while moving forward\n& reading backward. Do you think it still makes sense to go this way?\n\n\n",
"msg_date": "Sat, 8 Feb 2020 16:24:40 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Sat, Feb 08, 2020 at 04:24:40PM +0100, Dmitry Dolgov wrote:\n>> On Sat, Feb 08, 2020 at 03:22:17PM +0100, Tomas Vondra wrote:\n>>\n>> I've done some testing and benchmarking of the v31 patch, looking for\n>> regressions, costing issues etc. Essentially, I've ran a bunch of SELECT\n>> DISTINCT queries on data sets of various size, number of distinct values\n>> etc. The results are fairly large, so I've uploaded them to github\n>>\n>> https://github.com/tvondra/skip-scan-test\n>\n>Thanks a lot for testing!\n>\n>> There are a couple of regressions, where the plan with skipscan enables\n>> is ~10x slower. But this seems to happen only in high-cardinality cases\n>> where we misestimate the number of groups. Consider a table with two\n>> independent columns\n>>\n>> CREATE TABLE t (a text, b text);\n>> INSERT INTO t SELECT\n>> md5((10000*random())::int::text),\n>> md5((10000*random())::int::text)\n>> FROM generate_series(1,1000000) s(i);\n>>\n>> CREATE INDEX ON t(a,b);\n>>\n>> ANALYZE;\n>>\n>> which then behaves like this:\n>>\n>> test=# select * from (select distinct a,b from t) foo offset 10000000;\n>> Time: 3138.222 ms (00:03.138)\n>> test=# set enable_indexskipscan = off;\n>> Time: 0.312 ms\n>> test=# select * from (select distinct a,b from t) foo offset 10000000;\n>> Time: 199.749 ms\n>>\n>> So in this case the skip scan is ~15x slower than the usual plan (index\n>> only scan + unique). The reason why this happens is pretty simple - to\n>> estimate the number of groups we multiply the ndistinct estimates for\n>> the two columns (which both have n_distinct = 10000), but then we cap\n>> the estimate to 10% of the table. 
But when the columns are independent\n>> with high cardinalities that under-estimates the actual value, making\n>> the cost for skip scan much lower than it should be.\n>\n>The current implementation checks if we can find the next value on the\n>same page to do a shortcut instead of tree traversal and improve such\n>kind of situations, but I can easily imagine that it's still not enough\n>in some extreme situations.\n>\n\nYeah. I'm not sure there's room for further improvements. The regressed\ncases were subject to the 10% cap, and with ndistinct being more than\n10% of the table, we probably can find many distinct keys on each index\npage - we know that every ~10 rows the values change.\n\n>> I don't think this is an issue the skipscan patch needs to fix, though.\n>> Firstly, the regressed cases are a tiny minority. Secondly, we already\n>> have a way to improve the root cause - creating extended stats with\n>> ndistinct coefficients generally makes the problem go away.\n>\n>Yes, I agree.\n>\n>> One interesting observation however is that this regression only\n>> happened with text columns but not with int or bigint. My assumption is\n>> that this is due to text comparisons being much more expensive. Not sure\n>> if there is something we could do to deal with this - reduce the number\n>> of comparisons or something?\n>\n>Hm, interesting. I need to check that we do not do any unnecessary\n>comparisons.\n>\n>> On Sat, Feb 08, 2020 at 02:11:59PM +0100, Tomas Vondra wrote:\n>> > Yes, I've mentioned that already in one of the previous emails :) The\n>> > simplest way I see to achieve what we want is to do something like in\n>> > attached modified version with a new hasDeclaredCursor field. It's not a\n>> > final version though, but posted just for discussion, so feel free to\n>> > suggest any improvements or alternatives.\n>>\n>> IMO the proper fix for this case (moving forward, reading backwards) is\n>> simply making it work by properly checking deleted tuples etc. 
Not sure\n>> why that would be so much complex (haven't tried implementing it)?\n>\n>It's probably not that complex by itself, but requires changing\n>responsibilities isolation. At the moment current implementation leaves\n>jumping over a tree fully to _bt_skip, and heap visibility checks only\n>to IndexOnlyNext. To check deleted tuples properly we need to either\n>verify a corresponding heap tuple visibility inside _bt_skip (as I've\n>mentioned in one of the previous emails, checking if an index tuple is\n>dead is not enough), or teach the code in IndexOnlyNext to understand\n>that _bt_skip can lead to returning the same tuple while moving forward\n>& reading backward. Do you think it's still makes sense to go this way?\n\nNot sure. I have to think about this first.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 8 Feb 2020 17:59:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Sat, Feb 8, 2020 at 10:24 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Sat, Feb 08, 2020 at 03:22:17PM +0100, Tomas Vondra wrote:\n> > So in this case the skip scan is ~15x slower than the usual plan (index\n> > only scan + unique). The reason why this happens is pretty simple - to\n> > estimate the number of groups we multiply the ndistinct estimates for\n> > the two columns (which both have n_distinct = 10000), but then we cap\n> > the estimate to 10% of the table. But when the columns are independent\n> > with high cardinalities that under-estimates the actual value, making\n> > the cost for skip scan much lower than it should be.\n>\n> The current implementation checks if we can find the next value on the\n> same page to do a shortcut instead of tree traversal and improve such\n> kind of situations, but I can easily imagine that it's still not enough\n> in some extreme situations.\n\nThis is almost certainly rehashing already covered ground, but since I\ndoubt it's been discussed recently, would you be able to summarize\nthat choice (not to always get the next tuple by scanning from the top\nof the tree again) and the performance/complexity tradeoffs?\n\nThanks,\nJames\n\n\n",
"msg_date": "Sat, 8 Feb 2020 13:31:02 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Sat, Feb 08, 2020 at 01:31:02PM -0500, James Coleman wrote:\n> On Sat, Feb 8, 2020 at 10:24 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> >\n> > > On Sat, Feb 08, 2020 at 03:22:17PM +0100, Tomas Vondra wrote:\n> > > So in this case the skip scan is ~15x slower than the usual plan (index\n> > > only scan + unique). The reason why this happens is pretty simple - to\n> > > estimate the number of groups we multiply the ndistinct estimates for\n> > > the two columns (which both have n_distinct = 10000), but then we cap\n> > > the estimate to 10% of the table. But when the columns are independent\n> > > with high cardinalities that under-estimates the actual value, making\n> > > the cost for skip scan much lower than it should be.\n> >\n> > The current implementation checks if we can find the next value on the\n> > same page to do a shortcut instead of tree traversal and improve such\n> > kind of situations, but I can easily imagine that it's still not enough\n> > in some extreme situations.\n>\n> This is almost certainly rehashing already covered ground, but since I\n> doubt it's been discussed recently, would you be able to summarize\n> that choice (not to always get the next tuple by scanning from the top\n> of the tree again) and the performance/complexity tradeoffs?\n\nYeah, this part of discussion happened already some time ago. The idea\n[1] is to protect ourselves at least partially from incorrect ndistinct\nestimations. 
Simply jumping over an index means that even if the\nnext key we're searching for is on the same page as the previous one, we still\nend up doing a search from the root of the tree, which is of course less\nefficient than just checking right on the page before jumping further.\n\nThe performance tradeoff in this case is simple: we make the regular use case\nslightly slower, but can perform better in the worst-case scenarios.\nThe complexity tradeoff was never discussed, but I guess everyone assumed\nit's relatively straightforward to check the current page and return if\nsomething was found before jumping.\n\n[1]: https://www.postgresql.org/message-id/CA%2BTgmoY7QTHhzLWZupNSyyqFRBfMgYocg3R-6g%3DDRgT4-KBGqg%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 10 Feb 2020 20:30:56 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
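The shortcut described in the message above (look for the next distinct key on the current leaf page before paying for another descent from the root) can be sketched in miniature with a sorted list standing in for a btree leaf page. This is an illustrative model only; `next_distinct_key` and `search_from_root` are hypothetical names, not PostgreSQL code.

```python
import bisect

def next_distinct_key(page, current_key, search_from_root):
    """Illustrative model of the on-page shortcut: 'page' is a sorted
    list standing in for a btree leaf page.  Before doing an expensive
    descent from the root, check whether the next distinct key is
    already on the current page.  'search_from_root' stands in for the
    full tree traversal."""
    i = bisect.bisect_right(page, current_key)  # skip duplicates of current_key
    if i < len(page):
        return page[i]                    # cheap path: next key is on this page
    return search_from_root(current_key)  # fall back to a root descent

# Duplicates of key 1 are skipped within the page; only when the page is
# exhausted do we fall back to the (expensive) root descent.
print(next_distinct_key([1, 1, 2, 2, 3], 1, lambda k: None))
```

This matches the tradeoff stated in the message: the regular case pays only an extra on-page check, while the badly estimated high-cardinality cases avoid one root descent per distinct key.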
{
"msg_contents": "Thank you very much for the benchmarking!\n\nA slightly different topic from the latest branch..\n\nAt Sat, 8 Feb 2020 14:11:59 +0100, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in \n> >Yes, I've mentioned that already in one of the previous emails :) The\n> >simplest way I see to achieve what we want is to do something like in\n> >attached modified version with a new hasDeclaredCursor field. It's not\n> >a\n> >final version though, but posted just for discussion, so feel free to\n> >suggest any improvements or alternatives.\n> \n> IMO the proper fix for this case (moving forward, reading backwards)\n> is\n> simply making it work by properly checking deleted tuples etc. Not\n> sure\n> why that would be so much complex (haven't tried implementing it)?\n\nI don't think it's so complex in principle. But I suspect that it might be a\nbit harder starting from the current shape.\n\nThe first attachment (renamed to .txt so as not to confuse the cfbots) is a\nsmall patch that makes sure _bt_readpage is called with the proper\nprecondition written in its comment, that is, that the caller has pinned\nand read-locked so->currPos.buf. This patch reveals many instances where\nthe contract is broken.\n\nThe second is a crude fix for the breakages, but the result seems far from\nneat.. I think we need to rethink this, taking modification of the support\nfunctions into consideration.\n\n> I think making this depend on things like declared cursor etc. 
is\n> going\n> to be tricky, may easily be more complex than checking deleted tuples,\n> and the behavior may be quite surprising.\n\nSure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n From de129e5a261ed43f002c1684dc9d6575f3880b16 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 6 Feb 2020 14:31:36 +0900\nSubject: [PATCH 1/2] debug aid\n\n---\n src/backend/access/nbtree/nbtsearch.c | 1 +\n src/backend/storage/buffer/bufmgr.c | 13 +++++++++++++\n src/include/storage/bufmgr.h | 1 +\n 3 files changed, 15 insertions(+)\n\ndiff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c\nindex c5f5d228f2..5cd97d8bb5 100644\n--- a/src/backend/access/nbtree/nbtsearch.c\n+++ b/src/backend/access/nbtree/nbtsearch.c\n@@ -1785,6 +1785,7 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)\n \t * used here; this function is what makes it good for currPos.\n \t */\n \tAssert(BufferIsValid(so->currPos.buf));\n+\tAssert(BufferLockAndPinHeldByMe(so->currPos.buf));\n \n \tpage = BufferGetPage(so->currPos.buf);\n \topaque = (BTPageOpaque) PageGetSpecialPointer(page);\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex aba3960481..08a75a6846 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -1553,6 +1553,19 @@ ReleaseAndReadBuffer(Buffer buffer,\n \treturn ReadBuffer(relation, blockNum);\n }\n \n+/* tmp function for debugging */\n+bool\n+BufferLockAndPinHeldByMe(Buffer buffer)\n+{\n+\tBufferDesc *b = GetBufferDescriptor(buffer - 1);\n+\n+\tif (BufferIsPinned(buffer) &&\n+\t\tLWLockHeldByMe(BufferDescriptorGetContentLock(b)))\n+\t\treturn true;\n+\n+\treturn false;\n+}\n+\n /*\n * PinBuffer -- make buffer unavailable for replacement.\n *\ndiff --git a/src/include/storage/bufmgr.h b/src/include/storage/bufmgr.h\nindex 73c7e9ba38..8e5fc639a0 100644\n--- 
a/src/include/storage/bufmgr.h\n+++ b/src/include/storage/bufmgr.h\n@@ -177,6 +177,7 @@ extern void MarkBufferDirty(Buffer buffer);\n extern void IncrBufferRefCount(Buffer buffer);\n extern Buffer ReleaseAndReadBuffer(Buffer buffer, Relation relation,\n \t\t\t\t\t\t\t\t BlockNumber blockNum);\n+extern bool BufferLockAndPinHeldByMe(Buffer buffer);\n \n extern void InitBufferPool(void);\n extern void InitBufferPoolAccess(void);\n-- \n2.18.2\n\n\n From 912bad2ec8c66ccd01cebf1f69233b004c633243 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 6 Feb 2020 19:09:09 +0900\nSubject: [PATCH 2/2] crude fix\n\n---\n src/backend/access/nbtree/nbtsearch.c | 43 +++++++++++++++++----------\n 1 file changed, 27 insertions(+), 16 deletions(-)\n\ndiff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c\nindex 5cd97d8bb5..1f18b38ca5 100644\n--- a/src/backend/access/nbtree/nbtsearch.c\n+++ b/src/backend/access/nbtree/nbtsearch.c\n@@ -1619,6 +1619,9 @@ _bt_skip(IndexScanDesc scan, ScanDirection dir,\n \n \t\t\tnextOffset = startOffset = ItemPointerGetOffsetNumber(&scan->xs_itup->t_tid);\n \n+\t\t\tif (nextOffset != startOffset)\n+\t\t\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n+\n \t\t\twhile (nextOffset == startOffset)\n \t\t\t{\n \t\t\t\tIndexTuple itup;\n@@ -1653,7 +1656,7 @@ _bt_skip(IndexScanDesc scan, ScanDirection dir,\n \t\t\t\toffnum = OffsetNumberPrev(offnum);\n \n \t\t\t\t/* Check if _bt_readpage returns already found item */\n-\t\t\t\tif (!_bt_readpage(scan, indexdir, offnum))\n+\t\t\t\tif (!_bt_readpage(scan, dir, offnum))\n \t\t\t\t{\n \t\t\t\t\t/*\n \t\t\t\t\t * There's no actually-matching data on this page. 
Try to\n@@ -1668,6 +1671,8 @@ _bt_skip(IndexScanDesc scan, ScanDirection dir,\n \t\t\t\t\t\treturn false;\n \t\t\t\t\t}\n \t\t\t\t}\n+\t\t\t\telse\n+\t\t\t\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n \n \t\t\t\tcurrItem = &so->currPos.items[so->currPos.lastItem];\n \t\t\t\titup = (IndexTuple) (so->currTuples + currItem->tupleOffset);\n@@ -1721,24 +1726,30 @@ _bt_skip(IndexScanDesc scan, ScanDirection dir,\n \t}\n \n \t/* Now read the data */\n-\tif (!_bt_readpage(scan, indexdir, offnum))\n+\tif (!(ScanDirectionIsForward(dir) &&\n+\t\t ScanDirectionIsBackward(indexdir)) ||\n+\t\tscanstart)\n \t{\n-\t\t/*\n-\t\t * There's no actually-matching data on this page. Try to advance to\n-\t\t * the next page. Return false if there's no matching data at all.\n-\t\t */\n-\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n-\t\tif (!_bt_steppage(scan, dir))\n+\t\tif (!_bt_readpage(scan, dir, offnum))\n \t\t{\n-\t\t\tpfree(so->skipScanKey);\n-\t\t\tso->skipScanKey = NULL;\n-\t\t\treturn false;\n+\t\t\t/*\n+\t\t\t * There's no actually-matching data on this page. Try to advance\n+\t\t\t * to the next page. Return false if there's no matching data at\n+\t\t\t * all.\n+\t\t\t */\n+\t\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n+\t\t\tif (!_bt_steppage(scan, dir))\n+\t\t\t{\n+\t\t\t\tpfree(so->skipScanKey);\n+\t\t\t\tso->skipScanKey = NULL;\n+\t\t\t\treturn false;\n+\t\t\t}\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t\t/* Drop the lock, and maybe the pin, on the current page */\n+\t\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n \t\t}\n-\t}\n-\telse\n-\t{\n-\t\t/* Drop the lock, and maybe the pin, on the current page */\n-\t\tLockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);\n \t}\n \n \t/* And set IndexTuple */\n-- \n2.18.2",
"msg_date": "Fri, 14 Feb 2020 17:23:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Fri, Feb 14, 2020 at 05:23:13PM +0900, Kyotaro Horiguchi wrote:\n> The first attached (renamed to .txt not to confuse the cfbots) is a\n> small patch that makes sure if _bt_readpage is called with the proper\n> condition as written in its comment, that is, caller must have pinned\n> and read-locked so->currPos.buf. This patch reveals many instances of\n> breakage of the contract.\n\nThanks! Which patch version should one apply it on top of? I'm asking\nbecause I believe I've addressed similar issues in the last version, and\nthe last proposed diff (after resolving some conflicts) breaks tests for\nme, so I'm not sure if I'm missing something.\n\nAt the same time, if you and Tomas strongly agree that it actually makes\nsense to make the moving forward/reading backward case work with dead tuples\ncorrectly, I'll take a shot and try to teach the code around _bt_skip to\ndo what is required for that. I can merge your changes there and we can\nsee what the result would be.\n\n\n",
"msg_date": "Fri, 14 Feb 2020 13:18:20 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Fri, Feb 14, 2020 at 01:18:20PM +0100, Dmitry Dolgov wrote:\n> > On Fri, Feb 14, 2020 at 05:23:13PM +0900, Kyotaro Horiguchi wrote:\n> > The first attached (renamed to .txt not to confuse the cfbots) is a\n> > small patch that makes sure if _bt_readpage is called with the proper\n> > condition as written in its comment, that is, caller must have pinned\n> > and read-locked so->currPos.buf. This patch reveals many instances of\n> > breakage of the contract.\n>\n> Thanks! On top of which patch version one can apply it? I'm asking\n> because I believe I've addressed similar issues in the last version, and\n> the last proposed diff (after resolving some conflicts) breaks tests for\n> me, so not sure if I miss something.\n>\n> At the same time if you and Tomas strongly agree that it actually makes\n> sense to make moving forward/reading backward case work with dead tuples\n> correctly, I'll take a shot and try to teach the code around _bt_skip to\n> do what is required for that. I can merge your changes there and we can\n> see what would be the result.\n\nHere is something similar to what I had in mind. In this version of the\npatch, IndexOnlyNext now verifies whether we returned to the same position as\nbefore while reading in the direction opposite to the advancing one due to\nvisibility checks (similar to what is implemented inside _bt_skip for\nthe situation when some distinct keys are eliminated due to scankey\nconditions). It's actually not as invasive as I feared, but still\npretty hacky. I'm not sure if it's ok to compare the resulting heaptid in\nthis situation, but all the mentioned tests are passing. Also, this version\ndoesn't incorporate any planner feedback from Tomas yet; my intention is\njust to check whether this could be the right direction.",
"msg_date": "Mon, 17 Feb 2020 17:24:51 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
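The position check described in the message above (after stepping in the direction opposite to the advance for visibility reasons, verify the scan did not simply return to the previous heap position) can be sketched by modeling a heap tid as a (block, offset) pair. The helper name and the tuple representation are illustrative assumptions, not the actual executor code.

```python
def moved_past(prev_tid, new_tid, forward):
    """Sketch of the check discussed above: heap tids are modeled as
    (block, offset) pairs, compared lexicographically.  Returns True
    only if the scan actually progressed in the advancing direction,
    rather than landing back on (or behind) the previous position.
    Hypothetical helper, not PostgreSQL source."""
    return new_tid > prev_tid if forward else new_tid < prev_tid

# Forward scan: (0, 5) -> (0, 7) is progress; (0, 5) -> (0, 5) is not.
print(moved_past((0, 5), (0, 7), True), moved_past((0, 5), (0, 5), True))
```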
{
"msg_contents": "On Tue, 18 Feb 2020 at 05:24, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Here is something similar to what I had in mind.\n\n(changing to this email address for future emails)\n\nHi,\n\nI've been looking over v32 of the patch and have a few comments\nregarding the planner changes.\n\nI think the changes in create_distinct_paths() need more work. The\nway I think this should work is that create_distinct_paths() gets to\nknow exactly nothing about what path types support the elimination of\nduplicate values. The Path should carry the UniqueKeys so that can be\ndetermined. In create_distinct_paths() you should just be able to make\nuse of those paths, which should already have been created when\ncreating index paths for the rel due to PlannerInfo's query_uniquekeys\nhaving been set.\n\nThe reason it must be done this way is that when the RelOptInfo that\nwe're performing the DISTINCT on is a joinrel, then we're not going to\nsee any IndexPaths in the RelOptInfo's pathlist. We'll have some sort\nof Join path instead. I understand you're not yet supporting doing\nthis optimisation when joins are involved, but it should be coded in\nsuch a way that it'll work when we do. (It's probably also a separate\nquestion as to whether we should have this only work when there are no\njoins. I don't think I personally object to it for stage 1, but\nperhaps someone else might think differently.)\n\nFor storing these new paths with UniqueKeys, I'm not sure exactly if\nwe can just add_path() such paths into the RelOptInfo's pathlist.\nWhat we don't want to do is accidentally make use of paths which\neliminate duplicate values when we don't want that behaviour. If we\ndid store these paths in RelOptInfo->pathlist then we'd need to go and\nmodify a bunch of places to ignore such paths. set_cheapest() would\nhave to do something special for them too, which makes me think\npathlist is the incorrect place. 
Parallel query added\npartial_pathlist, so perhaps we need unique_pathlist to make this\nwork.\n\nAlso, should create_grouping_paths() be getting the same code?\nJesper's UniqueKey patch seems to set query_uniquekeys when there's a\nGROUP BY with no aggregates. So it looks like he has intended that\nsomething like:\n\nSELECT x FROM t GROUP BY x;\n\nshould work the same way as\n\nSELECT DISTINCT x FROM t;\n\nbut the 0002 patch does not make this work. Has that just been overlooked?\n\nThere's also some weird looking assumptions that an EquivalenceMember\ncan only be a Var in create_distinct_paths(). I think you're only\nsaved from crashing there because a ProjectionPath will be created\natop of the IndexPath to evaluate expressions, in which case you're\nnot seeing the IndexPath. This results in the optimisation not\nworking in cases like:\n\npostgres=# create table t (a int); create index on t ((a+1)); explain\nselect distinct a+1 from t;\nCREATE TABLE\nCREATE INDEX\n QUERY PLAN\n-----------------------------------------------------------\n HashAggregate (cost=48.25..50.75 rows=200 width=4)\n Group Key: (a + 1)\n -> Seq Scan on t (cost=0.00..41.88 rows=2550 width=4)\n\nUsing unique paths as I mentioned above should see that fixed.\n\nDavid\n\n\n",
"msg_date": "Wed, 4 Mar 2020 11:32:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
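The separate-list idea in the message above (keep duplicate-eliminating paths out of the ordinary pathlist, by analogy with how parallel query added partial_pathlist) could look roughly like this. The class and field names are illustrative guesses at the shape, not the actual planner structs.

```python
class RelOptInfo:
    """Toy model of the suggestion discussed above: paths that eliminate
    duplicate values live apart from the normal pathlist, so ordinary
    consumers (e.g. a set_cheapest() analogue) never pick one by
    accident.  Purely illustrative."""
    def __init__(self):
        self.pathlist = []          # paths with ordinary semantics
        self.partial_pathlist = []  # existing precedent: parallel paths
        self.unique_pathlist = []   # hypothetical: duplicate-eliminating paths

    def add_path(self, path, unique=False):
        target = self.unique_pathlist if unique else self.pathlist
        target.append(path)

rel = RelOptInfo()
rel.add_path("seqscan")
rel.add_path("skipscan", unique=True)
# Ordinary consumers only ever look at rel.pathlist.
print(rel.pathlist, rel.unique_pathlist)
```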
{
"msg_contents": "On Wed, Mar 04, 2020 at 11:32:00AM +1300, David Rowley wrote:\n>On Tue, 18 Feb 2020 at 05:24, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>> Here is something similar to what I had in mind.\n>\n>(changing to this email address for future emails)\n>\n>Hi,\n>\n>I've been looking over v32 of the patch and have a few comments\n>regarding the planner changes.\n>\n>I think the changes in create_distinct_paths() need more work. The\n>way I think this should work is that create_distinct_paths() gets to\n>know exactly nothing about what path types support the elimination of\n>duplicate values. The Path should carry the UniqueKeys so that can be\n>determined. In create_distinct_paths() you should just be able to make\n>use of those paths, which should already have been created when\n>creating index paths for the rel due to PlannerInfo's query_uniquekeys\n>having been set.\n>\n\n+1 to code this in a generic way, using query_uniquekeys (if possible)\n\n>The reason it must be done this way is that when the RelOptInfo that\n>we're performing the DISTINCT on is a joinrel, then we're not going to\n>see any IndexPaths in the RelOptInfo's pathlist. We'll have some sort\n>of Join path instead. I understand you're not yet supporting doing\n>this optimisation when joins are involved, but it should be coded in\n>such a way that it'll work when we do. (It's probably also a separate\n>question as to whether we should have this only work when there are no\n>joins. I don't think I personally object to it for stage 1, but\n>perhaps someone else might think differently.)\n>\n\nI don't follow. Can you elaborate more?\n\nAFAICS skip-scan is essentially a capability of an (index) AM. I don't\nsee how we could ever do that for joinrels? We can do that at the scan\nlevel, below a join, but that's what this patch already supports, I\nthink. 
When you say \"supporting this optimisation\" with joins, do you\nmean doing skip-scan for join inputs, or on top of the join?\n\n>For storing these new paths with UniqueKeys, I'm not sure exactly if\n>we can just add_path() such paths into the RelOptInfo's pathlist.\n>What we don't want to do is accidentally make use of paths which\n>eliminate duplicate values when we don't want that behaviour. If we\n>did store these paths in RelOptInfo->pathlist then we'd need to go and\n>modify a bunch of places to ignore such paths. set_cheapest() would\n>have to do something special for them too, which makes me think\n>pathlist is the incorrect place. Parallel query added\n>partial_pathlist, so perhaps we need unique_pathlist to make this\n>work.\n>\n\nHmmm, good point. Do we actually produce incorrect plans with the\ncurrent patch, using skip-scan path when we should not?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Mar 2020 15:49:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Sat, 7 Mar 2020 at 03:49, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Mar 04, 2020 at 11:32:00AM +1300, David Rowley wrote:\n> >The reason it must be done this way is that when the RelOptInfo that\n> >we're performing the DISTINCT on is a joinrel, then we're not going to\n> >see any IndexPaths in the RelOptInfo's pathlist. We'll have some sort\n> >of Join path instead. I understand you're not yet supporting doing\n> >this optimisation when joins are involved, but it should be coded in\n> >such a way that it'll work when we do. (It's probably also a separate\n> >question as to whether we should have this only work when there are no\n> >joins. I don't think I personally object to it for stage 1, but\n> >perhaps someone else might think differently.)\n> >\n>\n> I don't follow. Can you elaborate more?\n>\n> AFAICS skip-scan is essentially a capability of an (index) AM. I don't\n> see how we could ever do that for joinrels? We can do that at the scan\n> level, below a join, but that's what this patch already supports, I\n> think. When you say \"supporting this optimisation\" with joins, do you\n> mean doing skip-scan for join inputs, or on top of the join?\n\nThe skip index scan Path would still be created at the base rel level,\nbut the join path on the join relation would have one of the sub-paths\nof the join as an index skip scan.\n\nAn example query that could make use of this is:\n\nSELECT * FROM some_table WHERE a IN(SELECT\nindexed_col_with_few_distinct_values FROM big_table);\n\nIn this case, we might want to create a Skip Scan path on big_table\nusing the index on the \"indexed_col_with_few_distinct_values\", then\nHash Join to \"some_table\". That class of query is likely stage 2 or 3\nof this work, but we need to lay foundations that'll support it.\n\nAs for not having IndexScan paths in joinrels. Yes, of course, but\nthat's exactly why create_distinct_paths() cannot work the way the\npatch currently codes it. 
The patch does:\n\n+ /*\n+ * XXX: In case of index scan quals evaluation happens\n+ * after ExecScanFetch, which means skip results could be\n+ * fitered out. Consider the following query:\n+ *\n+ * select distinct (a, b) a, b, c from t where c < 100;\n+ *\n+ * Skip scan returns one tuple for one distinct set of (a,\n+ * b) with arbitrary one of c, so if the choosed c does\n+ * not match the qual and there is any c that matches the\n+ * qual, we miss that tuple.\n+ */\n+ if (path->pathtype == T_IndexScan &&\n\nwhich will never work for join relations since they'll only have paths\nfor Loop/Merge/Hash type joins. The key here is to determine which\nskip scan paths we should create when we're building the normal index\npaths then see if we can make use of those when planning joins.\nSubsequently, we'll then see if we can make use of the resulting join\npaths during create_distinct_paths(). Doing it this way will allow us\nto use skip scans in queries such as:\n\nSELECT DISTINCT t1.z FROM t1 INNER JOIN t2 ON t1.a = t2.unique_col;\n\nWe'll first create the skip scan paths on t1, then when creating the\njoin paths we'll create additional join paths which use the skipscan\npath. Because t1.unique_col will at most have 1 join partner for each\nt2 row, then the join path will have the same unique_keys as the\nskipscan path. That'll allow us to use the join path which has the\nskip scan on whichever side of the join the t1 relation ends up. All\ncreate_distinct_paths() should be doing is looking for paths that are\nalready implicitly unique on the distinct clause and consider using\nthose in a cost-based way. It shouldn't be making such paths itself.\n\n> >For storing these new paths with UniqueKeys, I'm not sure exactly if\n> >we can just add_path() such paths into the RelOptInfo's pathlist.\n> >What we don't want to do is accidentally make use of paths which\n> >eliminate duplicate values when we don't want that behaviour. 
If we\n> >did store these paths in RelOptInfo->pathlist then we'd need to go and\n> >modify a bunch of places to ignore such paths. set_cheapest() would\n> >have to do something special for them too, which makes me think\n> >pathlist is the incorrect place. Parallel query added\n> >partial_pathlist, so perhaps we need unique_pathlist to make this\n> >work.\n> >\n>\n> Hmmm, good point. Do we actually produce incorrect plans with the\n> current patch, using skip-scan path when we should not?\n\nI don't think so. The patch is only creating skip scan paths on the\nbase rel when we discover it's valid to do so. That's not the way it\nshould work though. How the patch currently works would be similar to\ninitially only creating a SeqScan path for a query such as: SELECT *\nFROM tab ORDER BY a;, but then, during create_ordered_paths() go and\ncreate some IndexPath to scan the btree index on tab.a because we\nsuddenly realise that it'll be good to use that for the ORDER BY.\nThe planner does not work that way. We always create all the paths\nthat we think will be useful during set_base_rel_pathlists(). We then\nmake use of only existing paths in the upper planner. See what\nbuild_index_paths() in particular:\n\n/* see if we can generate ordering operators for query_pathkeys */\nmatch_pathkeys_to_index(index, root->query_pathkeys,\n&orderbyclauses,\n&orderbyclausecols);\n\nWe'll need something similar to that but for the query_uniquekeys and\nensure we build the skip scan paths when we think they'll be useful\nand do so during the call to set_base_rel_pathlists(). Later in stage\n2 or 3, we can go build skip scan paths when there are semi/anti joins\nthat could make use of them. Making that work will just be some\nplumbing work in build_index_paths() and making use of those paths\nduring add_paths_to_joinrel().\n\nDavid\n\n\n",
"msg_date": "Sun, 8 Mar 2020 18:23:45 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Wed, Mar 04, 2020 at 11:32:00AM +1300, David Rowley wrote:\n>\n> I've been looking over v32 of the patch and have a few comments\n> regarding the planner changes.\n\nThanks for the comments!\n\n> I think the changes in create_distinct_paths() need more work. The\n> way I think this should work is that create_distinct_paths() gets to\n> know exactly nothing about what path types support the elimination of\n> duplicate values. The Path should carry the UniqueKeys so that can be\n> determined. In create_distinct_paths() you should just be able to make\n> use of those paths, which should already have been created when\n> creating index paths for the rel due to PlannerInfo's query_uniquekeys\n> having been set.\n\nJust to clarify: the idea is to \"move\" information about which\npath types support skipping into UniqueKeys (derived from PlannerInfo's\nquery_uniquekeys), while other checks (e.g. whether the index am supports it)\nare still performed in create_distinct_paths?\n\n> Also, should create_grouping_paths() be getting the same code?\n> Jesper's UniqueKey patch seems to set query_uniquekeys when there's a\n> GROUP BY with no aggregates. So it looks like he has intended that\n> something like:\n>\n> SELECT x FROM t GROUP BY x;\n>\n> should work the same way as\n>\n> SELECT DISTINCT x FROM t;\n>\n> but the 0002 patch does not make this work. Has that just been overlooked?\n\nI believe it wasn't overlooked in the 0002 patch, but rather added just in\ncase in 0001. I guess there are no theoretical problems in implementing\nit, but since we wanted to keep the scope of the patch under control and\nconcentrate on the existing functionality, it probably makes sense to\nplan it as one of the next steps?\n\n> There's also some weird looking assumptions that an EquivalenceMember\n> can only be a Var in create_distinct_paths(). 
I think you're only\n> saved from crashing there because a ProjectionPath will be created\n> atop of the IndexPath to evaluate expressions, in which case you're\n> not seeing the IndexPath. This results in the optimisation not\n> working in cases like:\n>\n> postgres=# create table t (a int); create index on t ((a+1)); explain\n> select distinct a+1 from t;\n> CREATE TABLE\n> CREATE INDEX\n> QUERY PLAN\n> -----------------------------------------------------------\n> HashAggregate (cost=48.25..50.75 rows=200 width=4)\n> Group Key: (a + 1)\n> -> Seq Scan on t (cost=0.00..41.88 rows=2550 width=4)\n\nYes, I need to fix it.\n\n> Using unique paths as I mentioned above should see that fixed.\n\nI'm a bit confused about this statement: how exactly would unique paths\nfix this?\n\n\n",
"msg_date": "Sun, 8 Mar 2020 15:22:28 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Mon, 9 Mar 2020 at 03:21, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> >\n> > I've been looking over v32 of the patch and have a few comments\n> > regarding the planner changes.\n>\n> Thanks for the commentaries!\n>\n> > I think the changes in create_distinct_paths() need more work. The\n> > way I think this should work is that create_distinct_paths() gets to\n> > know exactly nothing about what path types support the elimination of\n> > duplicate values. The Path should carry the UniqueKeys so that can be\n> > determined. In create_distinct_paths() you should just be able to make\n> > use of those paths, which should already have been created when\n> > creating index paths for the rel due to PlannerInfo's query_uniquekeys\n> > having been set.\n>\n> Just for me to clarify. The idea is to \"move\" information about what\n> path types support skipping into UniqueKeys (derived from PlannerInfo's\n> query_uniquekeys), but other checks (e.g. if index am supports that)\n> still perform in create_distinct_paths?\n\ncreate_distinct_paths() shouldn't know any details specific to the\npathtype that it's using or considering using. All the details should\njust be in Path. e.g. uniquekeys, pathkeys, costs etc. There should be\nno IsA(path, ...). Please have a look over the details in my reply to\nTomas. I hope that reply has enough information in it, but please\nreply there if I've missed something.\n\n> > On Wed, Mar 04, 2020 at 11:32:00AM +1300, David Rowley wrote:\n> > There's also some weird looking assumptions that an EquivalenceMember\n> > can only be a Var in create_distinct_paths(). I think you're only\n> > saved from crashing there because a ProjectionPath will be created\n> > atop of the IndexPath to evaluate expressions, in which case you're\n> > not seeing the IndexPath. 
This results in the optimisation not\n> > working in cases like:\n> >\n> > postgres=# create table t (a int); create index on t ((a+1)); explain\n> > select distinct a+1 from t;\n> > CREATE TABLE\n> > CREATE INDEX\n> > QUERY PLAN\n> > -----------------------------------------------------------\n> > HashAggregate (cost=48.25..50.75 rows=200 width=4)\n> > Group Key: (a + 1)\n> > -> Seq Scan on t (cost=0.00..41.88 rows=2550 width=4)\n>\n> Yes, I need to fix it.\n>\n> > Using unique paths as I mentioned above should see that fixed.\n>\n> I'm a bit confused about this statement, how exactly unique paths should\n> fix this?\n\nThe path's uniquekeys would mention that it's unique on (a+1). You'd\ncompare the uniquekeys of the path to the DISTINCT clause and see that\nthe uniquekeys are a subset of the DISTINCT clause therefore the\nDISTINCT is a no-op. If that uniquekey path is cheaper than the\ncheapest_total_path + <cost of uniquification method>, then you should\npick the unique path, otherwise use the cheapest_total_path and\nuniquify that.\n\nI think the UniqueKeys may need to be changed from using\nEquivalenceClasses to use Exprs instead.\n\n\n",
"msg_date": "Mon, 9 Mar 2020 10:27:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
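The decision rule David outlines above (DISTINCT is a no-op for a path whose uniquekeys are a subset of the DISTINCT clause, and such a path should win only when it is cheaper than the cheapest plain path plus an explicit uniquification step) can be sketched as follows. This simplifies uniquekeys to a single key set per path; all names and the cost model are illustrative, not planner code.

```python
from dataclasses import dataclass

@dataclass
class Path:
    cost: float
    uniquekeys: frozenset = frozenset()  # exprs the path is already unique on

def choose_distinct_path(paths, cheapest_total, uniquify_cost, distinct_exprs):
    """Sketch of the cost-based choice discussed above.  A path whose
    (non-empty) uniquekeys are a subset of the DISTINCT expressions makes
    DISTINCT a no-op; prefer it only if it beats uniquifying the
    cheapest ordinary path."""
    candidates = [p for p in paths
                  if p.uniquekeys and p.uniquekeys <= frozenset(distinct_exprs)]
    best = min(candidates, key=lambda p: p.cost, default=None)
    if best is not None and best.cost < cheapest_total.cost + uniquify_cost:
        return best
    return cheapest_total
```

For example, a skip scan path unique on {a} with cost 50 beats a cost-40 seqscan when uniquification would cost 30 (total 70), but loses when uniquification costs only 5 (total 45).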
{
"msg_contents": "> On Mon, Mar 09, 2020 at 10:27:26AM +1300, David Rowley wrote:\n>\n> > > I think the changes in create_distinct_paths() need more work. The\n> > > way I think this should work is that create_distinct_paths() gets to\n> > > know exactly nothing about what path types support the elimination of\n> > > duplicate values. The Path should carry the UniqueKeys so that can be\n> > > determined. In create_distinct_paths() you should just be able to make\n> > > use of those paths, which should already have been created when\n> > > creating index paths for the rel due to PlannerInfo's query_uniquekeys\n> > > having been set.\n> >\n> > Just for me to clarify. The idea is to \"move\" information about what\n> > path types support skipping into UniqueKeys (derived from PlannerInfo's\n> > query_uniquekeys), but other checks (e.g. if index am supports that)\n> > still perform in create_distinct_paths?\n>\n> create_distinct_paths() shouldn't know any details specific to the\n> pathtype that it's using or considering using. All the details should\n> just be in Path. e.g. uniquekeys, pathkeys, costs etc. There should be\n> no IsA(path, ...). Please have a look over the details in my reply to\n> Tomas. I hope that reply has enough information in it, but please\n> reply there if I've missed something.\n\nYes, I've read this reply, just wanted to ask here, since I had other\nquestions as well. Speaking of which:\n\n> > > On Wed, Mar 04, 2020 at 11:32:00AM +1300, David Rowley wrote:\n> > > There's also some weird looking assumptions that an EquivalenceMember\n> > > can only be a Var in create_distinct_paths(). 
I think you're only\n> > > saved from crashing there because a ProjectionPath will be created\n> > > atop of the IndexPath to evaluate expressions, in which case you're\n> > > not seeing the IndexPath.\n\nI'm probably missing something, so to eliminate any misunderstanding\nfrom my side:\n\n> > > This results in the optimisation not working in cases like:\n> > >\n> > > postgres=# create table t (a int); create index on t ((a+1)); explain\n> > > select distinct a+1 from t;\n> > > CREATE TABLE\n> > > CREATE INDEX\n> > > QUERY PLAN\n> > > -----------------------------------------------------------\n> > > HashAggregate (cost=48.25..50.75 rows=200 width=4)\n> > > Group Key: (a + 1)\n> > > -> Seq Scan on t (cost=0.00..41.88 rows=2550 width=4)\n\nIn this particular example skipping is not applied because, as you've\nmentioned, we're dealing with a ProjectionPath (not an IndexScan /\nIndexOnlyScan). This means we're not even reaching the code with\nEquivalenceMember, so I'm still not sure how they are connected.\n\nAssuming we implement it in a way where create_distinct_paths does not\nknow what kind of path it is dealing with, then it can also work for a\nProjectionPath or anything else (if UniqueKeys are present). But then\nEquivalenceMembers are still used only to figure out the correct\ndistinctPrefixKeys and do not affect whether or not skipping is applied.\nWhat am I missing?\n\n\n",
"msg_date": "Mon, 9 Mar 2020 20:57:14 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Tue, 10 Mar 2020 at 08:56, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Assuming we'll implement it in a way that we do not know about what kind\n> of path type is that in create_distinct_path, then it can also work for\n> ProjectionPath or anything else (if UniqueKeys are present). But then\n> still EquivalenceMember are used only to figure out correct\n> distinctPrefixKeys and do not affect whether or not skipping is applied.\n> What do I miss?\n\nI'm not sure I fully understand the question correctly, but let me\nexplain further.\n\nIn the 0001 patch, standard_qp_callback sets the query_uniquekeys\ndepending on the DISTINCT / GROUP BY clause. When building index\npaths in build_index_paths(), the 0002 patch should be looking at the\nroot->query_uniquekeys to see if it can build any index paths that\nsuit those keys. Such paths should be tagged with the uniquekeys they\nsatisfy, basically, exactly the same as how pathkeys work. Many\ncreate_*_path functions will need to be modified to carry forward\ntheir uniquekeys. For example, create_projection_path(),\ncreate_limit_path() don't do anything which would cause the created\npath to violate the unique keys. This way when you get down to\ncreate_distinct_paths(), paths other than IndexPath may have\nuniquekeys. You'll be able to check which existing paths satisfy the\nunique keys required by the DISTINCT / GROUP BY and select those paths\ninstead of having to create any HashAggregate / Unique paths.\n\nDoes that answer the question?\n\n\n",
"msg_date": "Tue, 10 Mar 2020 09:29:32 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
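The mechanism David describes in the message above — carrying uniquekeys forward through path constructors that cannot break uniqueness, then checking any path against the DISTINCT requirement without caring about its node type — can be sketched in miniature. This is a toy model with invented names (`Path`, `make_projection`, `satisfies_distinct`), heavily simplified from the real planner structures, not PostgreSQL source:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_KEYS 8

/* Hypothetical, heavily simplified stand-in for a planner Path:
 * real PostgreSQL Paths carry far more state than this. */
typedef struct Path {
    const char *type;
    int uniquekeys[MAX_KEYS];   /* column numbers this path is unique on */
    int nuniquekeys;
} Path;

/* Analogues of create_projection_path()/create_limit_path(): neither
 * operation can break uniqueness, so the child's uniquekeys are
 * carried forward unchanged. */
Path make_projection(Path child) { Path p = child; p.type = "Projection"; return p; }
Path make_limit(Path child)      { Path p = child; p.type = "Limit";      return p; }

/* Analogue of the check in create_distinct_paths(): a path already
 * satisfies the DISTINCT clause if every distinct column appears in
 * its uniquekeys -- note it never inspects p->type. */
bool satisfies_distinct(const Path *p, const int *cols, int ncols)
{
    for (int i = 0; i < ncols; i++) {
        bool found = false;
        for (int j = 0; j < p->nuniquekeys; j++)
            if (p->uniquekeys[j] == cols[i]) { found = true; break; }
        if (!found)
            return false;
    }
    return true;
}
```

The point of the sketch is exactly David's: once uniquekeys propagate, a Projection (or Limit) wrapped around a unique index path still proves uniqueness, so the `a+1` example above would no longer defeat the optimisation.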
{
"msg_contents": "On Mon, Mar 9, 2020 at 3:56 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> Assuming we'll implement it in a way that we do not know about what kind\n> of path type is that in create_distinct_path, then it can also work for\n> ProjectionPath or anything else (if UniqueKeys are present). But then\n> still EquivalenceMember are used only to figure out correct\n> distinctPrefixKeys and do not affect whether or not skipping is applied.\n> What do I miss?\n\n\nPart of the puzzle seems to me to this part of the response:\n\n> I think the UniqueKeys may need to be changed from using\n> EquivalenceClasses to use Exprs instead.\n\nBut I can't say I'm being overly helpful by pointing that out, since I\ndon't have my head in the code enough to understand how you'd\naccomplish that :)\n\nJames\n\n\n",
"msg_date": "Mon, 9 Mar 2020 16:31:40 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> >On Tue, Mar 10, 2020 at 09:29:32AM +1300, David Rowley wrote:\n> On Tue, 10 Mar 2020 at 08:56, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > Assuming we'll implement it in a way that we do not know about what kind\n> > of path type is that in create_distinct_path, then it can also work for\n> > ProjectionPath or anything else (if UniqueKeys are present). But then\n> > still EquivalenceMember are used only to figure out correct\n> > distinctPrefixKeys and do not affect whether or not skipping is applied.\n> > What do I miss?\n>\n> I'm not sure I fully understand the question correctly, but let me\n> explain further.\n>\n> In the 0001 patch, standard_qp_callback sets the query_uniquekeys\n> depending on the DISTINCT / GROUP BY clause. When building index\n> paths in build_index_paths(), the 0002 patch should be looking at the\n> root->query_uniquekeys to see if it can build any index paths that\n> suit those keys. Such paths should be tagged with the uniquekeys they\n> satisfy, basically, exactly the same as how pathkeys work. Many\n> create_*_path functions will need to be modified to carry forward\n> their uniquekeys. For example, create_projection_path(),\n> create_limit_path() don't do anything which would cause the created\n> path to violate the unique keys. This way when you get down to\n> create_distinct_paths(), paths other than IndexPath may have\n> uniquekeys. You'll be able to check which existing paths satisfy the\n> unique keys required by the DISTINCT / GROUP BY and select those paths\n> instead of having to create any HashAggregate / Unique paths.\n>\n> Does that answer the question?\n\nHmm... I'm afraid no, this was already clear. But looks like now I see\nthat I've misinterpreted one part.\n\n> There's also some weird looking assumptions that an EquivalenceMember\n> can only be a Var in create_distinct_paths(). 
I think you're only\n> saved from crashing there because a ProjectionPath will be created\n> atop of the IndexPath to evaluate expressions, in which case you're\n> not seeing the IndexPath. This results in the optimisation not\n> working in cases like:\n\nI've read it as \"an assumption that an EquivalenceMember can only be a\nVar\" results in \"the optimisation not working in cases like this\". But\nyou've meant that ignoring a ProjectionPath with an IndexPath inside\nresults in this optimisation not working, right? If so, then everything\nis clear, and my apologies, maybe I need to finally fix my sleep\nschedule :)\n\n\n",
"msg_date": "Tue, 10 Mar 2020 13:39:42 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, 11 Mar 2020 at 01:38, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > >On Tue, Mar 10, 2020 at 09:29:32AM +1300, David Rowley wrote:\n> > There's also some weird looking assumptions that an EquivalenceMember\n> > can only be a Var in create_distinct_paths(). I think you're only\n> > saved from crashing there because a ProjectionPath will be created\n> > atop of the IndexPath to evaluate expressions, in which case you're\n> > not seeing the IndexPath. This results in the optimisation not\n> > working in cases like:\n>\n> I've read it as \"an assumption that an EquivalenceMember can only be a\n> Var\" results in \"the optimisation not working in cases like this\". But\n> you've meant that ignoring a ProjectionPath with an IndexPath inside\n> results in this optimisation not working, right? If so, then everything\n> is clear, and my apologies, maybe I need to finally fix my sleep\n> schedule :)\n\nYes, I was complaining that a ProjectionPath breaks the optimisation\nand I don't believe there's any reason that it should.\n\nI believe the way to make that work correctly requires paying\nattention to the Path's uniquekeys rather than what type of path it\nis.\n\n\n",
"msg_date": "Wed, 11 Mar 2020 11:17:51 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": ">\n>\n> I think the UniqueKeys may need to be changed from using\n> EquivalenceClasses to use Exprs instead.\n>\n\nWhen I try to understand why UniqueKeys needs EquivalenceClasses,\nsee your comments here. I feel that FuncExpr can't be\nused to as a UniquePath even we can create unique index on f(a)\nand f->strict == true. The reason is even we know a is not null,\n f->strict = true. it is still be possible that f(a) == null. unique index\nallows more than 1 null values. so shall we move further to use varattrno\ninstead of Expr? if so, we can also use a list of Bitmapset to present\nmulti\nunique path of a single RelOptInfo.\n\n\nI think the UniqueKeys may need to be changed from using\nEquivalenceClasses to use Exprs instead.When I try to understand why UniqueKeys needs EquivalenceClasses, see your comments here. I feel that FuncExpr can't beused to as a UniquePath even we can create unique index on f(a)and f->strict == true. The reason is even we know a is not null, f->strict = true. it is still be possible that f(a) == null. unique indexallows more than 1 null values. so shall we move further to use varattrnoinstead of Expr? if so, we can also use a list of Bitmapset to present multiunique path of a single RelOptInfo.",
"msg_date": "Wed, 11 Mar 2020 11:44:23 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
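Andy's argument above — a unique index on f(a) with a strict f does not prove f(a) unique, because a NULL input yields a NULL output and a unique index admits any number of NULL entries — can be demonstrated with a small model. The names (`NullableInt`, `strict_apply`, `unique_index_conflict`) are illustrative only, not PostgreSQL's datum handling:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy three-valued integer: NULL is represented by a flag. */
typedef struct { bool isnull; int value; } NullableInt;

/* A strict function returns NULL whenever its input is NULL,
 * otherwise applies f. */
NullableInt strict_apply(int (*f)(int), NullableInt in)
{
    NullableInt out = { true, 0 };
    if (!in.isnull) {
        out.isnull = false;
        out.value = f(in.value);
    }
    return out;
}

int square(int x) { return x * x; }

/* Unique-index rule: only two equal non-NULL entries conflict;
 * NULL entries never conflict with anything, so the index can hold
 * many NULL rows. */
bool unique_index_conflict(NullableInt x, NullableInt y)
{
    return !x.isnull && !y.isnull && x.value == y.value;
}
```

Two rows with a NULL in `a` both index as NULL under `f(a)` and coexist happily, which is why strictness alone cannot turn the expression index into a uniqueness proof.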
{
"msg_contents": "On Tue, Mar 10, 2020 at 4:32 AM James Coleman <jtc331@gmail.com> wrote:\n\n> On Mon, Mar 9, 2020 at 3:56 PM Dmitry Dolgov <9erthalion6@gmail.com>\n> wrote:\n> >\n> > Assuming we'll implement it in a way that we do not know about what kind\n> > of path type is that in create_distinct_path, then it can also work for\n> > ProjectionPath or anything else (if UniqueKeys are present). But then\n> > still EquivalenceMember are used only to figure out correct\n> > distinctPrefixKeys and do not affect whether or not skipping is applied.\n> > What do I miss?\n>\n>\n> Part of the puzzle seems to me to this part of the response:\n>\n> > I think the UniqueKeys may need to be changed from using\n> > EquivalenceClasses to use Exprs instead.\n>\n> But I can't say I'm being overly helpful by pointing that out, since I\n> don't have my head in the code enough to understand how you'd\n> accomplish that :)\n>\n>\nThere was a dedicated thread [1] where David explain his idea very\ndetailed,\nand you can also check that messages around that message for the context.\nhope it helps.\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvq7i0%3DO97r4Y1pv68%2BtprVczKsXRsV28rM9H-rVPOfeNQ%40mail.gmail.com\n\nOn Tue, Mar 10, 2020 at 4:32 AM James Coleman <jtc331@gmail.com> wrote:On Mon, Mar 9, 2020 at 3:56 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> Assuming we'll implement it in a way that we do not know about what kind\n> of path type is that in create_distinct_path, then it can also work for\n> ProjectionPath or anything else (if UniqueKeys are present). 
But then\n> still EquivalenceMember are used only to figure out correct\n> distinctPrefixKeys and do not affect whether or not skipping is applied.\n> What do I miss?\n\n\nPart of the puzzle seems to me to this part of the response:\n\n> I think the UniqueKeys may need to be changed from using\n> EquivalenceClasses to use Exprs instead.\n\nBut I can't say I'm being overly helpful by pointing that out, since I\ndon't have my head in the code enough to understand how you'd\naccomplish that :)\n There was a dedicated thread [1] where David explain his idea very detailed,and you can also check that messages around that message for the context.hope it helps.[1] https://www.postgresql.org/message-id/CAApHDvq7i0%3DO97r4Y1pv68%2BtprVczKsXRsV28rM9H-rVPOfeNQ%40mail.gmail.com",
"msg_date": "Wed, 11 Mar 2020 18:56:09 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, 11 Mar 2020 at 16:44, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>>\n>> I think the UniqueKeys may need to be changed from using\n>> EquivalenceClasses to use Exprs instead.\n>\n>\n> When I try to understand why UniqueKeys needs EquivalenceClasses,\n> see your comments here. I feel that FuncExpr can't be\n> used to as a UniquePath even we can create unique index on f(a)\n> and f->strict == true. The reason is even we know a is not null,\n> f->strict = true. it is still be possible that f(a) == null. unique index\n> allows more than 1 null values. so shall we move further to use varattrno\n> instead of Expr? if so, we can also use a list of Bitmapset to present multi\n> unique path of a single RelOptInfo.\n\nWe do need some method to determine if NULL values are possible. At\nthe base relation level that can probably be done by checking NOT NULL\nconstraints and strict base quals. At higher levels, we can use strict\njoin quals as proofs.\n\nAs for bit a Bitmapset of varattnos, that would certainly work well at\nthe base relation level when there are no unique expression indexes,\nbut it's not so simple with join relations when the varattnos only\nmean something when you know which base relation it comes from. I'm\nnot saying that Lists of Exprs is ideal, but I think trying to\noptimise some code that does not yet exist is premature.\n\nThere was some other talk in [1] on how we might make checking if a\nList contains a given Node. That could be advantageous in a few\nplaces in the query planner, and it might be useful for this too.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8v-fUG8YpaAGj309ZuALo3aEk7f6cqMHr_AVwz1fKXug%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 12 Mar 2020 15:08:18 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
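David's caveat above — varattnos only identify a column once you know the owning base relation, so a bare Bitmapset stops being meaningful at the join level — is easy to show concretely. A sketch under invented names (`QualifiedCol`), not planner code:

```c
#include <assert.h>
#include <stdbool.h>

/* After a join, attno 1 of t1 and attno 1 of t2 are different
 * columns; a join-level unique key therefore needs the owning
 * relation as a qualifier. */
typedef struct { int relid; int attno; } QualifiedCol;

/* Base-relation view: the attribute number alone suffices. */
bool same_by_attno(QualifiedCol x, QualifiedCol y)
{
    return x.attno == y.attno;
}

/* Join-relation view: attribute number plus owning relation. */
bool same_by_rel_and_attno(QualifiedCol x, QualifiedCol y)
{
    return x.relid == y.relid && x.attno == y.attno;
}
```

Comparing by attno alone reports a false match between two first columns of different relations, which is exactly why the bitmap approach is only safe below the join level.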
{
"msg_contents": "Hello hackers,\r\n\r\nRecently I've put some effort in extending the functionality of this patch. So far, we've been trying to keep the scope of this patch relatively small to DISTINCT-clauses only. The advantage of this approach was that it keeps impact to the indexam api to a minimum. However, given the problems we've been facing in getting the implementation to work correctly in all cases, I started wondering if this implementation was the right direction to go in. My main worry is that the current indexam api for skipping is not suited to other future use cases of skipping, but also that we're already struggling with it now to get it to work correctly in all edge cases.\r\n\r\nIn the approach taken so far, the amskip function is defined taking two ScanDirection parameters. The function amgettuple is left unchanged. However, I think we need amgettuple to take two ScanDirection parameters as well (or create a separate function amgetskiptuple). This patch proposes that.\r\n\r\nCurrently, I've just added 'skip' functions to the indexam api for beginscan and gettuple. Maybe it'd be better to just modify the existing functions to take an extra parameter instead. Any thoughts on this?\r\n\r\nThe result is a patch that can apply skipping in many more cases than previous patches. For example, filtering on columns that are not part of the index, properly handling visibility checks without moving these into the nbtree code, skipping not only on prefix but also on extra conditions that become available (eg. prefix a=1 and we happen to have a WHERE clause with b=200, which we can now use to skip all the way to a=1 AND b=200). 
There's a fair amount of changes in the nbtree code to support this.\r\n\r\nPatch 0001 is Jesper's unique keys patch.\r\nPatch 0002 modifies executor-level code to support skip scans and enables it for DISTINCT queries.\r\nPatch 0003 just provides a very basic planner hack that enables the skip scan for practically all index scans (index, index only and bitmap index). This is not what we would normally want, but this way I could easily test the skip scan code. It's so large because I modify all the test cases expected results that now include an extra line 'Skip scan: All'. The actual code changed is only a few lines in this patch.\r\n\r\nThe planner part of the code still needs work. The planner code in this patch is similar to the previous patch. David's comments about projection paths haven't been addressed yet. Also, there's no proper way of hooking up the index scan for regular (non-DISTINCT) queries yet. That's why I hacked up patch 0003 just to test stuff.\r\n\r\nI'd welcome any help on these patches. If someone with more planner knowledge than me is willing to do part of the planner code, please feel free to do so. I believe supporting this will speed up a large number of queries for all kinds of users. It can be a really powerful feature.\r\n\r\nTomas, would you be willing to repeat the performance tests you did earlier? I believe this version will perform better than the previous patch for the cases where you noticed the 10-20x slow-down. 
There will obviously still be a performance penalty for cases where the planner picks a skip scan that are not well suited, but I think it'll be smaller.\r\n\r\n-Floris\r\n\r\n-----\r\nTo give a few simple examples:\r\n\r\nInitialization:\r\n-- table t1 has 100 unique values for a\r\n-- and 10000 b values for each a\r\n-- very suitable for skip scan\r\ncreate table t1 as select a,b,b%5 as c, random() as d from generate_series(1, 100) a, generate_series(1,10000) b;\r\ncreate index on t1 (a,b,c);\r\n\r\n-- table t2 has 10000 unique values for a\r\n-- and 100 b values for each a \r\n-- this is not very suitable for skip scan\r\n-- because the next matching value is always either\r\n-- on the current page or on the next page\r\ncreate table t2 as select a,b,b%5 as c, random() as d from generate_series(1, 10000) a, generate_series(1,100) b;\r\ncreate index on t2 (a,b,c);\r\n\r\nanalyze t1;\r\nanalyze t2;\r\n\r\n-- First 'Execution Time' line is this patched version (0001+0002+0003) (without including 0003, the non-DISTINCT queries would be equal to master)\r\n-- Second 'Execution Time' line is master\r\n-- Third 'Execution Time' is previous skip scan patch version\r\n-- Just ran a couple of times to give an indication \r\n-- on order of magnitude, not a full benchmark.\r\nselect distinct on (a) * from t1;\r\n Execution Time: 1.407 ms (current patch)\r\n Execution Time: 480.704 ms (master)\r\n Execution Time: 1.711 ms (previous patch)\r\n\r\nselect distinct on (a) * from t1 where b > 50;\r\n Execution Time: 1.432 ms\r\n Execution Time: 481.530 ms\r\n Execution Time: 481.206 ms\r\n\r\nselect distinct on (a) * from t1 where b > 9990;\r\n Execution Time: 1.074 ms\r\n Execution Time: 33.937 ms\r\n Execution Time: 33.115 ms\r\n\r\nselect distinct on (a) * from t1 where d > 0.5;\r\n Execution Time: 0.811 ms\r\n Execution Time: 446.549 ms\r\n Execution Time: 436.091 ms\r\n\r\nselect * from t1 where b=50;\r\n Execution Time: 1.111 ms\r\n Execution Time: 33.242 ms\r\n Execution 
Time: 36.555 ms\r\n\r\nselect * from t1 where b between 50 and 75 and d > 0.5;\r\n Execution Time: 2.370 ms\r\n Execution Time: 60.744 ms\r\n Execution Time: 62.820 ms\r\n\r\nselect * from t1 where b in (100, 200);\r\n Execution Time: 2.464 ms\r\n Execution Time: 252.224 ms\r\n Execution Time: 244.872 ms\r\n\r\nselect * from t1 where b in (select distinct a from t1);\r\n Execution Time: 91.000 ms\r\n Execution Time: 842.969 ms\r\n Execution Time: 386.871 ms\r\n\r\nselect distinct on (a) * from t2;\r\n Execution Time: 47.155 ms\r\n Execution Time: 714.102 ms\r\n Execution Time: 56.327 ms\r\n\r\nselect distinct on (a) * from t2 where b > 5;\r\n Execution Time: 60.100 ms\r\n Execution Time: 709.960 ms\r\n Execution Time: 727.949 ms\r\n\r\nselect distinct on (a) * from t2 where b > 95;\r\n Execution Time: 55.420 ms\r\n Execution Time: 71.007 ms\r\n Execution Time: 69.229 ms\r\n\r\nselect distinct on (a) * from t2 where d > 0.5;\r\n Execution Time: 49.254 ms\r\n Execution Time: 719.820 ms\r\n Execution Time: 705.991 ms\r\n\r\n-- slight performance degradation here compared to regular index scan\r\n-- due to unfavorable data distribution\r\nselect * from t2 where b=50;\r\n Execution Time: 47.603 ms\r\n Execution Time: 37.327 ms\r\n Execution Time: 40.448 ms\r\n\r\nselect * from t2 where b between 50 and 75 and d > 0.5;\r\n Execution Time: 244.546 ms\r\n Execution Time: 228.579 ms\r\n Execution Time: 227.541 ms\r\n\r\nselect * from t2 where b in (100, 200);\r\n Execution Time: 64.021 ms\r\n Execution Time: 242.905 ms\r\n Execution Time: 258.864 ms\r\n\r\nselect * from t2 where b in (select distinct a from t2);\r\n Execution Time: 758.350 ms\r\n Execution Time: 1271.230 ms\r\n Execution Time: 761.311 ms\r\n\r\nI wrote a few things here about the method as well:\r\nhttps://github.com/fvannee/postgres/wiki/Index-Skip-Scan\r\nCode can be found there on Github as well in branch 'skip-scan'",
"msg_date": "Sat, 21 Mar 2020 22:00:01 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
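The t1/t2 asymmetry Floris benchmarks above — skip scan wins when there are few distinct prefixes with many rows each, and loses its edge when the next prefix is always on the nearby page — comes down to how many index entries each strategy touches. That arithmetic can be reproduced with a toy model: a sorted array standing in for the index, with the "skip" implemented as a binary search for the next prefix value. This illustrates the access pattern only; it is not the nbtree implementation, and all names are invented:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of a btree index sorted on (a, b): count how many
 * entries each strategy examines to produce DISTINCT a. */
typedef struct { int a; int b; } IdxRow;

/* Plain scan: look at every entry, emit when the prefix changes. */
size_t distinct_full_scan(const IdxRow *rows, size_t n, size_t *examined)
{
    size_t ndistinct = 0;
    *examined = 0;
    for (size_t i = 0; i < n; i++) {
        (*examined)++;
        if (i == 0 || rows[i].a != rows[i - 1].a)
            ndistinct++;
    }
    return ndistinct;
}

/* Skip scan: after emitting a prefix, binary-search for the first
 * entry with a greater prefix instead of stepping row by row. */
size_t distinct_skip_scan(const IdxRow *rows, size_t n, size_t *examined)
{
    size_t ndistinct = 0, i = 0;
    *examined = 0;
    while (i < n) {
        (*examined)++;
        ndistinct++;
        int cur = rows[i].a;
        size_t lo = i + 1, hi = n;  /* find first index with a > cur */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            (*examined)++;
            if (rows[mid].a > cur)
                hi = mid;
            else
                lo = mid + 1;
        }
        i = lo;
    }
    return ndistinct;
}
```

With a t1-like shape (few prefixes, long runs) the skip variant touches a small fraction of the entries; with a t2-like shape (many prefixes, short runs) the per-prefix probe overhead eats the advantage, matching Floris's measurements in spirit.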
{
"msg_contents": "It seems that the documentation build was broken. I've fixed it in attached patch.\r\n\r\nI'm unsure which version number to give this patch (to continue with numbers from previous skip scan patches, or to start numbering from scratch again). It's a rather big change, so one could argue it's mostly a separate patch. I guess it mostly depends on how close the original versions were to be committable. Thoughts?\r\n\r\n-Floris",
"msg_date": "Sun, 22 Mar 2020 12:55:29 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "Hi Floris,\n\nOn Sun, Mar 22, 2020 at 11:00 AM Floris Van Nee\n<florisvannee@optiver.com> wrote:\n> create index on t1 (a,b,c);\n\n> select * from t1 where b in (100, 200);\n> Execution Time: 2.464 ms\n> Execution Time: 252.224 ms\n> Execution Time: 244.872 ms\n\nWow. This is very cool work and I'm sure it will become a major\nheadline feature of PG14 if the requisite planner brains can be sorted\nout.\n\nOn Mon, Mar 23, 2020 at 1:55 AM Floris Van Nee <florisvannee@optiver.com> wrote:\n> I'm unsure which version number to give this patch (to continue with numbers from previous skip scan patches, or to start numbering from scratch again). It's a rather big change, so one could argue it's mostly a separate patch. I guess it mostly depends on how close the original versions were to be committable. Thoughts?\n\nI don't know, but from the sidelines, it'd be nice to see the unique\npath part go into PG13, where IIUC it can power the \"useless unique\nremoval\" patch.\n\n\n",
"msg_date": "Mon, 23 Mar 2020 11:23:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": ">\n>\n> On Mon, Mar 23, 2020 at 1:55 AM Floris Van Nee <florisvannee@optiver.com>\n> wrote:\n> > I'm unsure which version number to give this patch (to continue with\n> numbers from previous skip scan patches, or to start numbering from scratch\n> again). It's a rather big change, so one could argue it's mostly a separate\n> patch. I guess it mostly depends on how close the original versions were to\n> be committable. Thoughts?\n>\n> I don't know, but from the sidelines, it'd be nice to see the unique\n> path part go into PG13, where IIUC it can power the \"useless unique\n> removal\" patch.\n>\n\nActually I have a patch to remove the distinct clause some long time ago[1],\nand later it came to the UniqueKey as well, you can see [2] for the current\nstatus.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWqOORqW900O-%2BL4L2%2B0xknsEqpfcs9FF7SeiO9TmpeZOg%40mail.gmail.com#f5d97cc66b9cd330add2fbb004a4d107\n\n[2]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL=uaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw@mail.gmail.com\n\n\nOn Mon, Mar 23, 2020 at 1:55 AM Floris Van Nee <florisvannee@optiver.com> wrote:\n> I'm unsure which version number to give this patch (to continue with numbers from previous skip scan patches, or to start numbering from scratch again). It's a rather big change, so one could argue it's mostly a separate patch. I guess it mostly depends on how close the original versions were to be committable. 
Thoughts?\n\nI don't know, but from the sidelines, it'd be nice to see the unique\npath part go into PG13, where IIUC it can power the \"useless unique\nremoval\" patch.Actually I have a patch to remove the distinct clause some long time ago[1],and later it came to the UniqueKey as well, you can see [2] for the currentstatus.[1] https://www.postgresql.org/message-id/flat/CAKU4AWqOORqW900O-%2BL4L2%2B0xknsEqpfcs9FF7SeiO9TmpeZOg%40mail.gmail.com#f5d97cc66b9cd330add2fbb004a4d107 [2] https://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL=uaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw@mail.gmail.com",
"msg_date": "Mon, 23 Mar 2020 19:17:56 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Wed, Mar 11, 2020 at 11:17:51AM +1300, David Rowley wrote:\n>\n> Yes, I was complaining that a ProjectionPath breaks the optimisation\n> and I don't believe there's any reason that it should.\n>\n> I believe the way to make that work correctly requires paying\n> attention to the Path's uniquekeys rather than what type of path it\n> is.\n\nThanks for the suggestion. As a result of the discussion I've modified\nthe patch, does it look similar to what you had in mind?\n\nIn this version if all conditions are met and there are corresponding\nunique keys, a new index skip scan path will be added to\nunique_pathlists. In case if requested distinct clauses match with\nunique keys, create_distinct_paths can choose this path without needen\nto know what kind of path is it. Also unique_keys are passed through\nProjectionPath, so optimization for the example mentioned in this thread\nbefore now should work (I've added one test for that).\n\nI haven't changed anything about UniqueKey structure itself (one of the\nsuggestions was about Expr instead of EquivalenceClass), but I believe\nwe need anyway to figure out how two existing imlementation (in this\npatch and from [1]) of this idea can be connected.\n\n[1]: https://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL%3DuaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw%40mail.gmail.com",
"msg_date": "Tue, 24 Mar 2020 17:39:19 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
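The selection step Dmitry outlines above — candidate paths carry the unique keys they satisfy, and create_distinct_paths picks the cheapest matching one without inspecting the path's node type — can be sketched as a small cost-based chooser. Toy structs and invented names, not planner code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* A candidate path tagged with the unique keys it satisfies. */
typedef struct {
    const char *name;
    double cost;
    int uniquekeys[4];
    int nkeys;
} CandidatePath;

static bool keys_cover(const CandidatePath *p, const int *cols, int ncols)
{
    for (int i = 0; i < ncols; i++) {
        bool found = false;
        for (int j = 0; j < p->nkeys; j++)
            if (p->uniquekeys[j] == cols[i]) { found = true; break; }
        if (!found)
            return false;
    }
    return true;
}

/* Returns the index of the cheapest path whose uniquekeys cover the
 * DISTINCT columns, or -1 if none does (the caller would then add a
 * Unique/HashAggregate on top instead). */
int choose_distinct_path(const CandidatePath *paths, size_t npaths,
                         const int *cols, int ncols)
{
    int best = -1;
    for (size_t i = 0; i < npaths; i++)
        if (keys_cover(&paths[i], cols, ncols) &&
            (best < 0 || paths[i].cost < paths[best].cost))
            best = (int) i;
    return best;
}
```

Because unique keys pass through a projection unchanged, a projection wrapped around the skip scan competes on equal footing here, which is the fix for the earlier ProjectionPath complaint.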
{
"msg_contents": "> On Wed, Mar 11, 2020 at 06:56:09PM +0800, Andy Fan wrote:\n>\n> There was a dedicated thread [1] where David explain his idea very\n> detailed, and you can also check that messages around that message for\n> the context. hope it helps.\n\nThanks for pointing out to this thread! Somehow I've missed it, and now\nlooks like we need to make some efforts to align patches for index skip\nscan and distincClause elimination.\n\n\n",
"msg_date": "Tue, 24 Mar 2020 17:42:21 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 12:41 AM Dmitry Dolgov <9erthalion6@gmail.com>\nwrote:\n\n> > On Wed, Mar 11, 2020 at 06:56:09PM +0800, Andy Fan wrote:\n> >\n> > There was a dedicated thread [1] where David explain his idea very\n> > detailed, and you can also check that messages around that message for\n> > the context. hope it helps.\n>\n> Thanks for pointing out to this thread! Somehow I've missed it, and now\n> looks like we need to make some efforts to align patches for index skip\n> scan and distincClause elimination.\n>\n\nYes:). Looks Index skip scan is a way of make a distinct result without a\nreal\ndistinct node, which happens after the UniqueKeys check where I try to see\nif\nthe result is unique already and before the place where create a unique node\nfor distinct node(With index skip scan we don't need it all). Currently in\nmy patch,\nthe logical here is 1). Check the UniqueKey to see if the result is not\nunique already.\nif not, go to next 2). After the distinct paths are created, I will add\nthe result of distinct\npath as a unique key. Will you add the index skip scan path during\ncreate_distincts_paths\nand add the UniqueKey to RelOptInfo? if so I guess my current patch can\nhandle it since\nit cares about the result of distinct path but no worried about how we\narchive that.\n\n\nBest Regards\nAndy Fan\n\nOn Wed, Mar 25, 2020 at 12:41 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:> On Wed, Mar 11, 2020 at 06:56:09PM +0800, Andy Fan wrote:\n>\n> There was a dedicated thread [1] where David explain his idea very\n> detailed, and you can also check that messages around that message for\n> the context. hope it helps.\n\nThanks for pointing out to this thread! Somehow I've missed it, and now\nlooks like we need to make some efforts to align patches for index skip\nscan and distincClause elimination.Yes:). 
Looks Index skip scan is a way of make a distinct result without a realdistinct node, which happens after the UniqueKeys check where I try to see ifthe result is unique already and before the place where create a unique nodefor distinct node(With index skip scan we don't need it all). Currently in my patch,the logical here is 1). Check the UniqueKey to see if the result is not unique already.if not, go to next 2). After the distinct paths are created, I will add the result of distinctpath as a unique key. Will you add the index skip scan path during create_distincts_pathsand add the UniqueKey to RelOptInfo? if so I guess my current patch can handle it sinceit cares about the result of distinct path but no worried about how we archive that. Best RegardsAndy Fan",
"msg_date": "Wed, 25 Mar 2020 08:39:29 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 10:08 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Wed, Mar 11, 2020 at 11:17:51AM +1300, David Rowley wrote:\n> >\n> > Yes, I was complaining that a ProjectionPath breaks the optimisation\n> > and I don't believe there's any reason that it should.\n> >\n> > I believe the way to make that work correctly requires paying\n> > attention to the Path's uniquekeys rather than what type of path it\n> > is.\n>\n> Thanks for the suggestion. As a result of the discussion I've modified\n> the patch, does it look similar to what you had in mind?\n>\n> In this version if all conditions are met and there are corresponding\n> unique keys, a new index skip scan path will be added to\n> unique_pathlists. In case if requested distinct clauses match with\n> unique keys, create_distinct_paths can choose this path without needen\n> to know what kind of path is it. Also unique_keys are passed through\n> ProjectionPath, so optimization for the example mentioned in this thread\n> before now should work (I've added one test for that).\n>\n> I haven't changed anything about UniqueKey structure itself (one of the\n> suggestions was about Expr instead of EquivalenceClass), but I believe\n> we need anyway to figure out how two existing imlementation (in this\n> patch and from [1]) of this idea can be connected.\n>\n> [1]: https://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL%3DuaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw%40mail.gmail.com\n\n---\n src/backend/nodes/outfuncs.c | 14 ++++++\n src/backend/nodes/print.c | 39 +++++++++++++++\n src/backend/optimizer/path/Makefile | 3 +-\n src/backend/optimizer/path/allpaths.c | 8 +++\n src/backend/optimizer/path/indxpath.c | 41 ++++++++++++++++\n src/backend/optimizer/path/pathkeys.c | 71 ++++++++++++++++++++++-----\n src/backend/optimizer/plan/planagg.c | 1 +\n src/backend/optimizer/plan/planmain.c | 1 +\n src/backend/optimizer/plan/planner.c | 37 +++++++++++++-\n 
src/backend/optimizer/util/pathnode.c | 46 +++++++++++++----\n src/include/nodes/nodes.h | 1 +\n src/include/nodes/pathnodes.h | 19 +++++++\n src/include/nodes/print.h | 1 +\n src/include/optimizer/pathnode.h | 2 +\n src/include/optimizer/paths.h | 11 +++++\n 15 files changed, 272 insertions(+), 23 deletions(-)\n\nSeems like you forgot to add the uniquekey.c file in the\nv33-0001-Unique-key.patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 11:31:56 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Wed, Mar 25, 2020 at 11:31:56AM +0530, Dilip Kumar wrote:\n>\n> Seems like you forgot to add the uniquekey.c file in the\n> v33-0001-Unique-key.patch.\n\nOh, you're right, thanks. Here is the corrected patch.",
"msg_date": "Wed, 25 Mar 2020 09:51:00 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 2:19 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Wed, Mar 25, 2020 at 11:31:56AM +0530, Dilip Kumar wrote:\n> >\n> > Seems like you forgot to add the uniquekey.c file in the\n> > v33-0001-Unique-key.patch.\n>\n> Oh, you're right, thanks. Here is the corrected patch.\n\nI was just wondering how the distinct will work with the \"skip scan\"\nif we have some filter? I mean every time we select the unique row\nbased on the prefix key and that might get rejected by an external\nfilter right? So I tried an example to check this.\n\npostgres[50006]=# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\nIndexes:\n \"idx\" btree (a, b)\n\npostgres[50006]=# insert into t select 2, i from generate_series(1, 200)i;\nINSERT 0 200\npostgres[50006]=# insert into t select 1, i from generate_series(1, 200)i;\nINSERT 0 200\n\npostgres[50006]=# set enable_indexskipscan =off;\nSET\npostgres[50006]=# select distinct(a) from t where b%100=0;\n a\n---\n 1\n 2\n(2 rows)\n\npostgres[50006]=# set enable_indexskipscan =on;\nSET\npostgres[50006]=# select distinct(a) from t where b%100=0;\n a\n---\n(0 rows)\n\npostgres[50006]=# explain select distinct(a) from t where b%100=0;\n QUERY PLAN\n-------------------------------------------------------------------\n Index Only Scan using idx on t (cost=0.15..1.55 rows=10 width=4)\n Skip scan: true\n Filter: ((b % 100) = 0)\n(3 rows)\n\nI think in such cases we should not select the skip scan. This should\nbehave like we have a filter on the non-index field.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 Apr 2020 16:30:51 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
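The failure Dilip reports above (skip scan + non-index filter returning zero rows) can be reproduced outside the server with a small, purely illustrative Python simulation. This is not PostgreSQL code; the function names and the list-of-tuples "index" are invented for the sketch. It contrasts a naive skip scan, which visits only the first tuple of each distinct prefix `a` and then applies the external filter `b % 100 == 0`, with a full scan that filters first and deduplicates afterwards:

```python
# Hypothetical simulation (not PostgreSQL code) of why a skip scan that only
# visits the first tuple per distinct prefix gives wrong answers when the
# qual is a filter rather than an index condition, mirroring the example
# above: index on (a, b), filter b % 100 = 0.

def naive_skip_scan_distinct(rows, filt):
    """Visit only the first index tuple of each distinct prefix 'a',
    then apply the external filter -- the buggy behaviour."""
    result = []
    seen = set()
    for a, b in rows:            # rows are in index order (a, b)
        if a in seen:
            continue             # "skip" the rest of this prefix
        seen.add(a)
        if filt(b):
            result.append(a)
    return result

def full_scan_distinct(rows, filt):
    """Scan every tuple, apply the filter first, then deduplicate -- correct."""
    return sorted({a for a, b in rows if filt(b)})

rows = sorted((a, b) for a in (1, 2) for b in range(1, 201))
filt = lambda b: b % 100 == 0

print(full_scan_distinct(rows, filt))        # [1, 2] -- enable_indexskipscan = off
print(naive_skip_scan_distinct(rows, filt))  # []     -- the wrong result shown above
```

The naive variant only ever sees `b = 1` for each prefix, so the filter rejects every candidate, which matches the empty result in the `enable_indexskipscan = on` run above.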
{
"msg_contents": "> On Sun, Apr 05, 2020 at 04:30:51PM +0530, Dilip Kumar wrote:\n>\n> I was just wondering how the distinct will work with the \"skip scan\"\n> if we have some filter? I mean every time we select the unique row\n> based on the prefix key and that might get rejected by an external\n> filter right?\n\nNot exactly. In the case of index-only scan, we skipping to the first\nunique position, and then use already existing functionality\n(_bt_readpage with stepping to the next pages) to filter out those\nrecords that do not pass the condition. There are even couple of tests\nin the patch for this. In case of index scan, when there are some\nconditions, current implementation do not consider skipping.\n\n> So I tried an example to check this.\n\nCan you tell on which version of the patch you were testing?\n\n\n",
"msg_date": "Sun, 5 Apr 2020 18:10:29 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Sun, Apr 5, 2020 at 9:39 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Sun, Apr 05, 2020 at 04:30:51PM +0530, Dilip Kumar wrote:\n> >\n> > I was just wondering how the distinct will work with the \"skip scan\"\n> > if we have some filter? I mean every time we select the unique row\n> > based on the prefix key and that might get rejected by an external\n> > filter right?\n>\n> Not exactly. In the case of index-only scan, we skipping to the first\n> unique position, and then use already existing functionality\n> (_bt_readpage with stepping to the next pages) to filter out those\n> records that do not pass the condition.\n\nI agree but that will work if we have a valid index clause, but\n\"b%100=0\" condition will not create an index clause, right? However,\nif we change the query to\nselect distinct(a) from t where b=100 then it works fine because this\ncondition will create an index clause.\n\n There are even couple of tests\n> in the patch for this. In case of index scan, when there are some\n> conditions, current implementation do not consider skipping.\n>\n> > So I tried an example to check this.\n>\n> Can you tell on which version of the patch you were testing?\n\nI have tested on v33.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Apr 2020 09:56:02 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> > > On Sun, Apr 05, 2020 at 04:30:51PM +0530, Dilip Kumar wrote:\r\n> > >\r\n> > > I was just wondering how the distinct will work with the \"skip scan\"\r\n> > > if we have some filter? I mean every time we select the unique row\r\n> > > based on the prefix key and that might get rejected by an external\r\n> > > filter right?\r\n> >\r\n\r\nYeah, you're correct. This patch only handles the index conditions and doesn't handle any filters correctly. There's a check in the planner for the IndexScan for example that only columns that exist in the index are used. However, this check is not sufficient as your example shows. There's a number of ways we can force a 'filter' rather than an 'index condition' and still choose a skip scan (WHERE b!=0 is another one I think). This leads to incorrect query results.\r\n\r\nThis patch would need some logic in the planner to never choose the skip scan in these cases. Better long-term solution is to adapt the rest of the executor to work correctly in the cases of external filters (this ties in with the previous visibility discussion as well, as that's basically also an external filter, although a special case).\r\nIn the patch I posted a week ago these cases are all handled correctly, as it introduces this extra logic in the Executor.\r\n\r\n-Floris\r\n\r\n\r\n",
"msg_date": "Mon, 6 Apr 2020 07:44:00 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "On Mon, Apr 6, 2020 at 1:14 PM Floris Van Nee <florisvannee@optiver.com> wrote:\n>\n> > > > On Sun, Apr 05, 2020 at 04:30:51PM +0530, Dilip Kumar wrote:\n> > > >\n> > > > I was just wondering how the distinct will work with the \"skip scan\"\n> > > > if we have some filter? I mean every time we select the unique row\n> > > > based on the prefix key and that might get rejected by an external\n> > > > filter right?\n> > >\n>\n> Yeah, you're correct. This patch only handles the index conditions and doesn't handle any filters correctly. There's a check in the planner for the IndexScan for example that only columns that exist in the index are used. However, this check is not sufficient as your example shows. There's a number of ways we can force a 'filter' rather than an 'index condition' and still choose a skip scan (WHERE b!=0 is another one I think). This leads to incorrect query results.\n\nRight\n\n> This patch would need some logic in the planner to never choose the skip scan in these cases. Better long-term solution is to adapt the rest of the executor to work correctly in the cases of external filters (this ties in with the previous visibility discussion as well, as that's basically also an external filter, although a special case).\n\nI agree\n\n> In the patch I posted a week ago these cases are all handled correctly, as it introduces this extra logic in the Executor.\n\nOkay, So I think we can merge those fixes in Dmitry's patch set.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Apr 2020 13:38:48 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Mon, Apr 6, 2020 at 1:14 PM Floris Van Nee <florisvannee@optiver.com> wrote:\n> >\n> > There's a number of ways we can force a 'filter' rather than an\n> > 'index condition'.\n\nHm, I wasn't aware about this one, thanks for bringing this up. Btw,\nFloris, I would appreciate if in the future you can make it more visible\nthat changes you suggest contain some fixes. E.g. it wasn't clear for me\nfrom your previous email that that's the case, and it doesn't make sense\nto pull into different direction when we're trying to achieve the same\ngoal :)\n\n> > In the patch I posted a week ago these cases are all handled\n> > correctly, as it introduces this extra logic in the Executor.\n>\n> Okay, So I think we can merge those fixes in Dmitry's patch set.\n\nI'll definitely take a look at suggested changes in filtering part.\n\n\n",
"msg_date": "Mon, 6 Apr 2020 18:39:41 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "\n> \n> Hm, I wasn't aware about this one, thanks for bringing this up. Btw, Floris, I\n> would appreciate if in the future you can make it more visible that changes you\n> suggest contain some fixes. E.g. it wasn't clear for me from your previous email\n> that that's the case, and it doesn't make sense to pull into different direction\n> when we're trying to achieve the same goal :)\n\nI wasn't aware that this particular case could be triggered before I saw Dilip's email, otherwise I'd have mentioned it here of course. It's just that because my patch handles filter conditions in general, it works for this case too.\n\n> \n> > > In the patch I posted a week ago these cases are all handled\n> > > correctly, as it introduces this extra logic in the Executor.\n> >\n> > Okay, So I think we can merge those fixes in Dmitry's patch set.\n> \n> I'll definitely take a look at suggested changes in filtering part.\n\nIt may be possible to just merge the filtering part into your patch, but I'm not entirely sure. Basically you have to pull the information about skipping one level up, out of the node, into the generic IndexNext code. \n\nI'm eager to get some form of skip scans into master - any kind of patch that makes this possible is fine by me. Long term I think my version provides a more generic approach, with which we can optimize a much broader range of queries. However, since many more eyes have seen your patch so far, I hope yours can be committed much sooner. My knowledge on this committer process is limited though. That's why I've just posted mine so far in the hope of collecting some feedback, also on how we should continue with the process.\n\n-Floris\n\n\n\n\n",
"msg_date": "Mon, 6 Apr 2020 18:31:08 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
{
"msg_contents": "> On Mon, Apr 06, 2020 at 06:31:08PM +0000, Floris Van Nee wrote:\n>\n> > Hm, I wasn't aware about this one, thanks for bringing this up. Btw, Floris, I\n> > would appreciate if in the future you can make it more visible that changes you\n> > suggest contain some fixes. E.g. it wasn't clear for me from your previous email\n> > that that's the case, and it doesn't make sense to pull into different direction\n> > when we're trying to achieve the same goal :)\n>\n> I wasn't aware that this particular case could be triggered before I saw Dilip's email, otherwise I'd have mentioned it here of course. It's just that because my patch handles filter conditions in general, it works for this case too.\n\nOh, then fortunately I've got a wrong impression, sorry and thanks for\nclarification :)\n\n> > > > In the patch I posted a week ago these cases are all handled\n> > > > correctly, as it introduces this extra logic in the Executor.\n> > >\n> > > Okay, So I think we can merge those fixes in Dmitry's patch set.\n> >\n> > I'll definitely take a look at suggested changes in filtering part.\n>\n> It may be possible to just merge the filtering part into your patch, but I'm not entirely sure. Basically you have to pull the information about skipping one level up, out of the node, into the generic IndexNext code.\n\nI was actually thinking more about just preventing skip scan in this\nsituation, which is if I'm not mistaken could be solved by inspecting\nqual conditions to figure out if they're covered in the index -\nsomething like in attachments (this implementation is actually too\nrestrictive in this sense and will not allow e.g. expressions, that's\nwhy I haven't bumped patch set version for it - soon I'll post an\nextended version).\n\nOther than that to summarize current open points for future readers\n(this thread somehow became quite big):\n\n* Making UniqueKeys usage more generic to allow using skip scan for more\n use cases (hopefully it was covered by the v33, but I still need a\n confirmation from David, like blinking twice or something).\n\n* Suspicious performance difference between different type of workload,\n mentioned by Tomas (unfortunately I had no chance yet to investigate).\n\n* Thinking about supporting conditions, that are not covered by the index,\n to make skipping more flexible (one of the potential next steps in the\n future, as suggested by Floris).",
"msg_date": "Tue, 7 Apr 2020 21:42:20 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> \n> * Suspicious performance difference between different type of workload,\n> mentioned by Tomas (unfortunately I had no chance yet to investigate).\n> \n\nHis benchmark results indeed most likely point to multiple comparisons being done. Since the most likely place where these occur is _bt_readpage, I suspect this is called multiple times. Looking at your patch, I think that's indeed the case. For example, suppose a page contains [1,2,3,4,5] and the planner makes a complete misestimation and chooses a skip scan here. First call to _bt_readpage will compare every tuple on the page already and store everything in the workspace, which will now contain [1,2,3,4,5]. However, when a skip is done the elements on the page (not the workspace) are compared to find the next one. Then, another _bt_readpage is done, starting at the new offnum. So we'll compare every tuple (except 1) on the page again. Workspace now contains [2,3,4,5]. Next tuple we'll end up with [3,4,5] etc. So tuple 5 actually gets compared 5 times in _bt_readpage alone.\n\n-Floris\n\n\n\n",
"msg_date": "Tue, 7 Apr 2020 20:19:08 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Index Skip Scan"
},
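Floris's comparison-count analysis above can be sketched with a toy model. This is an assumption-laden simplification (not the actual `_bt_readpage` code): it only counts one key comparison per tuple from the entry offset to the end of the page, one `_bt_readpage` call per skip, for the `[1,2,3,4,5]` page in the example:

```python
# Simplified model (not the real nbtree code) of the behaviour described
# above: each skip re-enters _bt_readpage at the next offset, so every
# remaining tuple on the page is compared again.

def readpage_comparisons(page, start_offset):
    # model: _bt_readpage compares each tuple from start_offset to page end
    return len(page) - start_offset

page = [1, 2, 3, 4, 5]
total = 0
for offnum in range(len(page)):      # one _bt_readpage call per skip
    total += readpage_comparisons(page, offnum)

print(total)   # 15 = 5 + 4 + 3 + 2 + 1; the last tuple is compared 5 times
```

Under this model the page's last tuple participates in every one of the five calls, matching the "tuple 5 actually gets compared 5 times" observation, and the total work per page is quadratic rather than linear in the number of tuples.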
{
"msg_contents": "On Wed, Apr 8, 2020 at 1:10 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Mon, Apr 06, 2020 at 06:31:08PM +0000, Floris Van Nee wrote:\n> >\n> > > Hm, I wasn't aware about this one, thanks for bringing this up. Btw, Floris, I\n> > > would appreciate if in the future you can make it more visible that changes you\n> > > suggest contain some fixes. E.g. it wasn't clear for me from your previous email\n> > > that that's the case, and it doesn't make sense to pull into different direction\n> > > when we're trying to achieve the same goal :)\n> >\n> > I wasn't aware that this particular case could be triggered before I saw Dilip's email, otherwise I'd have mentioned it here of course. It's just that because my patch handles filter conditions in general, it works for this case too.\n>\n> Oh, then fortunately I've got a wrong impression, sorry and thanks for\n> clarification :)\n>\n> > > > > In the patch I posted a week ago these cases are all handled\n> > > > > correctly, as it introduces this extra logic in the Executor.\n> > > >\n> > > > Okay, So I think we can merge those fixes in Dmitry's patch set.\n> > >\n> > > I'll definitely take a look at suggested changes in filtering part.\n> >\n> > It may be possible to just merge the filtering part into your patch, but I'm not entirely sure. Basically you have to pull the information about skipping one level up, out of the node, into the generic IndexNext code.\n>\n> I was actually thinking more about just preventing skip scan in this\n> situation, which is if I'm not mistaken could be solved by inspecting\n> qual conditions to figure out if they're covered in the index -\n> something like in attachments (this implementation is actually too\n> restrictive in this sense and will not allow e.g. expressions, that's\n> why I haven't bumped patch set version for it - soon I'll post an\n> extended version).\n\nSome more comments...\n\n+ so->skipScanKey->nextkey = ScanDirectionIsForward(dir);\n+ _bt_update_skip_scankeys(scan, indexRel);\n+\n.......\n+ /*\n+ * We haven't found scan key within the current page, so let's scan from\n+ * the root. Use _bt_search and _bt_binsrch to get the buffer and offset\n+ * number\n+ */\n+ so->skipScanKey->nextkey = ScanDirectionIsForward(dir);\n+ stack = _bt_search(scan->indexRelation, so->skipScanKey,\n+ &buf, BT_READ, scan->xs_snapshot);\n\nWhy do we need to set so->skipScanKey->nextkey =\nScanDirectionIsForward(dir); multiple times? I think it is fine to\njust\nset it once?\n\n+static inline bool\n+_bt_scankey_within_page(IndexScanDesc scan, BTScanInsert key,\n+ Buffer buf, ScanDirection dir)\n+{\n+ OffsetNumber low, high;\n+ Page page = BufferGetPage(buf);\n+ BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);\n+\n+ low = P_FIRSTDATAKEY(opaque);\n+ high = PageGetMaxOffsetNumber(page);\n+\n+ if (unlikely(high < low))\n+ return false;\n+\n+ return (_bt_compare(scan->indexRelation, key, page, low) > 0 &&\n+ _bt_compare(scan->indexRelation, key, page, high) < 1);\n+}\n\nI think the high key condition should be changed to\n_bt_compare(scan->indexRelation, key, page, high) < 0 ? Because if\nprefix qual is equal to the high key then also\nthere is no point in searching on the current page so we can directly skip.\n\n\n+ /* Check if an index skip scan is possible. */\n+ can_skip = enable_indexskipscan & index->amcanskip;\n+\n+ /*\n+ * Skip scan is not supported when there are qual conditions, which are not\n+ * covered by index. The reason for that is that those conditions are\n+ * evaluated later, already after skipping was applied.\n+ *\n+ * TODO: This implementation is too restrictive, and doesn't allow e.g.\n+ * index expressions. For that we need to examine index_clauses too.\n+ */\n+ if (root->parse->jointree != NULL)\n+ {\n+ ListCell *lc;\n+\n+ foreach(lc, (List *)root->parse->jointree->quals)\n+ {\n+ Node *expr, *qual = (Node *) lfirst(lc);\n+ Var *var;\n\nI think we can avoid checking for expression if can_skip is already false.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 11 Apr 2020 15:17:25 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
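The `< 1` versus `< 0` boundary in the `_bt_scankey_within_page` snippet quoted above can be shown with a toy model. This reduces `_bt_compare` to a plain integer comparison and a page to a sorted list, which is a deliberate simplification of the real C code:

```python
# Toy integer-key model (assumption: _bt_compare reduced to cmp on ints)
# of the _bt_scankey_within_page boundary condition discussed above.

def cmp(a, b):
    return (a > b) - (a < b)

def within_page(key, page, strict_high):
    """Mimic the patch's check: key > first data key, and key vs. high key
    compared either '< 1' (patch as posted) or '< 0' (Dilip's suggestion)."""
    low, high = page[0], page[-1]
    if high < low:
        return False
    upper_ok = cmp(key, high) < 0 if strict_high else cmp(key, high) < 1
    return cmp(key, low) > 0 and upper_ok

page = [10, 20, 30]                              # 30 plays the role of the high key
print(within_page(30, page, strict_high=False))  # True:  '< 1' says scan this page
print(within_page(30, page, strict_high=True))   # False: '< 0' skips it directly
print(within_page(20, page, strict_high=True))   # True:  interior keys unaffected
```

With `< 1`, a scan key equal to the high key is reported as "within page", so the page gets scanned even though it cannot yield a key greater than the one already returned; with `< 0` the scan would jump straight past it, which is the cost Dilip suggests avoiding.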
{
"msg_contents": "On Wed, 8 Apr 2020 at 07:40, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Other than that to summarize current open points for future readers\n> (this thread somehow became quite big):\n>\n> * Making UniqueKeys usage more generic to allow using skip scan for more\n> use cases (hopefully it was covered by the v33, but I still need a\n> confirmation from David, like blinking twice or something).\n\nI've not yet looked at the latest patch, but I did put some thoughts\ninto an email on the other thread that's been discussing UniqueKeys\n[1].\n\nI'm keen to hear thoughts on the plan I mentioned over there. Likely\nit would be best to discuss the specifics of what additional features\nwe need to add to UniqueKeys for skip scans over here, but discuss any\nchances which affect both patches over there. We certainly can't have\ntwo separate implementations of UniqueKeys, so I believe the skip\nscans UniqueKeys patch should most likely be based on the one in [1]\nor some descendant of it.\n\n[1] https://www.postgresql.org/message-id/CAApHDvpx1qED1uLqubcKJ=oHatCMd7pTUKkdq0B72_08nbR3Hw@mail.gmail.com\n\n\n",
"msg_date": "Tue, 14 Apr 2020 21:19:22 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "Sorry for late reply.\n\n> On Tue, Apr 14, 2020 at 09:19:22PM +1200, David Rowley wrote:\n>\n> I've not yet looked at the latest patch, but I did put some thoughts\n> into an email on the other thread that's been discussing UniqueKeys\n> [1].\n>\n> I'm keen to hear thoughts on the plan I mentioned over there. Likely\n> it would be best to discuss the specifics of what additional features\n> we need to add to UniqueKeys for skip scans over here, but discuss any\n> chances which affect both patches over there. We certainly can't have\n> two separate implementations of UniqueKeys, so I believe the skip\n> scans UniqueKeys patch should most likely be based on the one in [1]\n> or some descendant of it.\n>\n> [1] https://www.postgresql.org/message-id/CAApHDvpx1qED1uLqubcKJ=oHatCMd7pTUKkdq0B72_08nbR3Hw@mail.gmail.com\n\nYes, I've come to the same conclusion, although I have my concerns about\nhaving such a dependency between patches. Will look at the suggested\npatches soon.\n\n\n",
"msg_date": "Sun, 10 May 2020 19:36:58 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Sat, Apr 11, 2020 at 03:17:25PM +0530, Dilip Kumar wrote:\n>\n> Some more comments...\n\nThanks for reviewing. Since this patch took much longer than I expected,\nit's useful to have someone to look at it with a \"fresh eyes\".\n\n> + so->skipScanKey->nextkey = ScanDirectionIsForward(dir);\n> + _bt_update_skip_scankeys(scan, indexRel);\n> +\n> .......\n> + /*\n> + * We haven't found scan key within the current page, so let's scan from\n> + * the root. Use _bt_search and _bt_binsrch to get the buffer and offset\n> + * number\n> + */\n> + so->skipScanKey->nextkey = ScanDirectionIsForward(dir);\n> + stack = _bt_search(scan->indexRelation, so->skipScanKey,\n> + &buf, BT_READ, scan->xs_snapshot);\n>\n> Why do we need to set so->skipScanKey->nextkey =\n> ScanDirectionIsForward(dir); multiple times? I think it is fine to\n> just set it once?\n\nI believe it was necessary for previous implementations, but in the\ncurrent version we can avoid this, you're right.\n\n> +static inline bool\n> +_bt_scankey_within_page(IndexScanDesc scan, BTScanInsert key,\n> + Buffer buf, ScanDirection dir)\n> +{\n> + OffsetNumber low, high;\n> + Page page = BufferGetPage(buf);\n> + BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);\n> +\n> + low = P_FIRSTDATAKEY(opaque);\n> + high = PageGetMaxOffsetNumber(page);\n> +\n> + if (unlikely(high < low))\n> + return false;\n> +\n> + return (_bt_compare(scan->indexRelation, key, page, low) > 0 &&\n> + _bt_compare(scan->indexRelation, key, page, high) < 1);\n> +}\n>\n> I think the high key condition should be changed to\n> _bt_compare(scan->indexRelation, key, page, high) < 0 ? Because if\n> prefix qual is equal to the high key then also there is no point in\n> searching on the current page so we can directly skip.\n\n From nbtree/README and comments to functions like _bt_split I've got an\nimpression that the high key could be equal to the last item on the leaf\npage, so there is a point in searching. Is that incorrect?\n\n> + /* Check if an index skip scan is possible. */\n> + can_skip = enable_indexskipscan & index->amcanskip;\n> +\n> + /*\n> + * Skip scan is not supported when there are qual conditions, which are not\n> + * covered by index. The reason for that is that those conditions are\n> + * evaluated later, already after skipping was applied.\n> + *\n> + * TODO: This implementation is too restrictive, and doesn't allow e.g.\n> + * index expressions. For that we need to examine index_clauses too.\n> + */\n> + if (root->parse->jointree != NULL)\n> + {\n> + ListCell *lc;\n> +\n> + foreach(lc, (List *)root->parse->jointree->quals)\n> + {\n> + Node *expr, *qual = (Node *) lfirst(lc);\n> + Var *var;\n>\n> I think we can avoid checking for expression if can_skip is already false.\n\nYes, that makes sense. I'll include your suggestions into the next\nrebased version I'm preparing.\n\n\n",
"msg_date": "Sun, 10 May 2020 19:49:35 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Sun, May 10, 2020 at 11:17 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Sat, Apr 11, 2020 at 03:17:25PM +0530, Dilip Kumar wrote:\n> >\n> > Some more comments...\n>\n> Thanks for reviewing. Since this patch took much longer than I expected,\n> it's useful to have someone to look at it with a \"fresh eyes\".\n>\n> > + so->skipScanKey->nextkey = ScanDirectionIsForward(dir);\n> > + _bt_update_skip_scankeys(scan, indexRel);\n> > +\n> > .......\n> > + /*\n> > + * We haven't found scan key within the current page, so let's scan from\n> > + * the root. Use _bt_search and _bt_binsrch to get the buffer and offset\n> > + * number\n> > + */\n> > + so->skipScanKey->nextkey = ScanDirectionIsForward(dir);\n> > + stack = _bt_search(scan->indexRelation, so->skipScanKey,\n> > + &buf, BT_READ, scan->xs_snapshot);\n> >\n> > Why do we need to set so->skipScanKey->nextkey =\n> > ScanDirectionIsForward(dir); multiple times? I think it is fine to\n> > just set it once?\n>\n> I believe it was necessary for previous implementations, but in the\n> current version we can avoid this, you're right.\n>\n> > +static inline bool\n> > +_bt_scankey_within_page(IndexScanDesc scan, BTScanInsert key,\n> > + Buffer buf, ScanDirection dir)\n> > +{\n> > + OffsetNumber low, high;\n> > + Page page = BufferGetPage(buf);\n> > + BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);\n> > +\n> > + low = P_FIRSTDATAKEY(opaque);\n> > + high = PageGetMaxOffsetNumber(page);\n> > +\n> > + if (unlikely(high < low))\n> > + return false;\n> > +\n> > + return (_bt_compare(scan->indexRelation, key, page, low) > 0 &&\n> > + _bt_compare(scan->indexRelation, key, page, high) < 1);\n> > +}\n> >\n> > I think the high key condition should be changed to\n> > _bt_compare(scan->indexRelation, key, page, high) < 0 ? Because if\n> > prefix qual is equal to the high key then also there is no point in\n> > searching on the current page so we can directly skip.\n> >\n> > From nbtree/README and comments to functions like _bt_split I've got an\n> > impression that the high key could be equal to the last item on the leaf\n> > page, so there is a point in searching. Is that incorrect?\n\nBut IIUC, here we want to decide whether we will get the next key in\nthe current page or not? Is my understanding is correct? So if our\nkey (the last tuple key) is equal to the high key means the max key on\nthis page is the same as what we already got in the last tuple so why\nwould we want to go on this page? because this will not give us the\nnew key. So ideally, we should only be looking into this page if our\nlast tuple key is smaller than the high key. Am I missing something?\n\n>\n> > + /* Check if an index skip scan is possible. */\n> > + can_skip = enable_indexskipscan & index->amcanskip;\n> > +\n> > + /*\n> > + * Skip scan is not supported when there are qual conditions, which are not\n> > + * covered by index. The reason for that is that those conditions are\n> > + * evaluated later, already after skipping was applied.\n> > + *\n> > + * TODO: This implementation is too restrictive, and doesn't allow e.g.\n> > + * index expressions. For that we need to examine index_clauses too.\n> > + */\n> > + if (root->parse->jointree != NULL)\n> > + {\n> > + ListCell *lc;\n> > +\n> > + foreach(lc, (List *)root->parse->jointree->quals)\n> > + {\n> > + Node *expr, *qual = (Node *) lfirst(lc);\n> > + Var *var;\n> >\n> > I think we can avoid checking for expression if can_skip is already false.\n>\n> Yes, that makes sense. I'll include your suggestions into the next\n> rebased version I'm preparing.\n\nOk.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 May 2020 16:04:00 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Mon, May 11, 2020 at 04:04:00PM +0530, Dilip Kumar wrote:\n>\n> > > +static inline bool\n> > > +_bt_scankey_within_page(IndexScanDesc scan, BTScanInsert key,\n> > > + Buffer buf, ScanDirection dir)\n> > > +{\n> > > + OffsetNumber low, high;\n> > > + Page page = BufferGetPage(buf);\n> > > + BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);\n> > > +\n> > > + low = P_FIRSTDATAKEY(opaque);\n> > > + high = PageGetMaxOffsetNumber(page);\n> > > +\n> > > + if (unlikely(high < low))\n> > > + return false;\n> > > +\n> > > + return (_bt_compare(scan->indexRelation, key, page, low) > 0 &&\n> > > + _bt_compare(scan->indexRelation, key, page, high) < 1);\n> > > +}\n> > >\n> > > I think the high key condition should be changed to\n> > > _bt_compare(scan->indexRelation, key, page, high) < 0 ? Because if\n> > > prefix qual is equal to the high key then also there is no point in\n> > > searching on the current page so we can directly skip.\n> >\n> > From nbtree/README and comments to functions like _bt_split I've got an\n> > impression that the high key could be equal to the last item on the leaf\n> > page, so there is a point in searching. Is that incorrect?\n>\n> But IIUC, here we want to decide whether we will get the next key in\n> the current page or not?\n\nIn general this function does what it says, it checks wether or not the\nprovided scankey could be found within the page. All the logic about\nfinding a proper next key to fetch is implemented on the call site, and\nwithin this function we want to test whatever was passed in. Does it\nanswer the question?\n\n\n",
"msg_date": "Mon, 11 May 2020 13:26:55 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Mon, May 11, 2020 at 4:55 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Mon, May 11, 2020 at 04:04:00PM +0530, Dilip Kumar wrote:\n> >\n> > > > +static inline bool\n> > > > +_bt_scankey_within_page(IndexScanDesc scan, BTScanInsert key,\n> > > > + Buffer buf, ScanDirection dir)\n> > > > +{\n> > > > + OffsetNumber low, high;\n> > > > + Page page = BufferGetPage(buf);\n> > > > + BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);\n> > > > +\n> > > > + low = P_FIRSTDATAKEY(opaque);\n> > > > + high = PageGetMaxOffsetNumber(page);\n> > > > +\n> > > > + if (unlikely(high < low))\n> > > > + return false;\n> > > > +\n> > > > + return (_bt_compare(scan->indexRelation, key, page, low) > 0 &&\n> > > > + _bt_compare(scan->indexRelation, key, page, high) < 1);\n> > > > +}\n> > > >\n> > > > I think the high key condition should be changed to\n> > > > _bt_compare(scan->indexRelation, key, page, high) < 0 ? Because if\n> > > > prefix qual is equal to the high key then also there is no point in\n> > > > searching on the current page so we can directly skip.\n> > >\n> > > From nbtree/README and comments to functions like _bt_split I've got an\n> > > impression that the high key could be equal to the last item on the leaf\n> > > page, so there is a point in searching. Is that incorrect?\n> >\n> > But IIUC, here we want to decide whether we will get the next key in\n> > the current page or not?\n>\n> In general this function does what it says, it checks wether or not the\n> provided scankey could be found within the page. All the logic about\n> finding a proper next key to fetch is implemented on the call site, and\n> within this function we want to test whatever was passed in. Does it\n> answer the question?\n\nOk, I agree that the function is doing what it is expected to do.\nBut, then I have a problem with this call site.\n\n+ /* Check if the next unique key can be found within the current page.\n+ * Since we do not lock the current page between jumps, it's possible\n+ * that it was splitted since the last time we saw it. This is fine in\n+ * case of scanning forward, since page split to the right and we are\n+ * still on the left most page. In case of scanning backwards it's\n+ * possible to loose some pages and we need to remember the previous\n+ * page, and then follow the right link from the current page until we\n+ * find the original one.\n+ *\n+ * Since the whole idea of checking the current page is to protect\n+ * ourselves and make more performant statistic mismatch case when\n+ * there are too many distinct values for jumping, it's not clear if\n+ * the complexity of this solution in case of backward scan is\n+ * justified, so for now just avoid it.\n+ */\n+ if (BufferIsValid(so->currPos.buf) && ScanDirectionIsForward(dir))\n+ {\n+ LockBuffer(so->currPos.buf, BT_READ);\n+\n+ if (_bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir))\n+ {\n\nHere we expect whether the \"next\" unique key can be found on this page\nor not, but we are using the function which suggested whether the\n\"current\" key can be found on this page or not. I think in boundary\ncases where the high key is equal to the current key, this function\nwill return true (which is expected from this function), and based on\nthat we will simply scan the current page and IMHO that cost could be\navoided no?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 May 2020 14:37:21 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 5:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> You can add another assertion that calls a new utility function in\n> bufmgr.c. That can use the same logic as this existing assertion in\n> FlushOneBuffer():\n>\n> Assert(LWLockHeldByMe(BufferDescriptorGetContentLock(bufHdr)));\n>\n> We haven't needed assertions like this so far because it's usually it\n> is clear whether or not a buffer lock is held (plus the bufmgr.c\n> assertions help on their own).\n\nJust in case anybody missed it, I am working on a patch that makes\nnbtree use Valgrind instrumentation to detect page accessed without a\nbuffer content lock held:\n\nhttps://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com\n\nThere is also one component that detects when any buffer is accessed\nwithout a buffer pin.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 May 2020 15:55:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Wed, May 13, 2020 at 02:37:21PM +0530, Dilip Kumar wrote:\n>\n> + if (_bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir))\n> + {\n>\n> Here we expect whether the \"next\" unique key can be found on this page\n> or not, but we are using the function which suggested whether the\n> \"current\" key can be found on this page or not. I think in boundary\n> cases where the high key is equal to the current key, this function\n> will return true (which is expected from this function), and based on\n> that we will simply scan the current page and IMHO that cost could be\n> avoided no?\n\nYes, looks like you're right, there is indeed an unecessary extra scan\nhappening. To avoid that we can see the key->nextkey and adjust higher\nboundary correspondingly. Will also add this into the next rebased\npatch, thanks!\n\n\n",
"msg_date": "Fri, 15 May 2020 14:38:02 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Fri, 15 May 2020 at 6:06 PM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Wed, May 13, 2020 at 02:37:21PM +0530, Dilip Kumar wrote:\n> >\n> > + if (_bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf,\n> dir))\n> > + {\n> >\n> > Here we expect whether the \"next\" unique key can be found on this page\n> > or not, but we are using the function which suggested whether the\n> > \"current\" key can be found on this page or not. I think in boundary\n> > cases where the high key is equal to the current key, this function\n> > will return true (which is expected from this function), and based on\n> > that we will simply scan the current page and IMHO that cost could be\n> > avoided no?\n>\n> Yes, looks like you're right, there is indeed an unecessary extra scan\n> happening. To avoid that we can see the key->nextkey and adjust higher\n> boundary correspondingly. Will also add this into the next rebased\n> patch, thanks!\n\n\nGreat thanks!\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, 15 May 2020 at 6:06 PM, Dmitry Dolgov <9erthalion6@gmail.com> wrote:> On Wed, May 13, 2020 at 02:37:21PM +0530, Dilip Kumar wrote:\n>\n> + if (_bt_scankey_within_page(scan, so->skipScanKey, so->currPos.buf, dir))\n> + {\n>\n> Here we expect whether the \"next\" unique key can be found on this page\n> or not, but we are using the function which suggested whether the\n> \"current\" key can be found on this page or not. I think in boundary\n> cases where the high key is equal to the current key, this function\n> will return true (which is expected from this function), and based on\n> that we will simply scan the current page and IMHO that cost could be\n> avoided no?\n\nYes, looks like you're right, there is indeed an unecessary extra scan\nhappening. To avoid that we can see the key->nextkey and adjust higher\nboundary correspondingly. 
Will also add this into the next rebased\npatch, thanks!Great thanks!-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 16 May 2020 09:55:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 3:41 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Mon, Apr 06, 2020 at 06:31:08PM +0000, Floris Van Nee wrote:\n> >\n> > > Hm, I wasn't aware about this one, thanks for bringing this up. Btw,\n> Floris, I\n> > > would appreciate if in the future you can make it more visible that\n> changes you\n> > > suggest contain some fixes. E.g. it wasn't clear for me from your\n> previous email\n> > > that that's the case, and it doesn't make sense to pull into different\n> direction\n> > > when we're trying to achieve the same goal :)\n> >\n> > I wasn't aware that this particular case could be triggered before I saw\n> Dilip's email, otherwise I'd have mentioned it here of course. It's just\n> that because my patch handles filter conditions in general, it works for\n> this case too.\n>\n> Oh, then fortunately I've got a wrong impression, sorry and thanks for\n> clarification :)\n>\n> > > > > In the patch I posted a week ago these cases are all handled\n> > > > > correctly, as it introduces this extra logic in the Executor.\n> > > >\n> > > > Okay, So I think we can merge those fixes in Dmitry's patch set.\n> > >\n> > > I'll definitely take a look at suggested changes in filtering part.\n> >\n> > It may be possible to just merge the filtering part into your patch, but\n> I'm not entirely sure. Basically you have to pull the information about\n> skipping one level up, out of the node, into the generic IndexNext code.\n>\n> I was actually thinking more about just preventing skip scan in this\n> situation, which is if I'm not mistaken could be solved by inspecting\n> qual conditions to figure out if they're covered in the index -\n> something like in attachments (this implementation is actually too\n> restrictive in this sense and will not allow e.g. 
expressions, that's\n> why I haven't bumped patch set version for it - soon I'll post an\n> extended version).\n>\n> Other than that to summarize current open points for future readers\n> (this thread somehow became quite big):\n>\n> * Making UniqueKeys usage more generic to allow using skip scan for more\n> use cases (hopefully it was covered by the v33, but I still need a\n> confirmation from David, like blinking twice or something).\n>\n> * Suspicious performance difference between different type of workload,\n> mentioned by Tomas (unfortunately I had no chance yet to investigate).\n>\n> * Thinking about supporting conditions, that are not covered by the index,\n> to make skipping more flexible (one of the potential next steps in the\n> future, as suggested by Floris).\n>\n\nLooks this is the latest patch, which commit it is based on? Thanks\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Apr 8, 2020 at 3:41 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:> On Mon, Apr 06, 2020 at 06:31:08PM +0000, Floris Van Nee wrote:\n>\n> > Hm, I wasn't aware about this one, thanks for bringing this up. Btw, Floris, I\n> > would appreciate if in the future you can make it more visible that changes you\n> > suggest contain some fixes. E.g. it wasn't clear for me from your previous email\n> > that that's the case, and it doesn't make sense to pull into different direction\n> > when we're trying to achieve the same goal :)\n>\n> I wasn't aware that this particular case could be triggered before I saw Dilip's email, otherwise I'd have mentioned it here of course. 
It's just that because my patch handles filter conditions in general, it works for this case too.\n\nOh, then fortunately I've got a wrong impression, sorry and thanks for\nclarification :)\n\n> > > > In the patch I posted a week ago these cases are all handled\n> > > > correctly, as it introduces this extra logic in the Executor.\n> > >\n> > > Okay, So I think we can merge those fixes in Dmitry's patch set.\n> >\n> > I'll definitely take a look at suggested changes in filtering part.\n>\n> It may be possible to just merge the filtering part into your patch, but I'm not entirely sure. Basically you have to pull the information about skipping one level up, out of the node, into the generic IndexNext code.\n\nI was actually thinking more about just preventing skip scan in this\nsituation, which is if I'm not mistaken could be solved by inspecting\nqual conditions to figure out if they're covered in the index -\nsomething like in attachments (this implementation is actually too\nrestrictive in this sense and will not allow e.g. expressions, that's\nwhy I haven't bumped patch set version for it - soon I'll post an\nextended version).\n\nOther than that to summarize current open points for future readers\n(this thread somehow became quite big):\n\n* Making UniqueKeys usage more generic to allow using skip scan for more\n use cases (hopefully it was covered by the v33, but I still need a\n confirmation from David, like blinking twice or something).\n\n* Suspicious performance difference between different type of workload,\n mentioned by Tomas (unfortunately I had no chance yet to investigate).\n\n* Thinking about supporting conditions, that are not covered by the index,\n to make skipping more flexible (one of the potential next steps in the\n future, as suggested by Floris).\nLooks this is the latest patch, which commit it is based on? Thanks-- Best RegardsAndy Fan",
"msg_date": "Tue, 2 Jun 2020 20:36:31 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "> On Tue, Jun 02, 2020 at 08:36:31PM +0800, Andy Fan wrote:\n>\n> > Other than that to summarize current open points for future readers\n> > (this thread somehow became quite big):\n> >\n> > * Making UniqueKeys usage more generic to allow using skip scan for more\n> > use cases (hopefully it was covered by the v33, but I still need a\n> > confirmation from David, like blinking twice or something).\n> >\n> > * Suspicious performance difference between different type of workload,\n> > mentioned by Tomas (unfortunately I had no chance yet to investigate).\n> >\n> > * Thinking about supporting conditions, that are not covered by the index,\n> > to make skipping more flexible (one of the potential next steps in the\n> > future, as suggested by Floris).\n> >\n>\n> Looks this is the latest patch, which commit it is based on? Thanks\n\nI have a rebased version, if you're about it. Didn't posted it yet\nmostly since I'm in the middle of adapting it to the UniqueKeys from\nother thread. Would it be ok for you to wait a bit until I'll post\nfinished version?\n\n\n",
"msg_date": "Tue, 2 Jun 2020 15:40:06 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
},
{
"msg_contents": "On Tue, Jun 2, 2020 at 9:38 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Tue, Jun 02, 2020 at 08:36:31PM +0800, Andy Fan wrote:\n> >\n> > > Other than that to summarize current open points for future readers\n> > > (this thread somehow became quite big):\n> > >\n> > > * Making UniqueKeys usage more generic to allow using skip scan for\n> more\n> > > use cases (hopefully it was covered by the v33, but I still need a\n> > > confirmation from David, like blinking twice or something).\n> > >\n> > > * Suspicious performance difference between different type of workload,\n> > > mentioned by Tomas (unfortunately I had no chance yet to\n> investigate).\n> > >\n> > > * Thinking about supporting conditions, that are not covered by the\n> index,\n> > > to make skipping more flexible (one of the potential next steps in\n> the\n> > > future, as suggested by Floris).\n> > >\n> >\n> > Looks this is the latest patch, which commit it is based on? Thanks\n>\n> I have a rebased version, if you're about it. Didn't posted it yet\n> mostly since I'm in the middle of adapting it to the UniqueKeys from\n> other thread. Would it be ok for you to wait a bit until I'll post\n> finished version?\n>\n\nSure, that's OK. 
The discussion on UniqueKey thread looks more complex\nthan what I expected, that's why I want to check the code here, but that's\nfine,\nyou can work on your schedule.\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Jun 2, 2020 at 9:38 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:> On Tue, Jun 02, 2020 at 08:36:31PM +0800, Andy Fan wrote:\n>\n> > Other than that to summarize current open points for future readers\n> > (this thread somehow became quite big):\n> >\n> > * Making UniqueKeys usage more generic to allow using skip scan for more\n> > use cases (hopefully it was covered by the v33, but I still need a\n> > confirmation from David, like blinking twice or something).\n> >\n> > * Suspicious performance difference between different type of workload,\n> > mentioned by Tomas (unfortunately I had no chance yet to investigate).\n> >\n> > * Thinking about supporting conditions, that are not covered by the index,\n> > to make skipping more flexible (one of the potential next steps in the\n> > future, as suggested by Floris).\n> >\n>\n> Looks this is the latest patch, which commit it is based on? Thanks\n\nI have a rebased version, if you're about it. Didn't posted it yet\nmostly since I'm in the middle of adapting it to the UniqueKeys from\nother thread. Would it be ok for you to wait a bit until I'll post\nfinished version?\nSure, that's OK. The discussion on UniqueKey thread looks more complexthan what I expected, that's why I want to check the code here, but that's fine,you can work on your schedule.-- Best RegardsAndy Fan",
"msg_date": "Wed, 3 Jun 2020 07:17:59 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index Skip Scan"
}
] |
[
{
"msg_contents": "Hi,\n\nI think it's probably not relevant, but it confused me for a moment\nthat RelationBuildTupleDesc() might set constr->has_generated_stored to\ntrue, but then throw away the constraint at the end, because nothing\nmatches the\n\t/*\n\t * Set up constraint/default info\n\t */\n\tif (has_not_null || ndef > 0 ||\n\t\tattrmiss || relation->rd_rel->relchecks)\ntest, i.e. there are no defaults.\n\nA quick assert confirms we do indeed pfree() constr in cases where\nhas_generated_stored == true.\n\nI suspect that's just an intermediate catalog, however, e.g. when\nDefineRelation() does\nheap_create_with_catalog();\nCommandCounterIncrement();\nrelation_open();\nAddRelationNewConstraints().\n\nIt does still strike me as not great that we can get a different\nrelcache entry, even if transient, depending on whether there are other\nreasons to create a TupleConstr. Say a NOT NULL column.\n\nI'm inclined to think we should just also check has_generated_stored in\nthe if quoted above?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Jan 2020 10:11:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "relcache sometimes initially ignores has_generated_stored"
},
{
"msg_contents": "At Wed, 15 Jan 2020 10:11:05 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> I think it's probably not relevant, but it confused me for a moment\n> that RelationBuildTupleDesc() might set constr->has_generated_stored to\n> true, but then throw away the constraint at the end, because nothing\n> matches the\n> \t/*\n> \t * Set up constraint/default info\n> \t */\n> \tif (has_not_null || ndef > 0 ||\n> \t\tattrmiss || relation->rd_rel->relchecks)\n> test, i.e. there are no defaults.\n\nIt was as follows before 16828d5c02.\n\n- if (constr->has_not_null || ndef > 0 ||relation->rd_rel->relchecks)\n\nAt that time TupleConstr has only members defval, check and\nhas_not_null other than subsidiary members. The condition apparently\nchecked all of the members.\n\nThen the commit adds attrmiss to the condition since the corresponding\nmember to TupleConstr.\n\n+\tif (constr->has_not_null || ndef > 0 ||\n+\t\tattrmiss || relation->rd_rel->relchecks)\n\nLater fc22b6623b introduced has_generated_stored to TupleConstr but\ndidn't add the corresponding check.\n\n> A quick assert confirms we do indeed pfree() constr in cases where\n> has_generated_stored == true.\n> \n> I suspect that's just an intermediate catalog, however, e.g. when\n> DefineRelation() does\n> heap_create_with_catalog();\n> CommandCounterIncrement();\n> relation_open();\n> AddRelationNewConstraints().\n> \n> It does still strike me as not great that we can get a different\n> relcache entry, even if transient, depending on whether there are other\n> reasons to create a TupleConstr. Say a NOT NULL column.\n> \n> I'm inclined to think we should just also check has_generated_stored in\n> the if quoted above?\n\nI agree to that. We could have a local boolean \"has_any_constraint\"\nto merge them but it would be an overkill.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Jan 2020 15:10:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: relcache sometimes initially ignores has_generated_stored"
},
{
"msg_contents": "On 2020-01-15 19:11, Andres Freund wrote:\n> \t/*\n> \t * Set up constraint/default info\n> \t */\n> \tif (has_not_null || ndef > 0 ||\n> \t\tattrmiss || relation->rd_rel->relchecks)\n> test, i.e. there are no defaults.\n\n> It does still strike me as not great that we can get a different\n> relcache entry, even if transient, depending on whether there are other\n> reasons to create a TupleConstr. Say a NOT NULL column.\n> \n> I'm inclined to think we should just also check has_generated_stored in\n> the if quoted above?\n\nFixed that way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 6 Feb 2020 21:29:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: relcache sometimes initially ignores has_generated_stored"
},
{
"msg_contents": "On 2020-02-06 21:29:58 +0100, Peter Eisentraut wrote:\n> On 2020-01-15 19:11, Andres Freund wrote:\n> > \t/*\n> > \t * Set up constraint/default info\n> > \t */\n> > \tif (has_not_null || ndef > 0 ||\n> > \t\tattrmiss || relation->rd_rel->relchecks)\n> > test, i.e. there are no defaults.\n> \n> > It does still strike me as not great that we can get a different\n> > relcache entry, even if transient, depending on whether there are other\n> > reasons to create a TupleConstr. Say a NOT NULL column.\n> > \n> > I'm inclined to think we should just also check has_generated_stored in\n> > the if quoted above?\n> \n> Fixed that way.\n\nThanks.\n\n\n",
"msg_date": "Thu, 6 Feb 2020 12:40:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: relcache sometimes initially ignores has_generated_stored"
}
] |
[
{
"msg_contents": "The discussion on the backup manifest thread has gotten bogged down on\nthe issue of the format that should be used to store the backup\nmanifest file. I want something simple and ad-hoc; David Steele and\nStephen Frost prefer JSON. That is problematic because our JSON parser\ndoes not work in frontend code, and I want to be able to validate a\nbackup against its manifest, which involves being able to parse the\nmanifest from frontend code. The latest development over there is that\nDavid Steele has posted the JSON parser that he wrote for pgbackrest\nwith an offer to try to adapt it for use in front-end PostgreSQL code,\nan offer which I genuinely appreciate. I'll write more about that over\non that thread. However, I decided to spend today doing some further\ninvestigation of an alternative approach, namely making the backend's\nexisting JSON parser work in frontend code as well. I did not solve\nall the problems there, but I did come up with some patches which I\nthink would be worth committing on independent grounds, and I think\nthe whole series is worth posting. So here goes.\n\n0001 moves wchar.c from src/backend/utils/mb to src/common. Unless I'm\nmissing something, this seems like an overdue cleanup. It's long been\nthe case that wchar.c is actually compiled and linked into both\nfrontend and backend code. Commit\n60f11b87a2349985230c08616fa8a34ffde934c8 added code into src/common\nthat depends on wchar.c being available, but didn't actually make\nwchar.c part of src/common, which seems like an odd decision: the\nfunctions in the library are dependent on code that is not part of any\nlibrary but whose source files get copied around where needed. Eh?\n\n0002 does some basic header cleanup to make it possible to include the\nexisting header file jsonapi.h in frontend code. The state of the JSON\nheaders today looks generally poor. 
There seems not to have been much\nattempt to get the prototypes for a given source file, say foo.c, into\na header file with the same name, say foo.h. Also, dependencies\nbetween various header files seem to be have added somewhat freely.\nThis patch does not come close to fixing all that, but I consider it a\nmodest down payment on a cleanup that probably ought to be taken\nfurther.\n\n0003 splits json.c into two files, json.c and jsonapi.c. All the\nlexing and parsing stuff (whose prototypes are in jsonapi.h) goes into\njsonapi.c, while the stuff that pertains to the 'json' data type\nremains in json.c. This also seems like a good cleanup, because to me,\nat least, it's not a great idea to mix together code that is used by\nboth the json and jsonb data types as well as other things in the\nsystem that want to generate or parse json together with things that\nare specific to the 'json' data type.\n\nAs far as I know all three of the above patches are committable as-is;\nreview and contrary opinions welcome.\n\nOn the other hand, 0004, 0005, and 0006 are charitably described as\nexperimental or WIP. 0004 and 0005 hack up jsonapi.c so that it can\nstill be compiled even if #include \"postgres.h\" is changed to #include\n\"postgres-fe.h\" and 0006 moves it into src/common. Note that I say\nthat they make it compile, not work. It's not just untested; it's\ndefinitely broken. But it gives a feeling for what the remaining\nobstacles to making this code available in a frontend environment are.\nSince I wrote my very first email complaining about the difficulty of\nmaking the backend's JSON parser work in a frontend environment, one\nobstacle has been knocked down: StringInfo is now available in\nfront-end code (commit 26aaf97b683d6258c098859e6b1268e1f5da242f). 
The\nremaining problems (that I know about) have to do with error reporting\nand multibyte character support; a read of the patches is suggested\nfor those wanting further details.\n\nSuggestions welcome.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 15 Jan 2020 16:02:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... However, I decided to spend today doing some further\n> investigation of an alternative approach, namely making the backend's\n> existing JSON parser work in frontend code as well. I did not solve\n> all the problems there, but I did come up with some patches which I\n> think would be worth committing on independent grounds, and I think\n> the whole series is worth posting. So here goes.\n\nIn general, if we can possibly get to having one JSON parser in\nsrc/common, that seems like an obviously better place to be than\nhaving two JSON parsers. So I'm encouraged that it might be\nfeasible after all.\n\n> 0001 moves wchar.c from src/backend/utils/mb to src/common. Unless I'm\n> missing something, this seems like an overdue cleanup.\n\nFWIW, I've been wanting to do that for awhile. I've not studied\nyour patch, but +1 for the idea. We might also need to take a\nhard look at mbutils.c to see if any of that code can/should move.\n\n> Since I wrote my very first email complaining about the difficulty of\n> making the backend's JSON parser work in a frontend environment, one\n> obstacle has been knocked down: StringInfo is now available in\n> front-end code (commit 26aaf97b683d6258c098859e6b1268e1f5da242f). The\n> remaining problems (that I know about) have to do with error reporting\n> and multibyte character support; a read of the patches is suggested\n> for those wanting further details.\n\nThe patch I just posted at <2863.1579127649@sss.pgh.pa.us> probably\naffects this in small ways, but not anything major.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jan 2020 17:47:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-15 16:02:49 -0500, Robert Haas wrote:\n> The discussion on the backup manifest thread has gotten bogged down on\n> the issue of the format that should be used to store the backup\n> manifest file. I want something simple and ad-hoc; David Steele and\n> Stephen Frost prefer JSON. That is problematic because our JSON parser\n> does not work in frontend code, and I want to be able to validate a\n> backup against its manifest, which involves being able to parse the\n> manifest from frontend code. The latest development over there is that\n> David Steele has posted the JSON parser that he wrote for pgbackrest\n> with an offer to try to adapt it for use in front-end PostgreSQL code,\n> an offer which I genuinely appreciate. I'll write more about that over\n> on that thread.\n\nI'm not sure where I come down between using json and a simple ad-hoc\nformat, when the dependency for the former is making the existing json\nparser work in the frontend. But if the alternative is to add a second\njson parser, it very clearly shifts towards using an ad-hoc\nformat. Having to maintain a simple ad-hoc parser is a lot less\ntechnical debt than having a second full blown json parser. Imo even\nwhen an external project or three also has to have that simple parser.\n\nIf the alternative were to use that newly proposed json parser to\n*replace* the backend one too, the story would again be different.\n\n\n> 0001 moves wchar.c from src/backend/utils/mb to src/common. Unless I'm\n> missing something, this seems like an overdue cleanup. It's long been\n> the case that wchar.c is actually compiled and linked into both\n> frontend and backend code. 
Commit\n> 60f11b87a2349985230c08616fa8a34ffde934c8 added code into src/common\n> that depends on wchar.c being available, but didn't actually make\n> wchar.c part of src/common, which seems like an odd decision: the\n> functions in the library are dependent on code that is not part of any\n> library but whose source files get copied around where needed. Eh?\n\nCool.\n\n\n> 0002 does some basic header cleanup to make it possible to include the\n> existing header file jsonapi.h in frontend code. The state of the JSON\n> headers today looks generally poor. There seems not to have been much\n> attempt to get the prototypes for a given source file, say foo.c, into\n> a header file with the same name, say foo.h. Also, dependencies\n> between various header files seem to be have added somewhat freely.\n> This patch does not come close to fixing all that, but I consider it a\n> modest down payment on a cleanup that probably ought to be taken\n> further.\n\nYea, this seems like a necessary cleanup (or well, maybe the start of\nit).\n\n\n> 0003 splits json.c into two files, json.c and jsonapi.c. All the\n> lexing and parsing stuff (whose prototypes are in jsonapi.h) goes into\n> jsonapi.c, while the stuff that pertains to the 'json' data type\n> remains in json.c. This also seems like a good cleanup, because to me,\n> at least, it's not a great idea to mix together code that is used by\n> both the json and jsonb data types as well as other things in the\n> system that want to generate or parse json together with things that\n> are specific to the 'json' data type.\n\n+1\n\n\n> On the other hand, 0004, 0005, and 0006 are charitably described as\n> experimental or WIP. 0004 and 0005 hack up jsonapi.c so that it can\n> still be compiled even if #include \"postgres.h\" is changed to #include\n> \"postgres-fe.h\" and 0006 moves it into src/common. Note that I say\n> that they make it compile, not work. It's not just untested; it's\n> definitely broken. 
But it gives a feeling for what the remaining\n> obstacles to making this code available in a frontend environment are.\n> Since I wrote my very first email complaining about the difficulty of\n> making the backend's JSON parser work in a frontend environment, one\n> obstacle has been knocked down: StringInfo is now available in\n> front-end code (commit 26aaf97b683d6258c098859e6b1268e1f5da242f). The\n> remaining problems (that I know about) have to do with error reporting\n> and multibyte character support; a read of the patches is suggested\n> for those wanting further details.\n\n> From d05e1fc82a51cb583a0367e72b1afc0de561dd00 Mon Sep 17 00:00:00 2001\n> From: Robert Haas <rhaas@postgresql.org>\n> Date: Wed, 15 Jan 2020 10:36:52 -0500\n> Subject: [PATCH 4/6] Introduce json_error() macro.\n> \n> ---\n> src/backend/utils/adt/jsonapi.c | 221 +++++++++++++-------------------\n> 1 file changed, 90 insertions(+), 131 deletions(-)\n> \n> diff --git a/src/backend/utils/adt/jsonapi.c b/src/backend/utils/adt/jsonapi.c\n> index fc8af9f861..20f7f0f7ac 100644\n> --- a/src/backend/utils/adt/jsonapi.c\n> +++ b/src/backend/utils/adt/jsonapi.c\n> @@ -17,6 +17,9 @@\n> #include \"miscadmin.h\"\n> #include \"utils/jsonapi.h\"\n> \n> +#define json_error(rest) \\\n> +\tereport(ERROR, (rest, report_json_context(lex)))\n> +\n\nIt's not obvious why the better approach here wouldn't be to just have a\nvery simple ereport replacement, that needs to be explicitly included\nfrom frontend code. 
It'd not be meaningfully harder, imo, and it'd\nrequire fewer adaptions, and it'd look more familiar.\n\n\n\n> /* the null action object used for pure validation */\n> @@ -701,7 +735,11 @@ json_lex_string(JsonLexContext *lex)\n> \t\t\t\t\t\tch = (ch * 16) + (*s - 'A') + 10;\n> \t\t\t\t\telse\n> \t\t\t\t\t{\n> +#ifdef FRONTEND\n> +\t\t\t\t\t\tlex->token_terminator = s + PQmblen(s, PG_UTF8);\n> +#else\n> \t\t\t\t\t\tlex->token_terminator = s + pg_mblen(s);\n> +#endif\n\nIf we were to go this way, it seems like the ifdef should rather be in a\nhelper function, rather than all over. It seems like it should be\nunproblematic to have a common interface for both frontend/backend?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Jan 2020 15:40:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 6:40 PM Andres Freund <andres@anarazel.de> wrote:\n> It's not obvious why the better approach here wouldn't be to just have a\n> very simple ereport replacement, that needs to be explicitly included\n> from frontend code. It'd not be meaningfully harder, imo, and it'd\n> require fewer adaptions, and it'd look more familiar.\n\nI agree that it's far from obvious that the hacks in the patch are\nbest; to the contrary, they are hacks. That said, I feel that the\nsemantics of throwing an error are not very well-defined in a\nfront-end environment. I mean, in a backend context, throwing an error\nis going to abort the current transaction, with all that this implies.\nIf the frontend equivalent is to do nothing and hope for the best, I\ndoubt it will survive anything more than the simplest use cases. This\nis one of the reasons I've been very reluctant to go do down this\nwhole path in the first place.\n\n> > +#ifdef FRONTEND\n> > + lex->token_terminator = s + PQmblen(s, PG_UTF8);\n> > +#else\n> > lex->token_terminator = s + pg_mblen(s);\n> > +#endif\n>\n> If we were to go this way, it seems like the ifdef should rather be in a\n> helper function, rather than all over.\n\nSure... like I said, this is just to illustrate the problem.\n\n> It seems like it should be\n> unproblematic to have a common interface for both frontend/backend?\n\nNot sure how. pg_mblen() and PQmblen() are both existing interfaces,\nand they're not compatible with each other. I guess we could make\nPQmblen() available to backend code, but given that the function name\nimplies an origin in libpq, that seems wicked confusing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Jan 2020 21:39:13 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
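[Editor's note: one way to picture the "helper function" suggestion above is a single wrapper that hides the frontend/backend divergence so the lexer body stays identical in both builds. This is only an illustrative sketch: json_mblen is an invented name, and the UTF-8 stub stands in for the real pg_mblen()/PQmblen() so the example is self-contained.]

```c
/*
 * Stand-in for pg_mblen()/PQmblen(): byte length of one UTF-8 sequence.
 * The real functions consult per-encoding tables; this stub assumes UTF-8.
 */
int
utf8_stub_mblen(const char *s)
{
	unsigned char c = (unsigned char) *s;

	if (c < 0x80)
		return 1;
	if ((c & 0xE0) == 0xC0)
		return 2;
	if ((c & 0xF0) == 0xE0)
		return 3;
	if ((c & 0xF8) == 0xF0)
		return 4;
	return 1;				/* invalid lead byte: treat as one byte */
}

/*
 * json_mblen (hypothetical): the one place the #ifdef would live, so that
 * json_lex_string() and friends could call it unconditionally.
 */
int
json_mblen(const char *s)
{
#ifdef FRONTEND
	return utf8_stub_mblen(s);	/* would be PQmblen(s, PG_UTF8) */
#else
	return utf8_stub_mblen(s);	/* would be pg_mblen(s) */
#endif
}
```

The cost is one extra call per multibyte character in error paths, but the ifdef no longer leaks into every call site.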
{
"msg_contents": "On Wed, Jan 15, 2020 at 09:39:13PM -0500, Robert Haas wrote:\n> On Wed, Jan 15, 2020 at 6:40 PM Andres Freund <andres@anarazel.de> wrote:\n>> It's not obvious why the better approach here wouldn't be to just have a\n>> very simple ereport replacement, that needs to be explicitly included\n>> from frontend code. It'd not be meaningfully harder, imo, and it'd\n>> require fewer adaptions, and it'd look more familiar.\n> \n> I agree that it's far from obvious that the hacks in the patch are\n> best; to the contrary, they are hacks. That said, I feel that the\n> semantics of throwing an error are not very well-defined in a\n> front-end environment. I mean, in a backend context, throwing an error\n> is going to abort the current transaction, with all that this implies.\n> If the frontend equivalent is to do nothing and hope for the best, I\n> doubt it will survive anything more than the simplest use cases. This\n> is one of the reasons I've been very reluctant to go do down this\n> whole path in the first place.\n\nError handling is a well-defined concept in the backend. If\nconnected to a database, you know that a session has to roll back any\nexisting activity, etc. The clients have to be more flexible, because\nhow an error is handled depends a lot on how the tool is designed and\nhow it should react to an error. So the backend code in charge of\nlogging an error does the best it can: it throws the error, then lets\nthe caller decide what to do with it. I agree with the feeling that\nhaving a simple replacement for ereport() in the frontend would be\nnice; that would mean less code churn in parts shared by\nbackend/frontend.\n\n> Not sure how. pg_mblen() and PQmblen() are both existing interfaces,\n> and they're not compatible with each other. 
I guess we could make\n> PQmblen() available to backend code, but given that the function name\n> implies an origin in libpq, that seems wicked confusing.\n\nWell, the problem here is the encoding part, and the code looks at the\nsame table pg_wchar_table[] at the end, so this needs some thought.\nOn top of that, we don't know exactly what encoding is available on\nthe client (this led, for example, to several assumptions/hiccups\nbehind the implementation of SCRAM, as its RFC requires UTF-8, when\nworking on the libpq part). \n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 12:35:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Hi Robert,\n\nOn 1/15/20 2:02 PM, Robert Haas wrote:\n > The discussion on the backup manifest thread has gotten bogged down on\n > the issue of the format that should be used to store the backup\n > manifest file. I want something simple and ad-hoc; David Steele and\n > Stephen Frost prefer JSON. That is problematic because our JSON parser\n > does not work in frontend code, and I want to be able to validate a\n > backup against its manifest, which involves being able to parse the\n > manifest from frontend code. The latest development over there is that\n > David Steele has posted the JSON parser that he wrote for pgbackrest\n > with an offer to try to adapt it for use in front-end PostgreSQL code,\n > an offer which I genuinely appreciate. I'll write more about that over\n > on that thread. However, I decided to spend today doing some further\n > investigation of an alternative approach, namely making the backend's\n > existing JSON parser work in frontend code as well. I did not solve\n > all the problems there, but I did come up with some patches which I\n > think would be worth committing on independent grounds, and I think\n > the whole series is worth posting. So here goes.\n\nI was starting to wonder if it wouldn't be simpler to go back to the \nPostgres JSON parser and see if we can adapt it. I'm not sure that it \n*is* simpler, but it would almost certainly be more acceptable.\n\n > 0001 moves wchar.c from src/backend/utils/mb to src/common. Unless I'm\n > missing something, this seems like an overdue cleanup. It's long been\n > the case that wchar.c is actually compiled and linked into both\n > frontend and backend code. 
Commit\n > 60f11b87a2349985230c08616fa8a34ffde934c8 added code into src/common\n > that depends on wchar.c being available, but didn't actually make\n > wchar.c part of src/common, which seems like an odd decision: the\n > functions in the library are dependent on code that is not part of any\n > library but whose source files get copied around where needed. Eh?\n\nThis looks like an obvious improvement to me.\n\n > 0002 does some basic header cleanup to make it possible to include the\n > existing header file jsonapi.h in frontend code. The state of the JSON\n > headers today looks generally poor. There seems not to have been much\n > attempt to get the prototypes for a given source file, say foo.c, into\n > a header file with the same name, say foo.h. Also, dependencies\n > between various header files seem to be have added somewhat freely.\n > This patch does not come close to fixing all that, but I consider it a\n > modest down payment on a cleanup that probably ought to be taken\n > further.\n\nAgreed that these header files are fairly disorganized. In general the \nnames json, jsonapi, jsonfuncs don't tell me a whole lot. I feel like \nI'd want to include json.h to get a json parser but it only contains one \nutility function before these patches. I can see that json.c primarily \ncontains SQL functions so that's why.\n\nSo the idea here is that json.c will have the JSON SQL functions, \njsonb.c the JSONB SQL functions, and jsonapi.c the parser, and \njsonfuncs.c the utility functions?\n\n > 0003 splits json.c into two files, json.c and jsonapi.c. All the\n > lexing and parsing stuff (whose prototypes are in jsonapi.h) goes into\n > jsonapi.c, while the stuff that pertains to the 'json' data type\n > remains in json.c. 
This also seems like a good cleanup, because to me,\n > at least, it's not a great idea to mix together code that is used by\n > both the json and jsonb data types as well as other things in the\n > system that want to generate or parse json together with things that\n > are specific to the 'json' data type.\n\nThis seems like a good first step. I wonder if the remainder of the SQL \njson/jsonb functions should be moved to json.c/jsonb.c respectively?\n\nThat does represent a lot of code churn though, so perhaps not worth it.\n\n > As far as I know all three of the above patches are committable as-is;\n > review and contrary opinions welcome.\n\nAgreed, with some questions as above.\n\n > On the other hand, 0004, 0005, and 0006 are charitably described as\n > experimental or WIP. 0004 and 0005 hack up jsonapi.c so that it can\n > still be compiled even if #include \"postgres.h\" is changed to #include\n > \"postgres-fe.h\" and 0006 moves it into src/common. Note that I say\n > that they make it compile, not work. It's not just untested; it's\n > definitely broken. But it gives a feeling for what the remaining\n > obstacles to making this code available in a frontend environment are.\n > Since I wrote my very first email complaining about the difficulty of\n > making the backend's JSON parser work in a frontend environment, one\n > obstacle has been knocked down: StringInfo is now available in\n > front-end code (commit 26aaf97b683d6258c098859e6b1268e1f5da242f). 
The\n > remaining problems (that I know about) have to do with error reporting\n > and multibyte character support; a read of the patches is suggested\n > for those wanting further details.\n\nWell, with the caveat that it doesn't work, it's less than I expected.\n\nObviously ereport() is a pretty big deal and I agree with Michael \ndownthread that we should port this to the frontend code.\n\nIt would also be nice to unify functions like PQmblen() and pg_mblen() \nif possible.\n\nThe next question in my mind is given the caveat that the error handing \nis questionable in the front end, can we at least render/parse valid \nJSON with the code?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 16 Jan 2020 11:37:12 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Hi Robert,\n\nOn 1/16/20 11:37 AM, David Steele wrote:\n> \n> The next question in my mind is given the caveat that the error handing \n> is questionable in the front end, can we at least render/parse valid \n> JSON with the code?\n\nHrm, this bit was from an earlier edit. I meant:\n\nThe next question in my mind is what will it take to get this working in \na limited form so we can at least prototype it with pg_basebackup. I \ncan hack on this with some static strings in front end code tomorrow to \nsee what works and what doesn't if that makes sense.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 16 Jan 2020 11:39:31 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 1:37 PM David Steele <david@pgmasters.net> wrote:\n> I was starting to wonder if it wouldn't be simpler to go back to the\n> Postgres JSON parser and see if we can adapt it. I'm not sure that it\n> *is* simpler, but it would almost certainly be more acceptable.\n\nThat is my feeling also.\n\n> So the idea here is that json.c will have the JSON SQL functions,\n> jsonb.c the JSONB SQL functions, and jsonapi.c the parser, and\n> jsonfuncs.c the utility functions?\n\nUh, I think roughly that, yes. Although I can't claim to fully\nunderstand everything that's here.\n\n> This seems like a good first step. I wonder if the remainder of the SQL\n> json/jsonb functions should be moved to json.c/jsonb.c respectively?\n>\n> That does represent a lot of code churn though, so perhaps not worth it.\n\nI don't have an opinion on this right now.\n\n> Well, with the caveat that it doesn't work, it's less than I expected.\n>\n> Obviously ereport() is a pretty big deal and I agree with Michael\n> downthread that we should port this to the frontend code.\n\nAnother possibly-attractive option would be to defer throwing the\nerror: i.e. set some flags in the lex or parse state or something, and\nthen just return. The caller notices the flags and has enough\ninformation to throw an error or whatever it wants to do. The reason I\nthink this might be attractive is that it dodges the whole question of\nwhat exactly throwing an error is supposed to do in a world without\ntransactions, memory contexts, resource owners, etc. 
However, it has\nsome pitfalls of its own, like maybe being too much code churn or\nhurting performance in non-error cases.\n\n> It would also be nice to unify functions like PQmblen() and pg_mblen()\n> if possible.\n\nI don't see how to do that at the moment, but I agree that it would be\nnice if we can figure it out.\n\n> The next question in my mind is given the caveat that the error handing\n> is questionable in the front end, can we at least render/parse valid\n> JSON with the code?\n\nThat's a real good question. Thanks for offering to test it; I think\nthat would be very helpful.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 13:51:34 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
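[Editor's note: the "set some flags and just return" alternative floated above can be pictured with a toy lexer. Every name here, including the error-code enum, is invented for illustration; it bears no relation to the real jsonapi.c interface, but it shows how the caller, not the lexer, ends up deciding what an error means.]

```c
#include <ctype.h>
#include <stddef.h>

typedef enum { JS_OK, JS_BAD_CHAR, JS_EOF } toy_result;

typedef struct toy_lexer
{
	const char *input;		/* NUL-terminated input buffer */
	size_t		pos;		/* next byte to scan */
	toy_result	err;		/* recorded here instead of being thrown */
	size_t		err_pos;	/* where it happened, for the caller's report */
} toy_lexer;

/*
 * Advance over one (very simplified) token: a digit or a structural
 * character.  Returns 1 on success; returns 0 and records the problem in
 * the lexer state on failure, leaving the caller free to report it with
 * ereport(), fprintf(), or anything else.
 */
int
toy_next_token(toy_lexer *lex)
{
	char		c = lex->input[lex->pos];

	if (c == '\0')
	{
		lex->err = JS_EOF;
		lex->err_pos = lex->pos;
		return 0;
	}
	if (isdigit((unsigned char) c) || c == '{' || c == '}' ||
		c == '[' || c == ']' || c == ',' || c == ':')
	{
		lex->pos++;
		return 1;
	}
	lex->err = JS_BAD_CHAR;
	lex->err_pos = lex->pos;
	return 0;
}
```

The pitfall mentioned above is visible even here: every caller must remember to check the return value, which is exactly the churn and the non-error-path cost being weighed.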
{
"msg_contents": "On 1/15/20 4:40 PM, Andres Freund wrote:\n >\n > I'm not sure where I come down between using json and a simple ad-hoc\n > format, when the dependency for the former is making the existing json\n > parser work in the frontend. But if the alternative is to add a second\n > json parser, it very clearly shifts towards using an ad-hoc\n > format. Having to maintain a simple ad-hoc parser is a lot less\n > technical debt than having a second full blown json parser.\n\nMaybe at first, but it will grow and become more complex as new features \nare added. This has been our experience with pgBackRest, at least.\n\n > Imo even\n > when an external project or three also has to have that simple parser.\n\nI don't agree here. Especially if we outgrow the format and they need \ntwo parsers, depending on the version of PostgreSQL.\n\nTo do page-level incrementals (which this feature is intended to enable) \nthe user will need to be able to associate full and incremental backups \nand the only way I see to do that (currently) is to read the manifests, \nsince the prior backup should be stored there. I think this means that \nparsing the manifest is not really optional -- it will be required to do \nany kind of automation with incrementals.\n\nIt's easy enough for a tool like pgBackRest to do something like that, \nmuch harder for a user hacking together a tool in bash based on \npg_basebackup.\n\n > If the alternative were to use that newly proposed json parser to\n > *replace* the backend one too, the story would again be different.\n\nThat was certainly not my intention.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 16 Jan 2020 11:58:55 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 1/15/20 7:39 PM, Robert Haas wrote:\n > On Wed, Jan 15, 2020 at 6:40 PM Andres Freund <andres@anarazel.de> wrote:\n >> It's not obvious why the better approach here wouldn't be to just have a\n >> very simple ereport replacement, that needs to be explicitly included\n >> from frontend code. It'd not be meaningfully harder, imo, and it'd\n >> require fewer adaptions, and it'd look more familiar.\n >\n > I agree that it's far from obvious that the hacks in the patch are\n > best; to the contrary, they are hacks. That said, I feel that the\n > semantics of throwing an error are not very well-defined in a\n > front-end environment. I mean, in a backend context, throwing an error\n > is going to abort the current transaction, with all that this implies.\n > If the frontend equivalent is to do nothing and hope for the best, I\n > doubt it will survive anything more than the simplest use cases. This\n > is one of the reasons I've been very reluctant to go do down this\n > whole path in the first place.\n\nThe way we handle this in pgBackRest is to put a TRY ... CATCH block in \nmain() to log and exit on any uncaught THROW. That seems like a \nreasonable way to start here. Without memory contexts that almost \ncertainly will mean memory leaks but I'm not sure how much that matters \nif the action is to exit immediately.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 16 Jan 2020 12:11:14 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
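[Editor's note: a minimal sketch of the pattern described above, assuming a setjmp/longjmp-based scheme: a frontend "throw" that jumps to a handler armed by main(), and falls back to print-and-exit when no handler is installed. All names are invented; pgBackRest's actual TRY/CATCH macros are considerably more elaborate.]

```c
#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf *fe_handler = NULL;		/* non-NULL while a CATCH is armed */
static const char *fe_errmsg = NULL;	/* last thrown message */

/* Frontend stand-in for ereport(ERROR): jump to the handler, or die. */
void
fe_throw(const char *msg)
{
	fe_errmsg = msg;
	if (fe_handler != NULL)
		longjmp(*fe_handler, 1);

	/* no handler installed: degrade to print-and-exit */
	fprintf(stderr, "error: %s\n", msg);
	exit(1);
}

/* Let the catching code retrieve what was thrown. */
const char *
fe_last_error(void)
{
	return fe_errmsg;
}
```

A main() would arm the handler once, e.g. `if (setjmp(env) == 0) { fe_handler = &env; run(); } else { report fe_last_error() and exit }`; as noted above, anything allocated before the jump leaks unless cleaned up explicitly, which is tolerable if the next step is exiting.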
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 1/15/20 7:39 PM, Robert Haas wrote:\n>>> I agree that it's far from obvious that the hacks in the patch are\n>>> best; to the contrary, they are hacks. That said, I feel that the\n>>> semantics of throwing an error are not very well-defined in a\n>>> front-end environment. I mean, in a backend context, throwing an error\n>>> is going to abort the current transaction, with all that this implies.\n>>> If the frontend equivalent is to do nothing and hope for the best, I\n>>> doubt it will survive anything more than the simplest use cases. This\n>>> is one of the reasons I've been very reluctant to go do down this\n>>> whole path in the first place.\n\n> The way we handle this in pgBackRest is to put a TRY ... CATCH block in \n> main() to log and exit on any uncaught THROW. That seems like a \n> reasonable way to start here. Without memory contexts that almost \n> certainly will mean memory leaks but I'm not sure how much that matters \n> if the action is to exit immediately.\n\nIf that's the expectation, we might as well replace backend ereport(ERROR)\nwith something that just prints a message and does exit(1).\n\nThe question comes down to whether there are use-cases where a frontend\napplication would really want to recover and continue processing after\na JSON syntax problem. I'm not seeing that that's a near-term\nrequirement, so maybe we could leave it for somebody to solve when\nand if they want to do it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 14:20:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-16 14:20:28 -0500, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n> > The way we handle this in pgBackRest is to put a TRY ... CATCH block in\n> > main() to log and exit on any uncaught THROW. That seems like a\n> > reasonable way to start here. Without memory contexts that almost\n> > certainly will mean memory leaks but I'm not sure how much that matters\n> > if the action is to exit immediately.\n>\n> If that's the expectation, we might as well replace backend ereport(ERROR)\n> with something that just prints a message and does exit(1).\n\nWell, the process might still want to do some cleanup of half-finished\nwork. You'd not need to be resistant against memory leaks to do so, if\nfollowed by an exit. Obviously you can also do all the necessary\ncleanup from within the ereport(ERROR) itself, but that doesn't seem\nappealing to me (not composable, harder to reuse for other programs,\netc).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Jan 2020 11:26:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 1/16/20 12:26 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2020-01-16 14:20:28 -0500, Tom Lane wrote:\n>> David Steele <david@pgmasters.net> writes:\n>>> The way we handle this in pgBackRest is to put a TRY ... CATCH block in\n>>> main() to log and exit on any uncaught THROW. That seems like a\n>>> reasonable way to start here. Without memory contexts that almost\n>>> certainly will mean memory leaks but I'm not sure how much that matters\n>>> if the action is to exit immediately.\n>>\n>> If that's the expectation, we might as well replace backend ereport(ERROR)\n>> with something that just prints a message and does exit(1).\n> \n> Well, the process might still want to do some cleanup of half-finished\n> work. You'd not need to be resistant against memory leaks to do so, if\n> followed by an exit. Obviously you can also do all the necessarily\n> cleanup from within the ereport(ERROR) itself, but that doesn't seem\n> appealing to me (not composable, harder to reuse for other programs,\n> etc).\n\nIn pgBackRest we have a default handler that just logs the message to \nstderr and exits (though we consider it a coding error if it gets \ncalled). Seems like we could do the same here. Default message and \nexit if no handler, but optionally allow a handler (which could RETHROW \nto get to the default handler afterwards).\n\nIt seems like we've been wanting a front end version of ereport() for a \nwhile so I'll take a look at that and see what it involves.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 16 Jan 2020 13:10:53 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> 0001 moves wchar.c from src/backend/utils/mb to src/common. Unless I'm\n> missing something, this seems like an overdue cleanup.\n\nHere's a reviewed version of 0001. You missed fixing the MSVC build,\nand there were assorted comments and other things referencing wchar.c\nthat needed to be cleaned up.\n\nAlso, it seemed to me that if we are going to move wchar.c, we should\nalso move encnames.c, so that libpq can get fully out of the\nsymlinking-source-files business. It makes initdb less weird too.\n\nI took the liberty of sticking proper copyright headers onto these\ntwo files, too. (This makes the diff a lot more bulky :-(. Would\nit help to add the headers in a separate commit?)\n\nAnother thing I'm wondering about is if any of the #ifndef FRONTEND\ncode should get moved *back* to src/backend/utils/mb. But that\ncould be a separate commit, too.\n\nLastly, it strikes me that maybe pg_wchar.h, or parts of it, should\nmigrate over to src/include/common. But that'd be far more invasive\nto other source files, so I've not touched the issue here.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 16 Jan 2020 15:11:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 3:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > 0001 moves wchar.c from src/backend/utils/mb to src/common. Unless I'm\n> > missing something, this seems like an overdue cleanup.\n>\n> Here's a reviewed version of 0001. You missed fixing the MSVC build,\n> and there were assorted comments and other things referencing wchar.c\n> that needed to be cleaned up.\n\nWow, thanks.\n\n> Also, it seemed to me that if we are going to move wchar.c, we should\n> also move encnames.c, so that libpq can get fully out of the\n> symlinking-source-files business. It makes initdb less weird too.\n\nOK.\n\n> I took the liberty of sticking proper copyright headers onto these\n> two files, too. (This makes the diff a lot more bulky :-(. Would\n> it help to add the headers in a separate commit?)\n\nI wouldn't bother making it a separate commit, but please do whatever you like.\n\n> Another thing I'm wondering about is if any of the #ifndef FRONTEND\n> code should get moved *back* to src/backend/utils/mb. But that\n> could be a separate commit, too.\n\n+1 for moving that stuff to a separate backend-only file.\n\n> Lastly, it strikes me that maybe pg_wchar.h, or parts of it, should\n> migrate over to src/include/common. But that'd be far more invasive\n> to other source files, so I've not touched the issue here.\n\nI don't have a view on this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 15:21:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 1:58 PM David Steele <david@pgmasters.net> wrote:\n> To do page-level incrementals (which this feature is intended to enable)\n> the user will need to be able to associate full and incremental backups\n> and the only way I see to do that (currently) is to read the manifests,\n> since the prior backup should be stored there. I think this means that\n> parsing the manifest is not really optional -- it will be required to do\n> any kind of automation with incrementals.\n\nMy current belief is that enabling incremental backup will require\nextending the manifest format either not at all or by adding one\nadditional line with some LSN info.\n\nIf we could foresee a need to store a bunch of additional *per-file*\ndetails, I'd be a lot more receptive to the argument that we ought to\nbe using a more structured format like JSON. And it doesn't seem\nimpossible that such a thing could happen, but I don't think it's at\nall clear that it actually will happen, or that it will happen soon\nenough that we ought to be worrying about it now.\n\nIt's possible that we're chasing a real problem here, and if there's\nsomething we can agree on and get done I'd rather do that than argue,\nbut I am still quite suspicious that there's no actually serious\ntechnical problem here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 15:44:37 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jan 16, 2020 at 3:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a reviewed version of 0001. You missed fixing the MSVC build,\n>> and there were assorted comments and other things referencing wchar.c\n>> that needed to be cleaned up.\n\n> Wow, thanks.\n\nPushed that.\n\n>> Another thing I'm wondering about is if any of the #ifndef FRONTEND\n>> code should get moved *back* to src/backend/utils/mb. But that\n>> could be a separate commit, too.\n\n> +1 for moving that stuff to a separate backend-only file.\n\nAfter a brief look, I propose the following:\n\n* I think we should just shove the \"#ifndef FRONTEND\" stuff in\nwchar.c into mbutils.c. It doesn't seem worth inventing a whole\nnew file for that code, especially when it's arguably within the\nremit of mbutils.c anyway.\n\n* Let's remove the \"#ifndef FRONTEND\" restriction on the ICU-related\nstuff in encnames.c. Even if we don't need that stuff in frontend\ntoday, it's hardly unlikely that we will need it tomorrow. And there's\nnot that much bulk there anyway.\n\n* The one positive reason for that restriction is the ereport() in\nget_encoding_name_for_icu. We could change that to be the usual\n#ifdef-ereport-or-printf dance, but I think there's a better way: put\nthe ereport at the caller, by redefining that function to return NULL\nfor an unsupported encoding. There's only one caller today anyhow.\n\n* PG_char_to_encoding() and PG_encoding_to_char() can be moved to\nmbutils.c; they'd fit reasonably well beside getdatabaseencoding and\npg_client_encoding. (I also thought about utils/adt/misc.c, but\nthat's not obviously better.)\n\nBarring objections I'll go make this happen shortly.\n\n>> Lastly, it strikes me that maybe pg_wchar.h, or parts of it, should\n>> migrate over to src/include/common. 
But that'd be far more invasive\n>> to other source files, so I've not touched the issue here.\n\n> I don't have a view on this.\n\nIf anyone is hot to do this part, please have at it. I'm not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 16:24:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's possible that we're chasing a real problem here, and if there's\n> something we can agree on and get done I'd rather do that than argue,\n> but I am still quite suspicious that there's no actually serious\n> technical problem here.\n\nIt's entirely possible that you're right. But if this is a file format\nthat is meant to be exposed to user tools, we need to take a very long\nview of the requirements for it. Five or ten years down the road, we\nmight be darn glad we spent extra time now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 18:15:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 7:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n\n>\n> 0002 does some basic header cleanup to make it possible to include the\n> existing header file jsonapi.h in frontend code. The state of the JSON\n> headers today looks generally poor. There seems not to have been much\n> attempt to get the prototypes for a given source file, say foo.c, into\n> a header file with the same name, say foo.h. Also, dependencies\n> between various header files seem to be have added somewhat freely.\n> This patch does not come close to fixing all that, but I consider it a\n> modest down payment on a cleanup that probably ought to be taken\n> further.\n>\n> 0003 splits json.c into two files, json.c and jsonapi.c. All the\n> lexing and parsing stuff (whose prototypes are in jsonapi.h) goes into\n> jsonapi.c, while the stuff that pertains to the 'json' data type\n> remains in json.c. This also seems like a good cleanup, because to me,\n> at least, it's not a great idea to mix together code that is used by\n> both the json and jsonb data types as well as other things in the\n> system that want to generate or parse json together with things that\n> are specific to the 'json' data type.\n>\n\nI'm probably responsible for a good deal of the mess, so let me say\nthank you.\n\nI'll have a good look at these.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jan 2020 12:24:57 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Hi Robert,\n\nOn 1/16/20 11:51 AM, Robert Haas wrote:\n> On Thu, Jan 16, 2020 at 1:37 PM David Steele <david@pgmasters.net> wrote:\n> \n>> The next question in my mind is given the caveat that the error handing\n>> is questionable in the front end, can we at least render/parse valid\n>> JSON with the code?\n> \n> That's a real good question. Thanks for offering to test it; I think\n> that would be very helpful.\n\nIt seems to work just fine. I didn't stress it too hard but I did put \nin one escape and a multi-byte character and check the various data types.\n\nAttached is a test hack on pg_basebackup which produces this output:\n\nSTART\n FIELD \"number\", null 0\n SCALAR TYPE 2: 123\n FIELD \"string\", null 0\n SCALAR TYPE 1: val\tue-丏\n FIELD \"bool\", null 0\n SCALAR TYPE 9: true\n FIELD \"null\", null 1\n SCALAR TYPE 11: null\nEND\n\nI used the callbacks because that's the first method I found but it \nseems like json_lex() might be easier to use in practice.\n\nI think it's an issue that the entire string must be passed to the lexer \nat once. That will not be great for large manifests. However, I don't \nthink it will be all that hard to implement an optional \"want more\" \ncallback in the lexer so JSON data can be fed in from the file in chunks.\n\nSo, that just leaves ereport() as the largest remaining issue? I'll \nlook at that today and Tuesday and see what I can work up.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net",
"msg_date": "Fri, 17 Jan 2020 10:36:26 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
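[Editor's note: the optional "want more" callback proposed above, i.e. feeding the lexer from a file in chunks rather than requiring the whole manifest in memory, could look roughly like this. The callback signature, names, and the deliberately tiny window are all invented for illustration; nothing here reflects jsonapi.c's actual input handling.]

```c
#include <stddef.h>

/* Caller-supplied callback: fill buf, return bytes provided (0 = EOF). */
typedef size_t (*refill_fn) (void *ctx, char *buf, size_t bufsize);

typedef struct chunked_input
{
	char		buf[16];	/* deliberately tiny, to force refills */
	size_t		len;		/* valid bytes currently in buf */
	size_t		pos;		/* next byte to hand out */
	refill_fn	refill;		/* asks the caller for more input */
	void	   *ctx;		/* opaque state for the callback */
} chunked_input;

/* Get the next input byte, refilling the window when it runs dry. */
int
ci_getc(chunked_input *in)
{
	if (in->pos >= in->len)
	{
		in->len = in->refill(in->ctx, in->buf, sizeof(in->buf));
		in->pos = 0;
		if (in->len == 0)
			return -1;		/* true end of input */
	}
	return (unsigned char) in->buf[in->pos++];
}

/* A string-backed refill callback, standing in for a file reader. */
typedef struct str_ctx
{
	const char *s;
	size_t		off;
} str_ctx;

size_t
str_refill(void *ctx, char *buf, size_t bufsize)
{
	str_ctx    *sc = (str_ctx *) ctx;
	size_t		n = 0;

	while (n < bufsize && sc->s[sc->off] != '\0')
		buf[n++] = sc->s[sc->off++];
	return n;
}
```

The wrinkle, as discussed above, is that a token may straddle a chunk boundary, so a real lexer would also need to save partial-token state across refills; whether that is worth it depends on whether the whole manifest has to be in memory anyway for the full join against the filesystem.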
{
"msg_contents": "Hi Robert,\n\nOn 1/16/20 11:51 AM, Robert Haas wrote:\n> On Thu, Jan 16, 2020 at 1:37 PM David Steele <david@pgmasters.net> wrote:\n> \n>> So the idea here is that json.c will have the JSON SQL functions,\n>> jsonb.c the JSONB SQL functions, and jsonapi.c the parser, and\n>> jsonfuncs.c the utility functions?\n> \n> Uh, I think roughly that, yes. Although I can't claim to fully\n> understand everything that's here.\n\nNow that I've spent some time with the code I see your intent was just \nto isolate the JSON lexer code with 0002 and 0003. As such, I now think \nthey are commit-able as is.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 17 Jan 2020 10:41:10 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 8:55 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> I'm probably responsible for a good deal of the mess, so let me say Thankyou.\n>\n> I'll have a good look at these.\n\nThanks, appreciated.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 Jan 2020 16:24:21 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 12:36 PM David Steele <david@pgmasters.net> wrote:\n> It seems to work just fine. I didn't stress it too hard but I did put\n> in one escape and a multi-byte character and check the various data types.\n\nCool.\n\n> I used the callbacks because that's the first method I found but it\n> seems like json_lex() might be easier to use in practice.\n\nUgh, really? That doesn't seem like it would be nice at all.\n\n> I think it's an issue that the entire string must be passed to the lexer\n> at once. That will not be great for large manifests. However, I don't\n> think it will be all that hard to implement an optional \"want more\"\n> callback in the lexer so JSON data can be fed in from the file in chunks.\n\nI thought so initially, but now I'm not so sure. The thing is, you\nactually need all the manifest data in memory at once anyway, or so I\nthink. You're essentially doing a \"full join\" between the contents of\nthe manifest and the contents of the file system, so you've got to\nscan one (probably the filesystem) and then mark entries in the other\n(probably the manifest) used as you go.\n\nBut this might need more thought. The details probably depend on\nexactly how you design it all.\n\n> So, that just leaves ereport() as the largest remaining issue? I'll\n> look at that today and Tuesday and see what I can work up.\n\nPFA my work on that topic. As compared with my previous patch series,\nthe previous 0001 is dropped and what are now 0001 and 0002 are the\nsame as patches from the previous series. 0003 and 0004 are aiming\ntoward getting rid of ereport() and, I believe, show a plausible\nstrategy for so doing. There are, possibly, things not to like here,\nand it's certainly incomplete, but I think I kinda like this\ndirection. Comments appreciated.\n\n0003 nukes lex_accept(), inlining the logic into callers. 
I found that\nthe refactoring I wanted to do in 0004 was pretty hard without this,\nand it turns out to save code, so I think this is a good idea\nindependently of anything else.\n\n0004 adjusts many functions in jsonapi.c to return a new enumerated\ntype, JsonParseErrorType, instead of directly doing ereport(). It adds\na new function that takes this value and a lexing context and throws\nan error. The JSON_ESCAPING_INVALID case is wrong and involves a gross\nhack, but that's fixable with another field in the lexing context.\nMore work is needed to really bring this up to scratch, but the idea\nis to make this code have a soft dependency on ereport() rather than a\nhard one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 17 Jan 2020 16:33:48 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "> On Jan 16, 2020, at 1:24 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>>> \n>>> Lastly, it strikes me that maybe pg_wchar.h, or parts of it, should\n>>> migrate over to src/include/common. But that'd be far more invasive\n>>> to other source files, so I've not touched the issue here.\n> \n>> I don't have a view on this.\n> \n> If anyone is hot to do this part, please have at it. I'm not.\n\nI moved the file pg_wchar.h into src/include/common and split out\nmost of the functions you marked as being suitable for the\nbackend only into a new file src/include/utils/mbutils.h. That\nresulted in the need to include this new “utils/mbutils.h” from\na number of .c files in the source tree.\n\nOne issue that came up was libpq/pqformat.h uses a couple\nof those functions from within static inline functions, preventing\nme from moving those to a backend-only include file without\nmaking pqformat.h a backend-only include file.\n\nI think the right thing to do here is to move references to these\nfunctions into pqformat.c by un-inlining these functions. I have\nnot done that yet.\n\nThere are whitespace cleanup issues I’m not going to fix just\nyet, since I’ll be making more changes anyway. What do you\nthink of the direction I’m taking in the attached?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 19 Jan 2020 23:02:37 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Hi Robert,\n\nOn 1/17/20 2:33 PM, Robert Haas wrote:\n > On Fri, Jan 17, 2020 at 12:36 PM David Steele <david@pgmasters.net> \nwrote:\n >\n >> I used the callbacks because that's the first method I found but it\n >> seems like json_lex() might be easier to use in practice.\n >\n > Ugh, really? That doesn't seem like it would be nice at all.\n\nI guess it's a matter of how you want to structure the code.\n\n >> I think it's an issue that the entire string must be passed to the lexer\n >> at once. That will not be great for large manifests. However, I don't\n >> think it will be all that hard to implement an optional \"want more\"\n >> callback in the lexer so JSON data can be fed in from the file in \nchunks.\n >\n > I thought so initially, but now I'm not so sure. The thing is, you\n > actually need all the manifest data in memory at once anyway, or so I\n > think. You're essentially doing a \"full join\" between the contents of\n > the manifest and the contents of the file system, so you've got to\n > scan one (probably the filesystem) and then mark entries in the other\n > (probably the manifest) used as you go.\n\nYeah, having a copy of the manifest in memory is the easiest way to do \nvalidation, but I think you'd want it in a structured format.\n\nWe parse the file part of the manifest into a sorted struct array which \nwe can then do binary searches on by filename.\n\n >> So, that just leaves ereport() as the largest remaining issue? I'll\n >> look at that today and Tuesday and see what I can work up.\n >\n > PFA my work on that topic. As compared with my previous patch series,\n > the previous 0001 is dropped and what are now 0001 and 0002 are the\n > same as patches from the previous series. 0003 and 0004 are aiming\n > toward getting rid of ereport() and, I believe, show a plausible\n > strategy for so doing. There are, possibly, things not to like here,\n > and it's certainly incomplete, but I think I kinda like this\n > direction. 
Comments appreciated.\n >\n > 0003 nukes lex_accept(), inlining the logic into callers. I found that\n > the refactoring I wanted to do in 0004 was pretty hard without this,\n > and it turns out to save code, so I think this is a good idea\n > independently of anything else.\n\nNo arguments here.\n\n > 0004 adjusts many functions in jsonapi.c to return a new enumerated\n > type, JsonParseErrorType, instead of directly doing ereport(). It adds\n > a new function that takes this value and a lexing context and throws\n > an error. The JSON_ESCAPING_INVALID case is wrong and involves a gross\n > hack, but that's fixable with another field in the lexing context.\n > More work is needed to really bring this up to scratch, but the idea\n > is to make this code have a soft dependency on ereport() rather than a\n > hard one.\n\nMy first reaction was that if we migrated ereport() first it would make \nthis all so much easier. Now I'm not so sure.\n\nHaving a general json parser in libcommon that is not tied into a \nspecific error handling/logging system actually sounds like a really \nnice thing to have. If we do migrate ereport() the user would always \nhave the choice to call throw_a_json_error() if they wanted to.\n\nThere's also a bit of de-duplication of error messages, which is nice, \nespecially in the case JSON_ESCAPING_INVALID. And I agree that this \ncase can be fixed with another field in the lexer -- or at least so it \nseems to me.\n\nThough, throw_a_json_error() is not my favorite name. Perhaps \njson_ereport()?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 21 Jan 2020 15:34:55 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 5:34 PM David Steele <david@pgmasters.net> wrote:\n> Though, throw_a_json_error() is not my favorite name. Perhaps\n> json_ereport()?\n\nThat name was deliberately chosen to be dumb, with the thought that\nreaders would understand it was to be replaced at some point before\nthis was final. It sounds like it wasn't quite dumb enough to make\nthat totally clear.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 Jan 2020 19:23:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 7:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jan 21, 2020 at 5:34 PM David Steele <david@pgmasters.net> wrote:\n> > Though, throw_a_json_error() is not my favorite name. Perhaps\n> > json_ereport()?\n>\n> That name was deliberately chosen to be dumb, with the thought that\n> readers would understand it was to be replaced at some point before\n> this was final. It sounds like it wasn't quite dumb enough to make\n> that totally clear.\n\nHere is a new version that is, I think, much closer to what I would\nconsider a final form. 0001 through 0003 are as before, and unless\nsomebody says pretty soon that they see a problem with those or want\nmore time to review them, I'm going to commit them; David Steele has\nendorsed all three, and they seem like independently sensible\ncleanups.\n\n0004 is a substantially cleaned up version of the patch to make the\nJSON parser return a result code rather than throwing errors. Names\nhave been fixed, interfaces have been tidied up, and the thing is\nbetter integrated with the surrounding code. I would really like\ncomments, if anyone has them, on whether this approach is acceptable.\n\n0005 builds on 0004 by moving three functions from jsonapi.c to\njsonfuncs.c. With that done, jsonapi.c has minimal remaining\ndependencies on the backend environment. It would still need a\nsubstitute for elog(ERROR, \"some internal thing is broken\"); I'm\nthinking of using pg_log_fatal() for that case. It would also need a\nfix for the problem that pg_mblen() is not available in the front-end\nenvironment. I don't know what to do about that yet exactly, but it\ndoesn't seem unsolvable. The frontend environment just needs to know\nwhich encoding to use, and needs a way to call PQmblen() rather than\npg_mblen().\n\nOne problem with this whole thing that I just realized is that the\nbackup manifest file needs to store filenames, and we don't know that\nthe filenames we get from the filesystem are going to be valid in\nUTF-8 (or, for that matter, any other encoding we might want to\nchoose). So, just deciding that the backup manifest is always UTF-8\ndoesn't seem like an option, unless we stick another level of escaping\nin there somehow. Strictly as a theoretical matter, someone might\nconsider this a reason why using JSON for the backup manifest is not\nnecessarily the best fit, but since other arguments to that effect\nhave gotten me nowhere until now, I will instead request that someone\nsuggest to me how I ought to handle that problem.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 22 Jan 2020 13:53:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-22, Robert Haas wrote:\n\n> Here is a new version that is, I think, much closer what I would\n> consider a final form. 0001 through 0003 are as before, and unless\n> somebody says pretty soon that they see a problem with those or want\n> more time to review them, I'm going to commit them; David Steele has\n> endorsed all three, and they seem like independently sensible\n> cleanups.\n\nI'm not sure I see the point of keeping json.h split from jsonapi.h. It\nseems to me that you could move back all the contents from jsonapi.h\ninto json.h, and everything would work just as well. (Evidently the\nDatum in JsonEncodeDateTime's proto is problematic ... perhaps putting\nthat prototype in jsonfuncs.h would work.)\n\nI don't really object to your 0001 patch as posted, though.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 16:26:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 2:26 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I'm not sure I see the point of keeping json.h split from jsonapi.h. It\n> seems to me that you could move back all the contents from jsonapi.h\n> into json.h, and everything would work just as well. (Evidently the\n> Datum in JsonEncodeDateTime's proto is problematic ... perhaps putting\n> that prototype in jsonfuncs.h would work.)\n>\n> I don't really object to your 0001 patch as posted, though.\n\nThe goal is to make it possible to use the JSON parser in the\nfrontend, and we can't do that if the header files that would have to\nbe included on the frontend side rely on things that only work in the\nbackend. As written, the patch series leaves json.h with a dependency\non Datum, so the stuff that it leaves in jsonapi.h (which is intended\nto be the header that gets moved to src/common and included by\nfrontend code) can't be merged with it.\n\nNow, we could obviously rearrange that. I don't think any of the file\nnaming here is great. But I think we probably want, as far as\npossible, for the code in FOO.c to correspond to the prototypes in\nFOO.h. What I'm thinking we should work towards is:\n\njson.c/h - support for the 'json' data type\njsonb.c/h - support for the 'jsonb' data type\njsonfuncs.c/h - backend code that doesn't fit in either of the above\njsonapi.c/h - lexing/parsing code that can be used in either the\nfrontend or the backend\n\nI'm not wedded to that. It just looks like the most natural thing from\nwhere we are now.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 22 Jan 2020 14:57:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-22, Robert Haas wrote:\n\n> On Wed, Jan 22, 2020 at 2:26 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I'm not sure I see the point of keeping json.h split from jsonapi.h. It\n> > seems to me that you could move back all the contents from jsonapi.h\n> > into json.h, and everything would work just as well. (Evidently the\n> > Datum in JsonEncodeDateTime's proto is problematic ... perhaps putting\n> > that prototype in jsonfuncs.h would work.)\n> >\n> > I don't really object to your 0001 patch as posted, though.\n> \n> The goal is to make it possible to use the JSON parser in the\n> frontend, and we can't do that if the header files that would have to\n> be included on the frontend side rely on things that only work in the\n> backend. As written, the patch series leaves json.h with a dependency\n> on Datum, so the stuff that it leaves in jsonapi.h (which is intended\n> to be the header that gets moved to src/common and included by\n> frontend code) can't be merged with it.\n\nRight, I agree with that goal, and as I said, I don't object to your\npatch as posted.\n\n> Now, we could obviously rearrange that. I don't think any of the file\n> naming here is great. But I think we probably want, as far as\n> possible, for the code in FOO.c to correspond to the prototypes in\n> FOO.h. What I'm thinking we should work towards is:\n> \n> json.c/h - support for the 'json' data type\n> jsonb.c/h - support for the 'jsonb' data type\n> jsonfuncs.c/h - backend code that doesn't fit in either of the above\n> jsonapi.c/h - lexing/parsing code that can be used in either the\n> frontend or the backend\n\n... it would probably require more work to make this 100% attainable,\nbut I don't really care all that much.\n\n> I'm not wedded to that. 
It just looks like the most natural thing from\n> where we are now.\n\nLet's go with it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 17:11:22 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "> On Jan 22, 2020, at 12:11 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Jan-22, Robert Haas wrote:\n> \n>> On Wed, Jan 22, 2020 at 2:26 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>> I'm not sure I see the point of keeping json.h split from jsonapi.h. It\n>>> seems to me that you could move back all the contents from jsonapi.h\n>>> into json.h, and everything would work just as well. (Evidently the\n>>> Datum in JsonEncodeDateTime's proto is problematic ... perhaps putting\n>>> that prototype in jsonfuncs.h would work.)\n>>> \n>>> I don't really object to your 0001 patch as posted, though.\n>> \n>> The goal is to make it possible to use the JSON parser in the\n>> frontend, and we can't do that if the header files that would have to\n>> be included on the frontend side rely on things that only work in the\n>> backend. As written, the patch series leaves json.h with a dependency\n>> on Datum, so the stuff that it leaves in jsonapi.h (which is intended\n>> to be the header that gets moved to src/common and included by\n>> frontend code) can't be merged with it.\n> \n> Right, I agree with that goal, and as I said, I don't object to your\n> patch as posted.\n> \n>> Now, we could obviously rearrange that. I don't think any of the file\n>> naming here is great. But I think we probably want, as far as\n>> possible, for the code in FOO.c to correspond to the prototypes in\n>> FOO.h. What I'm thinking we should work towards is:\n>> \n>> json.c/h - support for the 'json' data type\n>> jsonb.c/h - support for the 'jsonb' data type\n>> jsonfuncs.c/h - backend code that doesn't fit in either of the above\n>> jsonapi.c/h - lexing/parsing code that can be used in either the\n>> frontend or the backend\n> \n> ... it would probably require more work to make this 100% attainable,\n> but I don't really care all that much.\n> \n>> I'm not wedded to that. 
It just looks like the most natural thing from\n>> where we are now.\n> \n> Let's go with it.\n\nI have this done in my local repo to the point that I can build frontend tools against the json parser that is now in src/common and also run all the check-world tests without failure. I’m planning to post my work soon, possibly tonight if I don’t run out of time, but more likely tomorrow.\n\nThe main issue remaining is that my repo has a lot of stuff organized differently than Robert’s patches, so I’m trying to turn my code into a simple extension of his work rather than having my implementation compete against his.\n\nFor the curious, the code as Robert left it still relies on the DatabaseEncoding through the use of GetDatabaseEncoding, pg_mblen, and similar, and that has been changed in my patches to only rely on the database encoding in the backend, with the code in src/common taking an explicit encoding, which the backend gets in the usual way and the frontend might get with PQenv2encoding() or whatever the frontend programmer finds appropriate. Hopefully, this addresses Robert’s concern upthread about the filesystem name not necessarily being in utf8 format, though I might be misunderstanding the exact thrust of his concern. I can think of other possible interpretations of his concern as he expressed it, so I’ll wait for him to clarify.\n\nFor those who want a sneak peek, I’m attaching WIP patches to this email with all my changes, with Robert’s changes partially manually cherry-picked and the rest still unmerged. *THESE ARE NOT MEANT FOR COMMIT. THIS IS FOR ADVISORY PURPOSES ONLY.*. I have some debugging cruft left in here, too, like gcc __attribute__ stuff that won’t be in the patches I submit.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 22 Jan 2020 19:00:35 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 10:00 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Hopefully, this addresses Robert’s concern upthread about the filesystem name not necessarily being in utf8 format, though I might be misunderstanding the exact thrust of his concern. I can think of other possible interpretations of his concern as he expressed it, so I’ll wait for him to clarify.\n\nNo, that's not it. Suppose that Álvaro Herrera has some custom\nsettings he likes to put on all the PostgreSQL clusters that he uses,\nso he creates a file álvaro.conf and uses an \"include\" directive in\npostgresql.conf to suck in those settings. If he also likes UTF-8,\nthen the file name will be stored in the file system as a 12-byte\nvalue of which the first two bytes will be 0xc3 0xa1. In that case,\neverything will be fine, because JSON is supposed to always be UTF-8,\nand the file name is UTF-8, and it's all good. But suppose he instead\nlikes LATIN-1. Then the file name will be stored as an 11-byte value\nand the first byte will be 0xe1. The second byte, representing a\nlower-case 'l', will be 0x6c. But we can't put a byte sequence that\ngoes 0xe1 0x6c into a JSON manifest stored as UTF-8, because that's\nnot valid in UTF-8. UTF-8 requires that every byte from 0xc0-0xff be\nfollowed by one or more bytes in the range 0x80-0xbf, and our\nhypothetical file name that starts with 0xe1 0x6c does not meet that\ncriteria.\n\nNow, you might say \"well, why don't we just do an encoding\nconversion?\", but we can't. When the filesystem tells us what the file\nnames are, it does not tell us what encoding the person who created\nthose files had in mind. We don't know that they had *any* encoding in\nmind. IIUC, a file in the data directory can have a name that consists\nof any sequence of bytes whatsoever, so long as it doesn't contain\nprohibited characters like a path separator or \\0 byte. 
But only some\nof those possible octet sequences can be stored in a manifest that has\nto be valid UTF-8.\n\nThe degree to which there is a practical problem here is limited by\nthe fact that most filenames within the data directory are chosen by\nthe system, e.g. base/16384/16385, and those file names are only going\nto contain ASCII characters (i.e. code points 0-127) and those are\nvalid in UTF-8 and lots of other encodings. Moreover, most people who\ncreate additional files in the data directory will probably use ASCII\ncharacters for those as well, at least if they are from an\nEnglish-speaking country, and if they're not, they're likely going to\nuse UTF-8, and then they'll still be fine. But there is no rule that\nsays people have to do that, and if somebody wants to use file names\nbased around SJIS or whatever, the backup manifest functionality\nshould not for that reason break.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Jan 2020 12:04:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-23, Robert Haas wrote:\n\n> No, that's not it. Suppose that Álvaro Herrera has some custom\n> settings he likes to put on all the PostgreSQL clusters that he uses,\n> so he creates a file álvaro.conf and uses an \"include\" directive in\n> postgresql.conf to suck in those settings. If he also likes UTF-8,\n> then the file name will be stored in the file system as a 12-byte\n> value of which the first two bytes will be 0xc3 0xa1. In that case,\n> everything will be fine, because JSON is supposed to always be UTF-8,\n> and the file name is UTF-8, and it's all good. But suppose he instead\n> likes LATIN-1.\n\nI do have files with Latin-1-encoded names in my filesystem, even though\nmy system is UTF-8, so I understand the problem. I was wondering if it\nwould work to encode any non-UTF8-valid name using something like\nbase64; the encoded name will be plain ASCII and can be put in the\nmanifest, probably using a different field of the JSON object -- so for\na normal file you'd have { path => '1234/2345' } but for a\nLatin-1-encoded file you'd have { path_base64 => '4Wx2YXJvLmNvbmYK' }.\nThen it's the job of the tool to ensure it decodes the name to its\noriginal form when creating/querying for the file.\n\nA problem I have with this idea is that this is very corner-casey, so\nmost tool implementors will never realize that there's a need to decode\ncertain file names.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 23 Jan 2020 14:23:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 02:23:14PM -0300, Alvaro Herrera wrote:\n> On 2020-Jan-23, Robert Haas wrote:\n> \n> > No, that's not it. Suppose that Álvaro Herrera has some custom\n> > settings he likes to put on all the PostgreSQL clusters that he uses,\n> > so he creates a file álvaro.conf and uses an \"include\" directive in\n> > postgresql.conf to suck in those settings. If he also likes UTF-8,\n> > then the file name will be stored in the file system as a 12-byte\n> > value of which the first two bytes will be 0xc3 0xa1. In that case,\n> > everything will be fine, because JSON is supposed to always be UTF-8,\n> > and the file name is UTF-8, and it's all good. But suppose he instead\n> > likes LATIN-1.\n> \n> I do have files with Latin-1-encoded names in my filesystem, even though\n> my system is UTF-8, so I understand the problem. I was wondering if it\n> would work to encode any non-UTF8-valid name using something like\n> base64; the encoded name will be plain ASCII and can be put in the\n> manifest, probably using a different field of the JSON object -- so for\n> a normal file you'd have { path => '1234/2345' } but for a\n> Latin-1-encoded file you'd have { path_base64 => '4Wx2YXJvLmNvbmYK' }.\n> Then it's the job of the tool to ensure it decodes the name to its\n> original form when creating/querying for the file.\n> \n> A problem I have with this idea is that this is very corner-casey, so\n> most tool implementors will never realize that there's a need to decode\n> certain file names.\n\nAnother idea is to use base64 for all non-ASCII file names, so we don't\nneed to check if the file name is valid UTF8 before outputting --- we\njust need to check for non-ASCII, which is much easier. Another\nproblem, though, is how do you _flag_ file names as being\nbase64-encoded? Use another JSON field to specify that?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 23 Jan 2020 12:49:58 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 12:24 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> I do have files with Latin-1-encoded names in my filesystem, even though\n> my system is UTF-8, so I understand the problem. I was wondering if it\n> would work to encode any non-UTF8-valid name using something like\n> base64; the encoded name will be plain ASCII and can be put in the\n> manifest, probably using a different field of the JSON object -- so for\n> a normal file you'd have { path => '1234/2345' } but for a\n> Latin-1-encoded file you'd have { path_base64 => '4Wx2YXJvLmNvbmYK' }.\n> Then it's the job of the tool to ensure it decodes the name to its\n> original form when creating/querying for the file.\n\nRight. That's what I meant, a couple of messages back, when I\nmentioned an extra layer of escaping, but your explanation here is\nbetter because it's more detailed.\n\n> A problem I have with this idea is that this is very corner-casey, so\n> most tool implementors will never realize that there's a need to decode\n> certain file names.\n\nThat's a valid concern. I would not necessarily have thought that\nout-of-core tools would find a lot of use in reading them, provided\nPostgreSQL itself both knows how to generate them and how to validate\nthem, but the interest in this topic suggests that people do care\nabout that.\n\nMostly, I think this issue shows the folly of imagining that putting\neverything into JSON is a good idea because it gets rid of escaping\nproblems. Actually, what it does is create multiple kinds of escaping\nproblems. With the format I proposed, you only have to worry that the\nfile name might contain a tab character, because in that format, tab\nis the delimiter. But, if we use JSON, then we've got the same problem\nwith JSON's delimiter, namely a double quote, which the JSON parser\nwill solve for you. 
We then have this additional and somewhat obscure\nproblem with invalidly encoded data, to which JSON itself provides no\nsolution. We have to invent our own, probably along the lines of what\nyou have just proposed. I think one can reasonably wonder whether this\nis really an improvement over just inventing a way to escape tabs.\n\nThat said, there are other reasons to want to go with JSON, most of\nall the fact that it's easy to see how to extend the format to\nadditional fields. Once you decide that each file will have an object,\nyou can add any keys that you like to that object and things should\nscale up nicely. It has been my contention that we probably will not\nfind the need to add much more here, but such arguments are always\nsuspect and have a good chance of being wrong. Also, such prophecies\ncan be self-fulfilling: if the format is easy to extend, then people\nmay extend it, whereas if it is hard to extend, they may not try, or\nthey may try and then give up.\n\nAt the end of the day, I'm willing to make this work either way. I do\nnot think that my original proposal was bad, but there were things not\nto like about it. There are also things not to like about using a\nJSON-based format, and this seems to me to be a fairly significant\none. However, both sets of problems are solvable, and neither design\nis awful. It's just a question of which kind of warts we like better.\nTo be blunt, I've already spent a lot more effort on this problem than\nI would have liked, and more than 90% of it has been spent on the\nissue of how to format a file that only PostgreSQL needs to read and\nwrite. While I do not think that good file formats are unimportant, I\nremain unconvinced that switching to JSON is making things better. 
It\nseems like it's just making them different, while inflating the amount\nof coding required by a fairly significant multiple.\n\nThat being said, unless somebody objects in the next few days, I am\ngoing to assume that the people who preferred JSON over a\ntab-separated file are also happy with the idea of using base-64\nencoding as proposed by you above to represent files whose names are\nnot valid UTF-8 sequences; and I will then go rewrite the patch that\ngenerates the manifests to use that format, rewrite the validator\npatch to parse that format using this infrastructure, and hopefully\nend up with something that can be reviewed and committed before we run\nout of time to get things done for this release. If anybody wants to\nvote for another plan, please vote soon.\n\nIn the meantime, any review of the new patches I posted here yesterday\nwould be warmly appreciated.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Jan 2020 13:02:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 12:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Another idea is to use base64 for all non-ASCII file names, so we don't\n> need to check if the file name is valid UTF8 before outputting --- we\n> just need to check for non-ASCII, which is much easier.\n\nI think that we have the infrastructure available to check in a\nconvenient way whether it's valid as UTF-8, so this might not be\nnecessary, but I will look into it further unless there is a consensus\nto go another direction entirely.\n\n> Another\n> problem, though, is how do you _flag_ file names as being\n> base64-encoded? Use another JSON field to specify that?\n\nAlvaro's proposed solution in the message to which you replied was to\ncall the field either 'path' or 'path_base64' depending on whether\nbase-64 escaping was used. That seems better to me than having a field\ncalled 'path' and a separate field called 'is_path_base64' or\nwhatever.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Jan 2020 13:05:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 01:05:50PM -0500, Robert Haas wrote:\n> > Another\n> > problem, though, is how do you _flag_ file names as being\n> > base64-encoded? Use another JSON field to specify that?\n> \n> Alvaro's proposed solution in the message to which you replied was to\n> call the field either 'path' or 'path_base64' depending on whether\n> base-64 escaping was used. That seems better to me than having a field\n> called 'path' and a separate field called 'is_path_base64' or\n> whatever.\n\nHmm, so the JSON key name is the flag --- interesting.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 23 Jan 2020 13:11:01 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-23, Bruce Momjian wrote:\n\n> On Thu, Jan 23, 2020 at 01:05:50PM -0500, Robert Haas wrote:\n> > > Another\n> > > problem, though, is how do you _flag_ file names as being\n> > > base64-encoded? Use another JSON field to specify that?\n> > \n> > Alvaro's proposed solution in the message to which you replied was to\n> > call the field either 'path' or 'path_base64' depending on whether\n> > base-64 escaping was used. That seems better to me than having a field\n> > called 'path' and a separate field called 'is_path_base64' or\n> > whatever.\n> \n> Hmm, so the JSON key name is the flag --- interesting.\n\nYes, because if you use the same key name, you risk a dumb tool writing\nthe file name as the encoded name. That's worse because it's harder to\nfigure out that it's wrong.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 23 Jan 2020 15:20:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 03:20:27PM -0300, Alvaro Herrera wrote:\n> On 2020-Jan-23, Bruce Momjian wrote:\n> \n> > On Thu, Jan 23, 2020 at 01:05:50PM -0500, Robert Haas wrote:\n> > > > Another\n> > > > problem, though, is how do you _flag_ file names as being\n> > > > base64-encoded? Use another JSON field to specify that?\n> > > \n> > > Alvaro's proposed solution in the message to which you replied was to\n> > > call the field either 'path' or 'path_base64' depending on whether\n> > > base-64 escaping was used. That seems better to me than having a field\n> > > called 'path' and a separate field called 'is_path_base64' or\n> > > whatever.\n> > \n> > Hmm, so the JSON key name is the flag --- interesting.\n> \n> Yes, because if you use the same key name, you risk a dumb tool writing\n> the file name as the encoded name. That's worse because it's harder to\n> figure out that it's wrong.\n\nYes, good point. I think my one concern is that someone might specify\nboth keys in the JSON, which would be very odd.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 23 Jan 2020 13:22:19 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 1:22 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Yes, good point. I think my one concern is that someone might specify\n> both keys in the JSON, which would be very odd.\n\nI think that if a tool other than PostgreSQL chooses to generate a\nPostgreSQL backup manifest, it must take care to do it in a manner that\nis compatible with what PostgreSQL would generate. If it doesn't,\nwell, that sucks for them, but we can't prevent other people from\nwriting bad code. On a very good day, we can prevent ourselves from\nwriting bad code.\n\nThere is in general the question of how rigorous PostgreSQL ought to\nbe when validating a backup manifest. The proposal on the table is to\nstore four (4) fields per file: name, size, last modification time,\nand checksum. So a JSON object representing a file should have four\nkeys, say \"path\", \"size\", \"mtime\", and \"checksum\". The \"checksum\" key\ncould perhaps be optional, in case the user disables checksums, or we\ncould represent that case in some other way, like \"checksum\" => null,\n\"checksum\" => \"\", or \"checksum\" => \"NONE\". There is an almost\nunlimited scope for bike-shedding here, but let's leave that to one\nside for the moment.\n\nSuppose that someone asks PostgreSQL's backup manifest verification\ntool to validate a backup manifest where there's an extra key. Say, in\naddition to the four keys listed in the previous paragraph, there is\nan additional key, \"momjian\". On the one hand, our backup manifest\nverification tool could take this as a sign that the manifest is\ninvalid, and accordingly throw an error. 
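(For concreteness, a hypothetical check over the four proposed fields, taking the lenient view of unknown keys, might look something like the sketch below; the field names are the ones proposed above and the rest is made up for illustration:)

```python
def check_entry(entry: dict) -> list:
    # Hypothetical lenient validator: the four proposed fields are the
    # known ones, checksum may be absent, and any unrecognized key is
    # reported as a warning instead of being treated as fatal.
    required = {'path', 'size', 'mtime'}
    known = required | {'checksum'}
    problems = []
    for key in sorted(required - entry.keys()):
        problems.append('missing required field: ' + key)
    for key in sorted(entry.keys() - known):
        problems.append('warning, ignoring unknown field: ' + key)
    return problems

# The extra key is flagged but does not invalidate the entry.
entry = {'path': 'base/1/1234', 'size': 8192,
         'mtime': '2020-01-23 14:00:23', 'momjian': 'bruce'}
print(check_entry(entry))
```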
Or, it could assume that some\nthird-party backup tool generated the backup manifest and that the\n\"momjian\" field is there to track something which is of interest to\nthat tool but not to PostgreSQL core, in which case it should just be\nignored.\n\nIncidentally, some research seems to suggest that the problem of\nfilenames which don't form a valid UTF-8 sequence cannot occur on\nWindows. This blog post may be helpful in understanding the issues:\n\nhttp://beets.io/blog/paths.html\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Jan 2020 14:00:23 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 02:00:23PM -0500, Robert Haas wrote:\n> Incidentally, some research seems to suggest that the problem of\n> filenames which don't form a valid UTF-8 sequence cannot occur on\n> Windows. This blog post may be helpful in understanding the issues:\n> \n> http://beets.io/blog/paths.html\n\nIs there any danger of assuming a non-UTF8 sequence to be UTF8 even when\nit isn't, except that it displays oddly? I am thinking of a non-UTF8\nsequence that is valid UTF8.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 23 Jan 2020 14:04:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "\tRobert Haas wrote:\n\n> With the format I proposed, you only have to worry that the\n> file name might contain a tab character, because in that format, tab\n> is the delimiter\n\nIt could be CSV, which has this problem already solved,\nis easier to parse than JSON, certainly no less popular,\nand is not bound to a specific encoding.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 23 Jan 2020 20:08:40 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 2:08 PM Daniel Verite <daniel@manitou-mail.org> wrote:\n> It could be CSV, which has this problem already solved,\n> is easier to parse than JSON, certainly no less popular,\n> and is not bound to a specific encoding.\n\nSure. I don't think that would look quite as nice visually as what I\nproposed when inspected by humans, and our default COPY output format\nis tab-separated rather than comma-separated. However, if CSV would be\nmore acceptable, great.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Jan 2020 14:34:11 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-23, Bruce Momjian wrote:\n\n> Yes, good point. I think my one concern is that someone might specify\n> both keys in the JSON, which would be very odd.\n\nJust make that a reason to raise an error. I think it's even possible\nto specify that as a JSON Schema constraint, using a \"oneOf\" predicate.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 23 Jan 2020 17:06:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "> On Jan 22, 2020, at 7:00 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I have this done in my local repo to the point that I can build frontend tools against the json parser that is now in src/common and also run all the check-world tests without failure. I’m planning to post my work soon, possibly tonight if I don’t run out of time, but more likely tomorrow.\n\nOk, I finished merging with Robert’s patches. The attached follow his numbering, with my patches intended to be applied after his.\n\nI tried not to change his work too much, but I did a bit of refactoring in 0010, as explained in the commit comment.\n\n0011 is just for verifying the linking works ok and the json parser can be invoked from a frontend tool without error — I don’t really see the point in committing it.\n\nI ran some benchmarks for json parsing in the backend both before and after these patches, with very slight changes in runtime. The setup for the benchmark creates an unlogged table with a single text column and loads rows of json formatted text:\n\nCREATE UNLOGGED TABLE benchmark (\n j text\n);\nCOPY benchmark (j) FROM '/Users/mark.dilger/bench/json.csv';\n\n\nFYI:\n\nwc ~/bench/json.csv\n 107 34465023 503364244 /Users/mark.dilger/bench/json.csv\n\nThe benchmark itself casts the text column to jsonb, as follows:\n\nSELECT jsonb_typeof(j::jsonb) typ, COUNT(*) FROM benchmark GROUP BY typ;\n\nIn summary, the times are:\n\n\tpristine\tpatched\n\t—————\t—————\n\t11.985\t12.237\n\t12.200\t11.992\n\t11.691\t11.896\n\t11.847\t11.833\n\t11.722\t11.936\n\nThe full output for the runtimes without the patch over five iterations:\n\n\nCREATE TABLE\nCOPY 107\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.985s\nuser\t0m0.002s\nsys\t0m0.003s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m12.200s\nuser\t0m0.002s\nsys\t0m0.004s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.691s\nuser\t0m0.002s\nsys\t0m0.003s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.847s\nuser\t0m0.002s\nsys\t0m0.004s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.722s\nuser\t0m0.002s\nsys\t0m0.003s\n\n\nAnd with the patch, also five iterations:\n\n\nCREATE TABLE\nCOPY 107\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m12.237s\nuser\t0m0.002s\nsys\t0m0.004s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.992s\nuser\t0m0.002s\nsys\t0m0.004s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.896s\nuser\t0m0.002s\nsys\t0m0.004s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.833s\nuser\t0m0.002s\nsys\t0m0.004s\n typ | count \n--------+-------\n object | 107\n(1 row)\n\n\nreal\t0m11.936s\nuser\t0m0.002s\nsys\t0m0.004s\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 23 Jan 2020 13:05:10 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 7:35 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n> > On Jan 22, 2020, at 7:00 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > I have this done in my local repo to the point that I can build frontend tools against the json parser that is now in src/common and also run all the check-world tests without failure. I’m planning to post my work soon, possibly tonight if I don’t run out of time, but more likely tomorrow.\n>\n> Ok, I finished merging with Robert’s patches. The attached follow his numbering, with my patches intended to by applied after his.\n>\n> I tried not to change his work too much, but I did a bit of refactoring in 0010, as explained in the commit comment.\n>\n> 0011 is just for verifying the linking works ok and the json parser can be invoked from a frontend tool without error — I don’t really see the point in committing it.\n>\n> I ran some benchmarks for json parsing in the backend both before and after these patches, with very slight changes in runtime. 
The setup for the benchmark creates an unlogged table with a single text column and loads rows of json formatted text:\n>\n> CREATE UNLOGGED TABLE benchmark (\n> j text\n> );\n> COPY benchmark (j) FROM '/Users/mark.dilger/bench/json.csv’;\n>\n>\n> FYI:\n>\n> wc ~/bench/json.csv\n> 107 34465023 503364244 /Users/mark.dilger/bench/json.csv\n>\n> The benchmark itself casts the text column to jsonb, as follows:\n>\n> SELECT jsonb_typeof(j::jsonb) typ, COUNT(*) FROM benchmark GROUP BY typ;\n>\n> In summary, the times are:\n>\n> pristine patched\n> ————— —————\n> 11.985 12.237\n> 12.200 11.992\n> 11.691 11.896\n> 11.847 11.833\n> 11.722 11.936\n>\n\nOK, nothing noticeable there.\n\n\"accept\" is a common utility I've used in the past with parsers of\nthis kind, but inlining it seems perfectly reasonable.\n\nI've reviewed these patches and Robert's, and they seem basically good to me.\n\nBut I don't think src/bin is the right place for the test program. I\nassume we're not going to ship this program, so it really belongs in\nsrc/test somewhere, I think. It should also have a TAP test.\n\ncheers\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 10:57:40 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "\n\n> On Jan 23, 2020, at 4:27 PM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n> On Fri, Jan 24, 2020 at 7:35 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> \n>>> On Jan 22, 2020, at 7:00 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>> \n>>> I have this done in my local repo to the point that I can build frontend tools against the json parser that is now in src/common and also run all the check-world tests without failure. I’m planning to post my work soon, possibly tonight if I don’t run out of time, but more likely tomorrow.\n>> \n>> Ok, I finished merging with Robert’s patches. The attached follow his numbering, with my patches intended to by applied after his.\n>> \n>> I tried not to change his work too much, but I did a bit of refactoring in 0010, as explained in the commit comment.\n>> \n>> 0011 is just for verifying the linking works ok and the json parser can be invoked from a frontend tool without error — I don’t really see the point in committing it.\n>> \n>> I ran some benchmarks for json parsing in the backend both before and after these patches, with very slight changes in runtime. 
The setup for the benchmark creates an unlogged table with a single text column and loads rows of json formatted text:\n>> \n>> CREATE UNLOGGED TABLE benchmark (\n>> j text\n>> );\n>> COPY benchmark (j) FROM '/Users/mark.dilger/bench/json.csv’;\n>> \n>> \n>> FYI:\n>> \n>> wc ~/bench/json.csv\n>> 107 34465023 503364244 /Users/mark.dilger/bench/json.csv\n>> \n>> The benchmark itself casts the text column to jsonb, as follows:\n>> \n>> SELECT jsonb_typeof(j::jsonb) typ, COUNT(*) FROM benchmark GROUP BY typ;\n>> \n>> In summary, the times are:\n>> \n>> pristine patched\n>> ————— —————\n>> 11.985 12.237\n>> 12.200 11.992\n>> 11.691 11.896\n>> 11.847 11.833\n>> 11.722 11.936\n>> \n> \n> OK, nothing noticeable there.\n> \n> \"accept\" is a common utility I've used in the past with parsers of\n> this kind, but inlining it seems perfectly reasonable.\n> \n> I've reviewed these patches and Robert's, and they seem basically good to me.\n\nThanks for the review!\n\n> But I don't think src/bin is the right place for the test program. I\n> assume we're not going to ship this program, so it really belongs in\n> src/test somewhere, I think. It should also have a TAP test.\n\nOk, I’ll go do that; thanks for the suggestion.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Jan 2020 06:17:31 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-01-23 18:04, Robert Haas wrote:\n> Now, you might say \"well, why don't we just do an encoding\n> conversion?\", but we can't. When the filesystem tells us what the file\n> names are, it does not tell us what encoding the person who created\n> those files had in mind. We don't know that they had*any* encoding in\n> mind. IIUC, a file in the data directory can have a name that consists\n> of any sequence of bytes whatsoever, so long as it doesn't contain\n> prohibited characters like a path separator or \\0 byte. But only some\n> of those possible octet sequences can be stored in a manifest that has\n> to be valid UTF-8.\n\nI think it wouldn't be unreasonable to require that file names in the \ndatabase directory be consistently encoded (as defined by pg_control, \nprobably). After all, this information is sometimes also shown in \nsystem views, so it's already difficult to process total junk. In \npractice, this shouldn't be an onerous requirement.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 16:05:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-01-23 18:04, Robert Haas wrote:\n>> Now, you might say \"well, why don't we just do an encoding\n>> conversion?\", but we can't. When the filesystem tells us what the file\n>> names are, it does not tell us what encoding the person who created\n>> those files had in mind. We don't know that they had*any* encoding in\n>> mind. IIUC, a file in the data directory can have a name that consists\n>> of any sequence of bytes whatsoever, so long as it doesn't contain\n>> prohibited characters like a path separator or \\0 byte. But only some\n>> of those possible octet sequences can be stored in a manifest that has\n>> to be valid UTF-8.\n\n> I think it wouldn't be unreasonable to require that file names in the \n> database directory be consistently encoded (as defined by pg_control, \n> probably). After all, this information is sometimes also shown in \n> system views, so it's already difficult to process total junk. In \n> practice, this shouldn't be an onerous requirement.\n\nI don't entirely follow why we're discussing this at all, if the\nrequirement is backing up a PG data directory. There are not, and\nare never likely to be, any legitimate files with non-ASCII names\nin that context. Why can't we just skip any such files?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Jan 2020 11:27:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 1/23/20 11:05 AM, Robert Haas wrote:\n > On Thu, Jan 23, 2020 at 12:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n >> Another idea is to use base64 for all non-ASCII file names, so we don't\n >> need to check if the file name is valid UTF8 before outputting --- we\n >> just need to check for non-ASCII, which is much easier.\n >\n > I think that we have the infrastructure available to check in a\n > convenient way whether it's valid as UTF-8, so this might not be\n > necessary, but I will look into it further unless there is a consensus\n > to go another direction entirely.\n >\n >> Another\n >> problem, though, is how do you _flag_ file names as being\n >> base64-encoded? Use another JSON field to specify that?\n >\n > Alvaro's proposed solution in the message to which you replied was to\n > call the field either 'path' or 'path_base64' depending on whether\n > base-64 escaping was used. That seems better to me than having a field\n > called 'path' and a separate field called 'is_path_base64' or\n > whatever.\n\n+1. I'm not excited about this solution but don't have a better idea.\n\nIt might be nice to have a strict mode where non-ASCII/UTF8 characters \nwill error instead, but that can be added on later.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 24 Jan 2020 09:29:48 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 1/24/20 9:27 AM, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-01-23 18:04, Robert Haas wrote:\n>>> Now, you might say \"well, why don't we just do an encoding\n>>> conversion?\", but we can't. When the filesystem tells us what the file\n>>> names are, it does not tell us what encoding the person who created\n>>> those files had in mind. We don't know that they had*any* encoding in\n>>> mind. IIUC, a file in the data directory can have a name that consists\n>>> of any sequence of bytes whatsoever, so long as it doesn't contain\n>>> prohibited characters like a path separator or \\0 byte. But only some\n>>> of those possible octet sequences can be stored in a manifest that has\n>>> to be valid UTF-8.\n> \n>> I think it wouldn't be unreasonable to require that file names in the\n>> database directory be consistently encoded (as defined by pg_control,\n>> probably). After all, this information is sometimes also shown in\n>> system views, so it's already difficult to process total junk. In\n>> practice, this shouldn't be an onerous requirement.\n> \n> I don't entirely follow why we're discussing this at all, if the\n> requirement is backing up a PG data directory. There are not, and\n> are never likely to be, any legitimate files with non-ASCII names\n> in that context. Why can't we just skip any such files?\n\nIt's not uncommon in my experience for users to drop odd files into \nPGDATA (usually versioned copies of postgresql.conf, etc.), but I agree \nthat it should be discouraged. Even so, I don't recall ever seeing any \nnon-ASCII filenames.\n\nSkipping files sounds scary, I'd prefer an error or a warning (and then \nbase64 encode the filename).\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 24 Jan 2020 09:36:58 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-24, David Steele wrote:\n\n> It might be nice to have a strict mode where non-ASCII/UTF8 characters will\n> error instead, but that can be added on later.\n\n\"your backup failed because you have a file we don't like\" is not great\nbehavior. IIRC we already fail when a file is owned by root (or maybe\nunreadable and owned by root), and it messes up severely when people\nedit postgresql.conf as root. Let's not add more cases of that sort.\n\nMaybe we can get away with *ignoring* such files, perhaps after emitting\na warning.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 14:00:01 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 1/24/20 10:00 AM, Alvaro Herrera wrote:\n> On 2020-Jan-24, David Steele wrote:\n> \n>> It might be nice to have a strict mode where non-ASCII/UTF8 characters will\n>> error instead, but that can be added on later.\n> \n> \"your backup failed because you have a file we don't like\" is not great\n> behavior. IIRC we already fail when a file is owned by root (or maybe\n> unreadable and owned by root), and it messes up severely when people\n> edit postgresql.conf as root. Let's not add more cases of that sort.\n\nMy intention was that the strict mode would not be the default, so I \ndon't see why it would be a big issue.\n\n> Maybe we can get away with *ignoring* such files, perhaps after emitting\n> a warning.\n\nI'd prefer an error (or base64 encoding) rather than just skipping a \nfile. The latter sounds scary.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 24 Jan 2020 10:06:39 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "\n\n> On Jan 24, 2020, at 8:36 AM, David Steele <david@pgmasters.net> wrote:\n> \n>> I don't entirely follow why we're discussing this at all, if the\n>> requirement is backing up a PG data directory. There are not, and\n>> are never likely to be, any legitimate files with non-ASCII names\n>> in that context. Why can't we just skip any such files?\n> \n> It's not uncommon in my experience for users to drop odd files into PGDATA (usually versioned copies of postgresql.conf, etc.), but I agree that it should be discouraged. Even so, I don't recall ever seeing any non-ASCII filenames.\n> \n> Skipping files sounds scary, I'd prefer an error or a warning (and then base64 encode the filename).\n\nI tend to agree with Tom. We know that postgres doesn’t write any such files now, and if we ever decided to change that, we could change this, too. So for now, we can assume any such files are not ours. Either the user manually scribbled in this directory, or had a tool (antivirus checksum file, vim .WHATEVER.swp file, etc) that did so. Raising an error would break any automated backup process that hit this issue, and base64 encoding the file name and backing up the file contents could grab data that the user would not reasonably expect in the backup. But this argument applies equally well to such files regardless of filename encoding. It would be odd to back them up when they happen to be valid UTF-8/ASCII/whatever, but not do so when they are not valid. I would expect, therefore, that we only back up files which match our expected file name pattern and ignore (perhaps with a warning) everything else.\n\nQuoting from Robert’s email about why we want a backup manifest seems to support this idea, at least as I see it:\n\n> So, let's suppose we invent a backup manifest. What should it contain?\n> I imagine that it would consist of a list of files, and the lengths of\n> those files, and a checksum for each file. 
I think you should have a\n> choice of what kind of checksums to use, because algorithms that used\n> to seem like good choices (e.g. MD5) no longer do; this trend can\n> probably be expected to continue. Even if we initially support only\n> one kind of checksum -- presumably SHA-something since we have code\n> for that already for SCRAM -- I think that it would also be a good\n> idea to allow for future changes. And maybe it's best to just allow a\n> choice of SHA-224, SHA-256, SHA-384, and SHA-512 right out of the\n> gate, so that we can avoid bikeshedding over which one is secure\n> enough. I guess we'll still have to argue about the default. I also\n> think that it should be possible to build a manifest with no\n> checksums, so that one need not pay the overhead of computing\n> checksums if one does not wish. Of course, such a manifest is of much\n> less utility for checking backup integrity, but you can still check\n> that you've got the right files, which is noticeably better than\n> nothing. The manifest should probably also contain a checksum of its\n> own contents so that the integrity of the manifest itself can be\n> verified. And maybe a few other bits of metadata, but I'm not sure\n> exactly what. Ideas?\n> \n> \n> \n> Once we invent the concept of a backup manifest, what do we need to do\n> with them? I think we'd want three things initially:\n> \n> \n> \n> (1) When taking a backup, have the option (perhaps enabled by default)\n> to include a backup manifest.\n> (2) Given an existing backup that has not got a manifest, construct one.\n> (3) Cross-check a manifest against a backup and complain about extra\n> files, missing files, size differences, or checksum mismatches.\n\n\nNothing in there sounds to me like it needs to include random cruft.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Jan 2020 09:14:34 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-24, David Steele wrote:\n\n> On 1/24/20 10:00 AM, Alvaro Herrera wrote:\n\n> > Maybe we can get away with *ignoring* such files, perhaps after emitting\n> > a warning.\n> \n> I'd prefer an an error (or base64 encoding) rather than just skipping a\n> file. The latter sounds scary.\n\nWell, if the file is \"invalid\" then evidently Postgres cannot possibly\ncare about it, so why would it care if it's missing from the backup?\n\nI prefer the encoding scheme myself. I don't see the point of the\nerror.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 14:42:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I prefer the encoding scheme myself. I don't see the point of the\n> error.\n\nYeah, if we don't want to skip such files, then storing them using\na base64-encoded name (with a different key than regular names)\nseems plausible. But I don't really see why we'd go to that much\ntrouble, nor why we'd think it's likely that tools would correctly\nhandle a case that is going to have 0.00% usage in the field.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Jan 2020 12:48:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-24, Mark Dilger wrote:\n\n> I would expect, therefore, that we only back up files which match our\n> expected file name pattern and ignore (perhaps with a warning)\n> everything else.\n\nThat risks missing files placed in the datadir by extensions; see\ndiscussion about pg_checksums using a whitelist[1], which does not\ntranslate directly to this problem, because omitting to checksum a file\nis not the same as failing to copy a file into the backups.\n(Essentially, the backups would be incomplete.)\n\n[1] https://postgr.es/m/20181019171747.4uithw2sjkt6msne@alap3.anarazel.de\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 14:53:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-24, Mark Dilger wrote:\n>> I would expect, therefore, that we only back up files which match our\n>> expected file name pattern and ignore (perhaps with a warning)\n>> everything else.\n\n> That risks missing files placed in the datadir by extensions;\n\nI agree that assuming we know everything that will appear in the\ndata directory is a pretty unsafe assumption. But no rational\nextension is going to use a non-ASCII file name, either, if only\nbecause it can't predict what the filesystem encoding will be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Jan 2020 12:56:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 9:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I prefer the encoding scheme myself. I don't see the point of the\n> > error.\n>\n> Yeah, if we don't want to skip such files, then storing them using\n> a base64-encoded name (with a different key than regular names)\n> seems plausible. But I don't really see why we'd go to that much\n> trouble, nor why we'd think it's likely that tools would correctly\n> handle a case that is going to have 0.00% usage in the field.\n\nI mean, I gave a not-totally-unrealistic example of how this could\nhappen upthread. I agree it's going to be rare, but it's not usually\nOK to decide that if a user does something a little unusual,\nnot-obviously-related features subtly break.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 24 Jan 2020 09:56:53 -0800",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-24, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jan-24, Mark Dilger wrote:\n> >> I would expect, therefore, that we only back up files which match our\n> >> expected file name pattern and ignore (perhaps with a warning)\n> >> everything else.\n> \n> > That risks missing files placed in the datadir by extensions;\n> \n> I agree that assuming we know everything that will appear in the\n> data directory is a pretty unsafe assumption. But no rational\n> extension is going to use a non-ASCII file name, either, if only\n> because it can't predict what the filesystem encoding will be.\n\nI see two different arguments. One is about the file encoding. Those\nfiles are rare and would be placed by the user manually. We can fix\nthat by encoding the name. We can have a debug mode that encodes all\nnames that way, just to ensure the tools are prepared for it.\n\nThe other is Mark's point about \"expected file pattern\", which seems a\nslippery slope to me. If the pattern is /^[a-zA-Z0-9_.]*$/ then I'm\nokay with it (maybe add a few other punctuation chars); as you say no\nsane extension would use names much weirder than that. But we should\nnot be stricter, such as counting the number of periods/underscores\nallowed or where are alpha chars expected (except maybe disallow period\nat start of filename), or anything too specific like that.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 15:03:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-01-24 18:56, Robert Haas wrote:\n> On Fri, Jan 24, 2020 at 9:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> I prefer the encoding scheme myself. I don't see the point of the\n>>> error.\n>>\n>> Yeah, if we don't want to skip such files, then storing them using\n>> a base64-encoded name (with a different key than regular names)\n>> seems plausible. But I don't really see why we'd go to that much\n>> trouble, nor why we'd think it's likely that tools would correctly\n>> handle a case that is going to have 0.00% usage in the field.\n> \n> I mean, I gave a not-totally-unrealistic example of how this could\n> happen upthread. I agree it's going to be rare, but it's not usually\n> OK to decide that if a user does something a little unusual,\n> not-obviously-related features subtly break.\n\nAnother example might be log files under pg_log with localized weekday \nor month names. (Maybe we're not planning to back up log files, but the \nroutines that deal with file names should probably be prepared to at \nleast look at the name and decide that they don't care about it rather \nthan freaking out right away.)\n\nI'm not fond of the base64 idea btw., because it seems to sort of \npenalize using non-ASCII characters by making the result completely not \nhuman readable. Something along the lines of MIME would be better in \nthat way. There are existing solutions to storing data with metadata \naround it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 19:16:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "\n\n> On Jan 24, 2020, at 10:03 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> The other is Mark's point about \"expected file pattern\", which seems a\n> slippery slope to me. If the pattern is /^[a-zA-Z0-9_.]*$/ then I'm\n> okay with it (maybe add a few other punctuation chars); as you say no\n> sane extension would use names much weirder than that. But we should\n> not be stricter, such as counting the number of periods/underscores\n> allowed or where are alpha chars expected (except maybe disallow period\n> at start of filename), or anything too specific like that.\n\nWhat bothered me about skipping files based only on encoding is that it creates hard to anticipate bugs. If extensions embed something, like a customer name, into a filename, and that something is usually ASCII, or usually valid UTF-8, and gets backed up, but then some day they embed something that is not ASCII/UTF-8, then it does not get backed up, and maybe nobody notices until they actually *need* the backup, and it’s too late.\n\nWe either need to be really strict about what gets backed up, so that nobody gets a false sense of security about what gets included in that list, or we need to be completely permissive, which would include files named in arbitrary encodings. I don’t see how it does anybody any favors to make the system appear to back up everything until you hit this unanticipated case and then it fails.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Jan 2020 10:16:42 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 1:05 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Ok, I finished merging with Robert’s patches. The attached follow his numbering, with my patches intended to by applied after his.\n\nI think it'd be a good idea to move the pg_wchar.h stuff into a new\nthread. This thread is getting a bit complicated, because we've got\n(1) the patches need to do $SUBJECT plus (2) additional patches that\nclean up the multibyte stuff more plus (3) discussion of issues that\npertain to the backup manifest thread. To my knowledge, $SUBJECT\ndoesn't strictly require the pg_wchar.h changes, so I suggest we try\nto segregate those.\n\n> I tried not to change his work too much, but I did a bit of refactoring in 0010, as explained in the commit comment.\n\nHmm, I generally prefer to avoid these kinds of macro tricks because I\nthink they can be confusing to the reader. It's worth it in a case\nlike equalfuncs.c where so much boilerplate code is saved that the\ngain in readability more than makes up for having to go check what the\nmacros do -- but I don't feel that's the case here. There aren't\n*that* many call sites, and I think the code will be easier to\nunderstand without \"return\" statements concealed within macros...\n\n> I ran some benchmarks for json parsing in the backend both before and after these patches, with very slight changes in runtime.\n\nCool, thanks.\n\nSince 0001-0003 have been reviewed by multiple people and nobody's\nobjected, I have committed those. But I made a hash of it: the first\none, I failed to credit any reviewers, or include a Discussion link,\nand I just realized that I should have listed Alvaro's name as a\nreviewer also. Sorry about that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 24 Jan 2020 10:43:07 -0800",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 2020-Jan-24, Peter Eisentraut wrote:\n\n> I'm not fond of the base64 idea btw., because it seems to sort of penalize\n> using non-ASCII characters by making the result completely not human\n> readable. Something along the lines of MIME would be better in that way.\n> There are existing solutions to storing data with metadata around it.\n\nYou mean quoted-printable? That works for me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 16:08:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "\n\n> On Jan 24, 2020, at 10:43 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Since 0001-0003 have been reviewed by multiple people and nobody's\n> objected, I have committed those.\n\nI think 0004-0005 have been reviewed and accepted by both me and Andrew, if I understood him correctly:\n\n> I've reviewed these patches and Robert's, and they seem basically good to me.\n\nCertainly, nothing in those two patches caused me any concern. I’m going to modify my patches as you suggested, get rid of the INSIST macro, and move the pg_wchar changes to their own thread. None of that should require changes in your 0004 or 0005. It won’t bother me if you commit those two. Andrew?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Jan 2020 11:50:00 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "> On Jan 22, 2020, at 10:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> 0004 is a substantially cleaned up version of the patch to make the\n> JSON parser return a result code rather than throwing errors. Names\n> have been fixed, interfaces have been tidied up, and the thing is\n> better integrated with the surrounding code. I would really like\n> comments, if anyone has them, on whether this approach is acceptable.\n> \n> 0005 builds on 0004 by moving three functions from jsonapi.c to\n> jsonfuncs.c. With that done, jsonapi.c has minimal remaining\n> dependencies on the backend environment. It would still need a\n> substitute for elog(ERROR, \"some internal thing is broken\"); I'm\n> thinking of using pg_log_fatal() for that case. It would also need a\n> fix for the problem that pg_mblen() is not available in the front-end\n> environment. I don't know what to do about that yet exactly, but it\n> doesn't seem unsolvable. The frontend environment just needs to know\n> which encoding to use, and needs a way to call PQmblen() rather than\n> pg_mblen().\n\nI have completed the work in the attached 0006 and 0007 patches.\nThese are intended to apply after your 0004 and 0005; they won’t\nwork directly on master which, as of this writing, only contains your\n0001-0003 patches.\n\n0006 finishes moving the json parser to src/include/common and src/common.\n\n0007 adds testing.\n\nI would appreciate somebody looking at the portability issues for 0007\non Windows.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 26 Jan 2020 11:24:39 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Sat, Jan 25, 2020 at 6:20 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jan 24, 2020, at 10:43 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > Since 0001-0003 have been reviewed by multiple people and nobody's\n> > objected, I have committed those.\n>\n> I think 0004-0005 have been reviewed and accepted by both me and Andrew, if I understood him correctly:\n>\n> > I've reviewed these patches and Robert's, and they seem basically good to me.\n>\n> Certainly, nothing in those two patches caused me any concern. I’m going to modify my patches as you suggested, get rid of the INSIST macro, and move the pg_wchar changes to their own thread. None of that should require changes in your 0004 or 0005. It won’t bother me if you commit those two. Andrew?\n>\n\n\nJust reviewed the latest versions of 4 and 5, they look good to me.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 Jan 2020 10:50:53 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 5:54 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jan 22, 2020, at 10:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > 0004 is a substantially cleaned up version of the patch to make the\n> > JSON parser return a result code rather than throwing errors. Names\n> > have been fixed, interfaces have been tidied up, and the thing is\n> > better integrated with the surrounding code. I would really like\n> > comments, if anyone has them, on whether this approach is acceptable.\n> >\n> > 0005 builds on 0004 by moving three functions from jsonapi.c to\n> > jsonfuncs.c. With that done, jsonapi.c has minimal remaining\n> > dependencies on the backend environment. It would still need a\n> > substitute for elog(ERROR, \"some internal thing is broken\"); I'm\n> > thinking of using pg_log_fatal() for that case. It would also need a\n> > fix for the problem that pg_mblen() is not available in the front-end\n> > environment. I don't know what to do about that yet exactly, but it\n> > doesn't seem unsolvable. The frontend environment just needs to know\n> > which encoding to use, and needs a way to call PQmblen() rather than\n> > pg_mblen().\n>\n> I have completed the work in the attached 0006 and 0007 patches.\n> These are intended to apply after your 0004 and 0005; they won’t\n> work directly on master which, as of this writing, only contains your\n> 0001-0003 patches.\n>\n> 0006 finishes moving the json parser to src/include/common and src/common.\n>\n> 0007 adds testing.\n>\n> I would appreciate somebody looking at the portability issues for 0007\n> on Windows.\n>\n\nWe'll need at a minimum something added to src/tools/msvc to build the\ntest program, maybe some other stuff too. I'll take a look.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 Jan 2020 11:39:53 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "\n\n> On Jan 26, 2020, at 5:09 PM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n> We'll need at a minimum something added to src/tools/msvc to build the\n> test program, maybe some other stuff too. I'll take a look.\n\nThanks, much appreciated.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sun, 26 Jan 2020 17:11:11 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "> > 0007 adds testing.\n> >\n> > I would appreciate somebody looking at the portability issues for 0007\n> > on Windows.\n> >\n>\n> We'll need at a minimum something added to src/tools/msvc to build the\n> test program, maybe some other stuff too. I'll take a look.\n\n\nPatch complains that the 0007 patch is malformed:\n\nandrew@ariana:pg_head (master)*$ patch -p 1 <\n~/Downloads/v4-0007-Adding-frontend-tests-for-json-parser.patch\npatching file src/Makefile\npatching file src/test/Makefile\npatching file src/test/bin/.gitignore\npatching file src/test/bin/Makefile\npatching file src/test/bin/README\npatching file src/test/bin/t/001_test_json.pl\npatch: **** malformed patch at line 201: diff --git\na/src/test/bin/test_json.c b/src/test/bin/test_json.c\n\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 Jan 2020 12:21:58 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "> On Jan 26, 2020, at 5:51 PM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n>>> 0007 adds testing.\n>>> \n>>> I would appreciate somebody looking at the portability issues for 0007\n>>> on Windows.\n>>> \n>> \n>> We'll need at a minimum something added to src/tools/msvc to build the\n>> test program, maybe some other stuff too. I'll take a look.\n> \n> \n> Patch complains that the 0007 patch is malformed:\n> \n> andrew@ariana:pg_head (master)*$ patch -p 1 <\n> ~/Downloads/v4-0007-Adding-frontend-tests-for-json-parser.patch\n> patching file src/Makefile\n> patching file src/test/Makefile\n> patching file src/test/bin/.gitignore\n> patching file src/test/bin/Makefile\n> patching file src/test/bin/README\n> patching file src/test/bin/t/001_test_json.pl\n> patch: **** malformed patch at line 201: diff --git\n> a/src/test/bin/test_json.c b/src/test/bin/test_json.c\n\nI manually removed a stray newline in the patch file. I shouldn’t have done that. I’ve removed the stray newline in the sources, committed (with git commit —amend) and am testing again, which is what I should have done the first time….\n\nOk, the tests pass. Here are those two patches again, both regenerated with a fresh invocation of ‘git format-patch’.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 26 Jan 2020 18:05:24 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 9:05 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Ok, the tests pass. Here are those two patches again, both regenerated with a fresh invocation of ‘git format-patch’.\n\nRegarding 0006:\n\n+#ifndef FRONTEND\n #include \"miscadmin.h\"\n-#include \"utils/jsonapi.h\"\n+#endif\n\nI suggest\n\n#ifdef FRONTEND\n#define check_stack_depth()\n#else\n#include \"miscadmin.h\"\n#endif\n\n- lex->token_terminator = s + pg_mblen(s);\n+ lex->token_terminator = s +\npg_wchar_table[lex->input_encoding].mblen((const unsigned char *) s);\n\nCan we use pq_encoding_mblen() here? Regardless, it doesn't seem great\nto add more direct references to pg_wchar_table. I think we should\navoid that.\n\n+ return JSON_BAD_PARSER_STATE;\n\nI don't like this, either. I'm thinking about adding some\nvariable-argument macros that either elog() in backend code or else\npg_log_fatal() and exit(1) in frontend code. There are some existing\nprecedents already (e.g. rmtree.c, pgfnames.c) which could perhaps be\ngeneralized. I think I'll start a new thread about that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 08:30:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Mon, 27 Jan 2020 at 19:00, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Jan 26, 2020 at 9:05 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > Ok, the tests pass. Here are those two patches again, both regenerated with a fresh invocation of ‘git format-patch’.\n>\n> Regarding 0006:\n>\n> +#ifndef FRONTEND\n> #include \"miscadmin.h\"\n> -#include \"utils/jsonapi.h\"\n> +#endif\n>\n> I suggest\n>\n> #ifdef FRONTEND\n> #define check_stack_depth()\n> #else\n> #include \"miscadmin.h\"\n> #endif\n>\n> - lex->token_terminator = s + pg_mblen(s);\n> + lex->token_terminator = s +\n> pg_wchar_table[lex->input_encoding].mblen((const unsigned char *) s);\n>\n> Can we use pq_encoding_mblen() here? Regardless, it doesn't seem great\n> to add more direct references to pg_wchar_table. I think we should\n> avoid that.\n>\n> + return JSON_BAD_PARSER_STATE;\n>\n> I don't like this, either. I'm thinking about adding some\n> variable-argument macros that either elog() in backend code or else\n> pg_log_fatal() and exit(1) in frontend code. There are some existing\n> precedents already (e.g. rmtree.c, pgfnames.c) which could perhaps be\n> generalized. I think I'll start a new thread about that.\n>\n\nHi,\nI can see one warning on HEAD.\n\njsonapi.c: In function ‘json_errdetail’:\njsonapi.c:1068:1: warning: control reaches end of non-void function\n[-Wreturn-type]\n }\n ^\n\nAttaching a patch to fix warning.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 28 Jan 2020 00:32:16 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "> On Jan 27, 2020, at 5:30 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Sun, Jan 26, 2020 at 9:05 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Ok, the tests pass. Here are those two patches again, both regenerated with a fresh invocation of ‘git format-patch’.\n> \n> Regarding 0006:\n> \n> +#ifndef FRONTEND\n> #include \"miscadmin.h\"\n> -#include \"utils/jsonapi.h\"\n> +#endif\n> \n> I suggest\n> \n> #ifdef FRONTEND\n> #define check_stack_depth()\n> #else\n> #include \"miscadmin.h\"\n> #endif\n\nSure, we can do it that way. \n\n> - lex->token_terminator = s + pg_mblen(s);\n> + lex->token_terminator = s +\n> pg_wchar_table[lex->input_encoding].mblen((const unsigned char *) s);\n> \n> Can we use pq_encoding_mblen() here? Regardless, it doesn't seem great\n> to add more direct references to pg_wchar_table. I think we should\n> avoid that.\n\nYes, that looks a lot cleaner.\n\n> \n> + return JSON_BAD_PARSER_STATE;\n> \n> I don't like this, either. I'm thinking about adding some\n> variable-argument macros that either elog() in backend code or else\n> pg_log_fatal() and exit(1) in frontend code. There are some existing\n> precedents already (e.g. rmtree.c, pgfnames.c) which could perhaps be\n> generalized. I think I'll start a new thread about that.\n\nRight, you started the \"pg_croak, or something like it?” thread, which already looks like it might not be resolved quickly. Can we use the\n\n#ifndef FRONTEND\n#define pg_log_warning(...) elog(WARNING, __VA_ARGS__)\n#else\n#include \"common/logging.h\"\n#endif\n\npattern here as a place holder, and revisit it along with the other couple instances of this pattern if/when the “pg_croak, or something like it?” thread is ready for commit? 
I’m calling it json_log_and_abort(…) for now, as I can’t hope to guess what the final name will be.\n\nI’m attaching a new patch set with these three changes including Mahendra’s patch posted elsewhere on this thread.\n\nSince you’ve committed your 0004 and 0005 patches, this v6 patch set is now based on a fresh copy of master.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 27 Jan 2020 12:05:44 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 2:02 PM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n> I can see one warning on HEAD.\n>\n> jsonapi.c: In function ‘json_errdetail’:\n> jsonapi.c:1068:1: warning: control reaches end of non-void function\n> [-Wreturn-type]\n> }\n> ^\n>\n> Attaching a patch to fix warning.\n\nHmm, I don't get a warning there. This function is a switch over an\nenum type with a case for every value of the enum, and every branch\neither does a \"return\" or an \"elog,\" so any code after the switch\nshould be unreachable. It's possible your compiler is too dumb to know\nthat, but I thought there were other places in the code base where we\nassumed that if we handled every defined value of enum, that was good\nenough.\n\nBut maybe not. I found similar coding in CreateDestReceiver(), and\nthat ends with:\n\n /* should never get here */\n pg_unreachable();\n\nSo perhaps we need the same thing here. Does adding that fix it for you?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 10:06:23 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 3:05 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I’m attaching a new patch set with these three changes including Mahendra’s patch posted elsewhere on this thread.\n>\n> Since you’ve committed your 0004 and 0005 patches, this v6 patch set is now based on a fresh copy of master.\n\nOK, so I think this is getting close.\n\nWhat is now 0001 manages to have four (4) conditionals on FRONTEND at\nthe top of the file. This seems like at least one two many. I am OK\nwith this being separate:\n\n+#ifndef FRONTEND\n #include \"postgres.h\"\n+#else\n+#include \"postgres_fe.h\"\n+#endif\n\npostgres(_fe).h has pride of place among includes, so it's reasonable\nto put this in its own section like this.\n\n+#ifdef FRONTEND\n+#define check_stack_depth()\n+#define json_log_and_abort(...) pg_log_fatal(__VA_ARGS__); exit(1);\n+#else\n+#define json_log_and_abort(...) elog(ERROR, __VA_ARGS__)\n+#endif\n\nOK, so here we have a section entirely devoted to our own file-local\nmacros. Also reasonable. But in between, you have both an #ifdef\nFRONTEND and an #ifndef FRONTEND for other includes, and I really\nthink that should be like #ifdef FRONTEND .. #else .. #endif.\n\nAlso, the preprocessor police are on their way to your house now to\narrest you for that first one. You need to write it like this:\n\n#define json_log_and_abort(...) 
\\\n do { pg_log_fatal(__VA_ARGS__); exit(1); } while (0)\n\nOtherwise, hilarity ensues if somebody writes if (my_code_is_buggy)\njson_log_and_abort(\"oops\").\n\n {\n- JsonLexContext *lex = palloc0(sizeof(JsonLexContext));\n+ JsonLexContext *lex;\n+\n+#ifndef FRONTEND\n+ lex = palloc0(sizeof(JsonLexContext));\n+#else\n+ lex = (JsonLexContext*) malloc(sizeof(JsonLexContext));\n+ memset(lex, 0, sizeof(JsonLexContext));\n+#endif\n\nInstead of this, how making no change at all here?\n\n- default:\n- elog(ERROR, \"unexpected json parse state: %d\", ctx);\n }\n+\n+ /* Not reached */\n+ json_log_and_abort(\"unexpected json parse state: %d\", ctx);\n\nThis, too, seems unnecessary.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 10:17:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 4:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 27, 2020 at 2:02 PM Mahendra Singh Thalor\n> <mahi6run@gmail.com> wrote:\n> > I can see one warning on HEAD.\n> >\n> > jsonapi.c: In function ‘json_errdetail’:\n> > jsonapi.c:1068:1: warning: control reaches end of non-void function\n> > [-Wreturn-type]\n> > }\n> > ^\n> >\n> > Attaching a patch to fix warning.\n>\n> Hmm, I don't get a warning there. This function is a switch over an\n> enum type with a case for every value of the enum, and every branch\n> either does a \"return\" or an \"elog,\" so any code after the switch\n> should be unreachable. It's possible your compiler is too dumb to know\n> that, but I thought there were other places in the code base where we\n> assumed that if we handled every defined value of enum, that was good\n> enough.\n>\n> But maybe not. I found similar coding in CreateDestReceiver(), and\n> that ends with:\n>\n> /* should never get here */\n> pg_unreachable();\n>\n> So perhaps we need the same thing here. Does adding that fix it for you?\n\nFTR this has unfortunately the same result on Thomas' automatic patch\ntester, e.g. https://travis-ci.org/postgresql-cfbot/postgresql/builds/642634195#L1968\n\n\n",
"msg_date": "Tue, 28 Jan 2020 16:21:05 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, 28 Jan 2020 at 20:36, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 27, 2020 at 2:02 PM Mahendra Singh Thalor\n> <mahi6run@gmail.com> wrote:\n> > I can see one warning on HEAD.\n> >\n> > jsonapi.c: In function ‘json_errdetail’:\n> > jsonapi.c:1068:1: warning: control reaches end of non-void function\n> > [-Wreturn-type]\n> > }\n> > ^\n> >\n> > Attaching a patch to fix warning.\n>\n> Hmm, I don't get a warning there. This function is a switch over an\n> enum type with a case for every value of the enum, and every branch\n> either does a \"return\" or an \"elog,\" so any code after the switch\n> should be unreachable. It's possible your compiler is too dumb to know\n> that, but I thought there were other places in the code base where we\n> assumed that if we handled every defined value of enum, that was good\n> enough.\n>\n> But maybe not. I found similar coding in CreateDestReceiver(), and\n> that ends with:\n>\n> /* should never get here */\n> pg_unreachable();\n>\n> So perhaps we need the same thing here. Does adding that fix it for you?\n>\n\nHi Robert,\nTom Lane already fixed this and committed yesterday(4589c6a2a30faba53d0655a8e).\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Jan 2020 21:00:05 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 10:30 AM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n> Tom Lane already fixed this and committed yesterday(4589c6a2a30faba53d0655a8e).\n\nOops. OK, thanks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 10:31:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 10:19 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> FTR this has unfortunately the same result on Thomas' automatic patch\n> tester, e.g. https://travis-ci.org/postgresql-cfbot/postgresql/builds/642634195#L1968\n\nThat's unfortunate ... but presumably Tom's changes took care of this?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:11:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 10:19 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> FTR this has unfortunately the same result on Thomas' automatic patch\n>> tester, e.g. https://travis-ci.org/postgresql-cfbot/postgresql/builds/642634195#L1968\n\n> That's unfortunate ... but presumably Tom's changes took care of this?\n\nProbably the cfbot just hasn't retried this build since that fix.\nI don't know what its algorithm is for retrying failed builds, but it\ndoes seem to do so after awhile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:26:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 3:05 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Since you’ve committed your 0004 and 0005 patches, this v6 patch set is now based on a fresh copy of master.\n\nI think the first question for 0005 is whether want this at all.\nInitially, you proposed NOT committing it, but then Andrew reviewed it\nas if it were for commit. I'm not sure whether he was actually saying\nthat it ought to be committed, though, or whether he just missed your\nremarks on the topic. Nobody else has really taken a position. I'm not\n100% convinced that it's necessary to include this, but I'm also not\nparticularly opposed to it. It's a fairly small amount of code, which\nis nice, and perhaps useful as a demonstration of how to use the JSON\nparser in a frontend application, which someone also might find nice.\nAnyone else want to express an opinion?\n\nMeanwhile, here is a round of nitp^H^H^H^Hreview:\n\n-# installcheck and install should not recurse into the subdirectory \"modules\".\n+# installcheck and install should not recurse into the subdirectory \"modules\"\n+# nor \"bin\".\n\nI would probably have just changed this to:\n\n# installcheck and install should not recurse into \"modules\" or \"bin\"\n\nThe details are arguable, but you definitely shouldn't say \"the\nsubdirectory\" and then list two of them.\n\n+This directory contains a set of programs that exercise functionality declared\n+in src/include/common and defined in src/common. The purpose of these programs\n+is to verify that code intended to work both from frontend and backend code do\n+indeed work when compiled and used in frontend code. The structure of this\n+directory makes no attempt to test that such code works in the backend, as the\n+backend has its own tests already, and presumably those tests sufficiently\n+exercide the code as used by it.\n\n\"exercide\" is not spelled correctly, but I also disagree with giving\nthe directory so narrow a charter. 
I think you should just say\nsomething like:\n\nThis directory contains programs that are built and executed for\ntesting purposes,\nbut never installed. It may be used, for example, to test that code in\nsrc/common\nworks in frontend environments.\n\n+# There doesn't seem to be any easy way to get TestLib to use the binaries from\n+# our directory, so we hack up a path to our binary and run that\ndirectly. This\n+# seems brittle enough that some other solution should be found, if possible.\n+\n+my $test_json = join('/', $ENV{TESTDIR}, 'test_json');\n\nI don't know what the right thing to do here is. Perhaps someone more\nfamiliar with TAP testing can comment.\n\n+ set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN(\"pg_test_json\"));\n\nDo we need this? I guess we're not likely to bother with translations\nfor a test program.\n\n+ /*\n+ * Make stdout unbuffered to match stderr; and ensure stderr is unbuffered\n+ * too, which it should already be everywhere except sometimes in Windows.\n+ */\n+ setbuf(stdout, NULL);\n+ setbuf(stderr, NULL);\n\nDo we need this? If so, why?\n\n+ char *json;\n+ unsigned int json_len;\n+ JsonLexContext *lex;\n+ int client_encoding;\n+ JsonParseErrorType parse_result;\n+\n+ json_len = (unsigned int) strlen(str);\n+ client_encoding = PQenv2encoding();\n+\n+ json = strdup(str);\n+ lex = makeJsonLexContextCstringLen(json, strlen(json),\nclient_encoding, true /* need_escapes */);\n+ parse_result = pg_parse_json(lex, &nullSemAction);\n+ fprintf(stdout, _(\"%s\\n\"), (JSON_SUCCESS == parse_result ? \"VALID\" :\n\"INVALID\"));\n+ return;\n\njson_len is set but not used.\n\nNot entirely sure why we are using PQenv2encoding() here.\n\nThe trailing return is unnecessary.\n\nI think it would be a good idea to use json_errdetail() in the failure\ncase, print the error, and have the tests check that we got the\nexpected error.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:32:33 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 10:30 AM Mahendra Singh Thalor\n> <mahi6run@gmail.com> wrote:\n>> Tom Lane already fixed this and committed yesterday(4589c6a2a30faba53d0655a8e).\n\n> Oops. OK, thanks.\n\nYeah, there were multiple issues here:\n\n1. If a switch is expected to cover all values of an enum type,\nwe now prefer not to have a default: case, so that we'll get\ncompiler warnings if somebody adds an enum value and fails to\nupdate the switch.\n\n2. Without a default:, though, you need to have after-the-switch\ncode to catch the possibility that the runtime value was not a\nlegal enum element. Some compilers are trusting and assume that\nthat's not a possible case, but some are not (and Coverity will\ncomplain about it too).\n\n3. Some compilers still don't understand that elog(ERROR) doesn't\nreturn, so you need a dummy return. Perhaps pg_unreachable()\nwould do as well, but project style has been the dummy return for\na long time ... and I'm not entirely convinced by the assumption\nthat every compiler understands pg_unreachable(), anyway.\n\n(I know Robert knows all this stuff, even if he momentarily\nforgot. Just summarizing for onlookers.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:34:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
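Tom's three points can be illustrated with a standalone sketch of the pattern. This is not the actual `json_errdetail()` code; `DemoParseState` and `state_name` are invented names used only to show the shape of an exhaustive enum switch.

```c
#include <assert.h>
#include <string.h>

typedef enum
{
	DEMO_PARSE_VALUE,
	DEMO_PARSE_STRING,
	DEMO_PARSE_END
} DemoParseState;

/*
 * Point 1: no default: case, so a compiler can warn if someone adds an
 * enum value without updating the switch.
 * Point 2: the statement after the switch catches a runtime value that
 * is not a legal enum element (e.g. a corrupted variable).
 * Point 3: that statement doubles as the "dummy return" needed for
 * compilers that cannot prove the error path does not fall through.
 */
static const char *
state_name(DemoParseState ctx)
{
	switch (ctx)
	{
		case DEMO_PARSE_VALUE:
			return "value";
		case DEMO_PARSE_STRING:
			return "string";
		case DEMO_PARSE_END:
			return "end";
	}

	/* not reached, except via an out-of-range value */
	return "unexpected parse state";
}
```

In the backend the after-the-switch line would be an `elog(ERROR, ...)` followed by a dummy return rather than a quiet fallback string; the control-flow shape is the same.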
{
"msg_contents": "On Tue, Jan 28, 2020 at 5:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Jan 28, 2020 at 10:19 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> FTR this has unfortunately the same result on Thomas' automatic patch\n> >> tester, e.g. https://travis-ci.org/postgresql-cfbot/postgresql/builds/642634195#L1968\n>\n> > That's unfortunate ... but presumably Tom's changes took care of this?\n>\n> Probably the cfbot just hasn't retried this build since that fix.\n> I don't know what its algorithm is for retrying failed builds, but it\n> does seem to do so after awhile.\n\nYes, I seem to remember that Thomas put some rules in place to avoid\nrebuilding everything all the time. Patches that were rebuilt since\nare indeed starting to get back to green, so it's all good!\n\n\n",
"msg_date": "Tue, 28 Jan 2020 17:51:28 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 11:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 3. Some compilers still don't understand that elog(ERROR) doesn't\n> return, so you need a dummy return. Perhaps pg_unreachable()\n> would do as well, but project style has been the dummy return for\n> a long time ... and I'm not entirely convinced by the assumption\n> that every compiler understands pg_unreachable(), anyway.\n\nIs the example of CreateDestReceiver() sufficient to show that this is\nnot a problem in practice?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 12:20:07 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 11:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 3. Some compilers still don't understand that elog(ERROR) doesn't\n>> return, so you need a dummy return. Perhaps pg_unreachable()\n>> would do as well, but project style has been the dummy return for\n>> a long time ... and I'm not entirely convinced by the assumption\n>> that every compiler understands pg_unreachable(), anyway.\n\n> Is the example of CreateDestReceiver() sufficient to show that this is\n> not a problem in practice?\n\nDunno. I don't see any warnings about that in the buildfarm, but\nthat's not a very large sample of non-gcc compilers.\n\nAnother angle here is that on non-gcc compilers, pg_unreachable()\nis going to expand to an abort() call, which is likely to eat more\ncode space than a dummy \"return 0\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 13:16:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "I wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Is the example of CreateDestReceiver() sufficient to show that this is\n>> not a problem in practice?\n\n> Dunno. I don't see any warnings about that in the buildfarm, but\n> that's not a very large sample of non-gcc compilers.\n\nBTW, now that I think about it, CreateDestReceiver is not up to project\nstandards anyway, in that it fails to provide reasonable behavior in\nthe case where what's passed is not a legal value of the enum.\nWhat you'll get, if you're lucky, is a SIGABRT crash with no\nindication of the cause --- or if you're not lucky, some very\nhard-to-debug crash elsewhere as a result of the function returning\na garbage pointer. So independently of whether the current coding\nsuppresses compiler warnings reliably, I think we ought to replace it\nwith elog()-and-return-NULL. Admittedly, that's wasting a few bytes\non a case that should never happen ... but we haven't ever hesitated\nto do that elsewhere, if it'd make the problem more diagnosable.\n\nIOW, there's a good reason why there are exactly no other uses\nof that coding pattern.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 13:32:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 1:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, now that I think about it, CreateDestReceiver is not up to project\n> standards anyway, in that it fails to provide reasonable behavior in\n> the case where what's passed is not a legal value of the enum.\n> What you'll get, if you're lucky, is a SIGABRT crash with no\n> indication of the cause --- or if you're not lucky, some very\n> hard-to-debug crash elsewhere as a result of the function returning\n> a garbage pointer. So independently of whether the current coding\n> suppresses compiler warnings reliably, I think we ought to replace it\n> with elog()-and-return-NULL. Admittedly, that's wasting a few bytes\n> on a case that should never happen ... but we haven't ever hesitated\n> to do that elsewhere, if it'd make the problem more diagnosable.\n\nWell, I might be responsible for the CreateDestReceiver thing -- or I\nmight not, I haven't checked -- but I do think that style is a bit\ncleaner and more elegant. I think it's VERY unlikely that anyone would\never manage to call it with something that's not a legal value of the\nenum, and if they do, I think the chances of surviving are basically\nnil, and frankly, I'd rather die. If you asked me where you want me to\nstore my output and I tell you to store it in the sdklgjsdjgslkdg, you\nreally should refuse to do anything at all, not just stick my output\nsomeplace-or-other and hope for the best.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 14:03:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 1:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, now that I think about it, CreateDestReceiver is not up to project\n>> standards anyway, in that it fails to provide reasonable behavior in\n>> the case where what's passed is not a legal value of the enum.\n\n> Well, I might be responsible for the CreateDestReceiver thing -- or I\n> might not, I haven't checked -- but I do think that style is a bit\n> cleaner and more elegant. I think it's VERY unlikely that anyone would\n> ever manage to call it with something that's not a legal value of the\n> enum, and if they do, I think the chances of surviving are basically\n> nil, and frankly, I'd rather die. If you asked me where you want me to\n> store my output and I tell you to store it in the sdklgjsdjgslkdg, you\n> really should refuse to do anything at all, not just stick my output\n> someplace-or-other and hope for the best.\n\nWell, yeah, that's exactly my point. But in my book, \"refuse to do\nanything\" should be \"elog(ERROR)\", not \"invoke undefined behavior\".\nAn actual abort() call might be all right here, in that at least\nwe'd know what would happen and we could debug it once we got hold\nof a stack trace. But pg_unreachable() is not that. Basically, if\nthere's *any* circumstance, broken code or not, where control could\nreach a pg_unreachable() call, you did it wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 14:29:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, yeah, that's exactly my point. But in my book, \"refuse to do\n> anything\" should be \"elog(ERROR)\", not \"invoke undefined behavior\".\n> An actual abort() call might be all right here, in that at least\n> we'd know what would happen and we could debug it once we got hold\n> of a stack trace. But pg_unreachable() is not that. Basically, if\n> there's *any* circumstance, broken code or not, where control could\n> reach a pg_unreachable() call, you did it wrong.\n\nI don't really agree. I think such defensive coding is more than\njustified when the input is coming from a file on disk or some other\nexternal source where it might have been corrupted. For instance, I\nthink the fact that the code which deforms heap tuples will cheerfully\nsail off the end of the buffer or seg fault if the tuple descriptor\ndoesn't match the tuple is a seriously bad thing. It results in actual\nproduction crashes that could be avoided with more defensive coding.\nAdmittedly, there would likely be a performance cost, which might not\nbe a reason to do it, but if that cost is small I would probably vote\nfor paying it, because this is something that actually happens to\nusers on a pretty regular basis.\n\nIn the case at hand, though, there are no constants of type\nCommandDest that come from any place other than a constant in the\nprogram text, and it seems unlikely that this will ever be different\nin the future. So, how could we ever end up with a value that's not in\nthe enum? I guess the program text itself could be corrupted, but we\ncannot defend against that.\n\nMind you, I'm not going to put up a huge stink if you're bound and\ndetermined to go change this. 
I prefer it the way that it is, and I\nthink that preference is well-justified by facts on the ground, but I\ndon't think it's worth fighting about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 15:08:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, yeah, that's exactly my point. But in my book, \"refuse to do\n>> anything\" should be \"elog(ERROR)\", not \"invoke undefined behavior\".\n>> An actual abort() call might be all right here, in that at least\n>> we'd know what would happen and we could debug it once we got hold\n>> of a stack trace. But pg_unreachable() is not that. Basically, if\n>> there's *any* circumstance, broken code or not, where control could\n>> reach a pg_unreachable() call, you did it wrong.\n\n> I don't really agree. I think such defensive coding is more than\n> justified when the input is coming from a file on disk or some other\n> external source where it might have been corrupted.\n\nThere's certainly an argument to be made that an elog() call is an\nunjustified expenditure of code space and we should just do an abort()\n(but still not pg_unreachable(), IMO). However, what I'm really on about\nhere is that CreateDestReceiver is out of step with nigh-universal project\npractice. If it's not worth having an elog() here, then there are\nliterally hundreds of other elog() calls that we ought to be nuking on\nthe same grounds. I don't really want to run around and have a bunch\nof debates about exactly which extant elog() calls are effectively\nunreachable and which are not. That's not always very clear, and even\nif it is clear today it might not be tomorrow. The minute somebody calls\nCreateDestReceiver with a non-constant argument, the issue becomes open\nagain. And I'd rather not have to stop and think hard about the tradeoff\nbetween elog() and abort() when I write such functions in future.\n\nSo basically, my problem with this is that I don't think it's a coding\nstyle we want to encourage, because it's too fragile. And there's no\ngood argument (like performance) to leave it that way. 
I quite agree\nwith you that there are places like tuple deforming where we're taking\nmore chances than I'd like --- but there is a noticeable performance\ncost to being paranoid there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 15:42:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
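The report-and-return-NULL style Tom argues for (versus ending the factory with `pg_unreachable()`) can be sketched outside PostgreSQL. All names here (`DemoCommandDest`, `create_receiver`, the error flag) are hypothetical stand-ins for `CommandDest`/`CreateDestReceiver`, and the flag substitutes for an `elog(ERROR)` so the failure path is testable.

```c
#include <assert.h>
#include <stddef.h>

typedef enum
{
	DEMO_DEST_NONE,
	DEMO_DEST_DEBUG
} DemoCommandDest;

typedef struct DemoReceiver
{
	DemoCommandDest mydest;
} DemoReceiver;

static DemoReceiver none_receiver = {DEMO_DEST_NONE};
static DemoReceiver debug_receiver = {DEMO_DEST_DEBUG};

static int bad_dest_reported = 0;

/*
 * An out-of-range value falls past the exhaustive switch and hits a
 * diagnosable error path, instead of reaching pg_unreachable() and
 * invoking undefined behavior.  The few extra bytes buy a clear
 * failure mode if the argument is ever computed rather than constant.
 */
static DemoReceiver *
create_receiver(DemoCommandDest dest)
{
	switch (dest)
	{
		case DEMO_DEST_NONE:
			return &none_receiver;
		case DEMO_DEST_DEBUG:
			return &debug_receiver;
	}

	/* should never get here; report and return NULL, don't crash */
	bad_dest_reported = 1;		/* stand-in for elog(ERROR, ...) */
	return NULL;
}
```

Robert's counter-position is that since every `CommandDest` in the tree is a compile-time constant, the error path is dead weight; the trade-off is a few bytes of code against debuggability if that assumption ever stops holding.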
{
"msg_contents": "> On Jan 28, 2020, at 8:32 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Jan 27, 2020 at 3:05 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Since you’ve committed your 0004 and 0005 patches, this v6 patch set is now based on a fresh copy of master.\n> \n> I think the first question for 0005 is whether want this at all.\n> Initially, you proposed NOT committing it, but then Andrew reviewed it\n> as if it were for commit. I'm not sure whether he was actually saying\n> that it ought to be committed, though, or whether he just missed your\n> remarks on the topic. Nobody else has really taken a position. I'm not\n> 100% convinced that it's necessary to include this, but I'm also not\n> particularly opposed to it. It's a fairly small amount of code, which\n> is nice, and perhaps useful as a demonstration of how to use the JSON\n> parser in a frontend application, which someone also might find nice.\n\nOnce Andrew reviewed it, I started thinking about it as something that might get committed. In that context, I think there should be a lot more tests in this new src/test/bin directory for other common code, but adding those as part of this patch just seems to confuse this patch.\n\nIn addition to adding frontend tests for code already in src/common, the conversation in another thread about adding frontend versions of elog and ereport seem like candidates for tests in this location. Sure, you can add an elog into a real frontend tool, such as pg_ctl, and update the tests for that program to expect that elog’s output, but what if you just want to exhaustively test the elog infrastructure in the frontend spanning multiple locales, encodings, whatever? You’ve also recently mentioned the possibility of having memory contexts in frontend code. Testing those seems like a good fit, too.\n\nI decided to leave this in the next version of the patch set, v7. 
v6 had three files, the second being something that already got committed in a different form, so this is now in v7-0002 whereas it had been in v6-0003. v6-0002 has no equivalent in v7.\n\n> Anyone else want to express an opinion?\n> \n> Meanwhile, here is a round of nitp^H^H^H^Hreview:\n> \n> -# installcheck and install should not recurse into the subdirectory \"modules\".\n> +# installcheck and install should not recurse into the subdirectory \"modules\"\n> +# nor \"bin\".\n> \n> I would probably have just changed this to:\n> \n> # installcheck and install should not recurse into \"modules\" or \"bin\"\n> \n> The details are arguable, but you definitely shouldn't say \"the\n> subdirectory\" and then list two of them.\n\nI read that as “nor [the subdirectory] bin” with the [the subdirectory] portion elided, and it doesn’t sound anomalous to me, but your formulation is more compact. I have used it in v7 of the patch set. Thanks.\n\n> \n> +This directory contains a set of programs that exercise functionality declared\n> +in src/include/common and defined in src/common. The purpose of these programs\n> +is to verify that code intended to work both from frontend and backend code do\n> +indeed work when compiled and used in frontend code. The structure of this\n> +directory makes no attempt to test that such code works in the backend, as the\n> +backend has its own tests already, and presumably those tests sufficiently\n> +exercide the code as used by it.\n> \n> \"exercide\" is not spelled correctly, but I also disagree with giving\n> the directory so narrow a charter. I think you should just say\n> something like:\n> \n> This directory contains programs that are built and executed for\n> testing purposes,\n> but never installed. 
It may be used, for example, to test that code in\n> src/common\n> works in frontend environments.\n\nYour formulation sounds fine, and I’ve used it in v7.\n\n> +# There doesn't seem to be any easy way to get TestLib to use the binaries from\n> +# our directory, so we hack up a path to our binary and run that\n> directly. This\n> +# seems brittle enough that some other solution should be found, if possible.\n> +\n> +my $test_json = join('/', $ENV{TESTDIR}, 'test_json');\n> \n> I don't know what the right thing to do here is. Perhaps someone more\n> familiar with TAP testing can comment.\n\nYeah, I was hoping that might get a comment from Andrew. I think if it works as-is on windows, we could just use it this way until it causes a problem on some platform or other. It’s not a runtime issue, being only a build-time test, and only then when tap tests are enabled *and* running check-world, so nobody should really be adversely affected. I’ll likely get around to testing this on Windows, but I don’t have any Windows environments set up yet, as that is still on my todo list.\n\n> + set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN(\"pg_test_json\"));\n> \n> Do we need this? I guess we're not likely to bother with translations\n> for a test program.\n\nRemoved.\n\n> + /*\n> + * Make stdout unbuffered to match stderr; and ensure stderr is unbuffered\n> + * too, which it should already be everywhere except sometimes in Windows.\n> + */\n> + setbuf(stdout, NULL);\n> + setbuf(stderr, NULL);\n> \n> Do we need this? If so, why?\n\nFor the current test setup, it is not needed. The tap test executes this program (test_json) once per json string, and exits after printing a single line. Surely the tap test wouldn’t have problems hanging on an unflushed buffer for a program that has exited. 
I was imagining this code might grow more complex, with the tap test communicating repeatedly with the same instance of test_json, such as if we extend the json parser to iterate over chunks of the input json string.\n\nI’ve removed this for v7, since we don’t need it yet.\n\n> + char *json;\n> + unsigned int json_len;\n> + JsonLexContext *lex;\n> + int client_encoding;\n> + JsonParseErrorType parse_result;\n> +\n> + json_len = (unsigned int) strlen(str);\n> + client_encoding = PQenv2encoding();\n> +\n> + json = strdup(str);\n> + lex = makeJsonLexContextCstringLen(json, strlen(json),\n> client_encoding, true /* need_escapes */);\n> + parse_result = pg_parse_json(lex, &nullSemAction);\n> + fprintf(stdout, _(\"%s\\n\"), (JSON_SUCCESS == parse_result ? \"VALID\" :\n> \"INVALID\"));\n> + return;\n> \n> json_len is set but not used.\n\nYou’re right. I’ve removed it.\n\n> Not entirely sure why we are using PQenv2encoding() here.\n\nThis program, which passes possibly json formatted strings into the parser, gets those strings from perl through the shell. If locale settings on the machine where this runs might break something about that for a real client application, then our test should break in the same way. Hard-coding “C” or “POSIX” or whatever for the locale side-steps part of the issue we’re trying to test. No?\n\nI’m leaving it as is for v7, but if you still disagree, I’ll change it. Let me know what you want me to change it *to*, though, as there is no obvious choice that I can see.\n\n> The trailing return is unnecessary.\n\nOk, I’ve removed it.\n\n> I think it would be a good idea to use json_errdetail() in the failure\n> case, print the error, and have the tests check that we got the\n> expected error.\n\nOh, yeah, I like that idea. That works, and is included in v7.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Jan 2020 14:28:32 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Thanks for your review. I considered all of this along with your review comments in another email prior to sending v7 in response to that other email a few minutes ago.\n\n> On Jan 28, 2020, at 7:17 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Jan 27, 2020 at 3:05 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I’m attaching a new patch set with these three changes including Mahendra’s patch posted elsewhere on this thread.\n>> \n>> Since you’ve committed your 0004 and 0005 patches, this v6 patch set is now based on a fresh copy of master.\n> \n> OK, so I think this is getting close.\n> \n> What is now 0001 manages to have four (4) conditionals on FRONTEND at\n> the top of the file. This seems like at least one two many. \n\nYou are referencing this section, copied here from the patch:\n\n> #ifndef FRONTEND\n> #include \"postgres.h\"\n> #else\n> #include \"postgres_fe.h\"\n> #endif\n> \n> #include \"common/jsonapi.h\"\n> \n> #ifdef FRONTEND\n> #include \"common/logging.h\"\n> #endif\n> \n> #include \"mb/pg_wchar.h\"\n> \n> #ifndef FRONTEND\n> #include \"miscadmin.h\"\n> #endif\n\nI merged these a bit. See v7-0001 for details.\n\n> Also, the preprocessor police are on their way to your house now to\n> arrest you for that first one. You need to write it like this:\n> \n> #define json_log_and_abort(...) \\\n> do { pg_log_fatal(__VA_ARGS__); exit(1); } while (0)\n\nYes, right, I had done that and somehow didn’t get it into the patch. I’ll have coffee and donuts waiting.\n\n> {\n> - JsonLexContext *lex = palloc0(sizeof(JsonLexContext));\n> + JsonLexContext *lex;\n> +\n> +#ifndef FRONTEND\n> + lex = palloc0(sizeof(JsonLexContext));\n> +#else\n> + lex = (JsonLexContext*) malloc(sizeof(JsonLexContext));\n> + memset(lex, 0, sizeof(JsonLexContext));\n> +#endif\n> \n> Instead of this, how making no change at all here?\n\nYes, good point. 
I had split that into frontend vs backend because I was using palloc0fast for the backend, which seems to me the preferred function when the size is compile-time known, like it is here, and there is no palloc0fast in fe_memutils.h for frontend use. I then changed back to palloc0 when I noticed that pretty much nowhere else similar to this in the project uses palloc0fast. I neglected to change back completely, which left what you are quoting.\n\nOut of curiosity, why is palloc0fast not used in more places?\n\n> - default:\n> - elog(ERROR, \"unexpected json parse state: %d\", ctx);\n> }\n> +\n> + /* Not reached */\n> + json_log_and_abort(\"unexpected json parse state: %d\", ctx);\n> \n> This, too, seems unnecessary.\n\nThis was in response to Mahendra’s report of a compiler warning, which I didn’t get on my platform. The code in master changed a bit since v6 was written, so v7 just goes with how the newer committed code does this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 14:35:44 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On 1/28/20 5:28 PM, Mark Dilger wrote:\n>\n>\n>> +# There doesn't seem to be any easy way to get TestLib to use the binaries from\n>> +# our directory, so we hack up a path to our binary and run that\n>> directly. This\n>> +# seems brittle enough that some other solution should be found, if possible.\n>> +\n>> +my $test_json = join('/', $ENV{TESTDIR}, 'test_json');\n>>\n>> I don't know what the right thing to do here is. Perhaps someone more\n>> familiar with TAP testing can comment.\n> Yeah, I was hoping that might get a comment from Andrew. I think if it works as-is on windows, we could just use it this way until it causes a problem on some platform or other. It’s not a runtime issue, being only a build-time test, and only then when tap tests are enabled *and* running check-world, so nobody should really be adversely affected. I’ll likely get around to testing this on Windows, but I don’t have any Windows environments set up yet, as that is still on my todo list.\n>\n\n\nI think using TESTDIR is Ok, but we do need a little more on Windows,\nbecause the executable name will be different. See attached revised\nversion of the test script.\n\n\n\nWe also need some extra stuff for MSVC. Something like the attached\nchange to src/tools/msvc/Mkvcbuild.pm. Also, the Makefile will need a\nline like:\n\n\nPROGRAM = test_json\n\n\nI'm still not 100% on the location of the test. I think the way the msvc\nsuite works this should be in its own dedicated directory e.g.\nsrc/test/json_parse.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 29 Jan 2020 01:02:13 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 5:35 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I merged these a bit. See v7-0001 for details.\n\nI jiggered that a bit more and committed this. I couldn't see the\npoint of having both the FRONTEND and non-FRONTEND code include\npg_wchar.h.\n\nI'll wait to see what you make of Andrew's latest comments before\ndoing anything further.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:27:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 5:35 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I merged these a bit. See v7-0001 for details.\n\n> I jiggered that a bit more and committed this. I couldn't see the\n> point of having both the FRONTEND and non-FRONTEND code include\n> pg_wchar.h.\n\nFirst buildfarm report is not positive:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2020-01-29%2015%3A30%3A26\n\n json.obj : error LNK2019: unresolved external symbol makeJsonLexContextCstringLen referenced in function json_recv [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n jsonb.obj : error LNK2001: unresolved external symbol makeJsonLexContextCstringLen [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n jsonfuncs.obj : error LNK2001: unresolved external symbol makeJsonLexContextCstringLen [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n json.obj : error LNK2019: unresolved external symbol json_lex referenced in function json_typeof [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n json.obj : error LNK2019: unresolved external symbol IsValidJsonNumber referenced in function datum_to_json [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n json.obj : error LNK2001: unresolved external symbol nullSemAction [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n jsonfuncs.obj : error LNK2019: unresolved external symbol pg_parse_json referenced in function json_strip_nulls [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n jsonfuncs.obj : error LNK2019: unresolved external symbol json_count_array_elements referenced in function get_array_start [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n jsonfuncs.obj : error LNK2019: unresolved external symbol json_errdetail referenced in function json_ereport_error 
[c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n .\\Release\\postgres\\postgres.exe : fatal error LNK1120: 7 unresolved externals [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:45:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 10:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Jan 28, 2020 at 5:35 PM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> >> I merged these a bit. See v7-0001 for details.\n>\n> > I jiggered that a bit more and committed this. I couldn't see the\n> > point of having both the FRONTEND and non-FRONTEND code include\n> > pg_wchar.h.\n>\n> First buildfarm report is not positive:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2020-01-29%2015%3A30%3A26\n>\n> json.obj : error LNK2019: unresolved external symbol makeJsonLexContextCstringLen referenced in function json_recv [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> jsonb.obj : error LNK2001: unresolved external symbol makeJsonLexContextCstringLen [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> jsonfuncs.obj : error LNK2001: unresolved external symbol makeJsonLexContextCstringLen [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> json.obj : error LNK2019: unresolved external symbol json_lex referenced in function json_typeof [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> json.obj : error LNK2019: unresolved external symbol IsValidJsonNumber referenced in function datum_to_json [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> json.obj : error LNK2001: unresolved external symbol nullSemAction [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> jsonfuncs.obj : error LNK2019: unresolved external symbol pg_parse_json referenced in function json_strip_nulls [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> jsonfuncs.obj : error LNK2019: unresolved external symbol json_count_array_elements referenced in function get_array_start [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> jsonfuncs.obj : error LNK2019: unresolved external 
symbol json_errdetail referenced in function json_ereport_error [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n> .\\Release\\postgres\\postgres.exe : fatal error LNK1120: 7 unresolved externals [c:\\pgbuildfarm\\pgbuildroot\\HEAD\\pgsql.build\\postgres.vcxproj]\n\nHrrm, OK. I think it must need a sprinkling of Windows-specific magic.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:48:17 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 10:48 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hrrm, OK. I think it must need a sprinkling of Windows-specific magic.\n\nI see that the patch Andrew posted earlier adjusts Mkvcbuild.pm's\n@pgcommonallfiles, so I pushed that fix. The other hunks there should\ngo into the patch to add a test_json utility, I think. Hopefully that\nwill fix it, but I guess we'll see.\n\nI was under the impression that the MSVC build gets the list of files\nto build by parsing the Makefiles, but I guess that's not true at\nleast in the case of src/common.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:55:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 4:32 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 1/28/20 5:28 PM, Mark Dilger wrote:\n> >\n> >\n> >> +# There doesn't seem to be any easy way to get TestLib to use the binaries from\n> >> +# our directory, so we hack up a path to our binary and run that\n> >> directly. This\n> >> +# seems brittle enough that some other solution should be found, if possible.\n> >> +\n> >> +my $test_json = join('/', $ENV{TESTDIR}, 'test_json');\n> >>\n> >> I don't know what the right thing to do here is. Perhaps someone more\n> >> familiar with TAP testing can comment.\n> > Yeah, I was hoping that might get a comment from Andrew. I think if it works as-is on windows, we could just use it this way until it causes a problem on some platform or other. It’s not a runtime issue, being only a build-time test, and only then when tap tests are enabled *and* running check-world, so nobody should really be adversely affected. I’ll likely get around to testing this on Windows, but I don’t have any Windows environments set up yet, as that is still on my todo list.\n> >\n>\n>\n> I think using TESTDIR is Ok,\n\n\nI've changed my mind, I don't think that will work for MSVC, the\nexecutable gets built elsewhere for that. I'll try to come up with\nsomething portable.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 Jan 2020 07:32:08 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "\n\n> On Jan 29, 2020, at 1:02 PM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n> On Wed, Jan 29, 2020 at 4:32 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> \n>> \n>> On 1/28/20 5:28 PM, Mark Dilger wrote:\n>>> \n>>> \n>>>> +# There doesn't seem to be any easy way to get TestLib to use the binaries from\n>>>> +# our directory, so we hack up a path to our binary and run that\n>>>> directly. This\n>>>> +# seems brittle enough that some other solution should be found, if possible.\n>>>> +\n>>>> +my $test_json = join('/', $ENV{TESTDIR}, 'test_json');\n>>>> \n>>>> I don't know what the right thing to do here is. Perhaps someone more\n>>>> familiar with TAP testing can comment.\n>>> Yeah, I was hoping that might get a comment from Andrew. I think if it works as-is on windows, we could just use it this way until it causes a problem on some platform or other. It’s not a runtime issue, being only a build-time test, and only then when tap tests are enabled *and* running check-world, so nobody should really be adversely affected. I’ll likely get around to testing this on Windows, but I don’t have any Windows environments set up yet, as that is still on my todo list.\n>>> \n>> \n>> \n>> I think using TESTDIR is Ok,\n> \n> \n> I've changed my mind, I don't think that will work for MSVC, the\n> executable gets built elsewhere for that. I'll try to come up with\n> something portable.\n\nI’m just now working on getting my Windows VMs set up with Visual Studio and whatnot, per the wiki instructions, so I don’t need to burden you with this sort of Windows task in the future. If there are any gotchas not mentioned on the wiki, I’d appreciate pointers about how to avoid them. 
I’ll try to help devise a solution, or test what you come up with, once I’m properly set up for that.\n\nFor no particular reason, I chose Windows Server 2019 and Windows 10 Pro.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 29 Jan 2020 13:06:35 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 7:36 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jan 29, 2020, at 1:02 PM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> >\n> > On Wed, Jan 29, 2020 at 4:32 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> >>\n> >>\n> >> On 1/28/20 5:28 PM, Mark Dilger wrote:\n> >>>\n> >>>\n> >>>> +# There doesn't seem to be any easy way to get TestLib to use the binaries from\n> >>>> +# our directory, so we hack up a path to our binary and run that\n> >>>> directly. This\n> >>>> +# seems brittle enough that some other solution should be found, if possible.\n> >>>> +\n> >>>> +my $test_json = join('/', $ENV{TESTDIR}, 'test_json');\n> >>>>\n> >>>> I don't know what the right thing to do here is. Perhaps someone more\n> >>>> familiar with TAP testing can comment.\n> >>> Yeah, I was hoping that might get a comment from Andrew. I think if it works as-is on windows, we could just use it this way until it causes a problem on some platform or other. It’s not a runtime issue, being only a build-time test, and only then when tap tests are enabled *and* running check-world, so nobody should really be adversely affected. I’ll likely get around to testing this on Windows, but I don’t have any Windows environments set up yet, as that is still on my todo list.\n> >>>\n> >>\n> >>\n> >> I think using TESTDIR is Ok,\n> >\n> >\n> > I've changed my mind, I don't think that will work for MSVC, the\n> > executable gets built elsewhere for that. I'll try to come up with\n> > something portable.\n>\n> I’m just now working on getting my Windows VMs set up with Visual Studio and whatnot, per the wiki instructions, so I don’t need to burden you with this sort of Windows task in the future. If there are any gotchas not mentioned on the wiki, I’d appreciate pointers about how to avoid them. 
I’ll try to help devise a solution, or test what you come up with, once I’m properly set up for that.\n>\n> For no particular reason, I chose Windows Server 2019 and Windows 10 Pro.\n>\n\n\nOne VM should be sufficient. Either W10Pro or WS2019 would be fine. I\nhave buildfarm animals running on both.\n\nHere's what I got working after a lot of trial and error. (This will\nrequire a tiny change in the buildfarm script to make the animals test\nit). Note that there is one test that I couldn't get working, so I\nskipped it. If you can find out why it fails, so much the better ... it\nseems to be related to how the command processor handles quotes.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 30 Jan 2020 16:40:44 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: making the backend's json parser work in frontend code"
}
] |
[
{
"msg_contents": "There are some outstanding questions about how B-Tree deduplication\n[1] should be configured, and whether or not it should be enabled by\ndefault. I'm starting this new thread in the hopes of generating\ndiscussion on these high level questions.\n\nThe commit message of the latest version of the patch has a reasonable\nsummary of its overall design, that might be worth reviewing before\nreading on:\n\nhttps://www.postgresql.org/message-id/CAH2-Wz=Tr6mxMsKRmv_=9-05_O9QWqOzQ8GweRV2DXS6+Y38QQ@mail.gmail.com\n\n(If you want to go deeper than that, check out the nbtree README changes.)\n\nB-Tree deduplication comes in two basic varieties:\n\n* Unique index deduplication.\n\n* Non-unique index deduplication.\n\nEach variety works in essentially the same way. However, they differ\nin how and when deduplication is actually applied. We can infer quite\na lot about versioning within a unique index, and we know that version\nchurn is the only problem that we should try to address -- it's not\nreally about space efficiency, since we know that there won't be any\nduplicates in the long run. Also, workloads that have a lot of version\nchurn in unique indexes are generally quite reliant on LP_DEAD bit\nsetting within _bt_check_unique() (this is not to be confused with the\nsimilar kill_prior_tuple LP_DEAD bit optimization) -- we need to be\nsensitive to that.\n\nDespite the fact that there are many more similarities than\ndifferences here, I would like to present each variety as a different\nthing to users (actually, I don't really want users to have to think\nabout unique index deduplication at all).\n\nI believe that the best path forward for users is to make\ndeduplication in non-unique indexes a user-visible feature with\ndocumented knobs (a GUC and an index storage parameter), while leaving\nunique index deduplication as an internal thing that is barely\ndocumented in the sgml user docs (I'll have a paragraph about it in\nthe B-Tree internals chapter). 
Unique index deduplication won't care\nabout the GUC, and will only trigger a deduplication pass when the\nincoming tuple is definitely a duplicate of an existing tuple (this\nsignals version churn cheaply but reliably). The index storage\nparameter will be respected with unique indexes, but only as a\ndebugging option -- our presumption is that nobody will want to\ndisable deduplication in unique indexes, since leaving it on has\nvirtually no downside (because we know exactly when to trigger it, and\nwhen not to). Unique index deduplication is an internal thing that\nusers benefit from, but aren't really aware of, much like\nopportunistic deletion of LP_DEAD items. (A secondary benefit of this\napproach is that we don't have to have an awkward section in the\ndocumentation that explains why deduplication in unique indexes isn't\nactually an oxymoron.)\n\nThoughts on presenting unique index deduplication a separate\ninternal-only optimization, that doesn't care about the GUC?\n\nI also want to talk about a related but separate topic. I propose that\nthe deduplicate_btree_items GUC (which only affects non-unique\nindexes) be turned on by default in Postgres 13. This can be reviewed\nat the end of the beta period, just like it was with parallelism and\nwith JIT. Note that this means that we'd opportunistically perform a\ndeduplication pass at the point that we usually have to split the\npage, even though in general there is no reason to think that that\nwill work out. 
(We have no better way of applying deduplication than\n\"try it and see\", unless it's a unique index that has the version\nchurn trigger heuristic that I just described.)\n\nAppend-only tables with multiple non-unique indexes that happen to\nhave only unique key values will pay a cost without seeing a benefit.\nI believe that I saw a 3% throughput cost when I assessed this using\nsomething called the \"insert benchmark\" [2] a couple of months ago\n(this is maintained by Mark Callaghan, the Facebook MyRocks guy). I\nneed to do more work on quantifying the cost with a recent version of\nthe patch, especially the cost of only having one LP_DEAD bit for an\nentire posting list tuple, but in general I think that this is likely\nto be worth it. Consider these points in favor of enabling\ndeduplication by default:\n\n* This insert-only workload is something that Postgres does\nparticularly well with -- append-only tables are reported to have\ngreater throughput in Postgres than in other comparable RDBMSs.\nPresumably this is because we don't need UNDO logs, and don't do index\nkey locking in the same way (apparently even Oracle has locks in\nindexes that protect logical transaction state). We are paying a small\ncost by enabling deduplication by default, but it is paid in an area\nwhere we already do particularly well.\n\n* Deduplication can be turned off at any time. nbtree posting list\ntuples are really not like the ones in GIN, in that we're not\nmaintaining them over time. 
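As a rough illustration of that \"try it and see\" step -- merge equal-key tuples into posting lists, then check whether the incoming tuple now fits, so the split can be avoided -- here is a simplified sketch. The names, sizes, and space accounting below are invented for illustration only; the real nbtree code is in C and is far more careful about space:

```python
TUPLE_OVERHEAD = 16   # assumed fixed per-tuple overhead, bytes (illustrative)
TID_SIZE = 6          # size of one heap TID, bytes


def tuple_size(key, ntids):
    """Rough size of an index tuple holding ntids heap TIDs for one key."""
    return TUPLE_OVERHEAD + len(key) + ntids * TID_SIZE


def dedup_pass(items, page_capacity, incoming_size):
    """Merge adjacent equal-key tuples into posting lists; report whether
    the incoming tuple now fits on the page (i.e. the split is avoided).

    items: leaf page contents in key order, as (key, [heap TIDs]) pairs.
    """
    merged = []
    for key, tids in items:
        if merged and merged[-1][0] == key:
            merged[-1][1].extend(tids)   # absorb duplicate into posting list
        else:
            merged.append((key, list(tids)))
    used = sum(tuple_size(k, len(t)) for k, t in merged)
    return merged, used + incoming_size <= page_capacity


# Four duplicate tuples (4 * 23 = 92 bytes) merge into two tuples
# (35 + 23 = 58 bytes), making room for a 20-byte incoming tuple on
# an 80-byte page -- the deduplication pass pays off. If all keys had
# been unique, the pass would have accomplished nothing ("see" failed).
merged, fits = dedup_pass(
    [("a", [1]), ("a", [2]), ("a", [3]), ("b", [4])], 80, 20)
```

The sketch also shows why a failed pass is cheap: it is one linear scan over a page that is about to be split anyway.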
A deduplication pass fundamentally works\nby generating an alternative physical representation for the same\nlogical contents -- the resulting posting list tuples work in almost\nthe same way as the original tuples.\n\nWhile it's true that a posting list split will be needed on occasion,\nposting list splits work in a way that allows almost all code to\npretend that the incoming item never overlapped with an existing\nposting list in the first place (we conceptually \"rewrite\" an incoming\ntuple insert to not overlap). So this apparent exception isn't much of\nan exception.\n\n* Posting list tuples do not need to be decompressed, so reads pay no penalty.\n\n* Even if all keys are unique, in practice UPDATEs will tend to\ngenerate duplicates. Even with a HOT safe workload. Deduplicating\nphysical tuples for the logical row can help in non-unique indexes\njust as much as in unique indexes.\n\n* Only the kill_prior_tuple optimization can set LP_DEAD bits within a\nnon-unique index. There is evidence that kill_prior_tuple is much less\nvaluable than the _bt_check_unique() stuff -- it took years before\nanybody noticed that 9.5 had significantly regressed the\nkill_prior_tuple optimization [3].\n\nSetting LP_DEAD bits is useful primarily because it prevents page\nsplits, but we always prefer to clear LP_DEAD bit set tuples to\ndeduplication, and a deduplication pass can only occur because the\nonly alternative is to split the page. So, workloads that pay a cost\ndue to not being able to set LP_DEAD bits in a granular fashion will\nstill almost always do fewer splits anyway -- which is what really\nmattered all along. (Not having to visit the heap due to the LP_DEAD\nbit being set in some index tuple is also nice, but standbys never get\nthat benefit, so it's hard to see it as more important than\nnice-to-have.)\n\n* The benefits are huge with a workload consisting of several indexes\nand not many HOT updates. 
As measured by index size, transaction\nthroughput, query latency, and the ongoing cost of vacuuming -- you\nname it.\n\nThis is an area where we're not so competitive with other RDBMSs\ncurrently. In general, it seems impossible to target workloads like\nthis without accepting some wasted cycles. Low cardinality indexes are\nusually indexes that follow a power law distribution, where some leaf\npages consist of unique values, while others consist of only a single\nvalue -- very hard to imagine an algorithm that works much better than\nalways trying at the point that it looks like we have to split the\npage. I'll go into an example of a workload like this, where\ndeduplication does very well.\n\nRecent versions of the patch manage about a 60% increase in\ntransaction throughput with pgbench, scale 1000, a skewed\npgbench_accounts UPDATE + SELECT (:aid) distribution, and two \"extra\"\nindexes on pgbench_accounts (an index on \"abalance\", and another on\n\"bid\"). This was with 16 clients, over 4 hours on my home workstation,\nwhich is nothing special. The extra indexes are consistently about 3x\nsmaller, and pgbench_branches_pkey and pgbench_tellers_pkey are about\n4x - 5x smaller. This is fairly representative of some workloads,\nespecially workloads that we're known to not do so well on.\n\nIt's certainly possible for index size differences to be more extreme\nthan this. When I create an index on pgbench_accounts.filler, the\ndifference is much bigger still. The filler index is about 11x larger\non master compared with a patched Postgres, since the keys from the\nfiller column happen to be very big. This ~11x difference is\nconsistent and robust. While that workload is kind of silly, the fact\nthat the key size of an indexed column is vitally important with\nUPDATEs that have no logical need to change the indexed column is very\ncounterintuitive. 
In my experience, indexes are often defined on large\ntext keys with a lot of redundancy, even if that is kind of a slipshod\ndesign -- that's the reality with most real world applications.\n\nThoughts on enabling deduplication by default (by having the\ndeduplicate_btree_items GUC default to 'on')?\n\n[1] https://commitfest.postgresql.org/24/2202/\n[2] https://github.com/mdcallag/mytools/blob/master/bench/ibench/iibench.py\n[3] https://www.postgresql.org/message-id/flat/CAH2-Wz%3DSfAKVMv1x9Jh19EJ8am8TZn9f-yECipS9HrrRqSswnA%40mail.gmail.com#b20ead9675225f12b6a80e53e19eed9d\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 15 Jan 2020 15:38:22 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 6:38 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There are some outstanding questions about how B-Tree deduplication\n> [1] should be configured, and whether or not it should be enabled by\n> default. I'm starting this new thread in the hopes of generating\n> discussion on these high level questions.\n\nIt seems like the issue here is that you're pretty confident that\ndeduplication will be a win for unique indexes, but not so confident\nthat this will be true for non-unique indexes. I don't know that I\nunderstand why.\n\nIt does seem odd to me to treat them differently, but it's possible\nthat this is a reflection of my own lack of understanding. What do\nother database systems do?\n\nI wonder whether we could avoid the downside of having only one\nLP_DEAD bit per line pointer by having a bit per TID within the\ncompressed tuples themselves. I assume you already thought about that,\nthough.\n\nWhat are the characteristics of this system if you have an index that\nis not declared as UNIQUE but actually happens to be UNIQUE?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 13:55:07 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 10:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jan 15, 2020 at 6:38 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > There are some outstanding questions about how B-Tree deduplication\n> > [1] should be configured, and whether or not it should be enabled by\n> > default. I'm starting this new thread in the hopes of generating\n> > discussion on these high level questions.\n>\n> It seems like the issue here is that you're pretty confident that\n> deduplication will be a win for unique indexes, but not so confident\n> that this will be true for non-unique indexes.\n\nRight.\n\n> I don't know that I understand why.\n\nThe main reason that I am confident about unique indexes is that we\nonly do a deduplication pass in a unique index when we observe that\nthe incoming tuple (the one that might end up splitting the page) is a\nduplicate of some existing tuple. Checking that much is virtually\nfree, since we already have the information close at hand today (we\ncache the _bt_check_unique() binary search bounds for reuse within\n_bt_findinsertloc() today). This seems to be an excellent heuristic,\nsince we really only want to target unique index leaf pages where all\nor almost all insertions must be duplicates caused by non-HOT updates\n-- this category includes all the pgbench indexes, and includes all of\nthe unique indexes in TPC-C. Whereas with non-unique indexes, we\naren't specifically targeting version churn (though it will help with\nthat too).\n\nGranted, the fact that the incoming tuple happens to be a duplicate is\nnot a sure sign that the index is in this informal \"suitable for\ndeduplication\" category of mine. The incoming duplicate could just be\na once off. Even still, it's extremely unlikely to matter -- a failed\ndeduplication pass really isn't that expensive anyway, since it takes\nplace just before we split the page (we'll need the page in L1 cache\nanyway). 
If we consistently attempt deduplication in a unique index,\nthen we're virtually guaranteed to consistently benefit from it.\n\nIn general, the way that deduplication is only considered at the point\nwhere we'd otherwise have to split the page buys *a lot*. The idea of\ndelaying page splits by doing something like load balancing or\ncompression in a lazy fashion has a long history -- it was not my\nidea. I'm not talking about the LP_DEAD bit set deletion stuff here --\nthis goes back to the 1970s.\n\n> It does seem odd to me to treat them differently, but it's possible\n> that this is a reflection of my own lack of understanding. What do\n> other database systems do?\n\nOther database systems treat unique indexes very differently, albeit\nin a way that we're not really in a position to take too much away\nfrom -- other than the general fact that unique indexes can be thought\nof as very different things.\n\nIn general, the unique indexes in other systems are expected to be\nunique in every sense, even during an \"UPDATE foo SET unique_key =\nunique_key + 1\" style query. Index tuples are slightly smaller in a\nunique index compared to an equivalent non-unique index in the case of\none such system. Also, that same system has something called a \"unique\nindex scan\" that can only be used with a unique index (and only when\nall columns appear in the query qual).\n\n> I wonder whether we could avoid the downside of having only one\n> LP_DEAD bit per line pointer by having a bit per TID within the\n> compressed tuples themselves. I assume you already thought about that,\n> though.\n\nSo far, this lack of LP_DEAD bit granularity issue is only a\ntheoretical problem. I haven't been able to demonstrate it in any\nmeaningful way. 
Setting LP_DEAD bits is bound to be correlated, and we\nonly deduplicate to avoid a page split.\n\nJust last night I tried a variant pgbench workload with a tiny\naccounts table, an extremely skewed Zipf distribution, and lots of\nclients relative to the size of the machine. I used a non-unique index\ninstead of a unique index, since that is likely to be where the patch\nwas weakest (no _bt_check_unique() LP_DEAD bit setting that way). The\npatch still came out ahead of the master branch by about 3%. It's very\nhard to prove that there is no real downside to having only one\nLP_DEAD bit per posting list tuple, since absence of evidence isn't\nevidence of absence. I believe that it's much easier to make the\nargument that it's okay to only have one LP_DEAD bit per posting list\nwithin unique indexes specifically, though (because we understand that\nthere can be no duplicates in the long run there).\n\nThroughout this work, and the v12 B-Tree work, I consistently made\nconservative decisions about space accounting in code like\nnbtsplitloc.c (the new nbtdedup.c code has to think about space in\nabout the same way). My intuition is that space accounting is one area\nwhere we really ought to be conservative, since it's so hard to test.\nThat's the main reason why I find the idea of having LP_DEAD bits\nwithin posting list tuples unappealing, whatever the benefits may be\n-- it adds complexity in the one area that I really don't want to add\ncomplexity.\n\n> What are the characteristics of this system if you have an index that\n> is not declared as UNIQUE but actually happens to be UNIQUE?\n\nI believe that the only interesting characteristic is that it is\nappend-only, and has no reads. The variant of the insert benchmark\nthat does updates and deletes will still come out ahead, because then\nversion churn comes into play -- just like with the pgbench unique\nindexes (even though these aren't unique indexes).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Jan 2020 12:05:28 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 12:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It does seem odd to me to treat them differently, but it's possible\n> > that this is a reflection of my own lack of understanding. What do\n> > other database systems do?\n>\n> Other database systems treat unique indexes very differently, albeit\n> in a way that we're not really in a position to take too much away\n> from -- other than the general fact that unique indexes can be thought\n> of as very different things.\n\nI should point out here that I've just posted v31 of the patch, which\nchanges things for unique indexes. Our strategy during deduplication\nis now the same for unique indexes, since the original,\nsuper-incremental approach doesn't seem to make sense anymore. Further\noptimization work in the patch eliminated problems that made this\napproach seem like it might be worthwhile.\n\nNote, however, that v31 changes nothing about how we think about\ndeduplication in unique indexes in general, nor how it is presented to\nusers. There are still special criteria around how deduplication is\n*triggered* in unique indexes. We continue to trigger a deduplication\npass based on seeing a duplicate within _bt_check_unique() +\n_bt_findinsertloc() -- otherwise we never attempt deduplication in a\nunique index (same as before). Plus the GUC still doesn't affect\nunique indexes, unique index deduplication still isn't really\ndocumented in the user docs (it just gets a passing mention in the B-Tree\ninternals section), etc. This seems like the right way to go, since\ndeduplication in unique indexes can only make sense on leaf pages\nwhere most or all new items are duplicates of existing items, a\nsituation that is already easy to detect.\n\nIt wouldn't be that bad if we always attempted deduplication in a\nunique index, but it's easy to only do it when we're pretty confident\nwe'll get a benefit -- why not save a few cycles?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 28 Jan 2020 17:36:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 3:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The main reason that I am confident about unique indexes is that we\n> only do a deduplication pass in a unique index when we observe that\n> the incoming tuple (the one that might end up splitting the page) is a\n> duplicate of some existing tuple. Checking that much is virtually\n> free, since we already have the information close at hand today (we\n> cache the _bt_check_unique() binary search bounds for reuse within\n> _bt_findinsertloc() today). This seems to be an excellent heuristic,\n> since we really only want to target unique index leaf pages where all\n> or almost all insertions must be duplicates caused by non-HOT updates\n> -- this category includes all the pgbench indexes, and includes all of\n> the unique indexes in TPC-C. Whereas with non-unique indexes, we\n> aren't specifically targeting version churn (though it will help with\n> that too).\n\nThis (and the rest of the explanation) don't really address my\nconcern. I understand that deduplicating in lieu of splitting a page\nin a unique index is highly likely to be a win. What I don't\nunderstand is why it shouldn't just be a win, period. Not splitting a\npage seems like it has a big upside regardless of whether the index is\nunique -- and in fact, the upside could be a lot bigger for a\nnon-unique index. If the coarse-grained LP_DEAD thing is the problem,\nthen I can grasp that issue, but you don't seem very worried about\nthat.\n\nGenerally, I think it's a bad idea to give the user an \"emergency off\nswitch\" and then sometimes ignore it. If the feature seems to be\ngenerally beneficial, but you're worried that there might be\nregressions in obscure cases, then turn it on by default, and give the\nuser the ability to forcibly turn it off. But don't give them the\nopportunity to forcibly turn it off sometimes. 
Nobody's going to run\naround setting a reloption just for fun -- they're going to do it\nbecause they hit a problem.\n\nI guess I'm also saying here that a reloption seems like a much better\nidea than a GUC. I don't see much reason to believe that a system-wide\nsetting will be useful.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 09:56:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 6:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> This (and the rest of the explanation) don't really address my\n> concern. I understand that deduplicating in lieu of splitting a page\n> in a unique index is highly likely to be a win. What I don't\n> understand is why it shouldn't just be a win, period. Not splitting a\n> page seems like it has a big upside regardless of whether the index is\n> unique -- and in fact, the upside could be a lot bigger for a\n> non-unique index. If the coarse-grained LP_DEAD thing is the problem,\n> then I can grasp that issue, but you don't seem very worried about\n> that.\n\nYou're right that I'm not worried about the coarse-grained LP_DEAD\nthing here. What I'm concerned about is cases where we attempt\ndeduplication, but it doesn't work out because there are no duplicates\n-- that means we waste some cycles. Or cases where we manage to delay\na split, but only for a very short period of time -- in theory it\nwould be preferable to just accept the page split up front. However,\nin practice we can't make these distinctions, since it would hinge\nupon predicting the future, and we don't have a good heuristic. The\nfact that a deduplication pass barely manages to prevent an immediate\npage split isn't a useful proxy for how likely it is that the page\nwill split in any timeframe. We might have prevented it from happening\nfor another 2 milliseconds, or we might have prevented it forever.\nIt's totally workload dependent.\n\nThe good news is that these extra cycles aren't very noticeable even\nwith a workload where deduplication doesn't help at all (e.g. with\nseveral indexes on an append-only table, and few or no duplicates). The\ncycles are generally a fixed cost. Furthermore, it seems to be\npossible to virtually avoid the problem in the case of unique indexes\nby applying the incoming-item-is-duplicate heuristic. 
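As a minimal sketch of that trigger condition (a hypothetical simplification: the real check in _bt_check_unique() + _bt_findinsertloc() works off cached binary search bounds, not a Python-style key list):

```python
# Hypothetical simplification of when a deduplication pass is attempted.
# Deduplication is always a last resort, tried only instead of a page split;
# in a unique index it additionally requires seeing a duplicate key, since
# a duplicate in a unique index strongly suggests non-HOT update churn.

def should_try_dedup(page_keys, incoming_key, page_is_full, is_unique_index):
    if not page_is_full:
        return False  # never deduplicate unless a page split is imminent
    if not is_unique_index:
        return True   # non-unique index: always worth a try before splitting
    return incoming_key in page_keys  # unique index: only on a duplicate
```

Under this sketch, a unique-index page full of distinct keys never pays for a useless pass, which is the "virtually free" filtering being described.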
Maybe I am\nworrying over nothing.\n\n> Generally, I think it's a bad idea to give the user an \"emergency off\n> switch\" and then sometimes ignore it. If the feature seems to be\n> generally beneficial, but you're worried that there might be\n> regressions in obscure cases, then turn it on by default, and give the\n> user the ability to forcibly turn it off. But don't give them the\n> opportunity to forcibly turn it off sometimes. Nobody's going to run\n> around setting a reloption just for fun -- they're going to do it\n> because they hit a problem.\n\nActually, we do. There is both a reloption and a GUC. The GUC only\nworks with non-unique indexes, where the extra cost I describe might\nbe an issue (it can at least be demonstrated in a benchmark). The\nreloption works with both unique and non-unique indexes. It will be\nuseful for turning off deduplication selectively in non-unique\nindexes. In the case of unique indexes, it can be thought of as a\ndebugging thing (though we really don't want users to think about\ndeduplication in unique indexes at all).\n\nI'm really having a hard time imagining or demonstrating any downside\nwith unique indexes, given the heuristic, so ISTM that turning off\ndeduplication really is just a debugging thing there.\n\nMy general assumption is that 99%+ of users will want to use\ndeduplication everywhere. I am concerned about the remaining ~1% of\nusers who might have a workload that is slightly regressed by\ndeduplication. Even this small minority of users will still want to\nuse deduplication in unique indexes. Plus we don't really want to talk\nabout deduplication in unique indexes to users, since it'll probably\nconfuse them. That's another reason to treat each case differently.\n\nAgain, maybe I'm making an excessively thin distinction. I really want\nto be able to enable the feature everywhere, while also not getting\neven one complaint about it. 
Perhaps that's just not a realistic or\nuseful goal.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:15:38 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 1:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The good news is that these extra cycles aren't very noticeable even\n> with a workload where deduplication doesn't help at all (e.g. with\n> several indexes on an append-only table, and few or no duplicates). The\n> cycles are generally a fixed cost. Furthermore, it seems to be\n> possible to virtually avoid the problem in the case of unique indexes\n> by applying the incoming-item-is-duplicate heuristic. Maybe I am\n> worrying over nothing.\n\nYeah, maybe. I'm tempted to advocate for dropping the GUC and keeping\nthe reloption. If the worst case is a 3% regression and you expect\nthat to be rare, I don't think a GUC is really worth it, especially\ngiven that the proposed semantics seem somewhat confusing. The\nreloption can be used in a pinch to protect against either bugs or\nperformance regressions, whichever may occur, and it doesn't seem like\nyou need a second mechanism.\n\n> Again, maybe I'm making an excessively thin distinction. I really want\n> to be able to enable the feature everywhere, while also not getting\n> even one complaint about it. Perhaps that's just not a realistic or\n> useful goal.\n\nOne thing that you could do is try to learn whether deduplication (I\nreally don't like that name, but here we are) seems to be working for\na given index, perhaps even in a given session. For instance, suppose\nyou keep track of what happened the last ten times the current session\nattempted deduplication within a given index. Store the state in the\nrelcache. If all of the last ten tries were failures, then only try\n1/4 of the time thereafter. If you have a success, go back to trying\nevery time. That's pretty crude, but it might be good enough to\nblunt the downsides to the point where you can stop worrying.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 13:41:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 10:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah, maybe. I'm tempted to advocate for dropping the GUC and keeping\n> the reloption. If the worst case is a 3% regression and you expect\n> that to be rare, I don't think a GUC is really worth it, especially\n> given that the proposed semantics seem somewhat confusing. The\n> reloption can be used in a pinch to protect against either bugs or\n> performance regressions, whichever may occur, and it doesn't seem like\n> you need a second mechanism.\n\nI like the idea of getting rid of the GUC.\n\nI should stop talking about it for now, and go back to reassessing the\nextent of the regression in highly unsympathetic cases. The patch has\nbecome faster in a couple of different ways since I last looked at\nthis question, and it's entirely possible that the regression is even\nsmaller than it was before.\n\n> One thing that you could do is try to learn whether deduplication (I\n> really don't like that name, but here we are) seems to be working for\n> a given index, perhaps even in a given session. For instance, suppose\n> you keep track of what happened the last ten times the current session\n> attempted deduplication within a given index. Store the state in the\n> relcache.\n\nIt's tempting to try to reason about the state of an index over time\nlike this, but I don't think that it's ever going to work well.\nImagine a unique index where 50% of all values are NULLs, on an\nappend-only table. Actually, let's say it's a non-unique index with\nunique integers, and NULL values for the remaining 50% of rows -- that\nway we don't get the benefit of the incoming-item-is-duplicate\nheuristic.\n\nThere will be two contradictory tendencies within this particular\nindex. We might end up ping-ponging between each behavior. 
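To make the concern concrete, here is a sketch of the kind of session-level tracker under discussion (the ten-attempt window and 1/4 retry rate come from Robert's suggestion upthread; the class itself is hypothetical, not from any patch):

```python
import random

class DedupTracker:
    """Sketch of per-session, per-index history: after ten straight
    failed deduplication attempts, only try 1/4 of the time, until a
    success resets things."""

    WINDOW = 10

    def __init__(self):
        self.consecutive_failures = 0

    def should_attempt(self, rng):
        if self.consecutive_failures >= self.WINDOW:
            return rng.random() < 0.25  # backed-off mode
        return True

    def record(self, success):
        self.consecutive_failures = 0 if success else self.consecutive_failures + 1

# The problem case: one index whose leaf pages alternate between
# dedup-friendly (runs of NULLs) and dedup-hostile (all-unique) keys.
# The history never stays bad long enough for the back-off to engage.
tracker, rng, attempts = DedupTracker(), random.Random(1), 0
for i in range(1000):
    if tracker.should_attempt(rng):
        attempts += 1
        tracker.record(success=(i % 2 == 0))

print(attempts)  # 1000 -- every attempt is still made, half of them wasted
```

With a 50/50 mix like this, the tracker ping-pongs and the back-off never engages, so all of the wasted attempts still happen.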
It seems\nbetter to just accept a small performance hit.\n\nSometimes we can add heuristics to things like the split point\nlocation choice logic (nbtsplitloc.c) that behave as if we were\nkeeping track of how things progress across successive, related page\nsplits in the same index -- though we don't actually keep track of\nanything. I'm thinking of things like Postgres v12's \"split after new\ntuple\" optimization, which makes the TPC-C indexes so much smaller. We\ncan do these things statelessly and safely only because the heuristics\nhave little to lose and much to gain. To a lesser extent, it's okay\nbecause the heuristics are self-limiting -- we can only make an\nincorrect inference about what to do because we were unlucky, but\nthere is no reason to think that we'll consistently make the wrong\nchoice. It feels a little bit like quicksort to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 29 Jan 2020 11:50:26 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 2:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It's tempting to try to reason about the state of an index over time\n> like this, but I don't think that it's ever going to work well.\n> Imagine a unique index where 50% of all values are NULLs, on an\n> append-only table. Actually, let's say it's a non-unique index with\n> unique integers, and NULL values for the remaining 50% of rows -- that\n> way we don't get the benefit of the incoming-item-is-duplicate\n> heuristic.\n\nI mean, if you guess wrong and deduplicate less frequently, you are no\nworse off than today.\n\nBut it depends, too, on the magnitude. If a gain is both large and\nprobable and a loss is both small and improbable, then accepting a\nbit of slowdown when it happens may be the right call.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Jan 2020 16:02:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 11:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I should stop talking about it for now, and go back to reassessing the\n> extent of the regression in highly unsympathetic cases. The patch has\n> become faster in a couple of different ways since I last looked at\n> this question, and it's entirely possible that the regression is even\n> smaller than it was before.\n\nI revisited the insert benchmark as a way of assessing the extent of\nthe regression from deduplication with a very unsympathetic case.\nBackground:\n\nhttps://smalldatum.blogspot.com/2017/06/insert-benchmark-in-memory-intel-nuc.html\n\nhttps://github.com/mdcallag/mytools/blob/master/bench/ibench/iibench.py\n\nThis workload consists of serial inserts into a table with a primary\nkey, plus three additional non-unique indexes. A low concurrency\nbenchmark seemed more likely to be regressed by the patch, so that's\nwhat I focussed on. The indexes used have very few duplicates, and so\ndon't benefit from deduplication at all, with the exception of the\npurchases_index_marketsegment index, which is a bit smaller (see log\nfiles for precise details). The table (which is confusingly named\n\"purchases_index\") had a total of three non-unique indexes, plus a\nstandard serial primary key. We insert 100,000,000 rows in total,\nwhich takes under 30 minutes in each case. There are no reads, and no\nupdates or deletes.\n\nThere is a regression that is just shy of 2% here, as measured in\ninsert benchmark \"rows/sec\" -- this metric goes from \"62190.0\"\nrows/sec on master to \"60986.2 rows/sec\" with the patch. I think that\nthis is an acceptable price to pay for the benefits -- this is a small\nregression for a particularly unfavorable case. 
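For reference, the arithmetic behind that figure:

```python
# Regression implied by the insert benchmark numbers quoted above.
master_rows_per_sec = 62190.0  # master branch
patch_rows_per_sec = 60986.2   # with the deduplication patch

regression = (master_rows_per_sec - patch_rows_per_sec) / master_rows_per_sec
print(f"{regression:.2%}")  # prints "1.94%" -- just shy of 2%
```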
Also, I suspect that\nthis result is still quite a bit better than what you'd get with\neither InnoDB or MyRocks on the same hardware (these systems were the\noriginal targets of the insert benchmark, which was only recently\nported over to Postgres). At least, Mark Callaghan reports getting\nonly about 40k rows/sec inserted in 2017 with roughly comparable\nhardware and test conditions (we're both running with\nsynchronous_commit=off, or the equivalent). We're paying a small cost\nin an area where Postgres can afford to take a hit, in order to gain a\nmuch larger benefit in an area where Postgres is much less\ncompetitive.\n\nI attach detailed output from runs for both master and patch.\n\nThe shell script that I used to run the benchmark is as follows:\n\n#!/bin/sh\npsql -c \"create database test;\"\n\ncd $HOME/code/mytools/bench/ibench\npython2 iibench.py --dbms=postgres --setup | tee iibench-output.log\npython2 iibench.py --dbms=postgres --max_rows=100000000 | tee -a\niibench-output.log\npsql -d test -c \"SELECT pg_relation_size(oid),\npg_size_pretty(pg_relation_size(oid)),\nrelname FROM pg_class WHERE relnamespace = 'public'::regnamespace\nORDER BY 1 DESC LIMIT 15;\" | tee -a iibench-output.log\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 29 Jan 2020 22:45:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 1:45 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> There is a regression that is just shy of 2% here, as measured in\n> insert benchmark \"rows/sec\" -- this metric goes from \"62190.0\"\n> rows/sec on master to \"60986.2 rows/sec\" with the patch. I think that\n> this is an acceptable price to pay for the benefits -- this is a small\n> regression for a particularly unfavorable case. Also, I suspect that\n> this result is still quite a bit better than what you'd get with\n> either InnoDB or MyRocks on the same hardware (these systems were the\n> original targets of the insert benchmark, which was only recently\n> ported over to Postgres). At least, Mark Callaghan reports getting\n> only about 40k rows/sec inserted in 2017 with roughly comparable\n> hardware and test conditions (we're both running with\n> synchronous_commit=off, or the equivalent). We're paying a small cost\n> in an area where Postgres can afford to take a hit, in order to gain a\n> much larger benefit in an area where Postgres is much less\n> competitive.\n\nHow do things look in a more sympathetic case?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 30 Jan 2020 12:36:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 9:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> How do things look in a more sympathetic case?\n\nI prefer to think of the patch as being about improving the stability\nand predictability of Postgres with certain workloads, rather than\nbeing about overall throughput. Postgres has an ongoing need to VACUUM\nindexes, so making indexes smaller is generally more compelling than\nit would be with another system. That said, there are certainly quite\na few cases that have big improvements in throughput and latency.\n\nAs I mentioned in my opening e-mail, there is a 60% increase in\ntransaction throughput when there are 3 extra indexes on the\npgbench_accounts table, at scale 1000 (so every column has an index).\nThat's a little bit silly, but I think that the extreme cases are\ninteresting. More recently, I found a 19% increase in throughput for a\nsimilar pgbench workload at scale 5000, with only one extra index, on\npgbench_accounts.abalance (so lots of non-HOT updates). I think that\nthere were 12 + 16 clients in both cases. We can reliably keep (say)\nthe extra index on pgbench_accounts.abalance 3x smaller with the\npatch, even though there is constant update churn. The difference in\nindex size between master and patch doesn't depend on having pristine\nindexes. There is also about a 15% reduction in transaction latency in\nthese cases.\n\nWe usually manage to keep pgbench_accounts_pkey a lot smaller -- it\ndepends on the exact distribution of values. Skew that isn't all\nconcentrated in one part of the index (e.g. because we hash the value\ngenerated by the pgbench PRNG) works best when it comes to controlling\npgbench_accounts_pkey bloat. I have seen plenty of cases where it was\nabout 50% - 95% smaller after several hours. OTOH, a less favorable\ndistribution of update values will tend to overwhelm the patch's\nability to soak up extra bloat in pgbench_accounts_pkey, though in a\nway that is less abrupt. 
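As a rough cross-check on the 3x figure, naive space accounting points the same way. The sizes below are illustrative assumptions (a 4-byte key on a 64-bit build), not exact nbtree on-disk numbers:

```python
# Back-of-the-envelope space accounting for a low cardinality leaf page.
# All sizes are assumptions for illustration only:
LINE_POINTER = 4   # per-tuple item pointer in the page's line pointer array
INDEX_TUPLE = 16   # 8-byte tuple header + 4-byte key, MAXALIGNed
HEAP_TID = 6       # each heap TID stored in a posting list

def leaf_bytes_master(ndups):
    # one full index tuple (plus line pointer) per duplicate
    return ndups * (INDEX_TUPLE + LINE_POINTER)

def leaf_bytes_dedup(ndups):
    # a single posting list tuple: the key paid for once, then just TIDs
    return LINE_POINTER + INDEX_TUPLE + ndups * HEAP_TID

ratio = leaf_bytes_master(100) / leaf_bytes_dedup(100)
print(round(ratio, 1))  # prints 3.2
```

The win grows with the number of duplicates per key, since the key and tuple header are paid for only once per posting list.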
Deduplication of pgbench_accounts_pkey never\nseems to have any downside.\n\nYou can even have cases like the insert benchmark that still come out\nahead -- despite having no reads or updates. This will tend to happen\nwhen all non-unique indexes are on lower cardinality columns. Maybe 5\n- 10 tuples for each distinct key value on average. I would even say\nthat this is the common case. If I look at the join benchmark data, or\nthe mouse genome database, or the TPC-E schema, then the patch tends\nto leave non-unique indexes a lot smaller than they'd be on master, by\nenough to pay for the cycles of deduplication and then some. The patch\nmakes all indexes taken together (including all unique indexes) about\n60% of their original size with the join benchmark database and with\nthe mouse genome database. Also, all of the larger TPC-E non-unique\nindexes are at least low cardinality enough to be somewhat smaller. If\nyou can make indexes 2x - 3x smaller, then even inserts will be a lot\nfaster.\n\nBack in 2008, Jignesh Shah reported that the TPC-E trade table's indexes\nwere the source of a lot of problems:\n\nhttps://www.slideshare.net/jkshah/postgresql-and-benchmarks (See slide 19)\n\nAll of the three non-unique indexes on the trade table are only about\n30% of their original size with deduplication (e.g. i_t_st_id goes\nfrom 1853 MB to only 571 MB). I haven't been able to run the DBT-5\nimplementation of TPC-E, since it has severely bitrot, but I imagine\nthat deduplication would help a lot. I did manage to get DBT-5 to produce\ninitial test data, and have been in touch with Mark Wong about it. That's how\nI know that all three extra indexes are 30% of their original size.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:16:34 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 11:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I prefer to think of the patch as being about improving the stability\n> and predictability of Postgres with certain workloads, rather than\n> being about overall throughput. Postgres has an ungoing need to VACUUM\n> indexes, so making indexes smaller is generally more compelling than\n> it would be with another system. That said, there are certainly quite\n> a few cases that have big improvements in throughput and latency.\n\nI also reran TPC-C/benchmarksql with the patch (v30). TPC-C has hardly\nany non-unique indexes, which is a little unrealistic. I found that\nthe patch was up to 7% faster in the first few hours, since it can\ncontrol the bloat from certain non-HOT updates. This isn't a\nparticularly relevant workload, since almost all UPDATEs don't affect\nindexed columns. The incoming-item-is-duplicate heuristic works well\nwith TPC-C, so there is probably hardly any possible downside there.\n\nI think that I should commit the patch without the GUC tentatively.\nJust have the storage parameter, so that everyone gets the\noptimization without asking for it. We can then review the decision to\nenable deduplication generally after the feature has been in the tree\nfor several months.\n\nThere is no need to make a final decision about whether or not the\noptimization gets enabled before committing the patch.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:40:24 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 2:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Jan 30, 2020 at 11:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I prefer to think of the patch as being about improving the stability\n> > and predictability of Postgres with certain workloads, rather than\n> > being about overall throughput. Postgres has an ungoing need to VACUUM\n> > indexes, so making indexes smaller is generally more compelling than\n> > it would be with another system. That said, there are certainly quite\n> > a few cases that have big improvements in throughput and latency.\n>\n> I also reran TPC-C/benchmarksql with the patch (v30). TPC-C has hardly\n> any non-unique indexes, which is a little unrealistic. I found that\n> the patch was up to 7% faster in the first few hours, since it can\n> control the bloat from certain non-HOT updates. This isn't a\n> particularly relevant workload, since almost all UPDATEs don't affect\n> indexed columns. The incoming-item-is-duplicate heuristic works well\n> with TPC-C, so there is probably hardly any possible downside there.\n>\n> I think that I should commit the patch without the GUC tentatively.\n> Just have the storage parameter, so that everyone gets the\n> optimization without asking for it. We can then review the decision to\n> enable deduplication generally after the feature has been in the tree\n> for several months.\n>\n> There is no need to make a final decision about whether or not the\n> optimization gets enabled before committing the patch.\n\nThat seems reasonable.\n\nI suspect that you're right that the worst-case downside is not big\nenough to really be a problem given all the upsides. But the advantage\nof getting things committed is that we can find out what users think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 30 Jan 2020 15:57:37 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 12:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> That seems reasonable.\n\nMy approach to showing the downsides of the patch wasn't particularly\nobvious, or easy to come up with. I could have contrived a case like\nthe insert benchmark, but with more low cardinality non-unique\nindexes. That would also have the effect of increasing the memory\nbandwidth/latency bottleneck, which was the big bottleneck already.\nIt's not clear whether that makes the patch look worse or better. You\nend up executing more instructions that go to waste, but with a\nworkload where the CPU stalls on memory even more than the original\ninsert benchmark.\n\nOTOH, it's possible to contrive a case that makes the patch look\nbetter than the master branch to an extreme extent. Just keep adding\nlow cardinality columns that each get an index, on a table that gets\nmany non-HOT updates. The effect isn't even linear, because VACUUM has\na harder time keeping up as you add columns/indexes, making the\nbloat situation worse, in turn making it harder for VACUUM to keep up.\nFor bonus points, make sure that the tuples are nice and wide -- that\nalso \"amplifies\" bloat in a non-intuitive way (which is an effect that\nis also ameliorated by the patch).\n\n> I suspect that you're right that the worst-case downside is not big\n> enough to really be a problem given all the upsides. But the advantage\n> of getting things committed is that we can find out what users think.\n\nIt's certainly impossible to predict everything. On the upside, I\nsuspect that the patch makes VACUUM easier to tune with certain real\nworld workloads, though that is hard to prove.\n\nI've always disliked the way that autovacuum gets triggered by fairly\ngeneric criteria. Timeliness can matter a lot when it comes to index\nbloat, but that isn't taken into account. 
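To spell out "fairly generic": the trigger is essentially a single dead-tuple formula (sketched below with the stock default settings; the real code also honors per-table reloptions), and nothing in it looks at how bloated any particular index has become:

```python
# Autovacuum's update/delete trigger, roughly: vacuum a table once
#   dead_tuples > autovacuum_vacuum_threshold
#                 + autovacuum_vacuum_scale_factor * reltuples
AUTOVACUUM_VACUUM_THRESHOLD = 50     # default GUC value
AUTOVACUUM_VACUUM_SCALE_FACTOR = 0.2 # default GUC value

def table_needs_autovacuum(dead_tuples, reltuples):
    threshold = (AUTOVACUUM_VACUUM_THRESHOLD
                 + AUTOVACUUM_VACUUM_SCALE_FACTOR * reltuples)
    return dead_tuples > threshold

# A 10 million row table must accumulate ~2 million dead tuples first,
# no matter what happens to its indexes in the meantime.
print(table_needs_autovacuum(1_999_999, 10_000_000))  # prints False
```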
I think that the patch will\ntend to bring B-Tree indexes closer to heap tables in terms of their\noverall sensitivity to how frequently VACUUM runs.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Jan 2020 14:13:43 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 2:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> My approach to showing the downsides of the patch wasn't particularly\n> obvious, or easy to come up with. I could have contrived a case like\n> the insert benchmark, but with more low cardinality non-unique\n> indexes.\n\nSorry. I meant with more *high* cardinality indexes. An exaggerated\nversion of the original insert benchmark.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Jan 2020 14:18:14 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 11:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that I should commit the patch without the GUC tentatively.\n> Just have the storage parameter, so that everyone gets the\n> optimization without asking for it. We can then review the decision to\n> enable deduplication generally after the feature has been in the tree\n> for several months.\n\nThis is how things work in the committed patch (commit 0d861bbb):\nThere is a B-Tree storage parameter named deduplicate_items, which is\nenabled by default. In general, users will get deduplication unless\nthey opt out, including in unique indexes (though note that we're more\nselective about triggering a deduplication pass in unique indexes\n[1]).\n\n> There is no need to make a final decision about whether or not the\n> optimization gets enabled before committing the patch.\n\nIt's now time to make a final decision on this. Does anyone have any\nreason to believe that leaving deduplication enabled by default is the\nwrong way to go?\n\nNote that using deduplication isn't strictly better than not using\ndeduplication for all indexes in all workloads; that's why it's\npossible to disable the optimization. This thread has lots of\ninformation about the reasons why enabling deduplication by default\nseems appropriate, all of which still apply. Note that there have been\nno bug reports involving deduplication since it was committed on\nFebruary 26th, with the exception of some minor issues that I reported\nand fixed.\n\nThe view of the RMT is that the feature should remain enabled by\ndefault (i.e. no changes are required). Of course, I am a member of\nthe RMT this year, as well as one of the authors of the patch. I am\nhardly an impartial voice here. I believe that that did not sway the\ndecision making process of the RMT in this instance. 
If there are no\nobjections in the next week or so, then I'll close out the relevant\nopen item.\n\n[1] https://www.postgresql.org/docs/devel/btree-implementation.html#BTREE-DEDUPLICATION\n-- See \"Tip\"\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Jun 2020 16:28:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jun 25, 2020 at 4:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It's now time to make a final decision on this. Does anyone have any\n> reason to believe that leaving deduplication enabled by default is the\n> wrong way to go?\n\nI marked the open item resolved just now -- B-Tree deduplication will\nremain enabled by default in Postgres 13.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 2 Jul 2020 14:59:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
},
{
"msg_contents": "On Thu, Jul 2, 2020 at 02:59:47PM -0700, Peter Geoghegan wrote:\n> On Thu, Jun 25, 2020 at 4:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It's now time to make a final decision on this. Does anyone have any\n> > reason to believe that leaving deduplication enabled by default is the\n> > wrong way to go?\n> \n> I marked the open item resolved just now -- B-Tree deduplication will\n> remain enabled by default in Postgres 13.\n\n+1\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 2 Jul 2020 19:06:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Enabling B-Tree deduplication by default"
}
] |
[
{
"msg_contents": "Hi Tomas,\n\nI just noticed that having a slab context around in an assertion build\nleads to performance degrading and memory usage going up. A bit of\npoking revealed that SlabCheck() doesn't free the freechunks it\nallocates.\n\nIt's on its own obviously trivial to fix.\n\nBut there's also this note at the top:\n * NOTE: report errors as WARNING, *not* ERROR or FATAL. Otherwise you'll\n * find yourself in an infinite loop when trouble occurs, because this\n * routine will be entered again when elog cleanup tries to release memory!\n\nwhich seems to be violated by doing:\n\n\t/* bitmap of free chunks on a block */\n\tfreechunks = palloc(slab->chunksPerBlock * sizeof(bool));\n\nI guess it'd be better to fall back to malloc() here, and handle the\nallocation failure gracefully? Or perhaps we can just allocate something\npersistently during SlabCreate()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Jan 2020 20:41:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I just noticed that having a slab context around in an assertion build\n> leads to performance degrading and memory usage going up. A bit of\n> poking revealed that SlabCheck() doesn't free the freechunks it\n> allocates.\n\n> It's on its own obviously trivial to fix.\n\nIt seems like having a context check function do new allocations\nis something to be avoided in the first place. It's basically assuming\nthat the memory management mechanism is sane, which makes the whole thing\nfundamentally circular, even if it's relying on some other context to be\nsane. Is there a way to do the checking without extra allocations?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 00:09:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-16 00:09:53 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I just noticed that having a slab context around in an assertion build\n> > leads to performance degrading and memory usage going up. A bit of\n> > poking revealed that SlabCheck() doesn't free the freechunks it\n> > allocates.\n>\n> > It's on its own obviously trivial to fix.\n>\n> It seems like having a context check function do new allocations\n> is something to be avoided in the first place.\n\nYea, that's why I was wondering about doing the allocation during\ncontext creation, for the largest size necessary of any context:\n\n\t/* bitmap of free chunks on a block */\n\tfreechunks = palloc(slab->chunksPerBlock * sizeof(bool));\n\nor at the very least using malloc(), rather than another context.\n\n\n> It's basically assuming that the memory management mechanism is sane,\n> which makes the whole thing fundamentally circular, even if it's\n> relying on some other context to be sane. Is there a way to do the\n> checking without extra allocations?\n\nProbably not easily.\n\nWas wondering if the bitmap could be made more dense, and allocated on\nthe stack. It could actually using one bit for each tracked chunk,\ninstead of one byte. Would have to put in a clear lower limit of the\nallowed chunk size, in relation to the block size, however, to stay in a\nreasonable range.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Jan 2020 22:17:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-01-16 00:09:53 -0500, Tom Lane wrote:\n>> It's basically assuming that the memory management mechanism is sane,\n>> which makes the whole thing fundamentally circular, even if it's\n>> relying on some other context to be sane. Is there a way to do the\n>> checking without extra allocations?\n\n> Probably not easily.\n\nIn the AllocSet code, we don't hesitate to expend extra space per-chunk\nfor debug support:\n\ntypedef struct AllocChunkData\n{\n...\n#ifdef MEMORY_CONTEXT_CHECKING\n\tSize\t\trequested_size;\n#endif\n...\n\nI don't see a pressing reason why SlabContext couldn't do something\nsimilar, either per-chunk or per-context, whatever makes sense.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 01:25:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-16 01:25:00 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-01-16 00:09:53 -0500, Tom Lane wrote:\n> >> It's basically assuming that the memory management mechanism is sane,\n> >> which makes the whole thing fundamentally circular, even if it's\n> >> relying on some other context to be sane. Is there a way to do the\n> >> checking without extra allocations?\n> \n> > Probably not easily.\n> \n> In the AllocSet code, we don't hesitate to expend extra space per-chunk\n> for debug support:\n> \n> typedef struct AllocChunkData\n> {\n> ...\n> #ifdef MEMORY_CONTEXT_CHECKING\n> \tSize\t\trequested_size;\n> #endif\n> ...\n> \n> I don't see a pressing reason why SlabContext couldn't do something\n> similar, either per-chunk or per-context, whatever makes sense.\n\nWell, what I suggested upthread, was to just have two globals like\n\nint slabcheck_freechunks_size;\nbool *slabcheck_freechunks;\n\nand realloc that to the larger size in SlabContextCreate() if necessary,\nbased on the computed chunksPerBlock? I thought you were asking whether\nany additional memory could just be avoided...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Jan 2020 22:41:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 10:41:43PM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2020-01-16 01:25:00 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>> > On 2020-01-16 00:09:53 -0500, Tom Lane wrote:\n>> >> It's basically assuming that the memory management mechanism is sane,\n>> >> which makes the whole thing fundamentally circular, even if it's\n>> >> relying on some other context to be sane. Is there a way to do the\n>> >> checking without extra allocations?\n>>\n>> > Probably not easily.\n>>\n>> In the AllocSet code, we don't hesitate to expend extra space per-chunk\n>> for debug support:\n>>\n>> typedef struct AllocChunkData\n>> {\n>> ...\n>> #ifdef MEMORY_CONTEXT_CHECKING\n>> \tSize\t\trequested_size;\n>> #endif\n>> ...\n>>\n>> I don't see a pressing reason why SlabContext couldn't do something\n>> similar, either per-chunk or per-context, whatever makes sense.\n>\n>Well, what I suggested upthread, was to just have two globals like\n>\n>int slabcheck_freechunks_size;\n>bool *slabcheck_freechunks;\n>\n>and realloc that to the larger size in SlabContextCreate() if necessary,\n>based on the computed chunksPerBlock? I thought you were asking whether\n>any additional memory could just be avoided...\n>\n\nI don't see why not to just do what Tom proposed, i.e. allocate this as\npart of the memory context in SlabContextCreate(), when memory context\nchecking is enabled. It seems much more convenient / simpler than the\nglobals (no repalloc, ...).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jan 2020 14:46:52 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> ... I thought you were asking whether\n> any additional memory could just be avoided...\n\nWell, I was kind of wondering that, but if it's not practical then\npreallocating the space instead will do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 10:27:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 10:27:01AM -0500, Tom Lane wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> ... I thought you were asking whether\n>> any additional memory could just be avoided...\n>\n>Well, I was kind of wondering that, but if it's not practical then\n>preallocating the space instead will do.\n>\n\nI don't think it's practical to rework the checks in a way that would\nnot require allocations. Maybe it's possible, but I think it's not worth\nthe extra complexity.\n\nThe attached fix should do the trick - it pre-allocates the space when\ncreating the context. There is a bit of complexity because we want to\nallocate the space as part of the context header, but nothin too bad. We\nmight optimize it a bit by using a regular bitmap (instead of just an\narray of bools), but I haven't done that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 16 Jan 2020 17:25:00 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> The attached fix should do the trick - it pre-allocates the space when\n> creating the context. There is a bit of complexity because we want to\n> allocate the space as part of the context header, but nothin too bad. We\n> might optimize it a bit by using a regular bitmap (instead of just an\n> array of bools), but I haven't done that.\n\nHmm ... so if this is an array of bools, why isn't it declared bool*\nrather than char* ? (Pre-existing ugliness, sure, but we might as\nwell fix it while we're here. Especially since you used sizeof(bool)\nin the space calculation.)\n\nI agree that maxaligning the start point of the array is pointless.\n\nI'd write \"free chunks in a block\" not \"free chunks on a block\",\nthe latter seems rather shaky English. But that's getting picky.\n\nLGTM otherwise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 11:43:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-16 17:25:00 +0100, Tomas Vondra wrote:\n> On Thu, Jan 16, 2020 at 10:27:01AM -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > ... I thought you were asking whether\n> > > any additional memory could just be avoided...\n> > \n> > Well, I was kind of wondering that, but if it's not practical then\n> > preallocating the space instead will do.\n> > \n> \n> I don't think it's practical to rework the checks in a way that would\n> not require allocations. Maybe it's possible, but I think it's not worth\n> the extra complexity.\n> \n> The attached fix should do the trick - it pre-allocates the space when\n> creating the context. There is a bit of complexity because we want to\n> allocate the space as part of the context header, but nothin too bad. We\n> might optimize it a bit by using a regular bitmap (instead of just an\n> array of bools), but I haven't done that.\n\nI don't get why it's advantageous to allocate this once for each slab,\nrather than having it as a global once for all slabs. But anyway, still\nclearly better than the current situation.\n\n- Andres\n\n\n",
"msg_date": "Thu, 16 Jan 2020 08:48:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 08:48:49AM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2020-01-16 17:25:00 +0100, Tomas Vondra wrote:\n>> On Thu, Jan 16, 2020 at 10:27:01AM -0500, Tom Lane wrote:\n>> > Andres Freund <andres@anarazel.de> writes:\n>> > > ... I thought you were asking whether\n>> > > any additional memory could just be avoided...\n>> >\n>> > Well, I was kind of wondering that, but if it's not practical then\n>> > preallocating the space instead will do.\n>> >\n>>\n>> I don't think it's practical to rework the checks in a way that would\n>> not require allocations. Maybe it's possible, but I think it's not worth\n>> the extra complexity.\n>>\n>> The attached fix should do the trick - it pre-allocates the space when\n>> creating the context. There is a bit of complexity because we want to\n>> allocate the space as part of the context header, but nothin too bad. We\n>> might optimize it a bit by using a regular bitmap (instead of just an\n>> array of bools), but I haven't done that.\n>\n>I don't get why it's advantageous to allocate this once for each slab,\n>rather than having it as a global once for all slabs. But anyway, still\n>clearly better than the current situation.\n>\n\nIt's largely a matter of personal preference - I agree there are cases\nwhen global variables are the best solution, but I kinda dislike them.\nIt seems cleaner to just allocate it as part of the slab, not having to\ndeal with different number of chunks per block between slabs.\n\nPlus we don't have all that many slabs (like 2), and it's only really\nused in debug builds anyway. So I'm not all that woried about this\nwasting a couple extra kB of memory.\n\nYMMV of course ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jan 2020 18:01:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 11:43:34AM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> The attached fix should do the trick - it pre-allocates the space when\n>> creating the context. There is a bit of complexity because we want to\n>> allocate the space as part of the context header, but nothin too bad. We\n>> might optimize it a bit by using a regular bitmap (instead of just an\n>> array of bools), but I haven't done that.\n>\n>Hmm ... so if this is an array of bools, why isn't it declared bool*\n>rather than char* ? (Pre-existing ugliness, sure, but we might as\n>well fix it while we're here. Especially since you used sizeof(bool)\n>in the space calculation.)\n>\n\nTrue. Will fix.\n\n>I agree that maxaligning the start point of the array is pointless.\n>\n>I'd write \"free chunks in a block\" not \"free chunks on a block\",\n>the latter seems rather shaky English. But that's getting picky.\n>\n>LGTM otherwise.\n>\n\nOK. Barring objections I'll push and backpatch this later today.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 Jan 2020 18:04:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Jan 16, 2020 at 08:48:49AM -0800, Andres Freund wrote:\n>> I don't get why it's advantageous to allocate this once for each slab,\n>> rather than having it as a global once for all slabs. But anyway, still\n>> clearly better than the current situation.\n\n> It's largely a matter of personal preference - I agree there are cases\n> when global variables are the best solution, but I kinda dislike them.\n> It seems cleaner to just allocate it as part of the slab, not having to\n> deal with different number of chunks per block between slabs.\n> Plus we don't have all that many slabs (like 2), and it's only really\n> used in debug builds anyway. So I'm not all that woried about this\n> wasting a couple extra kB of memory.\n\nA positive argument for doing it like this is that the overhead goes away\nwhen the SlabContexts are all deallocated, while a global variable would\npresumably stick around indefinitely. But I concur that in current usage,\nthere's hardly any point in worrying about the relative benefits. We\nshould just keep it simple, and this seems marginally simpler than the\nother way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 12:33:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-16 18:01:53 +0100, Tomas Vondra wrote:\n> Plus we don't have all that many slabs (like 2)\n\nFWIW, I have a local patch that adds additional ones, for the relcache\nand catcache, that's how I noticed the leak. Because a test pgbench\nabsolutely *tanked* in performance.\n\nJust for giggles. With leak:\npgbench -n -M prepared -P1 -c20 -j20 -T6000 -S\nprogress: 1.0 s, 81689.4 tps, lat 0.242 ms stddev 0.087\nprogress: 2.0 s, 51228.5 tps, lat 0.390 ms stddev 0.107\nprogress: 3.0 s, 42297.4 tps, lat 0.473 ms stddev 0.141\nprogress: 4.0 s, 34885.9 tps, lat 0.573 ms stddev 0.171\nprogress: 5.0 s, 31211.2 tps, lat 0.640 ms stddev 0.182\nprogress: 6.0 s, 27307.9 tps, lat 0.732 ms stddev 0.216\nprogress: 7.0 s, 25698.9 tps, lat 0.778 ms stddev 0.228\n\nwithout:\npgbench -n -M prepared -P1 -c20 -j20 -T6000 -S\nprogress: 1.0 s, 144119.1 tps, lat 0.137 ms stddev 0.047\nprogress: 2.0 s, 148092.8 tps, lat 0.135 ms stddev 0.039\nprogress: 3.0 s, 148757.0 tps, lat 0.134 ms stddev 0.032\nprogress: 4.0 s, 148553.7 tps, lat 0.134 ms stddev 0.038\n\nI do find the size of the impact quite impressive. It's all due to the\nTopMemoryContext's AllocSetCheck() taking longer and longer.\n\n\n> and it's only really used in debug builds anyway. So I'm not all that\n> woried about this wasting a couple extra kB of memory.\n\nIDK, making memory usage look different makes optimizing it harder. Not\na hard rule, obviously, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Jan 2020 10:08:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-01-16 18:01:53 +0100, Tomas Vondra wrote:\n>> and it's only really used in debug builds anyway. So I'm not all that\n>> woried about this wasting a couple extra kB of memory.\n\n> IDK, making memory usage look different makes optimizing it harder. Not\n> a hard rule, obviously, but ...\n\nWell, if you're that excited about it, make a patch so we can see\nhow ugly it ends up being.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 13:41:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 12:33:03PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Thu, Jan 16, 2020 at 08:48:49AM -0800, Andres Freund wrote:\n>>> I don't get why it's advantageous to allocate this once for each slab,\n>>> rather than having it as a global once for all slabs. But anyway, still\n>>> clearly better than the current situation.\n>\n>> It's largely a matter of personal preference - I agree there are cases\n>> when global variables are the best solution, but I kinda dislike them.\n>> It seems cleaner to just allocate it as part of the slab, not having to\n>> deal with different number of chunks per block between slabs.\n>> Plus we don't have all that many slabs (like 2), and it's only really\n>> used in debug builds anyway. So I'm not all that woried about this\n>> wasting a couple extra kB of memory.\n>\n>A positive argument for doing it like this is that the overhead goes away\n>when the SlabContexts are all deallocated, while a global variable would\n>presumably stick around indefinitely. But I concur that in current usage,\n>there's hardly any point in worrying about the relative benefits. We\n>should just keep it simple, and this seems marginally simpler than the\n>other way.\n>\n\nI think the one possible argument against this approach might be that it\nadds a field to the struct, so if you have an extension using a Slab\ncontext, it'll break if you don't rebuild it. But that only matters if\nwe want to backpatch it (which I think is not the plan) and with memory\ncontext checking enabled (which does not apply to regular packages).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 16 Jan 2020 21:00:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I think the one possible argument against this approach might be that it\n> adds a field to the struct, so if you have an extension using a Slab\n> context, it'll break if you don't rebuild it. But that only matters if\n> we want to backpatch it (which I think is not the plan) and with memory\n> context checking enabled (which does not apply to regular packages).\n\nHuh? That struct is private in slab.c, no? Any outside code relying\non its contents deserves to break.\n\nI do think we ought to back-patch this, given the horrible results\nAndres showed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 15:15:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 03:15:41PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I think the one possible argument against this approach might be that it\n>> adds a field to the struct, so if you have an extension using a Slab\n>> context, it'll break if you don't rebuild it. But that only matters if\n>> we want to backpatch it (which I think is not the plan) and with memory\n>> context checking enabled (which does not apply to regular packages).\n>\n>Huh? That struct is private in slab.c, no? Any outside code relying\n>on its contents deserves to break.\n>\n\nAh, right. Silly me.\n\n>I do think we ought to back-patch this, given the horrible results\n>Andres showed.\n>\n\nOK.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 16 Jan 2020 21:21:28 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 06:04:32PM +0100, Tomas Vondra wrote:\n>On Thu, Jan 16, 2020 at 11:43:34AM -0500, Tom Lane wrote:\n>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>The attached fix should do the trick - it pre-allocates the space when\n>>>creating the context. There is a bit of complexity because we want to\n>>>allocate the space as part of the context header, but nothin too bad. We\n>>>might optimize it a bit by using a regular bitmap (instead of just an\n>>>array of bools), but I haven't done that.\n>>\n>>Hmm ... so if this is an array of bools, why isn't it declared bool*\n>>rather than char* ? (Pre-existing ugliness, sure, but we might as\n>>well fix it while we're here. Especially since you used sizeof(bool)\n>>in the space calculation.)\n>>\n>\n>True. Will fix.\n>\n>>I agree that maxaligning the start point of the array is pointless.\n>>\n>>I'd write \"free chunks in a block\" not \"free chunks on a block\",\n>>the latter seems rather shaky English. But that's getting picky.\n>>\n>>LGTM otherwise.\n>>\n>\n>OK. Barring objections I'll push and backpatch this later today.\n>\n\nI've pushed and backpatched this all the back back to 10.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jan 2020 15:34:50 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 01:41:39PM -0500, Tom Lane wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2020-01-16 18:01:53 +0100, Tomas Vondra wrote:\n>>> and it's only really used in debug builds anyway. So I'm not all that\n>>> woried about this wasting a couple extra kB of memory.\n>\n>> IDK, making memory usage look different makes optimizing it harder. Not\n>> a hard rule, obviously, but ...\n>\n>Well, if you're that excited about it, make a patch so we can see\n>how ugly it ends up being.\n>\n\nI think the question is how much memory would using globals actually\nsave, compared to including the bitmap in SlabContext.\n\nThe bitmap size depends on block/chunk size - I don't know what\nparameters Andres uses for the additional contexts, but for the two\nplaces already using Slab we have 8kB blocks with 80B and 240B chunks,\nso ~102 and ~34 chunks in a block. So it's not a huge amount, and we\ncould easily reduce this to 1/8 by switching to a proper bitmap.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 Jan 2020 15:40:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SlabCheck leaks memory into TopMemoryContext"
}
] |
[
{
"msg_contents": "Hello,\n\ndefault constructor for ranges use lower bound closed '[' and upper bound \nopen ')'. This is correct behavior, but when upper bound is same like \nlower bound then range is empty. Mathematically is correct again - but in \ndatabase is lost information about range bounds (lower/upper is NULL). To \nprevent this sitiuation we must have check if lower and upper argument is \nsame and add some 0.00001s to upper range or use another constructor like \ntstzrange(now(),now(),'[]') .\n\nIs there chance to change behavior of storing ranges? Its possible store \nrange bounds in internal structure and lower(tstzrange(now(),now())) show \nnot NULL value or change default behavior \ntstzrange(timestamptz,timestamptz) - if both args are same, then store as \n'[]', else '[)' and only tstzrange(timestamptz,timestamtz,'[)') and \ntstzrange(timestamptz,timestamtz,'()') store empty range.\n\nIt's only suggestion, i don't now if somebody wants store empty range \nwithout bounds.\n\nWe must have some checks to prevent storing empty values on every place \nwhere can occur this empty range, becouse we don't want lose bound \ninformation.\n\nBest regards,\n-- \n-------------------------------------\nIng. David TUROŇ\nLinuxBox.cz, s.r.o.\n28. rijna 168, 709 01 Ostrava\n\ntel.: +420 591 166 224\nfax: +420 596 621 273\nmobil: +420 732 589 152\nwww.linuxbox.cz\n\nmobil servis: +420 737 238 656\nemail servis: servis@linuxbox.cz\n-------------------------------------",
"msg_date": "Thu, 16 Jan 2020 10:27:28 +0100",
"msg_from": "david.turon@linuxbox.cz",
"msg_from_op": true,
"msg_subject": "empty range"
},
{
"msg_contents": "> It's only suggestion, i don't now if somebody wants store empty range without bounds.\n\nI thought about the same while developing the BRIN inclusion operator\nclass. I am not sure how useful empty ranges are in practice, but\nkeeping their bound would only bring more flexibility, and eliminate\nspecial cases on most of the range operators. For reference, we allow\nempty boxes, and none of the geometric operators has to handle them\nspecially.\n\n\n",
"msg_date": "Thu, 16 Jan 2020 14:00:24 +0000",
"msg_from": "Emre Hasegeli <emre@hasegeli.com>",
"msg_from_op": false,
"msg_subject": "Re: empty range"
},
{
"msg_contents": "Emre Hasegeli <emre@hasegeli.com> writes:\n>> It's only suggestion, i don't now if somebody wants store empty range without bounds.\n\n> I thought about the same while developing the BRIN inclusion operator\n> class. I am not sure how useful empty ranges are in practice, but\n> keeping their bound would only bring more flexibility, and eliminate\n> special cases on most of the range operators. For reference, we allow\n> empty boxes, and none of the geometric operators has to handle them\n> specially.\n\nI think it'd just move the special cases somewhere else. Consider\n\nregression=# select int4range(4,4) = int4range(5,5);\n ?column? \n----------\n t\n(1 row)\n\nHow do you preserve that behavior ... or if you don't, how much\ndamage does that do to the semantics of ranges? Right now there's\na pretty solid set-theoretic basis for understanding what a range is,\nie two ranges are the same if they include the same sets of elements.\nIt seems like that goes out the window if we don't consider that\nall empty ranges are the same.\n\nBTW, I think the main reason for all the bound-normalization pushups\nis to try to have a rule that ranges that are set-theoretically equal\nwill look the same. That also goes out the window if we make\nempty ranges look like this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 10:21:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: empty range"
},
{
"msg_contents": "> regression=# select int4range(4,4) = int4range(5,5);\n> ?column? \n> ----------\n> t\n> (1 row)\n\nYes, you are right, I did not consider this situation. But what about just \nleaving the empty bit set to true and not discarding the input value - items could \nbe sorted and we could see the value - lower(int4range(4,4)) could be 4 without \nany damage. There is the isempty function, and I hope nobody checks for empty ranges \nwith lower(anyrange) IS NULL AND upper(anyrange) IS NULL, right?\n\nAnother thing that we did not know before is that it is better to store \ntstzrange(now(), NULL) instead of tstzrange(now(),'infinity'), because \nupper_inf is not true in that case, and is not true even if the bounds are closed \n'[]'. Maybe we did not read the documentation properly; we did not know how \nthe range type is stored internally, and we were confused after discovering this \nbehavior. \n\nSELECT upper_inf(tstzrange(now(),'infinity','[]'));\n upper_inf \n-----------\n f\n(1 row) \n\n\n\nBest regards\n\nDavid T. \n\n-- \n-------------------------------------\nIng. David TUROŇ\nLinuxBox.cz, s.r.o.\n28. rijna 168, 709 01 Ostrava\n\ntel.: +420 591 166 224\nfax: +420 596 621 273\nmobile: +420 732 589 152\nwww.linuxbox.cz\n\nmobile service: +420 737 238 656\nemail service: servis@linuxbox.cz\n-------------------------------------\n\n\n\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: emre@hasegeli.com\nCc: david.turon@linuxbox.cz, pgsql-hackers@postgresql.org\nDate: 16. 01. 2020 16:21\nSubject: Re: empty range\n\n\n\nEmre Hasegeli <emre@hasegeli.com> writes:\n>> It's only suggestion, i don't now if somebody wants store empty range \nwithout bounds.\n\n> I thought about the same while developing the BRIN inclusion operator\n> class. I am not sure how useful empty ranges are in practice, but\n> keeping their bound would only bring more flexibility, and eliminate\n> special cases on most of the range operators. For reference, we allow\n> empty boxes, and none of the geometric operators has to handle them\n> specially.\n\nI think it'd just move the special cases somewhere else. Consider\n\nregression=# select int4range(4,4) = int4range(5,5);\n ?column? \n----------\n t\n(1 row)\n\nHow do you preserve that behavior ... or if you don't, how much\ndamage does that do to the semantics of ranges? Right now there's\na pretty solid set-theoretic basis for understanding what a range is,\nie two ranges are the same if they include the same sets of elements.\nIt seems like that goes out the window if we don't consider that\nall empty ranges are the same.\n\nBTW, I think the main reason for all the bound-normalization pushups\nis to try to have a rule that ranges that are set-theoretically equal\nwill look the same. That also goes out the window if we make\nempty ranges look like this.\n\n regards, tom lane",
"msg_date": "Mon, 20 Jan 2020 08:39:35 +0100",
"msg_from": "david.turon@linuxbox.cz",
"msg_from_op": true,
"msg_subject": "Re: empty range"
}
] |
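Tom Lane's set-theoretic argument in the thread above can be sketched outside of SQL. The following is an editorial illustration (not from the thread, names hypothetical): modeling a discrete int4range with the default '[)' bounds as the set of integers it contains makes every empty range compare equal, which is exactly the property that storing bounds for empty ranges would break.

```python
def int4range_as_set(lo, hi):
    """Model int4range(lo, hi) with default '[)' bounds as the set it covers."""
    return frozenset(range(lo, hi))

# Set-theoretic equality: int4range(4,4) and int4range(5,5) both denote the
# empty set, so they are the same range -- mirroring the psql output quoted
# in the thread.
assert int4range_as_set(4, 4) == int4range_as_set(5, 5) == frozenset()

# If empty ranges kept their bounds, (4,4) and (5,5) would compare unequal,
# losing the rule that two ranges are equal iff they contain the same elements.
assert (4, 4) != (5, 5)
```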
[
{
"msg_contents": "Hello, hackers.\n\nCurrently, hint bits in the index pages (dead tuples) are set and taken\ninto account only on the primary server. Standbys just ignore them. It is\ndone for reasons, of course (see RelationGetIndexScan and [1]):\n\n * We do this because the xmin on the primary node could easily be\n * later than the xmin on the standby node, so that what the primary\n * thinks is killed is supposed to be visible on standby. So for correct\n * MVCC for queries during recovery we must ignore these hints and check\n * all tuples.\n\nAlso, according to [2] and cases like [3], it seems to be a good idea to\nsupport \"ignore_killed_tuples\" on standby.\n\nI hope I know the way to support it correctly with a reasonable amount of changes.\n\nThe first thing we need to consider: checksums and wal_log_hints are\nwidely used these days. So, at any moment the master could send an FPW page\nwith new \"killed tuples\" hints and overwrite hints set by the standby.\nMoreover, it is not possible to distinguish whether hints were set by the primary or the standby.\n\nAnd this is where hot_standby_feedback comes into play. The master node\nconsiders the xmin of hot_standby_feedback replicas (RecentGlobalXmin)\nwhile setting \"killed tuples\" bits. So, if hot_standby_feedback has been\nenabled on a standby for a while, it could safely trust hint bits from the\nmaster.\nAlso, the standby could set its own hints using the xmin it sends to the primary\nduring feedback (but without marking the page as dirty).\n\nOf course, it is not all so easy; there are a few things and corner cases\nto care about:\n* It looks like RecentGlobalXmin could be moved backwards in case a new\nreplica with a lower xmin is connected (or by switching some replica to\nhot_standby_feedback=on). We must ensure RecentGlobalXmin is moved\nstrictly forward.\n* hot_standby_feedback could be enabled on the fly. In such a case we\nneed to distinguish transactions which are safe or unsafe to deal with\nhints. The standby could receive a fresh RecentGlobalXmin as a response to a\nfeedback message. 
All standby transactions with xmin >=\nRecentGlobalXmin are safe to use hints.\n* hot_standby_feedback could be disabled on the fly. In such situation\nstandby needs to continue to send feedback while canceling all queries\nwith ignore_killed_tuples=true. Once all such queries are canceled -\nfeedback are no longer needed and should be disabled.\n\nCould someone validate my thoughts please? If the idea is mostly\ncorrect - I could try to implement and test it.\n\n[1] - https://www.postgresql.org/message-id/flat/7067.1529246768%40sss.pgh.pa.us#d9e2e570ba34fc96c4300a362cbe8c38\n[2] - https://www.postgresql.org/message-id/flat/12843.1529331619%40sss.pgh.pa.us#6df9694fdfd5d550fbb38e711d162be8\n[3] - https://www.postgresql.org/message-id/flat/20170428133818.24368.33533%40wrigleys.postgresql.org\n\n\n",
"msg_date": "Thu, 16 Jan 2020 14:30:12 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-16 14:30:12 +0300, Michail Nikolaev wrote:\n> First thing we need to consider - checksums and wal_log_hints are\n> widely used these days. So, at any moment master could send FPW page\n> with new \"killed tuples\" hints and overwrite hints set by standby.\n> Moreover it is not possible to distinguish hints are set by primary or standby.\n\nNote that the FPIs are only going to be sent for the first write to a\npage after a checkpoint. I don't think you're suggesting we rely on them\nfor correctness (this far into the email at least), but it still seems\nworthwhile to point out.\n\n\n> And there is where hot_standby_feedback comes to play. Master node\n> considers xmin of hot_standy_feedback replicas (RecentGlobalXmin)\n> while setting \"killed tuples\" bits. So, if hot_standby_feedback is\n> enabled on standby for a while - it could safely trust hint bits from\n> master.\n\nWell, not easily. There's no guarantee that the node it reports\nhot_standby_feedback to is actually the primary. It could be a\ncascading replica setup that doesn't report hot_standby_feedback\nupstream. Also, hot_standby_feedback only takes effect while a streaming\nconnection is active; if that is even temporarily interrupted, the\nprimary will lose all knowledge of the standby's horizon - unless\nreplication slots are in use, that is.\n\nAdditionally, we also need to handle the case where the replica\nhas restarted, and is recovering using local WAL, and/or\narchive based recovery. In that case the standby could already have sent\na *newer* horizon as feedback upstream, but currently again have an\nolder view. 
It is entirely possible that the standby is consistent and\nqueryable in such a state (if nothing forced disk writes during WAL\nreplay, minRecoveryLSN will not be set to something too new).\n\n\n> Also, standby could set own hints using xmin it sends to primary\n> during feedback (but without marking page as dirty).\n\nWe do something similar for heap hint bits already...\n\n\n> Of course all is not so easy, there are a few things and corner cases\n> to care about\n> * Looks like RecentGlobalXmin could be moved backwards in case of new\n> replica with lower xmin is connected (or by switching some replica to\n> hot_standby_feedback=on). We must ensure RecentGlobalXmin is moved\n> strictly forward.\n\nI'm not sure this is a problem. If that happens we cannot rely on the\ndifferent xmin horizon anyway, because action may have been taken on the\nold RecentGlobalXmin. Thus we need to be safe against that anyway.\n\n\n> * hot_standby_feedback could be enabled on the fly. In such a case we\n> need distinguish transactions which are safe or unsafe to deal with\n> hints. Standby could receive fresh RecentGlobalXmin as response to\n> feedback message. All standby transactions with xmin >=\n> RecentGlobalXmin are safe to use hints.\n> * hot_standby_feedback could be disabled on the fly. In such situation\n> standby needs to continue to send feedback while canceling all queries\n> with ignore_killed_tuples=true. Once all such queries are canceled -\n> feedback are no longer needed and should be disabled.\n\nI don't think we can rely on hot_standby_feedback at all. We can use it to\navoid unnecessary cancellations, etc., and even assume it's set up\nreasonably for some configurations, but there always needs to be an\nindependent correctness backstop.\n\n\n\nI think it might be more promising to improve the kill logic based\non the WAL-logged horizons from the primary. 
All I think we need to do\nis to use a more conservatively computed RecentGlobalXmin when\ndetermining whether tuples are dead. We already regularly log an\nxl_running_xacts; adding information about the primary's horizon to\nthat, and stashing it in shared memory on the standby, shouldn't be too\nhard. Then we can make correct, albeit likely overly pessimistic,\nvisibility determinations about tuples, and go on to set LP_DEAD.\n\nThere are some complexities around how to avoid unnecessary query\ncancellations. We'd not want to trigger recovery conflicts based on the\nnew xl_running_xacts field, as that'd make the conflict rate go through\nthe roof - but I think we could safely use the logical minimum of the\nlocal RecentGlobalXmin and the primary's.\n\nThat should allow us to set additional LP_DEAD bits safely, I believe. We\ncould even rely on those LP_DEAD bits. But:\n\nI'm less clear on how we can make sure that we can *rely* on LP_DEAD to\nskip over entries during scans, however. The bits set as described above\nwould be safe, but we also can see LP_DEAD set by the primary (and even\nupstream cascading standbys, at least in case of new base backups taken\nfrom them), due to them not being WAL logged. As we don't WAL log,\nthere is no conflict associated with the LP_DEADs being set. My gut\nfeeling is that it's going to be very hard to get around this without\nadding WAL logging for _bt_killitems et al (including an interface for\nkill_prior_tuple to report the used horizon to the index).\n\n\nI'm wondering if we could recycle BTPageOpaqueData.xact to store the\nhorizon applying to killed tuples on the page. We don't need to store\nthe level for leaf pages, because we have BTP_LEAF, so we could make\nspace for that (potentially signalled by a new BTP flag). Obviously we\nhave to be careful with storing xids in the index, due to potential\nwraparound danger - but I think such a page would have to be vacuumed\nanyway, before a potential wraparound. 
I think we could safely unset\nthe xid during nbtree single page cleanup, and vacuum, by making sure no\nLP_DEAD entries survive, and by including the horizon in the generated\nWAL record.\n\nThat however still doesn't really fully allow us to set LP_DEAD on\nstandbys, however - but it'd allow us to take the primary's LP_DEADs\ninto account on a standby. I think we'd run into torn page issues, if we\nwere to do so without WAL logging, because we'd rely on the LP_DEAD bits\nand BTPageOpaqueData.xact to be in sync. I *think* we might be safe to\ndo so *iff* the page's LSN indicates that there has been a WAL record\ncovering it since the last redo location.\n\n\nI only had my first cup of coffee for the day, so I might also just be\nentirely off base here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Jan 2020 09:54:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 9:54 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't think we can rely on hot_standby_feedback at all. We can to\n> avoid unnecessary cancellations, etc, and even assume it's setup up\n> reasonably for some configurations, but there always needs to be an\n> independent correctness backstop.\n\n+1\n\n> I'm less clear on how we can make sure that we can *rely* on LP_DEAD to\n> skip over entries during scans, however. The bits set as described above\n> would be safe, but we also can see LP_DEAD set by the primary (and even\n> upstream cascading standbys at least in case of new base backups taken\n> from them), due to them being not being WAL logged. As we don't WAL log,\n> there is no conflict associated with the LP_DEADs being set. My gut\n> feeling is that it's going to be very hard to get around this, without\n> adding WAL logging for _bt_killitems et al (including an interface for\n> kill_prior_tuple to report the used horizon to the index).\n\nI agree.\n\nWhat about calling _bt_vacuum_one_page() more often than strictly\nnecessary to avoid a page split on the primary? The B-Tree\ndeduplication patch sometimes does that, albeit for completely\nunrelated reasons. (We don't want to have to unset an LP_DEAD bit in\nthe case when a new/incoming duplicate tuple has a TID that overlaps\nwith the posting list range of some existing duplicate posting list\ntuple.)\n\nI have no idea how you'd determine that it was time to call\n_bt_vacuum_one_page(). Seems worth considering.\n\n> I'm wondering if we could recycle BTPageOpaqueData.xact to store the\n> horizon applying to killed tuples on the page. We don't need to store\n> the level for leaf pages, because we have BTP_LEAF, so we could make\n> space for that (potentially signalled by a new BTP flag). 
Obviously we\n> have to be careful with storing xids in the index, due to potential\n> wraparound danger - but I think such page would have to be vacuumed\n> anyway, before a potential wraparound.\n\nYou would think that, but unfortunately we don't currently do it that\nway. We store XIDs in deleted leaf pages that can sometimes be missed\nuntil the next wraparound.\n\nWe need to do something like commit\n6655a7299d835dea9e8e0ba69cc5284611b96f29, but for B-Tree. It's\nsomewhere on my TODO list.\n\n> I think we could safely unset\n> the xid during nbtree single page cleanup, and vacuum, by making sure no\n> LP_DEAD entries survive, and by including the horizon in the generated\n> WAL record.\n>\n> That however still doesn't really fully allow us to set LP_DEAD on\n> standbys, however - but it'd allow us to take the primary's LP_DEADs\n> into account on a standby. I think we'd run into torn page issues, if we\n> were to do so without WAL logging, because we'd rely on the LP_DEAD bits\n> and BTPageOpaqueData.xact to be in sync. I *think* we might be safe to\n> do so *iff* the page's LSN indicates that there has been a WAL record\n> covering it since the last redo location.\n\nThat sounds like a huge mess.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Jan 2020 10:49:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello again.\n\nAndres, Peter, thanks for your comments.\n\nSome of issues your mentioned (reporting feedback to the another\ncascade standby, processing queries after restart and newer xid\nalready reported) could be fixed in provided design, but your\nintention to have \"independent correctness backstop\" is a right thing\nto do.\n\nSo, I was thinking about another approach which is:\n* still not too tricky to implement\n* easy to understand\n* does not rely on hot_standby_feedback for correctness, but only for efficiency\n* could be used with any kind of index\n* does not generate a lot of WAL\n\nLet's add a new type of WAL record like \"some index killed tuple hint\nbits are set according to RecentGlobalXmin=x\" (without specifying page\nor even relation). Let's call 'x' as 'LastKilledIndexTuplesXmin' and\ntrack it in standby memory. It is sent only in case of\nwal_log_hints=true. If hints cause FPW - it is sent before FPW record.\nAlso, it is not required to write such WAL every time primary marks\nindex tuple as dead. It should be done only in case\n'LastKilledIndexTuplesXmin' is changed (moved forward).\n\nOn standby such record is used to cancel queries. If transaction is\nexecuted with \"ignore_killed_tuples==true\" (set on snapshot creation)\nand its xid is less than received LastKilledIndexTuplesXmin - just\ncancel the query (because it could rely on invalid hint bit). 
So,\ntechnically it should be correct to use hints received from master to\nskip tuples according to MVCC, but \"the conflict rate goes through the\nroof\".\n\nTo avoid any real conflicts standby sets\n ignore_killed_tuples = (hot_standby_feedback is on)\n AND (wal_log_hints is on on primary)\n AND (standby new snapshot xid >= last\nLastKilledIndexTuplesXmin received)\n AND (hot_standby_feedback is reported\ndirectly to master).\n\nSo, hot_standby_feedback loop effectively eliminates any conflicts\n(because LastKilledIndexTuplesXmin is technically RecentGlobalXmin in\nsuch case). But if feedback is broken for some reason - query\ncancellation logic will keep everything safe.\n\nFor correctness LastKilledIndexTuplesXmin (and as consequence\nRecentGlobalXmin) should be moved only forward.\n\nTo set killed bits on standby we should check tuples visibility\naccording to last LastKilledIndexTuplesXmin received. It is just like\nmaster sets these bits according to its state - so it is even safe to\ntransfer them to another standby.\n\nDoes it look better now?\n\nThanks, Michail.\n\n\n",
"msg_date": "Fri, 24 Jan 2020 17:15:47 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hi Michail,\n\nOn Fri, Jan 24, 2020 at 6:16 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> Some of issues your mentioned (reporting feedback to the another\n> cascade standby, processing queries after restart and newer xid\n> already reported) could be fixed in provided design, but your\n> intention to have \"independent correctness backstop\" is a right thing\n> to do.\n\nAttached is a very rough POC patch of my own, which makes item\ndeletion occur \"non-opportunistically\" in unique indexes. The idea is\nthat we exploit the uniqueness property of unique indexes to identify\n\"version churn\" from non-HOT updates. If any single value on a leaf\npage has several duplicates, then there is a good chance that we can\nsafely delete some of them. It's worth going to the heap to check\nwhether that's safe selectively, at the point where we'd usually have\nto split the page. We only have to free one or two items to avoid\nsplitting the page. If we can avoid splitting the page immediately, we\nmay well avoid splitting it indefinitely, or forever.\n\nThis approach seems to be super effective. It can leave the PK on\npgbench_accounts in pristine condition (no page splits) after many\nhours with a pgbench-like workload that makes all updates non-HOT\nupdates. Even with many clients, and a skewed distribution. Provided\nthe index isn't tiny to begin with, we can always keep up with\ncontrolling index bloat -- once the client backends themselves begin\nto take active responsibility for garbage collection, rather than just\ntreating it as a nice-to-have. I'm pretty sure that I'm going to be\nspending a lot of time developing this approach, because it really\nworks.\n\nThis seems fairly relevant to what you're doing. It makes almost all\nindex cleanup occur using the new delete infrastructure for some of\nthe most interesting workloads where deletion takes place in client\nbackends. 
In practice, a standby will almost be in the same position\nas the primary in a workload that this approach really helps with,\nsince setting the LP_DEAD bit itself doesn't really need to happen (we\ncan go straight to deleting the items in the new deletion path).\n\nTo address the questions you've asked: I don't really like the idea of\nintroducing new rules around tuple visibility and WAL logging to set\nmore LP_DEAD bits like this at all. It seems very complicated. I\nsuspect that we'd be better off introducing ways of making the actual\ndeletes occur sooner on the primary, possibly much sooner, avoiding\nany need for special infrastructure on the standby. This is probably\nnot limited to the special unique index case that my patch focuses on\n-- we can probably push this general approach forward in a number of\ndifferent ways. I just started with unique indexes because that seemed\nmost promising. I have only worked on the project for a few days. I\ndon't really know how it will evolve.\n\n-- \nPeter Geoghegan",
"msg_date": "Sun, 5 Apr 2020 13:05:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
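Peter's idea above -- deleting known-dead duplicates in a unique index at the point where a page split would otherwise occur -- can be sketched roughly as follows. This is an editorial toy model, not the actual nbtree code; all names (`simplify_leaf_page`, `is_dead`) are hypothetical. The point it illustrates: in a unique index, duplicates of a key are version churn, so a heap visibility check limited to those entries may free enough items to avoid the split.

```python
def simplify_leaf_page(entries, is_dead, capacity):
    """Toy model of pre-split cleanup on a unique-index leaf page.

    entries: list of (key, tid) pairs on the page.
    is_dead: callable tid -> bool, standing in for a heap visibility check.
    capacity: page size in items; below it no split is pending, so we do
    nothing (the checks are only worth it when a split is imminent).
    """
    if len(entries) < capacity:
        return entries  # no split pending: skip the heap visits entirely
    keys = [k for k, _ in entries]
    # Only duplicated keys can be version churn; unique keys are kept as-is.
    return [(k, t) for k, t in entries
            if keys.count(k) == 1 or not is_dead(t)]

# A full page where "a" has three versions, two of them dead in the heap:
page = [("a", 1), ("a", 2), ("a", 3), ("b", 4)]
dead = {1, 2}.__contains__
assert simplify_leaf_page(page, dead, capacity=4) == [("a", 3), ("b", 4)]
```

Freeing even one or two items this way defers the split; as the message notes, a page whose splits keep getting deferred may never split at all.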
{
"msg_contents": "Hello, Peter.\n\nThanks for your feedback.\n\n> Attached is a very rough POC patch of my own, which makes item\n> deletion occur \"non-opportunistically\" in unique indexes. The idea is\n> that we exploit the uniqueness property of unique indexes to identify\n> \"version churn\" from non-HOT updates. If any single value on a leaf\n> page has several duplicates, then there is a good chance that we can\n> safely delete some of them. It's worth going to the heap to check\n> whether that's safe selectively, at the point where we'd usually have\n> to split the page. We only have to free one or two items to avoid\n> splitting the page. If we can avoid splitting the page immediately, we\n> may well avoid splitting it indefinitely, or forever.\n\nYes, it is a brilliant idea to use uniqueness to avoid bloating the index. I am\nnot able to understand all the code now, but I’ll check the patch in more\ndetail later.\n\n> This seems fairly relevant to what you're doing. It makes almost all\n> index cleanup occur using the new delete infrastructure for some of\n> the most interesting workloads where deletion takes place in client\n> backends. In practice, a standby will almost be in the same position\n> as the primary in a workload that this approach really helps with,\n> since setting the LP_DEAD bit itself doesn't really need to happen (we\n> can go straight to deleting the items in the new deletion path).\n\n> This is probably\n> not limited to the special unique index case that my patch focuses on\n> -- we can probably push this general approach forward in a number of\n> different ways. I just started with unique indexes because that seemed\n> most promising. I have only worked on the project for a few days. I\n> don't really know how it will evolve.\n\nYes, it is relevant, but I think it is «located in a different plane» and\ncomplement each other. 
Because most of the indexes are not unique these days\nand most of the standbys (and even primaries) have long snapshots (up to\nminutes, hours) – so, multiple versions of index records are still required for\nthem. Even if we could avoid multiple versions somehow - it could lead to a very\nhigh rate of query cancelations on standby.\n\n> To address the questions you've asked: I don't really like the idea of\n> introducing new rules around tuple visibility and WAL logging to set\n> more LP_DEAD bits like this at all. It seems very complicated.\n\nI don’t think it is too complicated. I have polished the idea a little and now\nit looks even elegant for me :) I’ll try to explain the concept briefly (there\nare no new visibility rules or changes to set more LP_DEAD bits than now –\neverything is based on well-tested mechanics):\n\n1) There is some kind of horizon of xmin values primary pushes to a standby\ncurrently. All standby’s snapshots are required to satisfice this horizon to\naccess heap and indexes. This is done by *ResolveRecoveryConflictWithSnapshot*\nand corresponding WAL records (for example -XLOG_BTREE_DELETE).\n\n2) We could introduce a new WAL record (named XLOG_INDEX_HINT in the patch for\nnow) to define a horizon of xmin required for standby’s snapshot to use LP_DEAD\nbits in the indexes.\n\n3) Master sends XLOG_INDEX_HINT in case it sets LP_DEAD bit on the index page\n(but before possible FPW caused by hints) by calling *LogIndexHintIfNeeded*. It\nis required to send such a record only if the new xmin value is greater than\none send before. I made tests to estimate the amount of new WAL – it is really\nsmall (especially compared to FPW writes done because of LP_DEAD bit set).\n\n4) New XLOG_INDEX_HINT contains only a database id and value of\n*latestIndexHintXid* (new horizon position). For simplicity, the primary could\nset just set horizon to *RecentGlobalXmin*. 
But for now, in the patch, the horizon\nvalue is extracted from the heap in *HeapTupleIsSurelyDead* to reduce the number of\nXLOG_INDEX_HINT records even more.\n\n\n5) There is a new field in the PGPROC structure - *indexIgnoreKilledTuples*. If it\nis set to true, standby queries are going to use LP_DEAD bits in index scans.\nIn such a case the snapshot is required to satisfy the new LP_DEAD horizon pushed by\nXLOG_INDEX_HINT records. It is done by the same mechanism as used for the heap -\n*ResolveRecoveryConflictWithSnapshot*.\n\n6) The major thing here – it is safe to set *indexIgnoreKilledTuples* to both\n‘true’ and ‘false’ from the perspective of correctness. It is just some kind of\nperformance compromise – use LP_DEAD bits but be aware of the XLOG_INDEX_HINT\nhorizon, or vice versa.\n\n7) How do we make the right decision about this compromise? It is pretty\nsimple – if hot_standby_feedback is on and the primary confirmed our feedback is\nreceived – then set *indexIgnoreKilledTuples* to ‘true’ – since while feedback\nis working as expected, the query will never be canceled by the XLOG_INDEX_HINT\nhorizon!\n\n8) To support cascading standby setups (with a possible break of the feedback\nchain), an additional byte is added to the ‘keep-alive’ message of the feedback\nprotocol.\n\n9) So, at the moment we are safe to use LP_DEAD bits received from the\nprimary when we want to.\n\n10) What about setting LP_DEAD bits on the standby? The main thing here -\n*RecentGlobalXmin* on the standby is always lower than the XLOG_INDEX_HINT horizon by\ndefinition – the standby is always behind the primary. 
So, if something looks dead\non standby – it is definitely dead on the primary.\n\n11) Even if:\n\n* the primary changes vacuum_defer_cleanup_age\n* standby restarted\n* standby promoted to the primary\n* base backup taken from standby\n* standby is serving queries during recovery\n– nothing could go wrong here.\n\nBecause *HeapTupleIsSurelyDead* (and index LP_DEAD as result) needs *HEAP* hint\nbits to be already set at standby. So, the same code decides to set hint bits\nin the heap (it is done already on standby for a long time) and in the index.\n\nSo, the only thing we pay – a few additional bytes of WAL and some additional\nmoderate code complexity. But the support of hint-bits on standby is a huge\nadvantage for many workloads. I was able to get more than 1000% performance\nboost (and it is not surprising – index hint bits is just great optimization).\nAnd it works for almost all index types out of the box.\n\nAnother major thing here – everything is based on old, well-tested mechanics:\nquery cancelation because of snapshot conflicts and setting heap hint bits on\nstandby.\n\nMost of the patch – are technical changes to support new query cancelation\ncounters, new WAL record, new PGPROC field and so on. There are some places I\nam not sure about yet, naming is bad – it is still POC.\n\nThanks,\nMichail.",
"msg_date": "Wed, 8 Apr 2020 15:23:42 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
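The horizon rules in points 1-10 above boil down to a small piece of state on the standby. The sketch below is an editorial illustration, not patch code; the names mirror the patch's *latestIndexHintXid* but are otherwise hypothetical. It shows the two invariants the message relies on: the LP_DEAD horizon only moves forward, and a snapshot may use LP_DEAD bits only if its xmin is not behind the horizon -- otherwise it must be canceled.

```python
class StandbyIndexHintState:
    """Toy model of the standby-side XLOG_INDEX_HINT horizon tracking."""

    def __init__(self):
        self.latest_index_hint_xid = 0  # the LP_DEAD horizon (LastKilledIndexTuplesXmin)

    def apply_index_hint_record(self, xid):
        # The horizon may only move forward; a stale record is ignored.
        self.latest_index_hint_xid = max(self.latest_index_hint_xid, xid)

    def snapshot_must_cancel(self, snapshot_xmin, ignore_killed_tuples):
        # A snapshot that does not use LP_DEAD bits is always safe; one that
        # does must be at least as new as the horizon, or it could rely on
        # hint bits set by the primary under a newer xmin.
        return ignore_killed_tuples and snapshot_xmin < self.latest_index_hint_xid

state = StandbyIndexHintState()
state.apply_index_hint_record(100)
state.apply_index_hint_record(90)           # stale: horizon stays at 100
assert state.latest_index_hint_xid == 100
assert state.snapshot_must_cancel(95, True)       # behind horizon -> cancel
assert not state.snapshot_must_cancel(95, False)  # not using LP_DEAD -> safe
assert not state.snapshot_must_cancel(120, True)  # ahead of horizon -> safe
```

With a working hot_standby_feedback loop the horizon never overtakes the standby's snapshots, so in the expected configuration `snapshot_must_cancel` stays false and no extra cancellations occur.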
{
"msg_contents": "On Wed, Apr 8, 2020 at 5:23 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> Yes, it is a brilliant idea to use uniqueness to avoid bloating the index. I am\n> not able to understand all the code now, but I’ll check the patch in more\n> detail later.\n\nThe design is probably simpler than you imagine. The algorithm tries\nto be clever about unique indexes in various ways, but don't get\ndistracted by those details. At a high level we're merely performing\natomic actions that can already happen, and already use the same high\nlevel rules.\n\nThere is LP_DEAD bit setting that's similar to what often happens in\n_bt_check_unique() already, plus there is a new way in which we can\ncall _bt_delitems_delete(). Importantly, we're only changing the time\nand the reasons for doing these things.\n\n> Yes, it is relevant, but I think it is «located in a different plane» and\n> complement each other. Because most of the indexes are not unique these days\n> and most of the standbys (and even primaries) have long snapshots (up to\n> minutes, hours) – so, multiple versions of index records are still required for\n> them. Even if we could avoid multiple versions somehow - it could lead to a very\n> high rate of query cancelations on standby.\n\nI admit that I didn't really understand what you were trying to do\ninitially. I now understand a little better.\n\nI think that there is something to be said for calling\n_bt_delitems_delete() more frequently on the standby, not necessarily\njust for the special case of unique indexes. That might also help on\nstandbys, at the cost of more recovery conflicts. I admit that it's\nunclear how much that would help with the cases that you seem to\nreally care about. I'm not going to argue that the kind of thing I'm\ntalking about (actually doing deletion more frequently on the primary)\nis truly a replacement for your patch, even if it was generalized\nfurther than my POC patch -- it is not a replacement. 
At best, it is a\nsimpler way of \"sending the LP_DEAD bits on the primary to the standby\nsooner\". Even still, I cannot help but wonder how much value there is\nin just doing this much (or figuring out some way to make LP_DEAD bits\nfrom the primary usable on the standby). That seems like a far less\nrisky project, even if it is less valuable to some users.\n\nLet me make sure I understand your position:\n\nYou're particularly concerned about cases where there are relatively\nfew page splits, and the standby has to wait for VACUUM to run on the\nprimary before dead index tuples get cleaned up. The primary itself\nprobably has no problem with setting LP_DEAD bits to avoid having\nindex scans visiting the heap unnecessarily. Or maybe the queries are\ndifferent on the standby anyway, so it matters to the standby that\ncertain index pages get LP_DEAD bits set quickly, though not to the\nprimary (at least not for the same pages). Setting the LP_DEAD bits on\nthe standby (in about the same way as we can already on the primary)\nis a \"night and day\" level difference. And we're willing to account\nfor FPIs on the primary (and the LP_DEAD bits set there) just to be\nable to also set LP_DEAD bits on the standby.\n\nRight?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Apr 2020 18:13:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello, Peter.\n\n> Let me make sure I understand your position:\n\n> You're particularly concerned about cases where there are relatively\n> few page splits, and the standby has to wait for VACUUM to run on the\n> primary before dead index tuples get cleaned up. The primary itself\n> probably has no problem with setting LP_DEAD bits to avoid having\n> index scans visiting the heap unnecessarily. Or maybe the queries are\n> different on the standby anyway, so it matters to the standby that\n> certain index pages get LP_DEAD bits set quickly, though not to the\n> primary (at least not for the same pages). Setting the LP_DEAD bits on\n> the standby (in about the same way as we can already on the primary)\n> is a \"night and day\" level difference.\n> Right?\n\nYes, exactly.\n\nMy initial attempts were too naive (first and second letter) - but you and\nAndres gave me some hints on how to make it reliable.\n\nThe main goal is to make the standby able to use and set LP_DEAD almost\nas the primary does. Of course, standby could receive LP_DEAD with FPI from\nprimary at any moment - so, some kind of cancellation logic is required. Also,\nwe should keep the frequency of query cancellation at the same level - for that\nreason LP_DEAD bits are better used only by standbys with\nhot_standby_feedback enabled. So, I am just repeating myself from the previous\nletter here.\n\n> And we're willing to account\n> for FPIs on the primary (and the LP_DEAD bits set there) just to be\n> able to also set LP_DEAD bits on the standby.\n\nYes, metaphorically speaking - the master sends a WAL record with the letter:\n\"Attention, it is possible to receive FPI from me with LP_DEAD set for tuple\nwith xmax=ABCD, so, if you are using LP_DEAD - your xmin should be greater or you\nshould cancel yourself\". And such a letter is required only if this horizon is\nmoved forward.\n\nAnd... 
Looks like it works - queries are much faster, results look correct,\nadditional WAL traffic is low, cancellation at the same level... As far as I\ncan see - the basic concept is correct and effective (but of course, I\ncould miss something).\n\nThe patch is hard to look into - I'll try to split it into several patches\nlater. And of course, a lot of polishing is required (and there are a few places\nI am not sure about yet).\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Fri, 10 Apr 2020 02:02:44 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello, hackers.\n\nSorry for necroposting, but if someone is interested - I hope the patch is\nready now and available in the other thread (1).\n\nThanks,\nMichail.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CANtu0oiP18H31dSaEzn0B0rW6tA_q1G7%3D9Y92%2BUS_WHGOoQevg%40mail.gmail.com",
"msg_date": "Wed, 27 Jan 2021 22:30:33 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 11:30 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> Sorry for necroposting, but if someone is interested - I hope the patch is ready now and available in the other thread (1).\n\nI wonder if it would help to not actually use the LP_DEAD bit for\nthis. Instead, you could use the currently-unused-in-indexes\nLP_REDIRECT bit. The general idea here is that this status bit is\ninterpreted as a \"recently dead\" bit in nbtree. It is always something\nthat is true *relative* to some *ephemeral* coarse-grained definition\nof recently dead. Whether or not \"recently dead\" means \"dead to my\nparticular MVCC snapshot\" can be determined using some kind of\nin-memory state that won't survive a crash (or a per-index-page\nepoch?). We still have LP_DEAD bits in this world, which work in the\nsame way as now (and so unambiguously indicate that the index tuple is\ndead-to-all, at least on the primary). I think that this idea of a\n\"recently dead\" bit might be able to accomplish a few goals all at\nonce, including your specific goal of setting \"hint bits\" in indexes.\n\nThe issue here is that it would also be nice to use a \"recently dead\"\nbit on the primary, but if you do that then maybe you're back to the\noriginal problem. OTOH, I also think that we could repurpose the\nLP_NORMAL bit in index AMs, so we could potentially have 3 different\ndefinitions of dead-ness without great difficulty!\n\nPerhaps this doesn't seem all that helpful, since I am expanding scope\nhere for a project that is already very difficult. And maybe it just\nisn't helpful -- I don't know. But it is at least possible that\nexpanding scope could actually *help* your case a lot, even if you\nonly ever care about your original goal. My personal experience with\nnbtree patches is just that: sometimes *expanding* scope actually\nmakes things *easier*, not harder. 
This is sometimes called \"The\nInventor's Paradox\":\n\nhttps://en.wikipedia.org/wiki/Inventor%27s_paradox\n\nConsider the example of my work on nbtree in PostgreSQL 12. It was\nactually about 6 or 7 enhancements, not just 1 big enhancement -- it\nis easy to forget that now. I think that it was *necessary* to add at\nleast 5 of these enhancements at the same time (maybe 2 or so really\nwere optional). This is deeply counter-intuitive, but still seems to\nbe true in my case. The specific reasons why I believe that this is\ntrue of the PostgreSQL 12 work are complicated, but it boils down to\nthis: the ~5 related-though-distinct enhancements were individually\nnot all that compelling (certainly not compelling enough to make\non-disk changes for), but were much greater than the sum of their\nparts when considered together. Maybe I got lucky there.\n\nMore generally, the nbtree stuff that I worked on in 12, 13, and now\n14 actually feels like one big project to me. I will refrain from\nexplaining exactly why that is right now, but you might be very\nsurprised at how closely related it all is. I didn't exactly plan it\nthat way, but trying to see things in the most general terms turned\nout to be a huge asset to me. If there are several independent reasons\nto move in one particular direction all at once, you can generally\nafford to be wrong about a lot of things without it truly harming\nanybody. Plus it's easy to see that you will not frustrate future work\nthat is closely related but distinct when that future work is\n*directly enabled* by what you're doing.\n\nWhat's more, the extra stuff I'm talking about probably has a bunch of\nother benefits on the primary, if done well. Consider how the deletion\nstuff with LP_DEAD bits now considers \"extra\" index tuples to delete\nwhen they're close by. We could even do something like that with these\nLP_REDIRECT/recently dead bits on the primary.\n\nI understand that it's hard to have a really long term outlook. 
I\nthink that that's almost necessary when working on a project like\nthis, though.\n\nDon't forget that this works both ways. Maybe a person that wanted to\ndo this \"recently dead\" stuff (which sounds really interesting to me\nright now) would similarly be compelled to consider the bigger\npicture, including the question of setting hint bits on standbys --\nthis other person had better not make that harder in the future, for\nthe same reasons (obviously what you want to do here makes sense, it\njust isn't everything to everybody). This is yet another way in which\nexpanding scope can help.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 27 Jan 2021 17:24:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 5:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The issue here is that it would also be nice to use a \"recently dead\"\n> bit on the primary, but if you do that then maybe you're back to the\n> original problem. OTOH, I also think that we could repurpose the\n> LP_NORMAL bit in index AMs, so we could potentially have 3 different\n> definitions of dead-ness without great difficulty!\n\nTo be clear, what I mean is that you currently have two bits in line\npointers. In an nbtree leaf page we only currently use one -- the\nLP_DEAD bit. But bringing the second bit into it allows us to have a\nrepresentation for two additional states (not just one), since of\ncourse the meaning of the second bit can be interpreted using the\nLP_DEAD bit. You could for example have encodings for each of the\nfollowing distinct per-LP states, while still preserving on-disk\ncompatibility:\n\n\"Not known to be dead in any sense\" (0), \"Unambiguously dead to all\"\n(what we now simply call LP_DEAD), \"recently dead on standby\"\n(currently-spare bit is set), and \"recently dead on primary\" (both\n'lp_flags' bits set).\n\nApplying FPIs on the standby would have to be careful to preserve a\nstandby-only bit. I'm probably not thinking of every subtlety, but\n\"putting all of the pieces of the puzzle together for further\nconsideration\" is likely to be a useful exercise.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 27 Jan 2021 17:42:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello, Peter.\n\n\n> I wonder if it would help to not actually use the LP_DEAD bit for\n> this. Instead, you could use the currently-unused-in-indexes\n> LP_REDIRECT bit.\n\nHm… Sounds very promising - an additional bit is a lot in this situation.\n\n> Whether or not \"recently dead\" means \"dead to my\n> particular MVCC snapshot\" can be determined using some kind of\n> in-memory state that won't survive a crash (or a per-index-page\n> epoch?).\n\nDo you have any additional information about this idea? (maybe some\nthread). What kind of “in-memory state that won't survive a crash” and how\nto deal with flushed bits after the crash?\n\n> \"Not known to be dead in any sense\" (0), \"Unambiguously dead to all\"\n> (what we now simply call LP_DEAD), \"recently dead on standby\"\n> (currently-spare bit is set), and \"recently dead on primary\" (both\n> 'lp_flags' bits set).\n\nHm. What about this way:\n\n 10 - dead to all on standby (LP_REDIRECT)\n 11 - dead to all on primary (LP_DEAD)\n 01 - future “recently DEAD” on primary (LP_NORMAL)\n\nIn such a case standby could just always ignore all LP_DEAD bits from\nprimary (standby will lose its own hint after FPI - but it is not a big\ndeal). So, we don’t need any conflict resolution (and any additional WAL\nrecords). Also, hot_standby_feedback-related stuff is not required anymore.\nAll we need to do (without details of course) - is correctly check if it is\nsafe to set LP_REDIRECT on standby according to `minRecoveryPoint` (to\navoid consistency issues during crash recovery). Or, probably, introduce\nsome kind of `indexHintMinRecoveryPoint`.\n\nAlso, looks like both GIST and HASH indexes also do not use LP_REDIRECT.\n\nSo, it will remove more than 80% of the current patch complexity!\n\nAlso, btw, do you know any reason to keep minRecoveryPoint at a low value?\nBecause it blocks the standby from setting hint bits in *heap* already. And,\nprobably, will block the standby from setting *index* hint bits aggressively.\n\nThanks a lot,\nMichail.",
"msg_date": "Thu, 28 Jan 2021 21:15:52 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 10:16 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> > I wonder if it would help to not actually use the LP_DEAD bit for\n> > this. Instead, you could use the currently-unused-in-indexes\n> > LP_REDIRECT bit.\n>\n> Hm… Sound very promising - an additional bit is a lot in this situation.\n\nYeah, it would help a lot. But those bits are precious. So it makes\nsense to think about what to do with both of them in index AMs at the\nsame time. Otherwise we risk missing some important opportunity.\n\n> > Whether or not \"recently dead\" means \"dead to my\n> > particular MVCC snapshot\" can be determined using some kind of\n> > in-memory state that won't survive a crash (or a per-index-page\n> > epoch?).\n>\n> Do you have any additional information about this idea? (maybe some thread). What kind of “in-memory state that won't survive a crash” and how to deal with flushed bits after the crash?\n\nHonestly, that part wasn't very well thought out. A lot of things might work.\n\nSome kind of \"recently dead\" bit is easier on the primary. If we have\nrecently dead bits set on the primary (using a dedicated LP bit for\noriginal execution recently-dead-ness), then we wouldn't even\nnecessarily have to change anything about index scans/visibility at\nall. There would still be a significant benefit if we simply used the\nrecently dead bits when considering which heap blocks nbtree simple\ndeletion will visit inside _bt_deadblocks() -- in practice there would\nprobably be no real downside from assuming that the recently dead bits\nare now fully dead (it would sometimes be wrong, but not enough to\nmatter, probably only when there is a snapshot held for way way too\nlong).\n\nDeletion in indexes can work well while starting off with only an\n*approximate* idea of which index tuples will be safe to delete --\nthis is a high level idea behind my recent commit d168b666823. 
It\nseems very possible that that could be pushed even further here on the\nprimary.\n\nOn standbys (which set standby recently dead bits) it will be\ndifferent, because you need \"index hint bits\" set that are attuned to\nthe workload on the standby, and because you don't ever use the bit to\nhelp with deleting anything on the standby (that all happens during\noriginal execution).\n\nBTW, what happens when the page splits on the primary? Does your\npatch \"move over\" the LP_DEAD bits to each half of the split?\n\n> Hm. What about this way:\n>\n> 10 - dead to all on standby (LP_REDIRECT)\n> 11 - dead to all on primary (LP_DEAD)\n> 01 - future “recently DEAD” on primary (LP_NORMAL)\n\nNot sure.\n\n> Also, looks like both GIST and HASH indexes also do not use LP_REDIRECT.\n\nRight -- if we were to do this, the idea would be that it would apply\nto all index AMs that currently have (or will ever have) something\nlike the LP_DEAD bit stuff. The GiST and hash support for index\ndeletion is directly based on the original nbtree version, and there\nis no reason why we cannot eventually do all this stuff in at least\nthose three AMs.\n\nThere are already some line-pointer level differences in index AMs:\nLP_DEAD items have storage in index AMs, but not in heapam. This\nall-table-AMs/all-index-AMs divide in how item pointers work would be\npreserved.\n\n> Also, btw, do you know any reason to keep minRecoveryPoint at a low value?\n\nNot offhand.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 29 Jan 2021 18:03:58 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello, Peter.\n\n> Yeah, it would help a lot. But those bits are precious. So it makes\n> sense to think about what to do with both of them in index AMs at the\n> same time. Otherwise we risk missing some important opportunity.\n\nHm. I was trying to \"expand the scope\" as you said and got an idea... Why\nshould we even do any conflict resolution for hint bits? Or share precious\nLP bits?\n\nThe only way standby could get an \"invalid\" hint bit - is an FPI from the\nprimary. We could just use the current *btree_mask* infrastructure and\nclear all \"probably invalid\" bits from the primary on the standby while\napplying FPI (in `XLogReadBufferForRedoExtended`)!\nFor binary compatibility, we could use one of `btpo_flags` bits to mark the\npage as \"primary bits masked\". The same way would work for hash\\gist too.\n\nNo conflicts, only the LP_DEAD bit is used by standby, `btpo_flags` has many\nfree bits for now, easy to implement, page content of primary\\standby\nalready differs in these bits...\nLooks like an easy and effective solution for me.\n\nWhat do you think?\n\n>> Also, btw, do you know any reason to keep minRecoveryPoint at a low\nvalue?\n> Not offhand.\n\nIf so, looks like it is not a bad idea to move minRecoveryPoint forward\nfrom time to time (for more aggressive standby index hint bits). For each\n`xl_running_xacts` (about each 15s), for example.\n\n> BTW, what happens when the page splits on the primary? Does your\n> patch \"move over\" the LP_DEAD bits to each half of the split?\n\nThat part is not changed in any way.\n\nThanks,\nMichail.",
"msg_date": "Sat, 30 Jan 2021 20:11:16 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "On Sat, Jan 30, 2021 at 9:11 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> > Yeah, it would help a lot. But those bits are precious. So it makes\n> > sense to think about what to do with both of them in index AMs at the\n> > same time. Otherwise we risk missing some important opportunity.\n>\n> Hm. I was trying to \"expand the scope\" as you said and got an idea... Why we even should do any conflict resolution for hint bits? Or share precious LP bits?\n\nWhat does it mean today when an LP_DEAD bit is set on a standby? I\ndon't think that it means nothing at all -- at least not if you think\nabout it in the most general terms.\n\nIn a way it actually means something like \"recently dead\" even today,\nat least in one narrow specific sense: recovery may end, and then we\nactually *will* do *something* with the LP_DEAD bit, without directly\nconcerning ourselves with when or how each LP_DEAD bit was set.\n\nDuring recovery, we will probably always have to consider the\npossibility that LP_DEAD bits that get set on the primary may be\nreceived by a replica through some implementation detail (e.g. LP_DEAD\nbits are set in FPIs we replay, maybe even some other thing that\nneither of us have thought of). We can't really mask LP_DEAD bits from\nthe primary in recovery anyway, because of stuff like page-level\nchecksums. I suspect that it just isn't useful to fight against that.\nThese preexisting assumptions are baked into everything already.\n\nWhy should we assume that *anybody* understands all of the ways in\nwhich that is true?\n\nEven today, \"LP_DEAD actually means a limited kind of 'recently dead'\nduring recovery + hot standby\" is something that is true (as I just\nwent into), but at the same time also has a fuzzy definition. My gut\ninstinct is to be conservative about that. 
As I suggested earlier, you\ncould invent some distinct kind of \"recently dead\" that achieves your\ngoals in a way that is 100% orthogonal.\n\nThe two unused LP dead bits (unused in indexes, though not tables) are\nonly precious if we assume that somebody will eventually use them for\nsomething -- if nobody ever uses them then they're 100% useless. The\nnumber of possible projects that might end up using the two bits for\nsomething is not that high -- certainly not more than 3 or 4. Besides,\nit is always a good idea to keep the on-disk format as simple and\nexplicit as possible -- it should be easy to analyze forensically in\nthe event of bugs or some other kind of corruption, especially by\nusing tools like amcheck.\n\nAs I said in my earlier email, we can even play tricks during page\ndeletion by treating certain kinds of \"recently dead\" bits as\napproximate things. As a further example, we can \"rely\" on the\n\"dead-to-all but only on standby\" bits when recovery ends, during a\nsubsequent write transactions. We can do so by simply using them in\n_bt_deadblocks() as if they were LP_DEAD (we'll recheck heap tuples in\nheapam.c instead).\n\n> The only way standby could get an \"invalid\" hint bit - is an FPI from the primary. We could just use the current *btree_mask* infrastructure and clear all \"probably invalid\" bits from primary in case of standby while applying FPI (in `XLogReadBufferForRedoExtended`)!\n\nI don't like that idea. 
Apart from what I said already, you're\nassuming that setting LP_DEAD bits in indexes on the primary won't\neventually have some value on the standby after it is promoted and can\naccept writes -- they really do have meaning and value on standbys.\nPlus you'll introduce new overhead for this process during replay,\nwhich creates significant overhead -- often most leaf pages have some\nLP_DEAD bits set during recovery.\n\n> For binary compatibility, we could use one of `btpo_flags` bits to mark the page as \"primary bits masked\". The same way would work for hash\\gist too.\n\nI agree that there are plenty of btpo_flags bits. However, I have my\ndoubts about this.\n\nWhy shouldn't this break page-level checksums (or wal_log_hints) in\nsome way? What about pg_rewind, some eventual implementation of\nincremental backups, etc? I suspect that it will be necessary to\ninvent some entirely new concept that is like a hint bit, but works on\nstandbys (without breaking anything else).\n\n> No conflicts, only LP_DEAD bit is used by standby, `btpo_flags` has many free bits for now, easy to implement, page content of primary\\standby already differs in this bits...\n> Looks like an easy and effective solution for me.\n\nNote that the BTP_HAS_GARBAGE flag (which goes in btpo_flags) was\ndeprecated in commit cf2acaf4. It was never a reliable indicator of\nwhether or not some LP_DEAD bits are set in the page. And I think that\nit never could be made to work like that.\n\n> >> Also, btw, do you know any reason to keep minRecoveryPoint at a low value?\n> > Not offhand.\n>\n> If so, looks like it is not a bad idea to move minRecoveryPoint forward from time to time (for more aggressive standby index hint bits). For each `xl_running_xacts` (about each 15s), for example.\n\nIt's necessary for recoverypoints (i.e. the standby equivalent of a\ncheckpoint) to do that in order to ensure that we won't break\nchecksums. 
This whole area is scary to me:\n\nhttps://postgr.es/m/CABOikdPOewjNL=05K5CbNMxnNtXnQjhTx2F--4p4ruorCjukbA@mail.gmail.com\n\nSince, as I said, it's already true that LP_DEAD bits on standbys are\nsome particular kind of \"recently dead bit\", even today, any design\nthat uses LP_DEAD bits in some novel new way (also on the standby) is\nvery hard to test. It might easily have very subtle bugs -- obviously\na recently dead bit relates to a tuple pointing to a logical row that\nis bound to become dead soon enough. The difference between dead and\nrecently dead is already blurred, and I would rather not go there.\n\nIf you invent some entirely new category of standby-only hint bit at a\nlevel below the access method code, then you can use it inside access\nmethod code such as nbtree. Maybe you don't have to play games with\nminRecoveryPoint in code like the \"if (RecoveryInProgress())\" path\nfrom the XLogNeedsFlush() function. Maybe you can do some kind of\nrudimentary \"masking\" for the in recovery case at the point that we\n*write out* a buffer (*not* at the point hint bits are initially set)\n-- maybe this could happen near to or inside FlushBuffer(), and maybe\nonly when checksums are enabled? I'm unsure.\n\n> > BTW, what happens when the page splits on the primary, btw? Does your\n> > patch \"move over\" the LP_DEAD bits to each half of the split?\n>\n> That part is not changed in any way.\n\nMaybe it's okay to assume that it's no loss to throw away hint bits\nset on the standby, because in practice deletion on the primary will\nusually do the right thing anyway. But you never said that. I think\nthat you should take an explicit position on this question -- make it\na formal part of your overall design.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 30 Jan 2021 17:39:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "On Sat, Jan 30, 2021 at 5:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If you invent some entirely new category of standby-only hint bit at a\n> level below the access method code, then you can use it inside access\n> method code such as nbtree. Maybe you don't have to play games with\n> minRecoveryPoint in code like the \"if (RecoveryInProgress())\" path\n> from the XLogNeedsFlush() function. Maybe you can do some kind of\n> rudimentary \"masking\" for the in recovery case at the point that we\n> *write out* a buffer (*not* at the point hint bits are initially set)\n> -- maybe this could happen near to or inside FlushBuffer(), and maybe\n> only when checksums are enabled? I'm unsure.\n\nI should point out that hint bits in heap pages are really not like\nLP_DEAD bits in indexes -- if they're similar at all then the\nsimilarity is only superficial/mechanistic. In fact, the term \"hint\nbits in indexes\" does not seem useful at all to me, for this reason.\n\nHeap hint bits indicate whether or not the xmin or xmax in a heap\ntuple header committed or aborted. We cache the commit or abort status\nof one particular XID in the heap tuple header. Notably, this\ninformation alone tells us nothing about whether or not the tuple\nshould be visible to any given MVCC snapshot. Except perhaps in cases\ninvolving aborted transactions -- but that \"exception\" is just a\nlimited special case (and less than 1% of transactions are aborted in\nalmost all environments anyway).\n\nIn contrast, a set LP_DEAD bit in an index is all the information we\nneed to know that the tuple is dead, and can be ignored completely\n(except during hot standby, where at least today we assume nothing\nabout the status of the tuple, since that would be unsafe). Generally\nspeaking, the index page LP_DEAD bit is \"direct\" visibility\ninformation about the tuple, not information about XIDs that are\nstored in the tuple header. 
So a set LP_DEAD bit in an index is\nactually like an LP_DEAD-set line pointer in the heap (that's the\nclosest equivalent in the heap, by far). It's also like a frozen heap\ntuple (except it's dead-to-all, not live-to-all).\n\nThe difference may be subtle, but it's important here -- it justifies\ninventing a whole new type of LP_DEAD-style status bit that gets set\nonly on standbys. Even today, heap tuples can have hint bits\n\"independently\" set on standbys, subject to certain limitations needed\nto avoid breaking things like data page checksums. Hint bits are\nultimately just a thing that remembers the status of transactions that\nare known committed or aborted, and so can be set immediately after\nthe relevant xact commits/aborts (at least on the primary, during\noriginal execution). A long-held MVCC snapshot is never a reason to\nnot set a hint bit in a heap tuple header (during original execution\nor during hot standby/recovery). Of course, a long-held MVCC snapshot\n*is* often a reason why we cannot set an LP_DEAD bit in an index.\n\nConclusion: The whole minRecoveryPoint question that you're trying to\nanswer to improve things for your patch is just the wrong question.\nBecause LP_DEAD bits in indexes are not *true* \"hint bits\". Maybe it\nwould be useful to set \"true hint bits\" on standbys earlier, and maybe\nthinking about minRecoveryPoint would help with that problem, but that\ndoesn't help your index-related patch -- because indexes simply don't\nhave true hint bits.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 31 Jan 2021 14:19:17 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello, Peter.\n\nThanks a lot for your comments.\nHere are some of my thoughts related to the “masked bits” solution and\nyour comments:\n\n> During recovery, we will probably always have to consider the\n> possibility that LP_DEAD bits that get set on the primary may be\n> received by a replica through some implementation detail (e.g. LP_DEAD\n> bits are set in FPIs we replay, maybe even some other thing that\n> neither of us have thought of).\n\nIt is fine for the standby to receive a page from any source: `btpo_flags`\nshould have some kind of “LP_DEAD safe for standby” bit set to allow new bits\nto be set and old ones to be read.\n\n> We can't really mask LP_DEAD bits from\n> the primary in recovery anyway, because of stuff like page-level\n> checksums. I suspect that it just isn't useful to fight against that.\n\nAs far as I can see, there is no problem here. Checksums already differ\nfor both heap and index pages on standby and primary. Checksums are\ncalculated before the page is written to the disk (not after applying FPI).\nSo, masking the page while *applying* the FPI is semantically the same as\nsetting a bit in it 1 nanosecond after.\n\nAnd `btree_mask` (and other mask functions) are already used by consistency\nchecks to exclude LP_DEAD.\n\n> Plus you'll introduce new overhead for this process during replay,\n> which creates significant overhead -- often most leaf pages have some\n> LP_DEAD bits set during recovery.\n\nI hope it is not significant, because FPIs are not too frequent, and the positive\neffect will easily outweigh the additional CPU cycles of `btree_mask` (and the\npage is already in the CPU cache at that moment).\n\n> I don't like that idea. 
Apart from what I said already, you're\n> assuming that setting LP_DEAD bits in indexes on the primary won't\n> eventually have some value on the standby after it is promoted and can\n> accept writes -- they really do have meaning and value on standbys.\n\nI think it is fine to lose some LP_DEAD bits on promotion (they can be\nreceived only by FPI in practice). They will be set on the first scan\nanyway. Also, bits set by the standby may be used by a newly promoted primary if\nwe honor the OldestXmin of the previous primary while setting them (we just need to\nadd OldestXmin to xl_running_xacts and include it in the dead-horizon on\nthe standby).\n\n> Why shouldn't this break page-level checksums (or wal_log_hints) in\n> some way? What about pg_rewind, some eventual implementation of\n> incremental backups, etc? I suspect that it will be necessary to\n> invent some entirely new concept that is like a hint bit, but works on\n> standbys (without breaking anything else).\n\nAs I said before, applying the mask on the *standby* will not break any\nchecksums, because the page is still dirty after that (and it is even\npossible to call `PageSetChecksumInplace` for additional paranoia).\nActual checksums on standby and primary already have different values (and,\nprobably, in most of the pages, because of LP_DEAD and “classic” hint bits).\n\n> If you invent some entirely new category of standby-only hint bit at a\n> level below the access method code, then you can use it inside access\n> method code such as nbtree. Maybe you don't have to play games with\n> minRecoveryPoint in code like the \"if (RecoveryInProgress())\" path\n> from the XLogNeedsFlush() function. Maybe you can do some kind of\n> rudimentary \"masking\" for the in recovery case at the point that we\n> *write out* a buffer (*not* at the point hint bits are initially set)\n> -- maybe this could happen near to or inside FlushBuffer(), and maybe\n> only when checksums are enabled? 
I'm unsure.\n\nI am not sure I was able to understand your idea here, sorry.\n\n> The difference may be subtle, but it's important here -- it justifies\n> inventing a whole new type of LP_DEAD-style status bit that gets set\n> only on standbys. Even today, heap tuples can have hint bits\n> \"independently\" set on standbys, subject to certain limitations needed\n> to avoid breaking things like data page checksums\n\nYes, and I see three major ways to implement it in the current\ninfrastructure:\n\n1) Use LP_REDIRECT (or another free value) instead of LP_DEAD on standby\n2) Use LP_DEAD on standby, but involve some kind of recovery conflicts\n(like here -\nhttps://www.postgresql.org/message-id/flat/CANtu0oiP18H31dSaEzn0B0rW6tA_q1G7%3D9Y92%2BUS_WHGOoQevg%40mail.gmail.com\n)\n3) Mask index FPIs during replay on a hot standby + mark the page as “primary\nLP_DEAD free” in btpo_flags\n\nOf course, each variant requires some special additional things to keep\neverything safe.\n\nAs far as I can see, the limitations in SetHintBits are related to XLogNeedsFlush (the check\nof minRecoveryPoint in the standby case).\n\n> Conclusion: The whole minRecoveryPoint question that you're trying to\n> answer to improve things for your patch is just the wrong question.\n> Because LP_DEAD bits in indexes are not *true* \"hint bits\". 
Maybe it\n> would be useful to set \"true hint bits\" on standbys earlier, and maybe\n> thinking about minRecoveryPoint would help with that problem, but that\n> doesn't help your index-related patch -- because indexes simply don't\n> have true hint bits.\n\nAttention to minRecoveryPoint is required because of the possible situation\ndescribed here -\nhttps://www.postgresql.org/message-id/flat/CANtu0ojwFcSQpyCxrGxJuLVTnOBSSrzKuF3cB_yCk0U-X-wpGw%40mail.gmail.com#4d8ef8754b18c5e35146ed589b25bf27\nThe source of the potential problem is the fact that setting\nLP_DEAD does not change the page LSN, which could cause consistency issues\nduring crash recovery.\n\nThanks,\nMichail.",
"msg_date": "Tue, 2 Feb 2021 00:19:03 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 1:19 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> It is fine to receive a page to the standby from any source: `btpo_flags` should have some kind “LP_DEAD safe for standby” bit set to allow new bits to be set and old - read.\n>\n> > We can't really mask LP_DEAD bits from\n> > the primary in recovery anyway, because of stuff like page-level\n> > checksums. I suspect that it just isn't useful to fight against that.\n>\n> As far as I can see - there is no problem here. Checksums already differ for both heap and index pages on standby and primary.\n\nAFAICT that's not true, at least not in any practical sense. See the\ncomment in the middle of MarkBufferDirtyHint() that begins with \"If we\nmust not write WAL, due to a relfilenode-specific...\", and see the\n\"Checksums\" section at the end of src/backend/storage/page/README. The\nlast paragraph in the README is particularly relevant:\n\nNew WAL records cannot be written during recovery, so hint bits set during\nrecovery must not dirty the page if the buffer is not already dirty, when\nchecksums are enabled. Systems in Hot-Standby mode may benefit from hint bits\nbeing set, but with checksums enabled, a page cannot be dirtied after setting a\nhint bit (due to the torn page risk). So, it must wait for full-page images\ncontaining the hint bit updates to arrive from the primary.\n\nIIUC the intention is that MarkBufferDirtyHint() is a no-op during hot\nstandby when we successfully set a hint bit, though only in the\nXLogHintBitIsNeeded() case. So we don't really dirty the page within\nSetHintBits() in this specific scenario. That is, the buffer header\nwon't actually get marked BM_DIRTY or BM_JUST_DIRTIED within\nMarkBufferDirtyHint() when in Hot Standby + XLogHintBitIsNeeded().\nWhat else could work at all? 
The only \"alternative\" is to write an\nFPI, just like on the primary -- but writing new WAL records is not\npossible during Hot Standby!\n\nA comment within MarkBufferDirtyHint() spells it out directly -- we\ncan have hint bits set in hot standby independently of the primary,\nbut it works in a way that makes sure that the hint bits never make it\nout to disk:\n\n\"We can set the hint, just not dirty the page as a result so the hint\nis lost when we evict the page or shutdown\"\n\nYou may be right in some narrow sense -- checksums can differ on a\nstandby. But that's like saying that it's sometimes okay to have a\ntorn page on disk. Yes, it's okay, but only because we expect the\nproblem during crash recovery, and can reliably repair it.\n\n> Checksums are calculated before the page is written to the disk (not after applying FPI). So, the masking page during *applying* the FPI is semantically the same as setting a bit in it 1 nanosecond after.\n>\n> And `btree_mask` (and other mask functions) already used for consistency checks to exclude LP_DEAD.\n\nI don't see how that is relevant. btree_mask() is only used by\nwal_consistency_checking, which is mostly just for Postgres hackers.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 1 Feb 2021 14:45:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello, Peter.\n\n> AFAICT that's not true, at least not in any practical sense. See the\n> comment in the middle of MarkBufferDirtyHint() that begins with \"If we\n> must not write WAL, due to a relfilenode-specific...\", and see the\n> \"Checksums\" section at the end of src/backend/storage/page/README. The\n> last paragraph in the README is particularly relevant:\n\nI have attached a TAP test to demonstrate how easily checksums on standby\nand primary start to differ. The test shows two different scenarios - for\nboth heap and index (and the bit is placed on both standby and primary).\n\nYes, MarkBufferDirtyHint does not mark the page as dirty… So, hint bits on\nthe secondary could easily be lost. But it leaves the page dirty if it already\nis (or it could be marked dirty by WAL replay later). So, hint bits could\neasily be flushed and taken into account during checksum calculation on\nboth standby and primary.\n\n> \"We can set the hint, just not dirty the page as a result so the hint\n> is lost when we evict the page or shutdown\"\n\nYes, it is not allowed to mark a page as dirty because of hints on standby,\nbecause we could get this:\n\nCHECKPOINT\nSET HINT BIT\nTORN FLUSH + CRASH = BROKEN CHECKSUM, SERVER FAULT\n\nBut this scenario is totally fine:\n\nCHECKPOINT\nFPI (page is still dirty)\nSET HINT BIT\nTORN FLUSH + CRASH = PAGE IS RECOVERED, EVERYTHING IS OK\n\nAnd, as a result, this is fine too:\n\nCHECKPOINT\nFPI WITH MASKED LP_DEAD (page is still dirty)\nSET HINT BIT\nTORN FLUSH + CRASH = PAGE IS RECOVERED + LP_DEAD MASKED AGAIN IF STANDBY\n\nSo, my point here is that it is fine to mask LP_DEAD bits during replay because\nthey are already different on standby and primary. 
And it is fine to set\nand flush hint bits (and LP_DEADs) on standby because they already could be\neasily flushed (we just need to consider minRecoveryPoint and, probably,\nOldestXmin from the primary in the case of LP_DEAD to make promotion easy).\n\n>> And `btree_mask` (and other mask functions) already used for consistency\nchecks to exclude LP_DEAD.\n> I don't see how that is relevant. btree_mask() is only used by\n> wal_consistency_checking, which is mostly just for Postgres hackers.\n\nI was thinking about the possibility of reusing these functions for masking\nduring replay.\n\nThanks,\nMichail.",
"msg_date": "Tue, 2 Feb 2021 23:31:00 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
},
{
"msg_contents": "Hello, Peter.\n\nIf you are interested, the possible patch (based on FPI mask during\nreplay) was sent with some additional explanation and graphics to (1).\nAt the moment I am unable to find any \"incorrectness\" in it.\n\nThanks again for your comments.\n\nMichail.\n\n\n[1] https://www.postgresql.org/message-id/flat/CANtu0ohHu1r1xQfTzEJuxeaOMYncG7xRxUQWdH%3DcMXZSf%2Bnzvg%40mail.gmail.com#4c81a4d623d8152f5e8889e97e750eec\n\n\n",
"msg_date": "Thu, 11 Feb 2021 02:32:33 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on \"killed tuples\" index hint bits support on standby"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI find that build_regexp_split_result() has some redundant code; can we move it to before the condition check?\n\nBest regards.\n\nJapin Li",
"msg_date": "Thu, 16 Jan 2020 15:18:43 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Code cleanup for build_regexp_split_result"
},
{
"msg_contents": "Li Japin <japinli@hotmail.com> writes:\n> I find the build_regexp_split_result() has redundant codes, we can move it to before the condition check, can we?\n\nHm, yeah, that looks a bit strange. It was less strange before\nc8ea87e4bd950572cba4575e9a62284cebf85ac5, I think.\n\nPushed with some additional simplification to get rid of the\nrather ugly (IMO) PG_USED_FOR_ASSERTS_ONLY variable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Jan 2020 11:33:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Code cleanup for build_regexp_split_result"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16213\nLogged by: Matt Jibson\nEmail address: matt.jibson@gmail.com\nPostgreSQL version: 11.5\nOperating system: linux\nDescription: \n\nSELECT\r\n *\r\nFROM\r\n (\r\n SELECT\r\n tab_31924.col_41292 AS col_41294,\r\n tab_31924.col_41293 AS col_41295,\r\n 0::OID AS col_41296,\r\n false AS col_41297\r\n FROM\r\n (\r\n VALUES\r\n (\r\n 'A'::STRING::STRING\r\n NOT IN (\r\n SELECT\r\n 'E'::STRING::STRING AS col_41289\r\n FROM\r\n (\r\n VALUES\r\n (NULL),\r\n (NULL),\r\n (NULL),\r\n (NULL)\r\n )\r\n AS tab_31923 (col_41288)\r\n WHERE\r\n false\r\n ),\r\n NULL,\r\n 'B'::STRING,\r\n 3::OID\r\n ),\r\n (false, 4::OID, 'B'::STRING, 0::OID)\r\n )\r\n AS tab_31924\r\n (col_41290, col_41291, col_41292, col_41293)\r\n WHERE\r\n tab_31924.col_41290\r\n )\r\n AS tab_31925\r\nORDER BY\r\n col_41294 NULLS FIRST,\r\n col_41295 NULLS FIRST,\r\n col_41296 NULLS FIRST,\r\n col_41297 NULLS FIRST;\r\n\r\nThe above query produces an error in the server log:\r\n\r\nLOG: server process (PID 108) was terminated by signal 11: Segmentation\nfault",
"msg_date": "Thu, 16 Jan 2020 23:27:29 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16213: segfault when running a query"
},
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> The above query produces an error in the server log:\n> LOG: server process (PID 108) was terminated by signal 11: Segmentation\n> fault\n\nYeah, duplicated here. (For anyone following along at home: you\ncan \"create domain string as text\", or just s/STRING/TEXT/g in the\ngiven query. That type's not relevant to the problem.) The problem\nis probably much more easily reproduced in a debug build, because it\nboils down to a dangling-pointer bug. I duplicated it back to 9.4,\nand it's probably older than that.\n\nThe direct cause of the crash is that by the time we get to ExecutorEnd,\nthere are dangling pointers in the es_tupleTable list. Those pointers\nturn out to originate from ExecInitSubPlan's creation of TupleTableSlots\nfor the ProjectionInfo objects it creates when doing hashing. And the\nreason they're dangling is that the subplan is inside a VALUES list,\nand nodeValuesscan.c does this remarkably bletcherous thing:\n\n * Build the expression eval state in the econtext's per-tuple memory.\n * This is a tad unusual, but we want to delete the eval state again\n * when we move to the next row, to avoid growth of memory\n * requirements over a long values list.\n\nIt turns out that just below that, there's already some messy hacking\nto deal with subplans in the VALUES list. But I guess we'd not hit\nthe case of a subplan using hashing within VALUES.\n\nThe attached draft patch fixes this by not letting ExecInitSubPlan hook\nthe slots it's making into the outer es_tupleTable list. Ordinarily\nthat would be bad because it exposes us to leaking resources, if the\nslots aren't cleared before ending execution. But nodeSubplan.c is\nalready being (mostly) careful to clear those slots promptly, so it\ndoesn't cost us anything much to lose this backstop.\n\nWhat that change fails to do is guarantee that there are no such\nbugs elsewhere. 
In the attached I made nodeValuesscan.c assert that\nnothing has added slots to the es_tupleTable list, but of course\nthat only catches cases where there's a live bug. Given how long\nthis case escaped detection, I don't feel like that's quite enough.\n(Unsurprisingly, the existing regression tests don't trigger this\nassert, even without the nodeSubplan.c fix.)\n\nAnother angle I've not run to ground is that the comments for the\nexisting subplan-related hacking in nodeValuesscan.c claim that\na problematic subplan could only occur in conjunction with LATERAL.\nBut there's no LATERAL in this example --- are those comments wrong\nor obsolete, or just talking about a different case?\n\nI didn't work on making a minimal test case for the regression tests,\neither.\n\nAnyway, thanks for the report!\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 16 Jan 2020 23:03:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16213: segfault when running a query"
},
{
"msg_contents": "I wrote:\n> PG Bug reporting form <noreply@postgresql.org> writes:\n>> The above query produces an error in the server log:\n>> LOG: server process (PID 108) was terminated by signal 11: Segmentation\n>> fault\n\n> The direct cause of the crash is that by the time we get to ExecutorEnd,\n> there are dangling pointers in the es_tupleTable list.\n\nAfter further reflection, I'm totally dissatisfied with the quick-hack\npatch I posted last night. I think that what this example demonstrates\nis that my fix (in commit 9b63c13f0) for bug #14924 in fact does not\nwork: the subPlan list is not the only way in which a SubPlan connects\nup to the outer plan level. I have no faith that the es_tupleTable list\nis the only other way, either, or that we won't create more in future.\n\nI think what we have to do here is revert 9b63c13f0, going back to\nthe previous policy of passing down parent = NULL to the transient\nsubexpressions, so that there is a strong guarantee that there aren't\nany unwanted connections between short-lived and longer-lived state.\nAnd then we need some other solution for making SubPlans in VALUES\nlists work. The best bet seems to be what I speculated about in that\ncommit message: initialize the expressions for VALUES rows that contain\nSubPlans normally at executor startup, and use the trick with\nshort-lived expression state only for VALUES rows that don't contain any\nSubPlans. I think that the case we're worried about with long VALUES\nlists is not likely to involve any SubPlans, so that this shouldn't be\ntoo awful for memory consumption.\n\nAnother benefit of doing it like this is that SubPlans in the VALUES\nlists are reported normally by EXPLAIN, while the previous hack caused\nthem to be missing from the output.\n\nObjections, better ideas?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 17 Jan 2020 13:29:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16213: segfault when running a query"
}
] |
[
{
"msg_contents": "Hi\n\nThe standby does not start the walreceiver process until the startup process\nfinishes WAL replay. The more WAL there is to replay, the longer the\ndelay in starting streaming replication. If the replication connection is\ntemporarily disconnected, this delay becomes a major problem, and we\nare proposing a solution to avoid the delay.\n\nWAL replay is likely to fall behind when the master is processing a\nwrite-heavy workload, because WAL is generated by concurrently running\nbackends on the master while only one startup process on the standby replays WAL\nrecords in sequence as new WAL is received from the master.\n\nThe replication connection between walsender and walreceiver may break due\nto reasons such as a transient network issue, the standby going through a\nrestart, etc. The delay in resuming the replication connection leads to a\nlack of high availability - only one copy of the WAL is available during\nthis period.\n\nThe problem worsens when the replication is configured to be\nsynchronous. Commits on the master must wait until the WAL replay is\nfinished on the standby; the walreceiver is then started and it confirms flush\nof WAL up to the commit LSN. If the synchronous_commit GUC is set to\nremote_write, this behavior is equivalent to tacitly changing it to\nremote_apply until the replication connection is re-established!\n\nHas anyone encountered such a problem with streaming replication?\n\nWe propose to address this by starting walreceiver without waiting for the\nstartup process to finish replay of WAL. Please see the attached\npatchset. 
It can be summarized as follows:\n\n 0001 - TAP test to demonstrate the problem.\n\n 0002 - The standby startup sequence is changed such that\n walreceiver is started by startup process before it begins\n to replay WAL.\n\n 0003 - Postmaster starts walreceiver if it finds that a\n walreceiver process is no longer running and the state\n indicates that it is operating as a standby.\n\nThis is a POC, we are looking for early feedback on whether the\nproblem is worth solving and if it makes sense to solve if along this\nroute.\n\nHao and Asim",
"msg_date": "Fri, 17 Jan 2020 09:34:05 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 09:34:05AM +0530, Asim R P wrote:\n> Standby does not start walreceiver process until startup process\n> finishes WAL replay. The more WAL there is to replay, longer is the\n> delay in starting streaming replication. If replication connection is\n> temporarily disconnected, this delay becomes a major problem and we\n> are proposing a solution to avoid the delay.\n\nYeah, that's documented:\nhttps://www.postgresql.org/message-id/20190910062325.GD11737@paquier.xyz\n\n> We propose to address this by starting walreceiver without waiting for\n> startup process to finish replay of WAL. Please see attached\n> patchset. It can be summarized as follows:\n> \n> 0001 - TAP test to demonstrate the problem.\n\nThere is no real need for debug_replay_delay because we have already\nrecovery_min_apply_delay, no? That would count only after consistency\nhas been reached, and only for COMMIT records, but your test would be\nenough with that.\n\n> 0002 - The standby startup sequence is changed such that\n> walreceiver is started by startup process before it begins\n> to replay WAL.\n\nSee below.\n\n> 0003 - Postmaster starts walreceiver if it finds that a\n> walreceiver process is no longer running and the state\n> indicates that it is operating as a standby.\n\nI have not checked in details, but I smell some race conditions\nbetween the postmaster and the startup process here.\n\n> This is a POC, we are looking for early feedback on whether the\n> problem is worth solving and if it makes sense to solve if along this\n> route.\n\nYou are not the first person interested in this problem, we have a\npatch registered in this CF to control the timing when a WAL receiver\nis started at recovery:\nhttps://commitfest.postgresql.org/26/1995/\nhttps://www.postgresql.org/message-id/b271715f-f945-35b0-d1f5-c9de3e56f65e@postgrespro.ru\n\nI am pretty sure that we should not change the default behavior to\nstart the WAL receiver after replaying everything 
from the archives to\navoid copying some WAL segments for nothing, so being able to use a\nGUC switch should be the way to go, and Konstantin's latest patch was\nusing this approach. Your patch 0002 adds visibly a third mode: start\nimmediately on top of the two ones already proposed:\n- Start after replaying all WAL available locally and in the\narchives.\n- Start after reaching a consistent point.\n--\nMichael",
"msg_date": "Fri, 17 Jan 2020 14:37:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 11:08 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n>\n> On Fri, Jan 17, 2020 at 09:34:05AM +0530, Asim R P wrote:\n> >\n> > 0001 - TAP test to demonstrate the problem.\n>\n> There is no real need for debug_replay_delay because we have already\n> recovery_min_apply_delay, no? That would count only after consistency\n> has been reached, and only for COMMIT records, but your test would be\n> enough with that.\n>\n\nIndeed, we didn't know about recovery_min_apply_delay. Thank you for\nthe suggestion, the updated test is attached.\n\n>\n> > This is a POC, we are looking for early feedback on whether the\n> > problem is worth solving and if it makes sense to solve if along this\n> > route.\n>\n> You are not the first person interested in this problem, we have a\n> patch registered in this CF to control the timing when a WAL receiver\n> is started at recovery:\n> https://commitfest.postgresql.org/26/1995/\n>\nhttps://www.postgresql.org/message-id/b271715f-f945-35b0-d1f5-c9de3e56f65e@postgrespro.ru\n>\n\nGreat to know about this patch and the discussion. The test case and\nthe part that saves next start point in control file from our patch\ncan be combined with Konstantin's patch to solve this problem. Let me\nwork on that.\n\n> I am pretty sure that we should not change the default behavior to\n> start the WAL receiver after replaying everything from the archives to\n> avoid copying some WAL segments for nothing, so being able to use a\n> GUC switch should be the way to go, and Konstantin's latest patch was\n> using this approach. Your patch 0002 adds visibly a third mode: start\n> immediately on top of the two ones already proposed:\n> - Start after replaying all WAL available locally and in the\n> archives.\n> - Start after reaching a consistent point.\n\nConsistent point should be reached fairly quickly, in spite of large\nreplay lag. 
Min recovery point is updated during XLOG flush and that\nhappens when a commit record is replayed. Commits should occur\nfrequently in the WAL stream. So I do not see much value in starting\nWAL receiver immediately as compared to starting it after reaching a\nconsistent point. Does that make sense?\n\nThat said, is there anything obviously wrong with starting WAL receiver\nimmediately, even before reaching consistent state? A consequence is\nthat WAL receiver may overwrite a WAL segment while startup process is\nreading and replaying WAL from it. But that doesn't appear to be a\nproblem because the overwrite should happen with identical content as\nbefore.\n\nAsim",
"msg_date": "Fri, 17 Jan 2020 18:30:58 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "I would like to revive this thready by submitting a rebased patch to start streaming replication without waiting for startup process to finish replaying all WAL. The start LSN for streaming is determined to be the LSN that points to the beginning of the most recently flushed WAL segment.\r\n\r\nThe patch passes tests under src/test/recovery and top level “make check”.",
"msg_date": "Sun, 9 Aug 2020 05:54:32 +0000",
"msg_from": "Asim Praveen <pasim@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Sun, Aug 09, 2020 at 05:54:32AM +0000, Asim Praveen wrote:\n> I would like to revive this thready by submitting a rebased patch to\n> start streaming replication without waiting for startup process to\n> finish replaying all WAL. The start LSN for streaming is determined\n> to be the LSN that points to the beginning of the most recently\n> flushed WAL segment.\n> \n> The patch passes tests under src/test/recovery and top level “make check”.\n\nI have not really looked at the proposed patch, but it would be good\nto have some documentation.\n--\nMichael",
"msg_date": "Sun, 9 Aug 2020 17:41:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "\r\n\r\n> On 09-Aug-2020, at 2:11 PM, Michael Paquier <michael@paquier.xyz> wrote:\r\n> \r\n> I have not really looked at the proposed patch, but it would be good\r\n> to have some documentation.\r\n> \r\n\r\nAh, right. The basic idea is to reuse the logic to allow read-only connections to also start WAL streaming. The patch borrows a new GUC “wal_receiver_start_condition” introduced by another patch alluded to upthread. It affects when to start WAL receiver process on a standby. By default, the GUC is set to “replay”, which means no change in current behavior - WAL receiver is started only after replaying all WAL already available in pg_wal. When set to “consistency”, WAL receiver process is started earlier, as soon as consistent state is reached during WAL replay.\r\n\r\nThe LSN where to start streaming from is determined to be the LSN that points at the beginning of the WAL segment file that was most recently flushed in pg_wal. To find the most recently flushed WAL segment, first blocks of all WAL segment files in pg_wal, starting from the segment that contains currently replayed record, are inspected. The search stops when a first page with no valid header is found.\r\n\r\nThe benefits of starting WAL receiver early are mentioned upthread but allow me to reiterate: as WAL streaming starts, any commits that are waiting for synchronous replication on the master are unblocked. The benefit of this is apparent in situations where significant replay lag has been built up and the replication is configured to be synchronous.\r\n\r\nAsim",
"msg_date": "Mon, 10 Aug 2020 04:31:05 +0000",
"msg_from": "Asim Praveen <pasim@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Sun, 9 Aug 2020 at 14:54, Asim Praveen <pasim@vmware.com> wrote:\n>\n> I would like to revive this thready by submitting a rebased patch to start streaming replication without waiting for startup process to finish replaying all WAL. The start LSN for streaming is determined to be the LSN that points to the beginning of the most recently flushed WAL segment.\n>\n> The patch passes tests under src/test/recovery and top level “make check”.\n>\n\nThe patch can be applied cleanly to the current HEAD but I got the\nerror on building the code with this patch:\n\nxlog.c: In function 'StartupXLOG':\nxlog.c:7315:6: error: too few arguments to function 'RequestXLogStreaming'\n 7315 | RequestXLogStreaming(ThisTimeLineID,\n | ^~~~~~~~~~~~~~~~~~~~\nIn file included from xlog.c:59:\n../../../../src/include/replication/walreceiver.h:463:13: note: declared here\n 463 | extern void RequestXLogStreaming(TimeLineID tli, XLogRecPtr recptr,\n | ^~~~~~~~~~~~~~~~~~~~\n\ncfbot also complaints this.\n\nCould you please update the patch?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Aug 2020 15:57:33 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "> On 10-Aug-2020, at 12:27 PM, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> \n> The patch can be applied cleanly to the current HEAD but I got the\n> error on building the code with this patch:\n> \n> xlog.c: In function 'StartupXLOG':\n> xlog.c:7315:6: error: too few arguments to function 'RequestXLogStreaming'\n> 7315 | RequestXLogStreaming(ThisTimeLineID,\n> | ^~~~~~~~~~~~~~~~~~~~\n> In file included from xlog.c:59:\n> ../../../../src/include/replication/walreceiver.h:463:13: note: declared here\n> 463 | extern void RequestXLogStreaming(TimeLineID tli, XLogRecPtr recptr,\n> | ^~~~~~~~~~~~~~~~~~~~\n> \n> cfbot also complaints this.\n> \n> Could you please update the patch?\n> \n\nThank you for trying the patch and apologies for the compiler error. I missed adding a hunk earlier, it should be fixed in the version attached here.",
"msg_date": "Mon, 10 Aug 2020 08:53:34 +0000",
"msg_from": "Asim Praveen <pasim@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "Hello\r\n\r\nI read the code and test the patch, it run well on my side, and I have several issues on the\r\npatch.\r\n\r\n1. When call RequestXLogStreaming() during replay, you pick timeline straightly from control\r\nfile, do you think it should pick timeline from timeline history file?\r\n\r\n2. In archive recovery mode which will never turn to a stream mode, I think in current code it\r\nwill call RequestXLogStreaming() too which can avoid.\r\n\r\n3. I found two 018_xxxxx.pl when I do make check, maybe rename the new one?\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\nHelloI read the code and test the patch, it run well on my side, and I have several issues on thepatch.1. When call RequestXLogStreaming() during replay, you pick timeline straightly from controlfile, do you think it should pick timeline from timeline history file?2. In archive recovery mode which will never turn to a stream mode, I think in current code itwill call RequestXLogStreaming() too which can avoid.3. I found two 018_xxxxx.pl when I do make check, maybe rename the new one?\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Tue, 15 Sep 2020 17:30:22 +0800",
"msg_from": "\"lchch1990@sina.cn\" <lchch1990@sina.cn>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Tue, Sep 15, 2020 at 05:30:22PM +0800, lchch1990@sina.cn wrote:\n> I read the code and test the patch, it run well on my side, and I have several issues on the\n> patch.\n\n+ RequestXLogStreaming(ThisTimeLineID,\n+ startpoint,\n+ PrimaryConnInfo,\n+ PrimarySlotName,\n+ wal_receiver_create_temp_slot);\n\nThis patch thinks that it is fine to request streaming even if\nPrimaryConnInfo is not set, but that's not fine.\n\nAnyway, I don't quite understand what you are trying to achieve here.\n\"startpoint\" is used to request the beginning of streaming. It is\nroughly the consistency LSN + some alpha with some checks on WAL\npages (those WAL page checks are not acceptable as they make\nmaintenance harder). What about the case where consistency is\nreached but there are many segments still ahead that need to be\nreplayed? Your patch would cause streaming to begin too early, and\na manual copy of segments is not a rare thing as in some environments\na bulk copy of segments can make the catchup of a standby faster than\nstreaming.\n\nIt seems to me that what you are looking for here is some kind of\npre-processing before entering the redo loop to determine the LSN\nthat could be reused for the fast streaming start, which should match\nthe end of the WAL present locally. In short, you would need a\nXLogReaderState that begins a scan of WAL from the redo point until it\ncannot find anything more, and use the last LSN found as a base to\nbegin requesting streaming. The question of timeline jumps can also\nbe very tricky, but it could also be possible to not allow this option\nif a timeline jump happens while attempting to guess the end of WAL\nahead of time. Another thing: could it be useful to have an extra\nmode to begin streaming without waiting for consistency to finish?\n--\nMichael",
"msg_date": "Fri, 20 Nov 2020 17:21:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On 20.11.2020 11:21, Michael Paquier wrote:\n> On Tue, Sep 15, 2020 at 05:30:22PM +0800, lchch1990@sina.cn wrote:\n>> I read the code and test the patch, it run well on my side, and I have several issues on the\n>> patch.\n> + RequestXLogStreaming(ThisTimeLineID,\n> + startpoint,\n> + PrimaryConnInfo,\n> + PrimarySlotName,\n> + wal_receiver_create_temp_slot);\n>\n> This patch thinks that it is fine to request streaming even if\n> PrimaryConnInfo is not set, but that's not fine.\n>\n> Anyway, I don't quite understand what you are trying to achieve here.\n> \"startpoint\" is used to request the beginning of streaming. It is\n> roughly the consistency LSN + some alpha with some checks on WAL\n> pages (those WAL page checks are not acceptable as they make\n> maintenance harder). What about the case where consistency is\n> reached but there are many segments still ahead that need to be\n> replayed? Your patch would cause streaming to begin too early, and\n> a manual copy of segments is not a rare thing as in some environments\n> a bulk copy of segments can make the catchup of a standby faster than\n> streaming.\n>\n> It seems to me that what you are looking for here is some kind of\n> pre-processing before entering the redo loop to determine the LSN\n> that could be reused for the fast streaming start, which should match\n> the end of the WAL present locally. In short, you would need a\n> XLogReaderState that begins a scan of WAL from the redo point until it\n> cannot find anything more, and use the last LSN found as a base to\n> begin requesting streaming. The question of timeline jumps can also\n> be very tricky, but it could also be possible to not allow this option\n> if a timeline jump happens while attempting to guess the end of WAL\n> ahead of time. 
Another thing: could it be useful to have an extra\n> mode to begin streaming without waiting for consistency to finish?\n> --\n> Michael\n\n\nStatus update for a commitfest entry.\n\nThis entry was \"Waiting On Author\" during this CF, so I've marked it as \nreturned with feedback. Feel free to resubmit an updated version to a \nfuture commitfest.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 1 Dec 2020 17:21:51 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "Hello,\n\nAshwin and I recently got a chance to work on this and we addressed all\noutstanding feedback and suggestions. PFA a significantly reworked patch.\n\nOn 20.11.2020 11:21, Michael Paquier wrote:\n\n> This patch thinks that it is fine to request streaming even if\n> PrimaryConnInfo is not set, but that's not fine.\n\nWe introduced a check to ensure that PrimaryConnInfo is set up before we\nrequest the WAL stream eagerly.\n\n> Anyway, I don't quite understand what you are trying to achieve here.\n> \"startpoint\" is used to request the beginning of streaming. It is\n> roughly the consistency LSN + some alpha with some checks on WAL\n> pages (those WAL page checks are not acceptable as they make\n> maintenance harder). What about the case where consistency is\n> reached but there are many segments still ahead that need to be\n> replayed? Your patch would cause streaming to begin too early, and\n> a manual copy of segments is not a rare thing as in some environments\n> a bulk copy of segments can make the catchup of a standby faster than\n> streaming.\n>\n> It seems to me that what you are looking for here is some kind of\n> pre-processing before entering the redo loop to determine the LSN\n> that could be reused for the fast streaming start, which should match\n> the end of the WAL present locally. In short, you would need a\n> XLogReaderState that begins a scan of WAL from the redo point until it\n> cannot find anything more, and use the last LSN found as a base to\n> begin requesting streaming. The question of timeline jumps can also\n> be very tricky, but it could also be possible to not allow this option\n> if a timeline jump happens while attempting to guess the end of WAL\n> ahead of time. Another thing: could it be useful to have an extra\n> mode to begin streaming without waiting for consistency to finish?\n\n1. 
When wal_receiver_start_condition='consistency', we feel that the\nstream start point calculation should be done only when we reach\nconsistency. Imagine the situation where consistency is reached 2 hours\nafter start, and within that 2 hours a lot of WAL has been manually\ncopied over into the standby's pg_wal. If we pre-calculated the stream\nstart location before we entered the main redo apply loop, we would be\nstarting the stream from a much earlier location (minus the 2 hours\nworth of WAL), leading to wasted work.\n\n2. We have significantly changed the code to calculate the WAL stream\nstart location. We now traverse pg_wal, find the latest valid WAL\nsegment and start the stream from the segment's start. This is much\nmore performant than reading from the beginning of the locally available\nWAL.\n\n3. To perform the validation check, we no longer have duplicate code -\nas we can now rely on the XLogReaderState(), XLogReaderValidatePageHeader()\nand friends.\n\n4. We have an extra mode: wal_receiver_start_condition='startup', which\nwill start the WAL receiver before the startup process reaches\nconsistency. We don't fully understand the utility of having 'startup' over\n'consistency' though.\n\n5. During the traversal of pg_wal, if we find WAL segments on differing\ntimelines, we bail out and abandon attempting to start the WAL stream\neagerly.\n\n6. To handle the cases where a lot of WAL is copied over after the\nWAL receiver has started at consistency:\ni) Don't recommend wal_receiver_start_condition='startup|consistency'.\n\nii) Copy over the WAL files and then start the standby, so that the WAL\nstream starts from a fresher point.\n\niii) Have an LSN/segment# target to start the WAL receiver from?\n\n7. We have significantly changed the test. It is much more simplified\nand focused.\n\n8. We did not test wal_receiver_start_condition='startup' in the test.\nIt's actually hard to assert that the walreceiver has started at\nstartup. 
recovery_min_apply_delay only kicks in once we reach\nconsistency, and thus there is no way I could think of to reliably halt\nthe startup process and check: \"Has the wal receiver started even\nthough the standby hasn't reached consistency?\" Only way we could think\nof is to generate a large workload during the course of the backup so\nthat the standby has significant WAL to replay before it reaches\nconsistency. But that will make the test flaky as we will have no\nabsolutely precise wait condition. That said, we felt that checking\nfor 'consistency' is enough as it covers the majority of the added\ncode.\n\n9. We added a documentation section describing the GUC.\n\n\nRegards,\nAshwin and Soumyadeep (VMware)",
"msg_date": "Tue, 24 Aug 2021 21:51:25 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "Rebased and added a CF entry for Nov CF:\nhttps://commitfest.postgresql.org/35/3376/.\n\nRebased and added a CF entry for Nov CF: https://commitfest.postgresql.org/35/3376/.",
"msg_date": "Mon, 25 Oct 2021 12:11:07 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Tue, Aug 24, 2021 at 09:51:25PM -0700, Soumyadeep Chakraborty wrote:\n> Ashwin and I recently got a chance to work on this and we addressed all\n> outstanding feedback and suggestions. PFA a significantly reworked patch.\n\n+static void\n+StartWALReceiverEagerly()\n+{\nThe patch fails to apply because of the recent changes from Robert to\neliminate ThisTimeLineID. The correct thing to do would be to add one\nTimeLineID argument, passing down the local ThisTimeLineID in\nStartupXLOG() and using XLogCtl->lastReplayedTLI in\nCheckRecoveryConsistency().\n\n+\t/*\n+\t * We should never reach here. We should have at least one valid WAL\n+\t * segment in our pg_wal, for the standby to have started.\n+\t */\n+\tAssert(false);\nThe reason behind that is not that we have a standby, but that we read\nat least the segment that included the checkpoint record we are\nreplaying from, at least (it is possible for a standby to start\nwithout any contents in pg_wal/ as long as recovery is configured),\nand because StartWALReceiverEagerly() is called just after that.\n\nIt would be better to make sure that StartWALReceiverEagerly() gets\nonly called from the startup process, perhaps?\n\n+\tRequestXLogStreaming(ThisTimeLineID, startptr, PrimaryConnInfo,\n+\t\t\t PrimarySlotName, wal_receiver_create_temp_slot);\n+\tXLogReaderFree(state);\nXLogReaderFree() should happen before RequestXLogStreaming(). 
The\ntipping point of the patch is here, where the WAL receiver is started\nbased on the location of the first valid WAL record found.\n\nwal_receiver_start_condition is missing in postgresql.conf.sample.\n\n+\t/*\n+\t * Start WAL receiver eagerly if requested.\n+\t */\n+\tif (StandbyModeRequested && !WalRcvStreaming() &&\n+\t\tPrimaryConnInfo && strcmp(PrimaryConnInfo, \"\") != 0 &&\n+\t\twal_receiver_start_condition == WAL_RCV_START_AT_STARTUP)\n+\t\tStartWALReceiverEagerly();\n[...]\n+\tif (StandbyModeRequested && !WalRcvStreaming() && reachedConsistency &&\n+\t\tPrimaryConnInfo && strcmp(PrimaryConnInfo, \"\") != 0 &&\n+\t\twal_receiver_start_condition == WAL_RCV_START_AT_CONSISTENCY)\n+\t\tStartWALReceiverEagerly();\nThis repeats two times the same set of conditions, which does not look\nlike a good idea to me. I think that you'd better add an extra\nargument to StartWALReceiverEagerly to track the start timing expected\nin this code path, that will be matched with the GUC in the routine.\nIt would be better to document the reasons behind each check done, as\nwell.\n\n+\t/* Find the latest and earliest WAL segments in pg_wal */\n+\tdir = AllocateDir(\"pg_wal\");\n+\twhile ((de = ReadDir(dir, \"pg_wal\")) != NULL)\n+\t{\n[ ... ]\n+\t/* Find the latest valid WAL segment and request streaming from its start */\n+\twhile (endsegno >= startsegno)\n+\t{\n[...]\n+\t\tXLogReaderFree(state);\n+\t\tendsegno--;\n+\t}\nSo, this reads the contents of pg_wal/ for any files that exist, then\ngoes down to the first segment found with a valid beginning. That's\ngoing to be expensive with a large max_wal_size. When searching for a\npoint like that, a dichotomy method would be better to calculate a LSN\nyou'd like to start from. Anyway, I think that there is a problem\nwith the approach: what should we do if there are holes in the\nsegments present in pg_wal/? 
As of HEAD, or\nwal_receiver_start_condition = 'exhaust' in this patch, we would\nswitch across local pg_wal/, archive and stream in a linear way,\nthanks to WaitForWALToBecomeAvailable(). For example, imagine that we\nhave a standby with the following set of valid segments, because of\nthe buggy way a base backup has been taken:\n000000010000000000000001\n000000010000000000000003\n000000010000000000000005\nWhat the patch would do is starting a WAL receiver from segment 5,\nwhich is in contradiction with the existing logic where we should try\nto look for the segment once we are waiting for something in segment\n2. This would be dangerous once the startup process waits for some\nWAL to become available, because we have a WAL receiver started, but\nwe cannot fetch the segment we have. Perhaps a deployment has\narchiving, in which case it would be able to grab segment 2 (if no\narchiving, recovery would not be able to move on, so that would be\ngame over).\n \n /*\n * Move to XLOG_FROM_STREAM state, and set to start a\n- * walreceiver if necessary.\n+ * walreceiver if necessary. The WAL receiver may have\n+ * already started (if it was configured to start\n+ * eagerly).\n */\n currentSource = XLOG_FROM_STREAM;\n- startWalReceiver = true;\n+ startWalReceiver = !WalRcvStreaming();\n break;\n case XLOG_FROM_ARCHIVE:\n case XLOG_FROM_PG_WAL:\n \n- /*\n- * WAL receiver must not be running when reading WAL from\n- * archive or pg_wal.\n- */\n- Assert(!WalRcvStreaming());\n\nThese parts should IMO not be changed. They are strong assumptions we\nrely on in the startup process, and this comes down to the fact that\nit is not a good idea to mix a WAL receiver started while\ncurrentSource could be pointing at a WAL source completely different. 
\nThat's going to bring a lot of racy conditions, I am afraid, as we\nrely on currentSource a lot during recovery, in combination that we\nexpect the code to be able to retrieve WAL in a linear fashion from\nthe LSN position that recovery is looking for.\n\nSo, I think that deciding if a WAL receiver should be started blindly\noutside of the code path deciding if the startup process is waiting\nfor some WAL is not a good idea, and the position we may begin to\nstream from may be something that we may have zero need for at the\nend (this is going to be tricky if we detect a TLI jump while\nreplaying the local WAL, also?). The issue is that I am not sure what\na good design for that should be. We have no idea when the startup\nprocess will need WAL from a different source until replay comes\naround, but what we want here is to anticipate this LSN :)\n\nI am wondering if there should be a way to work out something with the\ncontrol file, though, but things can get very fancy with HA\nand base backup deployments and the various cases we support thanks to\nthe current way recovery works, as well. We could also go simpler and\nrework the priority order if both archiving and streaming are options\nwanted by the user.\n--\nMichael",
"msg_date": "Mon, 8 Nov 2021 18:41:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "Hi Michael,\n\nThanks for the detailed review! Attached is a rebased patch that addresses\nmost of the feedback.\n\nOn Mon, Nov 8, 2021 at 1:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> +static void\n> +StartWALReceiverEagerly()\n> +{\n> The patch fails to apply because of the recent changes from Robert to\n> eliminate ThisTimeLineID. The correct thing to do would be to add one\n> TimeLineID argument, passing down the local ThisTimeLineID in\n> StartupXLOG() and using XLogCtl->lastReplayedTLI in\n> CheckRecoveryConsistency().\n\nRebased.\n\n> + /*\n> + * We should never reach here. We should have at least one valid\nWAL\n> + * segment in our pg_wal, for the standby to have started.\n> + */\n> + Assert(false);\n> The reason behind that is not that we have a standby, but that we read\n> at least the segment that included the checkpoint record we are\n> replaying from, at least (it is possible for a standby to start\n> without any contents in pg_wal/ as long as recovery is configured),\n> and because StartWALReceiverEagerly() is called just after that.\n\nFair, comment updated.\n\n> It would be better to make sure that StartWALReceiverEagerly() gets\n> only called from the startup process, perhaps?\n\nAdded Assert(AmStartupProcess()) at the beginning of\nStartWALReceiverEagerly().\n\n>\n> + RequestXLogStreaming(ThisTimeLineID, startptr, PrimaryConnInfo,\n> + PrimarySlotName,\nwal_receiver_create_temp_slot);\n> + XLogReaderFree(state);\n> XLogReaderFree() should happen before RequestXLogStreaming(). 
The\n> tipping point of the patch is here, where the WAL receiver is started\n> based on the location of the first valid WAL record found.\n\nDone.\n\n> wal_receiver_start_condition is missing in postgresql.conf.sample.\n\nFixed.\n\n> + /*\n> + * Start WAL receiver eagerly if requested.\n> + */\n> + if (StandbyModeRequested && !WalRcvStreaming() &&\n> + PrimaryConnInfo && strcmp(PrimaryConnInfo, \"\") != 0 &&\n> + wal_receiver_start_condition == WAL_RCV_START_AT_STARTUP)\n> + StartWALReceiverEagerly();\n> [...]\n> + if (StandbyModeRequested && !WalRcvStreaming() && reachedConsistency &&\n> + PrimaryConnInfo && strcmp(PrimaryConnInfo, \"\") != 0 &&\n> + wal_receiver_start_condition == WAL_RCV_START_AT_CONSISTENCY)\n> + StartWALReceiverEagerly();\n> This repeats two times the same set of conditions, which does not look\n> like a good idea to me. I think that you'd better add an extra\n> argument to StartWALReceiverEagerly to track the start timing expected\n> in this code path, that will be matched with the GUC in the routine.\n> It would be better to document the reasons behind each check done, as\n> well.\n\nDone.\n\n> So, this reads the contents of pg_wal/ for any files that exist, then\n> goes down to the first segment found with a valid beginning. That's\n> going to be expensive with a large max_wal_size. When searching for a\n> point like that, a dichotomy method would be better to calculate a LSN\n> you'd like to start from.\n\nEven if there is a large max_wal_size, do we expect that there will be\na lot of invalid high-numbered WAL files? If that is not the case, most\nof the time we would be looking at the last 1 or 2 WAL files to\ndetermine the start point, making it efficient?\n\n> Anyway, I think that there is a problem\n> with the approach: what should we do if there are holes in the\n> segments present in pg_wal/? 
As of HEAD, or\n> wal_receiver_start_condition = 'exhaust' in this patch, we would\n> switch across local pg_wal/, archive and stream in a linear way,\n> thanks to WaitForWALToBecomeAvailable(). For example, imagine that we\n> have a standby with the following set of valid segments, because of\n> the buggy way a base backup has been taken:\n> 000000010000000000000001\n> 000000010000000000000003\n> 000000010000000000000005\n> What the patch would do is starting a WAL receiver from segment 5,\n> which is in contradiction with the existing logic where we should try\n> to look for the segment once we are waiting for something in segment\n> 2. This would be dangerous once the startup process waits for some\n> WAL to become available, because we have a WAL receiver started, but\n> we cannot fetch the segment we have. Perhaps a deployment has\n> archiving, in which case it would be able to grab segment 2 (if no\n> archiving, recovery would not be able to move on, so that would be\n> game over).\n\nWe could easily check for holes while we are doing the ReadDir() and\nbail from the early start if there are holes, just like we do if there\nis a timeline jump in any of the WAL segments.\n\n> /*\n> * Move to XLOG_FROM_STREAM state, and set to start a\n> - * walreceiver if necessary.\n> + * walreceiver if necessary. The WAL receiver may have\n> + * already started (if it was configured to start\n> + * eagerly).\n> */\n> currentSource = XLOG_FROM_STREAM;\n> - startWalReceiver = true;\n> + startWalReceiver = !WalRcvStreaming();\n> break;\n> case XLOG_FROM_ARCHIVE:\n> case XLOG_FROM_PG_WAL:\n>\n> - /*\n> - * WAL receiver must not be running when reading WAL from\n> - * archive or pg_wal.\n> - */\n> - Assert(!WalRcvStreaming());\n>\n> These parts should IMO not be changed. 
They are strong assumptions we\n> rely on in the startup process, and this comes down to the fact that\n> it is not a good idea to mix a WAL receiver started while\n> currentSource could be pointing at a WAL source completely different.\n> That's going to bring a lot of racy conditions, I am afraid, as we\n> rely on currentSource a lot during recovery, in combination that we\n> expect the code to be able to retrieve WAL in a linear fashion from\n> the LSN position that recovery is looking for.\n>\n> So, I think that deciding if a WAL receiver should be started blindly\n> outside of the code path deciding if the startup process is waiting\n> for some WAL is not a good idea, and the position we may begin to\n> stream from may be something that we may have zero need for at the\n> end (this is going to be tricky if we detect a TLI jump while\n> replaying the local WAL, also?). The issue is that I am not sure what\n> a good design for that should be. We have no idea when the startup\n> process will need WAL from a different source until replay comes\n> around, but what we want here is to anticipate this LSN :)\n\nCan you elaborate on the race conditions that you are thinking about?\nDo the race conditions manifest only when we mix archiving and streaming?\nIf yes, how do you feel about making the GUC a no-op with a WARNING\nwhile we are in WAL archiving mode?\n\n> I am wondering if there should be a way to work out something with the\n> control file, though, but things can get very fancy with HA\n> and base backup deployments and the various cases we support thanks to\n> the current way recovery works, as well. 
We could also go simpler and\n> rework the priority order if both archiving and streaming are options\n> wanted by the user.\n\nAgreed, it would be much better to depend on the state in pg_wal,\nnamely the files that are available there.\n\nReworking the priority order seems like an appealing fix - if we can say\nstreaming > archiving in terms of priority, then the race that you are\nreferring to will not happen?\n\nAlso, what are some use cases where one would give priority to streaming\nreplication over archive recovery, if both sources have the same WAL\nsegments?\n\nRegards,\nAshwin & Soumyadeep",
"msg_date": "Tue, 9 Nov 2021 15:41:09 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "> On 10 Nov 2021, at 00:41, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote:\n\n> Thanks for the detailed review! Attached is a rebased patch that addresses\n> most of the feedback.\n\nThis patch no longer applies after e997a0c64 and associated follow-up commits,\nplease submit a rebased version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 15 Nov 2021 10:59:04 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "Hi Daniel,\n\nThanks for checking in on this patch.\nAttached rebased version.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Fri, 19 Nov 2021 00:35:04 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 2:05 PM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n>\n> Hi Daniel,\n>\n> Thanks for checking in on this patch.\n> Attached rebased version.\n\nHi, I've not gone through the patch or this thread entirely, yet, can\nyou please confirm if there's any relation between this thread and\nanother one at [1]\n\n[1] https://www.postgresql.org/message-id/CAFiTN-vzbcSM_qZ%2B-mhS3OWecxupDCR5DkhQUTy%2BTKfrCMQLKQ%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 22 Nov 2021 16:23:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "Hi Bharath,\n\nYes, that thread has been discussed here. Asim had x-posted the patch to\n[1]. This thread\nwas more recent when Ashwin and I picked up the patch in Aug 2021, so we\ncontinued here.\nThe patch has been significantly updated by us, addressing Michael's long\noutstanding feedback.\n\nRegards,\nSoumyadeep (VMware)\n\n[1]\nhttps://www.postgresql.org/message-id/CANXE4TeinQdw%2BM2Or0kTR24eRgWCOg479N8%3DgRvj9Ouki-tZFg%40mail.gmail.com",
"msg_date": "Mon, 22 Nov 2021 12:09:22 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "On Tue, Nov 23, 2021 at 1:39 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n>\n> Hi Bharath,\n>\n> Yes, that thread has been discussed here. Asim had x-posted the patch to [1]. This thread\n> was more recent when Ashwin and I picked up the patch in Aug 2021, so we continued here.\n> The patch has been significantly updated by us, addressing Michael's long outstanding feedback.\n\nThanks for the patch. I reviewed it a bit, here are some comments:\n\n1) A memory leak: add FreeDir(dir); before returning.\n+ ereport(LOG,\n+ (errmsg(\"Could not start streaming WAL eagerly\"),\n+ errdetail(\"There are timeline changes in the locally available WAL files.\"),\n+ errhint(\"WAL streaming will begin once all local WAL and archives\nare exhausted.\")));\n+ return;\n+ }\n\n2) Is there a guarantee that while we traverse the pg_wal directory to\nfind startsegno and endsegno, the new wal files arrive from the\nprimary or archive location or old wal files get removed/recycled by\nthe standby? Especially when wal_receiver_start_condition=consistency?\n+ startsegno = (startsegno == -1) ? logSegNo : Min(startsegno, logSegNo);\n+ endsegno = (endsegno == -1) ? logSegNo : Max(endsegno, logSegNo);\n+ }\n\n3) I think the errmsg text format isn't correct. Note that the errmsg\ntext starts with lowercase and doesn't end with \".\" whereas errdetail\nor errhint starts with uppercase and ends with \".\". Please check other\nmessages for reference.\nThe following should be changed.\n+ errmsg(\"Requesting stream from beginning of: %s\",\n+ errmsg(\"Invalid WAL segment found while calculating stream start:\n%s. Skipping.\",\n+ (errmsg(\"Could not start streaming WAL eagerly\"),\n\n4) I think you also need to have wal files names in double quotes,\nsomething like below:\nerrmsg(\"could not close file \\\"%s\\\": %m\", xlogfname)));\n\n5) It is \".....stream start: \\\"%s\\\", skipping..\",\n+ errmsg(\"Invalid WAL segment found while calculating stream start:\n%s. 
Skipping.\",\n\n4) I think the patch can make the startup process significantly slow,\nespecially when there are lots of wal files that exist in the standby\npg_wal directory. This is because of the overhead\nStartWALReceiverEagerlyIfPossible adds i.e. doing two while loops to\nfigure out the start position of the\nstreaming in advance. This might end up the startup process doing the\nloop over in the directory rather than the important thing of doing\ncrash recovery or standby recovery.\n\n5) What happens if this new GUC is enabled in case of a synchronous standby?\nWhat happens if this new GUC is enabled in case of a crash recovery?\nWhat happens if this new GUC is enabled in case a restore command is\nset i.e. standby performing archive recovery?\n\n6) How about bgwriter/checkpointer which gets started even before the\nstartup process (or a new bg worker? of course it's going to be an\noverkill) finding out the new start pos for the startup process and\nthen we could get rid of <literal>startup</literal> behaviour of the\npatch? This avoids an extra burden on the startup process. Many times,\nusers will be complaining about why recovery is taking more time now,\nafter the GUC wal_receiver_start_condition=startup.\n\n7) I think we can just have 'consistency' and 'exhaust' behaviours and\nlet the bgwrite or checkpointer find out the start position for the\nstartup process, so the startup process whenever reaches a consistent\npoint, it sees if the other process has calculated\nstart pos for it or not, if yes it starts wal receiver other wise it\ngoes with its usual recovery. I'm not sure if this will be a good\nidea.\n\n8) Can we have a better GUC name than wal_receiver_start_condition?\nSomething like wal_receiver_start_at or wal_receiver_start or some\nother?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sun, 28 Nov 2021 08:05:52 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "Hi Bharath,\n\nThanks for the review!\n\nOn Sat, Nov 27, 2021 at 6:36 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> 1) A memory leak: add FreeDir(dir); before returning.\n> + ereport(LOG,\n> + (errmsg(\"Could not start streaming WAL eagerly\"),\n> + errdetail(\"There are timeline changes in the locally available WAL\nfiles.\"),\n> + errhint(\"WAL streaming will begin once all local WAL and archives\n> are exhausted.\")));\n> + return;\n> + }\n>\n\nThanks for catching that. Fixed.\n\n>\n>\n> 2) Is there a guarantee that while we traverse the pg_wal directory to\n> find startsegno and endsegno, the new wal files arrive from the\n> primary or archive location or old wal files get removed/recycled by\n> the standby? Especially when wal_receiver_start_condition=consistency?\n> + startsegno = (startsegno == -1) ? logSegNo : Min(startsegno, logSegNo);\n> + endsegno = (endsegno == -1) ? logSegNo : Max(endsegno, logSegNo);\n> + }\n>\n\nEven if newer wal files arrive after the snapshot of the dir listing\ntaken by AllocateDir()/ReadDir(), we will in effect start from a\nslightly older location, which should be fine. It shouldn't matter if\nan older file is recycled. If the last valid WAL segment is recycled,\nwe will ERROR out in StartWALReceiverEagerlyIfPossible() and the eager\nstart can be retried by the startup process when\nCheckRecoveryConsistency() is called again.\n\n>\n>\n> 3) I think the errmsg text format isn't correct. Note that the errmsg\n> text starts with lowercase and doesn't end with \".\" whereas errdetail\n> or errhint starts with uppercase and ends with \".\". Please check other\n> messages for reference.\n> The following should be changed.\n> + errmsg(\"Requesting stream from beginning of: %s\",\n> + errmsg(\"Invalid WAL segment found while calculating stream start:\n> %s. 
Skipping.\",\n> + (errmsg(\"Could not start streaming WAL eagerly\"),\n\nFixed.\n\n> 4) I think you also need to have wal files names in double quotes,\n> something like below:\n> errmsg(\"could not close file \\\"%s\\\": %m\", xlogfname)));\n\nFixed.\n\n>\n> 5) It is \".....stream start: \\\"%s\\\", skipping..\",\n> + errmsg(\"Invalid WAL segment found while calculating stream start:\n> %s. Skipping.\",\n\nFixed.\n\n> 4) I think the patch can make the startup process significantly slow,\n> especially when there are lots of wal files that exist in the standby\n> pg_wal directory. This is because of the overhead\n> StartWALReceiverEagerlyIfPossible adds i.e. doing two while loops to\n> figure out the start position of the\n> streaming in advance. This might end up the startup process doing the\n> loop over in the directory rather than the important thing of doing\n> crash recovery or standby recovery.\n\nWell, 99% of the time we can expect that the second loop finishes after\n1 or 2 iterations, as the last valid WAL segment would most likely be\nthe highest numbered WAL file or thereabouts. I don't think that the\noverhead will be significant as we are just looking up a directory\nlisting and not reading any files.\n\n> 5) What happens if this new GUC is enabled in case of a synchronous\nstandby?\n> What happens if this new GUC is enabled in case of a crash recovery?\n> What happens if this new GUC is enabled in case a restore command is\n> set i.e. standby performing archive recovery?\n\nThe GUC would behave the same way for all of these cases. If we have\nchosen 'startup'/'consistency', we would be starting the WAL receiver\neagerly. There might be certain race conditions when one combines this\nGUC with archive recovery, which was discussed upthread. [1]\n\n> 6) How about bgwriter/checkpointer which gets started even before the\n> startup process (or a new bg worker? 
of course it's going to be an\n> overkill) finding out the new start pos for the startup process and\n> then we could get rid of <literal>startup</literal> behaviour of the\n> patch? This avoids an extra burden on the startup process. Many times,\n> users will be complaining about why recovery is taking more time now,\n> after the GUC wal_receiver_start_condition=startup.\n\nHmm, then we would be needing additional synchronization. There will\nalso be an added dependency on checkpoint_timeout. I don't think that\nthe performance hit is significant enough to warrant this change.\n\n> 8) Can we have a better GUC name than wal_receiver_start_condition?\n> Something like wal_receiver_start_at or wal_receiver_start or some\n> other?\n\nSure, that makes more sense. Fixed.\n\nRegards,\nSoumyadeep (VMware)\n\n[1]\nhttps://www.postgresql.org/message-id/CAE-ML%2B-8KnuJqXKHz0mrC7-qFMQJ3ArDC78X3-AjGKos7Ceocw%40mail.gmail.com",
"msg_date": "Wed, 15 Dec 2021 17:01:24 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
},
{
"msg_contents": "At Wed, 15 Dec 2021 17:01:24 -0800, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote in \n> Sure, that makes more sense. Fixed.\n\nAs I played with this briefly. I started a standby from a backup that\nhas an access to archive. I had the following log lines steadily.\n\n\n[139535:postmaster] LOG: database system is ready to accept read-only connections\n[139542:walreceiver] LOG: started streaming WAL from primary at 0/2000000 on timeline 1\ncp: cannot stat '/home/horiguti/data/arc_work/000000010000000000000003': No such file or directory\n[139542:walreceiver] FATAL: could not open file \"pg_wal/000000010000000000000003\": No such file or directory\ncp: cannot stat '/home/horiguti/data/arc_work/00000002.history': No such file or directory\ncp: cannot stat '/home/horiguti/data/arc_work/000000010000000000000003': No such file or directory\n[139548:walreceiver] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n\nThe \"FATAL: could not open file\" message from walreceiver means that\nthe walreceiver was operationally prohibited to install a new wal\nsegment at the time. Thus the walreceiver ended as soon as started.\nIn short, the eager replication is not working at all.\n\n\nI have a comment on the behavior and objective of this feature.\n\nIn the case where archive recovery is started from a backup, this\nfeature lets walreceiver start while the archive recovery is ongoing.\nIf walreceiver (or the eager replication) worked as expected, it would\nwrite wal files while archive recovery writes the same set of WAL\nsegments to the same directory. I don't think that is a sane behavior.\nOr, if putting more modestly, an unintended behavior.\n\nIn common cases, I believe archive recovery is faster than\nreplication. 
If a segment is available from archive, we don't need to\nprefetch it via stream.\n\nIf this feature is intended to use only for crash recovery of a\nstandby, it should fire only when it is needed.\n\nIf not, that is, if it is intended to work also for archive recovery,\nI think the eager replication should start from the next segment of\nthe last WAL in archive but that would invite more complex problems.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Dec 2021 19:05:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary delay in streaming replication due to replay lag"
}
] |
[
{
"msg_contents": "Hello,\nwould it be possible to add PGDLLIMPORT to permit to build following\nextensions on windows\n\npg_stat_sql_plans:\nsrc/include/pgstat.h\nextern PGDLLIMPORT bool pgstat_track_activities;\n\npg_background:\nsrc/include/storage/proc.h\nextern PGDLLIMPORT int\tStatementTimeout;\n\nThanks in advance\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 17 Jan 2020 15:07:48 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 03:07:48PM -0700, legrand legrand wrote:\n> Would it be possible to add PGDLLIMPORT to permit to build following\n> extensions on windows\n> \n> pg_stat_sql_plans:\n> src/include/pgstat.h\n> extern PGDLLIMPORT bool pgstat_track_activities;\n> \n> pg_background:\n> src/include/storage/proc.h\n> extern PGDLLIMPORT int\tStatementTimeout;\n\nNo objections from me to add both to what's imported. Do you have a \nspecific use-case in mind for an extension on Windows? Just\nwondering..\n--\nMichael",
"msg_date": "Sat, 18 Jan 2020 11:26:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 7:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 17, 2020 at 03:07:48PM -0700, legrand legrand wrote:\n> > Would it be possible to add PGDLLIMPORT to permit to build following\n> > extensions on windows\n> >\n> > pg_stat_sql_plans:\n> > src/include/pgstat.h\n> > extern PGDLLIMPORT bool pgstat_track_activities;\n> >\n> > pg_background:\n> > src/include/storage/proc.h\n> > extern PGDLLIMPORT int StatementTimeout;\n>\n> No objections from me to add both to what's imported.\n>\n\n+1 for adding PGDLLIMPORT to these variables. In the past, we have\nadded it on the request of some extension authors, so I don't see any\nproblem doing this time as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 18 Jan 2020 12:34:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Sat, 18 Jan 2020, 09:04 Amit Kapila <amit.kapila16@gmail.com wrote:\n\n> On Sat, Jan 18, 2020 at 7:56 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > On Fri, Jan 17, 2020 at 03:07:48PM -0700, legrand legrand wrote:\n> > > Would it be possible to add PGDLLIMPORT to permit to build following\n> > > extensions on windows\n> > >\n> > > pg_stat_sql_plans:\n> > > src/include/pgstat.h\n> > > extern PGDLLIMPORT bool pgstat_track_activities;\n> > >\n> > > pg_background:\n> > > src/include/storage/proc.h\n> > > extern PGDLLIMPORT int StatementTimeout;\n> >\n> > No objections from me to add both to what's imported.\n> >\n>\n> +1 for adding PGDLLIMPORT to these variables. In the past, we have\n> added it on the request of some extension authors, so I don't see any\n> problem doing this time as well.\n>\n\n+1 too\n\n>",
"msg_date": "Sat, 18 Jan 2020 09:19:19 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "Michael Paquier-2 wrote\n> On Fri, Jan 17, 2020 at 03:07:48PM -0700, legrand legrand wrote:\n> [...]\n> \n> No objections from me to add both to what's imported. Do you have a \n> specific use-case in mind for an extension on Windows? Just\n> wondering..\n> --\n> Michael\n\nNo specific use-case, just the need to test features (IVM, push agg to base\nrelations and joins, ...)\non a professional laptop in windows 10 for a nomad app that collects\nperformance metrics on Oracle \ndatabases.\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 18 Jan 2020 02:32:49 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 02:32:49AM -0700, legrand legrand wrote:\n> No specific use-case, just the need to test features (IVM, push agg to base\n> relations and joins, ...)\n> on a professional laptop in windows 10 for a nomad app that collects\n> performance metrics on Oracle \n> databases.\n\nThat pretty much is a use-case, at least to me :)\n--\nMichael",
"msg_date": "Sat, 18 Jan 2020 18:48:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 09:19:19AM +0200, Julien Rouhaud wrote:\n> On Sat, 18 Jan 2020, 09:04 Amit Kapila <amit.kapila16@gmail.com wrote:\n>> +1 for adding PGDLLIMPORT to these variables. In the past, we have\n>> added it on the request of some extension authors, so I don't see any\n>> problem doing this time as well.\n>>\n> \n> +1 too\n\nThanks. If there are no objections, I would like to actually\nback-patch that.\n--\nMichael",
"msg_date": "Sat, 18 Jan 2020 18:49:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 3:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jan 18, 2020 at 09:19:19AM +0200, Julien Rouhaud wrote:\n> > On Sat, 18 Jan 2020, 09:04 Amit Kapila <amit.kapila16@gmail.com wrote:\n> >> +1 for adding PGDLLIMPORT to these variables. In the past, we have\n> >> added it on the request of some extension authors, so I don't see any\n> >> problem doing this time as well.\n> >>\n> >\n> > +1 too\n>\n> Thanks. If there are no objections, I would like to actually\n> back-patch that.\n>\n\nAs such no objection, but I am not sure if the other person need it on\nback branches as well. Are you planning to commit this, or if you\nwant I can take care of it?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Jan 2020 08:34:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 08:34:05AM +0530, Amit Kapila wrote:\n> As such no objection, but I am not sure if the other person need it on\n> back branches as well. Are you planning to commit this, or if you\n> want I can take care of it?\n\nThanks for the reminder. Done now. I have also switched the\nsurrounding parameters while on it to not be inconsistent.\n--\nMichael",
"msg_date": "Tue, 21 Jan 2020 13:49:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
},
{
"msg_contents": "On Tue, 21 Jan 2020 at 12:49, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 21, 2020 at 08:34:05AM +0530, Amit Kapila wrote:\n> > As such no objection, but I am not sure if the other person need it on\n> > back branches as well. Are you planning to commit this, or if you\n> > want I can take care of it?\n>\n> Thanks for the reminder. Done now. I have also switched the\n> surrounding parameters while on it to not be inconsistent.\n\nWhile speaking of PGDLLIMPORT, I wrote a quick check target in the\nmakefiles for some extensions I work on. It identified the following\nsymbols as used by the extensions but not exported:\n\nXactLastRecEnd (xlog.h)\ncriticalSharedRelcachesBuilt (relcache.h)\nhot_standby_feedback (walreceiver.h)\npgstat_track_activities (pgstat.h)\nWalRcv (walreceiver.h)\nwal_receiver_status_interval (walreceiver.h)\nwal_retrieve_retry_interval (walreceiver.h)\n\nOf those, XactLastRecEnd is by far the most important.\n\nFailure to export pgstat_track_activities is a bug IMO, since it's\nexported by inline functions pgstat_report_wait_start() and\npgstat_report_wait_end() in pgstat.h\n\ncriticalSharedRelcachesBuilt is useful in extensions that may do genam\nsystable_beginscan() etc in functions called both early in startup and\nlater on.\n\nhot_standby_feedback can be worked around by reading the GUC via the\nconfig options interface. 
But IMO all GUC symbols should be\nPGDLLEXPORTed, especially since we lack an interface for extensions to\nread arbitrary GUC values w/o formatting to string then parsing the\nstring.\n\nwal_receiver_status_interval and wal_retrieve_retry_interval are not\nthat important, but again they're GUCs.\n\nBeing able to see WalRcv is very useful when running extension code on\na streaming physical replica, where you want to make decisions based\non what's actually replicated.\n\nAnyone object to exporting these?\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Tue, 21 Jan 2020 15:07:31 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13 PGDLLIMPORT list"
}
] |
[
{
"msg_contents": "Folks,\n\nWhile going over places where I might use compiler intrinsics for\nthings like ceil(log2(n))) and next power of 2(n), I noticed that a\nlot of things that can't be fractional are longs instead of, say,\nuint64s. Is this the case for historical reasons, or is there some\nmore specific utility to expressing as longs things that can only have\nnon-negative integer values? Did this practice pre-date our\nnow-required 64-bit integers?\n\nThanks in advance for any insights into this!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 18 Jan 2020 01:42:14 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "longs where uint64s could be"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 4:42 PM David Fetter <david@fetter.org> wrote:\n> While going over places where I might use compiler intrinsics for\n> things like ceil(log2(n))) and next power of 2(n), I noticed that a\n> lot of things that can't be fractional are longs instead of, say,\n> uint64s. Is this the case for historical reasons, or is there some\n> more specific utility to expressing as longs things that can only have\n> non-negative integer values? Did this practice pre-date our\n> now-required 64-bit integers?\n\nYeah, it's historic. I wince when I see \"long\" integers. They're\nalmost wrong by definition. Windows has longs that are only 32-bits\nwide/the same width as a regular \"int\". Anybody that uses a long must\nhave done so because they expect it to be wider than an int, even\nthough in general it cannot be assumed to be in Postgres C code.\n\nwork_mem calculations often use long by convention. We restrict the\nsize of work_mem on Windows in order to make this safe everywhere. I\nbelieve that this is based on a tacit assumption that long is wider\noutside of Windows.\n\nlogtape.c uses long ints. This means that Windows cannot support very\nlarge external sorts. I don't recall hearing any complaints about\nthat, but it still doesn't seem great.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 Jan 2020 17:12:20 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: longs where uint64s could be"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 05:12:20PM -0800, Peter Geoghegan wrote:\n> On Fri, Jan 17, 2020 at 4:42 PM David Fetter <david@fetter.org> wrote:\n> > While going over places where I might use compiler intrinsics for\n> > things like ceil(log2(n))) and next power of 2(n), I noticed that a\n> > lot of things that can't be fractional are longs instead of, say,\n> > uint64s. Is this the case for historical reasons, or is there some\n> > more specific utility to expressing as longs things that can only have\n> > non-negative integer values? Did this practice pre-date our\n> > now-required 64-bit integers?\n> \n> Yeah, it's historic. I wince when I see \"long\" integers. They're\n> almost wrong by definition. Windows has longs that are only 32-bits\n> wide/the same width as a regular \"int\". Anybody that uses a long must\n> have done so because they expect it to be wider than an int, even\n> though in general it cannot be assumed to be in Postgres C code.\n> \n> work_mem calculations often use long by convention. We restrict the\n> size of work_mem on Windows in order to make this safe everywhere. I\n> believe that this is based on a tacit assumption that long is wider\n> outside of Windows.\n> \n> logtape.c uses long ints. This means that Windows cannot support very\n> large external sorts. I don't recall hearing any complaints about\n> that, but it still doesn't seem great.\n\nPlease find attached a patch that changes logtape.c and things in near\ndependency to it that changes longs to appropriate ints.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 19 Jan 2020 23:45:53 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: longs where uint64s could be"
}
] |
[
{
"msg_contents": "Hello,\nafter building devel snapshot from 2020-01-17 with msys,\ninitDB generates a lot of additional informations when launched:\n\nVERSION=13devel\nPGDATA=../data\nshare_path=C:/msys64/usr/local/pgsql/share\nPGPATH=C:/msys64/usr/local/pgsql/bin\nPOSTGRES_SUPERUSERNAME=lemoyp\nPOSTGRES_BKI=C:/msys64/usr/local/pgsql/share/postgres.bki\nPOSTGRES_DESCR=C:/msys64/usr/local/pgsql/share/postgres.description\nPOSTGRES_SHDESCR=C:/msys64/usr/local/pgsql/share/postgres.shdescription\nPOSTGRESQL_CONF_SAMPLE=C:/msys64/usr/local/pgsql/share/postgresql.conf.sample\nPG_HBA_SAMPLE=C:/msys64/usr/local/pgsql/share/pg_hba.conf.sample\nPG_IDENT_SAMPLE=C:/msys64/usr/local/pgsql/share/pg_ident.conf.sample\n2020-01-18 00:19:37.407 CET [10152] DEBUG: invoking\nIpcMemoryCreate(size=149061632)\n2020-01-18 00:19:37.408 CET [10152] DEBUG: could not enable Lock Pages in\nMemory user right\n2020-01-18 00:19:37.408 CET [10152] HINT: Assign Lock Pages in Memory user\nright to the Windows user account which runs PostgreSQL.\n2020-01-18 00:19:37.408 CET [10152] DEBUG: disabling huge pages\n2020-01-18 00:19:37.414 CET [10152] DEBUG: SlruScanDirectory invoking\ncallback on pg_notify/0000\n2020-01-18 00:19:37.414 CET [10152] DEBUG: removing file \"pg_notify/0000\"\n2020-01-18 00:19:37.416 CET [10152] DEBUG: dynamic shared memory system\nwill support 308 segments\n2020-01-18 00:19:37.416 CET [10152] DEBUG: created dynamic shared memory\ncontrol segment 718036776 (7408 bytes)\n2020-01-18 00:19:37.416 CET [10152] DEBUG: transaction ID wrap limit is\n2147483650, limited by database with OID 1\n2020-01-18 00:19:37.416 CET [10152] DEBUG: MultiXactId wrap limit is\n2147483648, limited by database with OID 1\n2020-01-18 00:19:37.416 CET [10152] DEBUG: creating and filling new WAL\nfile\n2020-01-18 00:19:37.446 CET [10152] DEBUG: could not remove file\n\"pg_wal/000000010000000000000001\": No such file or directory\n2020-01-18 00:19:37.454 CET [10152] DEBUG: mapped win32 error code 5 to 
13\n2020-01-18 00:19:37.455 CET [10152] DEBUG: done creating and filling new\nWAL file\n2020-01-18 00:19:37.467 CET [10152] DEBUG: InitPostgres\n2020-01-18 00:19:37.467 CET [10152] DEBUG: my backend ID is 1\n2020-01-18 00:19:37.467 CET [10152] NOTICE: database system was shut down\nat 2020-01-18 00:19:37 CET\n2020-01-18 00:19:37.467 CET [10152] DEBUG: mapped win32 error code 2 to 2\n2020-01-18 00:19:37.471 CET [10152] DEBUG: checkpoint record is at\n0/1000028\n2020-01-18 00:19:37.471 CET [10152] DEBUG: redo record is at 0/1000028;\nshutdown true\n...\n\nIs that the expected behavior, or just a temporary test ?\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 18 Jan 2020 02:40:00 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "postgresql-13devel initDB Running in debug mode."
},
{
"msg_contents": "legrand legrand <legrand_legrand@hotmail.com> writes:\n> after building devel snapshot from 2020-01-17 with msys,\n> initDB generates a lot of additional informations when launched:\n> [ debug output snipped ]\n> Is that the expected behavior, or just a temporary test ?\n\nIt'd be the expected behavior if you'd given a -d switch to initdb.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 Jan 2020 11:05:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-13devel initDB Running in debug mode."
},
{
"msg_contents": "Tom Lane-2 wrote\n> legrand legrand <legrand_legrand@hotmail.com> writes:\n>> after building devel snapshot from 2020-01-17 with msys,\n>> initDB generates a lot of additional informations when launched:\n>> [ debug output snipped ]\n>> Is that the expected behavior, or just a temporary test ?\n> \n> It'd be the expected behavior if you'd given a -d switch to initdb.\n> \n> \t\t\tregards, tom lane\n\nso yes, it's me:\n-d directory \ninstead of \n-D directory\n\nand the doc is perfect:\n-d\n--debug\n\n    Print debugging output from the bootstrap backend and a few other\nmessages of lesser interest for the general public. The bootstrap backend is\nthe program initdb uses to create the catalog tables. This option generates\na tremendous amount of extremely boring output.\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 18 Jan 2020 10:05:20 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgresql-13devel initDB Running in debug mode."
}
] |
[
{
"msg_contents": "One of our PG12 instances was in crash recovery for an embarrassingly long time\nafter hitting ENOSPC. (Note, I first started writing this mail 10 months ago\nwhile running PG11 after having the same experience after OOM). Running linux.\n\nAs I understand, the first thing that happens is syncing every file in the data\ndir, like in initdb --sync. These instances were both 5+TB on zfs, with\ncompression, so that's slow, but tolerable, and at least understandable, and\nwith visible progress in ps.\n\nThe 2nd stage replays WAL. strace shows it's occasionally running\nsync_file_range, and I think recovery might've been several times faster if\nwe'd just dumped the data at the OS ASAP, fsync once per file. In fact, I've\njust kill -9'd the recovery process and edited the config to disable this lest it\nspend all night in recovery.\n\n$ sudo strace -p 12564 2>&1 |sed 33q\nProcess 12564 attached\nsync_file_range(0x21, 0x2bba000, 0xa000, 0x2) = 0\nsync_file_range(0xb2, 0x2026000, 0x1a000, 0x2) = 0\nclock_gettime(CLOCK_MONOTONIC, {7521130, 31376505}) = 0\n\n(gdb) bt\n#0 0x00000032b2adfe8a in sync_file_range () from /lib64/libc.so.6\n#1 0x00000000007454e2 in pg_flush_data (fd=<value optimized out>, offset=<value optimized out>, nbytes=<value optimized out>) at fd.c:437\n#2 0x00000000007456b4 in FileWriteback (file=<value optimized out>, offset=41508864, nbytes=16384, wait_event_info=167772170) at fd.c:1855\n#3 0x000000000073dbac in IssuePendingWritebacks (context=0x7ffed45f8530) at bufmgr.c:4381\n#4 0x000000000073f1ff in SyncOneBuffer (buf_id=<value optimized out>, skip_recently_used=<value optimized out>, wb_context=0x7ffed45f8530) at bufmgr.c:2409\n#5 0x000000000073f549 in BufferSync (flags=6) at bufmgr.c:1991\n#6 0x000000000073f5d6 in CheckPointBuffers (flags=6) at bufmgr.c:2585\n#7 0x000000000050552c in CheckPointGuts (checkPointRedo=535426125266848, flags=6) at xlog.c:9006\n#8 0x000000000050cace in CreateCheckPoint (flags=6) at xlog.c:8795\n#9 
0x0000000000511740 in StartupXLOG () at xlog.c:7612\n#10 0x00000000006faaf1 in StartupProcessMain () at startup.c:207\n\nThat GUC is intended to reduce latency spikes caused by checkpoint fsync. But\nI think limiting to default 256kB between syncs is too limiting during\nrecovery, and at that point it's better to optimize for throughput anyway,\nsince no other backends are running (in that instance) and cannot run until\nrecovery finishes. At least, if this setting is going to apply during\nrecovery, the documentation should mention it (it's a \"recovery checkpoint\")\n\nSee also\n4bc0f16 Change default of backend_flush_after GUC to 0 (disabled).\n428b1d6 Allow to trigger kernel writeback after a configurable number of writes.\n\n\n",
"msg_date": "Sat, 18 Jan 2020 08:08:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "should crash recovery ignore checkpoint_flush_after ?"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-18 08:08:07 -0600, Justin Pryzby wrote:\n> One of our PG12 instances was in crash recovery for an embarassingly long time\n> after hitting ENOSPC. (Note, I first started wroting this mail 10 months ago\n> while running PG11 after having same experience after OOM). Running linux.\n> \n> As I understand, the first thing that happens syncing every file in the data\n> dir, like in initdb --sync. These instances were both 5+TB on zfs, with\n> compression, so that's slow, but tolerable, and at least understandable, and\n> with visible progress in ps.\n>\n> The 2nd stage replays WAL. strace show's it's occasionally running\n> sync_file_range, and I think recovery might've been several times faster if\n> we'd just dumped the data at the OS ASAP, fsync once per file. In fact, I've\n> just kill -9 the recovery process and edited the config to disable this lest it\n> spend all night in recovery.\n\nI'm not quite sure what you mean here with \"fsync once per file\". The\nsync_file_range doesn't actually issue an fsync, even if it sounds like\nit. In the case of checkpointing what it basically does is to ask the\nkernel to please start writing data back immediately, instead of waiting\ntill the absolute end of the checkpoint, when doing fsyncs. IOW, the\ndata is going to be written back *anyway* in short order.\n\nIt's possible that ZFS's compression just does broken things here, I\ndon't know.\n\n\n> That GUC is intended to reduce latency spikes caused by checkpoint fsync. But\n> I think limiting to default 256kB between syncs is too limiting during\n> recovery, and at that point it's better to optimize for throughput anyway,\n> since no other backends are running (in that instance) and cannot run until\n> recovery finishes.\n\nI don't think that'd be good by default - in my experience the stalls\ncaused by the kernel writing back massive amounts of data at once are\nalso problematic during recovery (and can lead to much higher %sys\ntoo). 
You get the pattern of the fsync at the end taking forever, while\nIO is idle before. And you'd get the latency spikes once recovery is\nover too.\n\n\n> At least, if this setting is going to apply during\n> recovery, the documentation should mention it (it's a \"recovery checkpoint\")\n\nThat makes sense.\n\n\n> See also\n> 4bc0f16 Change default of backend_flush_after GUC to 0 (disabled).\n\nFWIW, I still think this is the wrong default, and that it causes our\nusers harm. It only makes sense because the reverse was the default. But\nit's easy to see quite massive stalls even on fast nvme SSDs (as in 10s\nof no transactions committing, in an oltp workload). Nor do I think it is\nreally comparable with the checkpointing setting, because there we\n*know* that we're about to fsync the file, whereas in the backend case\nwe might just use the fs page cache as an extension of shared buffers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Jan 2020 10:48:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: should crash recovery ignore checkpoint_flush_after ?"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 10:48:22AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2020-01-18 08:08:07 -0600, Justin Pryzby wrote:\n> > One of our PG12 instances was in crash recovery for an embarassingly long time\n> > after hitting ENOSPC. (Note, I first started wroting this mail 10 months ago\n> > while running PG11 after having same experience after OOM). Running linux.\n> > \n> > As I understand, the first thing that happens syncing every file in the data\n> > dir, like in initdb --sync. These instances were both 5+TB on zfs, with\n> > compression, so that's slow, but tolerable, and at least understandable, and\n> > with visible progress in ps.\n> >\n> > The 2nd stage replays WAL. strace show's it's occasionally running\n> > sync_file_range, and I think recovery might've been several times faster if\n> > we'd just dumped the data at the OS ASAP, fsync once per file. In fact, I've\n> > just kill -9 the recovery process and edited the config to disable this lest it\n> > spend all night in recovery.\n> \n> I'm not quite sure what you mean here with \"fsync once per file\". The\n> sync_file_range doesn't actually issue an fsync, even if sounds like it.\n\nI mean if we didn't call sync_file_range() and instead let the kernel handle\nthe writes and then fsync() at end of checkpoint, which happens in any case.\nI think I'll increase or maybe disable this GUC on our servers and, if needed,\nadjust /proc/sys/vm/dirty_*ratio.\n\n> It's ossible that ZFS's compression just does broken things here, I\n> don't know.\n\nOr, our settings aren't ideal or recovery is just going to perform poorly for\nthat. 
Which I'm ok with, since it should be rare anyway, and recovery is\nunlikely to be a big deal for us.\n\n> > At least, if this setting is going to apply during\n> > recovery, the documentation should mention it (it's a \"recovery checkpoint\")\n> \n> That makes sense.\n\nFind attached.\nI modified a 2nd sentence since \"that\" was ambiguous, and could be read to\nrefer to \"stalls\".\n\n@@ -2994,17 +2994,19 @@ include_dir 'conf.d'\n Whenever more than this amount of data has been\n written while performing a checkpoint, attempt to force the\n OS to issue these writes to the underlying storage. Doing so will\n limit the amount of dirty data in the kernel's page cache, reducing\n the likelihood of stalls when an <function>fsync</function> is issued at the end of the\n checkpoint, or when the OS writes data back in larger batches in the\n- background. Often that will result in greatly reduced transaction\n+ background. This feature will often result in greatly reduced transaction\n latency, but there also are some cases, especially with workloads\n that are bigger than <xref linkend=\"guc-shared-buffers\"/>, but smaller\n than the OS's page cache, where performance might degrade. This\n setting may have no effect on some platforms.\n+ This setting also applies to the checkpoint written at the end of crash\n+ recovery.\n If this value is specified without units, it is taken as blocks,\n that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.\n The valid range is\n between <literal>0</literal>, which disables forced writeback,\n and <literal>2MB</literal>. The default is <literal>256kB</literal> on\n Linux, <literal>0</literal> elsewhere. 
(If <symbol>BLCKSZ</symbol> is not\n\nWhat about also updating PS following the last xlog replayed ?\nOtherwise it shows \"recovering <file>\" for the duration of the recovery\ncheckpoint.\n\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -7628,3 +7628,6 @@ StartupXLOG(void)\n else\n+ {\n+ set_ps_display(\"recovery checkpoint\", false);\n CreateCheckPoint(CHECKPOINT_END_OF_RECOVERY | CHECKPOINT_IMMEDIATE);\n+ }\n\n> > 4bc0f16 Change default of backend_flush_after GUC to 0 (disabled).\n> \n> FWIW, I still think this is the wrong default, and that it causes our\n> users harm.\n\nI have no opinion about the default, but the maximum seems low, as a maximum.\nWhy not INT_MAX, like wal_writer_flush_after ?\n\nsrc/include/pg_config_manual.h:#define WRITEBACK_MAX_PENDING_FLUSHES 256\n\nThanks,\nJustin",
"msg_date": "Sat, 18 Jan 2020 14:11:12 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should crash recovery ignore checkpoint_flush_after ?"
},
{
"msg_contents": "On Sun, Jan 19, 2020 at 3:08 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> As I understand, the first thing that happens syncing every file in the data\n> dir, like in initdb --sync. These instances were both 5+TB on zfs, with\n> compression, so that's slow, but tolerable, and at least understandable, and\n> with visible progress in ps.\n>\n> The 2nd stage replays WAL. strace show's it's occasionally running\n> sync_file_range, and I think recovery might've been several times faster if\n> we'd just dumped the data at the OS ASAP, fsync once per file. In fact, I've\n> just kill -9 the recovery process and edited the config to disable this lest it\n> spend all night in recovery.\n\nDoes sync_file_range() even do anything for non-mmap'd files on ZFS?\nNon-mmap'd ZFS data is not in the Linux page cache, and I think\nsync_file_range() works at that level. At a guess, there'd need to be\na new VFS file_operation so that ZFS could get a callback to handle\ndata in its ARC.\n\n\n",
"msg_date": "Sun, 19 Jan 2020 09:52:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should crash recovery ignore checkpoint_flush_after ?"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-18 14:11:12 -0600, Justin Pryzby wrote:\n> On Sat, Jan 18, 2020 at 10:48:22AM -0800, Andres Freund wrote:\n> > On 2020-01-18 08:08:07 -0600, Justin Pryzby wrote:\n> > > One of our PG12 instances was in crash recovery for an embarassingly long time\n> > > after hitting ENOSPC. (Note, I first started wroting this mail 10 months ago\n> > > while running PG11 after having same experience after OOM). Running linux.\n> > > \n> > > As I understand, the first thing that happens syncing every file in the data\n> > > dir, like in initdb --sync. These instances were both 5+TB on zfs, with\n> > > compression, so that's slow, but tolerable, and at least understandable, and\n> > > with visible progress in ps.\n> > >\n> > > The 2nd stage replays WAL. strace show's it's occasionally running\n> > > sync_file_range, and I think recovery might've been several times faster if\n> > > we'd just dumped the data at the OS ASAP, fsync once per file. In fact, I've\n> > > just kill -9 the recovery process and edited the config to disable this lest it\n> > > spend all night in recovery.\n> > \n> > I'm not quite sure what you mean here with \"fsync once per file\". The\n> > sync_file_range doesn't actually issue an fsync, even if sounds like it.\n> \n> I mean if we didn't call sync_file_range() and instead let the kernel handle\n> the writes and then fsync() at end of checkpoint, which happens in any\n> case.\n\nYea, but then more writes have to be done at the end, instead of in\nparallel with other work during checkpointing. 
the kernel will often end\nup starting to write back buffers before that - but without much concern\nfor locality, so it'll be a lot more random writes.\n\n\n\n> > > 4bc0f16 Change default of backend_flush_after GUC to 0 (disabled).\n> > \n> > FWIW, I still think this is the wrong default, and that it causes our\n> > users harm.\n> \n> I have no opinion about the default, but the maximum seems low, as a maximum.\n> Why not INT_MAX, like wal_writer_flush_after ?\n\nBecause it requires a static memory allocation, and that'd not be all\nthat trivial to change (we may be in a critical section, so can't\nallocate). And issuing them in a larger batch will often stall within\nthe kernel, anyway - there's a limited number of writes the kernel can\nhave in progress at once. We could make it a PGC_POSTMASTER variable,\nand allocate at server start, but that seems like a cure worse than the\ndisease.\n\nwal_writer_flush_after doesn't have that concern, because it works\ndifferently.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Jan 2020 15:22:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: should crash recovery ignore checkpoint_flush_after ?"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-19 09:52:21 +1300, Thomas Munro wrote:\n> On Sun, Jan 19, 2020 at 3:08 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > As I understand, the first thing that happens syncing every file in the data\n> > dir, like in initdb --sync. These instances were both 5+TB on zfs, with\n> > compression, so that's slow, but tolerable, and at least understandable, and\n> > with visible progress in ps.\n> >\n> > The 2nd stage replays WAL. strace show's it's occasionally running\n> > sync_file_range, and I think recovery might've been several times faster if\n> > we'd just dumped the data at the OS ASAP, fsync once per file. In fact, I've\n> > just kill -9 the recovery process and edited the config to disable this lest it\n> > spend all night in recovery.\n> \n> Does sync_file_range() even do anything for non-mmap'd files on ZFS?\n\nGood point. Next time it might be worthwhile to use strace -T to see\nwhether the sync_file_range calls actually take meaningful time.\n\n\n> Non-mmap'd ZFS data is not in the Linux page cache, and I think\n> sync_file_range() works at that level. 
At a guess, there'd need to be\n> a new VFS file_operation so that ZFS could get a callback to handle\n> data in its ARC.\n\nYea, it requires the pages to be in the pagecache to do anything:\n\nint sync_file_range(struct file *file, loff_t offset, loff_t nbytes,\n\t\t unsigned int flags)\n{\n...\n\n\tif (flags & SYNC_FILE_RANGE_WRITE) {\n\t\tint sync_mode = WB_SYNC_NONE;\n\n\t\tif ((flags & SYNC_FILE_RANGE_WRITE_AND_WAIT) ==\n\t\t\t SYNC_FILE_RANGE_WRITE_AND_WAIT)\n\t\t\tsync_mode = WB_SYNC_ALL;\n\n\t\tret = __filemap_fdatawrite_range(mapping, offset, endbyte,\n\t\t\t\t\t\t sync_mode);\n\t\tif (ret < 0)\n\t\t\tgoto out;\n\t}\n\nand then\n\nint __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,\n\t\t\t\tloff_t end, int sync_mode)\n{\n\tint ret;\n\tstruct writeback_control wbc = {\n\t\t.sync_mode = sync_mode,\n\t\t.nr_to_write = LONG_MAX,\n\t\t.range_start = start,\n\t\t.range_end = end,\n\t};\n\n\tif (!mapping_cap_writeback_dirty(mapping) ||\n\t !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))\n\t\treturn 0;\n\nwhich means that if there's no pages in the pagecache for the relevant\nrange, it'll just finish here. *Iff* there are some, say because\nsomething else mmap()ed a section, it'd potentially call into\naddress_space->writepages() callback. So it's possible to emulate\nenough state for ZFS or such to still get sync_file_range() call into it\n(by setting up a pseudo map tagged as dirty), but it's not really the\nnormal path.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Jan 2020 15:32:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: should crash recovery ignore checkpoint_flush_after ?"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 03:32:02PM -0800, Andres Freund wrote:\n> On 2020-01-19 09:52:21 +1300, Thomas Munro wrote:\n> > On Sun, Jan 19, 2020 at 3:08 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Does sync_file_range() even do anything for non-mmap'd files on ZFS?\n> \n> Good point. Next time it might be worthwhile to use strace -T to see\n> whether the sync_file_range calls actually take meaningful time.\n\n> Yea, it requires the pages to be in the pagecache to do anything:\n\n> \tif (!mapping_cap_writeback_dirty(mapping) ||\n> \t !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))\n> \t\treturn 0;\n\nThat logic is actually brand new (Sep 23, 2019, linux 5.4)\nhttps://github.com/torvalds/linux/commit/c3aab9a0bd91b696a852169479b7db1ece6cbf8c#diff-fd2d793b8b4760b4887c8c7bbb3451d7\n\nRunning a manual CHECKPOINT, I saw stuff like:\n\nsync_file_range(0x15f, 0x1442c000, 0x2000, 0x2) = 0 <2.953956>\nsync_file_range(0x15f, 0x14430000, 0x4000, 0x2) = 0 <0.006395>\nsync_file_range(0x15f, 0x14436000, 0x4000, 0x2) = 0 <0.003859>\nsync_file_range(0x15f, 0x1443e000, 0x2000, 0x2) = 0 <0.027975>\nsync_file_range(0x15f, 0x14442000, 0x2000, 0x2) = 0 <0.000048>\n\nAnd actually, that server had been running its DB instance on a centos6 VM\n(kernel-2.6.32-754.23.1.el6.x86_64), shared with the appserver, to mitigate\nanother issue last year. 
I moved the DB back to its own centos7 VM\n(kernel-3.10.0-862.14.4.el7.x86_64), and I cannot see that anymore.\nIt seems if there's any issue (with postgres or otherwise), it's vastly\nmitigated or much harder to hit under modern kernels.\n\nI also found these:\nhttps://github.com/torvalds/linux/commit/23d0127096cb91cb6d354bdc71bd88a7bae3a1d5 (master v5.5-rc6...v4.4-rc1)\nhttps://github.com/torvalds/linux/commit/ee53a891f47444c53318b98dac947ede963db400 (master v5.5-rc6...v2.6.29-rc1)\n\nThe 2nd commit is maybe the cause of the issue.\n\nThe first commit is supposedly too new to explain the difference between the\ntwo kernels, but I'm guessing redhat maybe backpatched it into the 3.10 kernel.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Sun, 19 Jan 2020 10:13:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should crash recovery ignore checkpoint_flush_after ?"
}
] |
[
{
"msg_contents": "Hi,\n\nOne of the things I occasionally wanted to know is stats for the SLRU\ncaches (we have a couple of those - clog, subtrans, ...). So here is a WIP\nversion of a patch adding that.\n\nThe implementation is fairly simple - the slru code updates counters in\nlocal memory, and then sends them to the collector at the end of the\ntransaction (similarly to table/func stats). The collector stores them\nsimilarly to global stats. And the collected stats are accessible\nthrough pg_stat_slru.\n\nThe main issue is that we have multiple SLRU caches, and it seems handy\nto have separate stats for each. OTOH the number of SLRU stats is not\nfixed, so e.g. extensions might define their own SLRU caches. But\nhandling a dynamic number of SLRU caches seems a bit hard (we'd need to\nassign some sort of unique IDs etc.) so what I did was define a fixed\nnumber of SLRU types\n\n typedef enum SlruType\n {\n SLRU_CLOG,\n SLRU_COMMIT_TS,\n SLRU_MULTIXACT_OFFSET,\n SLRU_MULTIXACT_MEMBER,\n SLRU_SUBTRANS,\n SLRU_ASYNC,\n SLRU_OLDSERXID,\n SLRU_OTHER\n } SlruType;\n\nwith one group of counters for each type. The last type (SLRU_OTHER) is\nused to store stats for all SLRUs that are not predefined. It wouldn't\nbe that difficult to store a dynamic number of SLRUs, but I'm not sure how\nto solve issues with identifying SLRUs etc. And there are probably very\nfew extensions adding custom SLRUs anyway.\n\nThe one thing missing from the patch is a way to reset the SLRU stats,\nsimilarly to how we can reset bgwriter stats.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 19 Jan 2020 15:37:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "SLRU statistics"
},
{
"msg_contents": "From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> One of the stats I occasionally wanted to know are stats for the SLRU\n> stats (we have couple of those - clog, subtrans, ...). So here is a WIP\n> version of a patch adding that.\n\nHow can users take advantage of this information? I think we also need the ability to set the size of SLRU buffers. (I want to be freed from the concern about the buffer shortage by setting the buffer size to its maximum. For example, CLOG would be only 1 GB.)\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n",
"msg_date": "Mon, 20 Jan 2020 01:04:33 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SLRU statistics"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 01:04:33AM +0000, tsunakawa.takay@fujitsu.com wrote:\n>From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> One of the stats I occasionally wanted to know are stats for the SLRU\n>> stats (we have couple of those - clog, subtrans, ...). So here is a WIP\n>> version of a patch adding that.\n>\n>How can users take advantage of this information? I think we also need\n>the ability to set the size of SLRU buffers. (I want to be freed from\n>the concern about the buffer shortage by setting the buffer size to its\n>maximum. For example, CLOG would be only 1 GB.)\n>\n\nYou're right the users can't really take advantage of this - my primary\nmotivation was providing feedback for devs, benchmarking etc. That\nmight have been done with DEBUG messages or something, but this seems\nmore convenient.\n\nI think it's unclear how desirable / necessary it is to allow users to\ntweak those caches. I don't think we should have a GUC for everything,\nbut maybe there's some sort of heuristic to determine the size. The\nassumption is we actually find practical workloads where the size of\nthese SLRUs is a performance issue.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 Jan 2020 17:37:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On 2020-Jan-20, Tomas Vondra wrote:\n\n> On Mon, Jan 20, 2020 at 01:04:33AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> > From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> > > One of the stats I occasionally wanted to know are stats for the SLRU\n> > > stats (we have couple of those - clog, subtrans, ...). So here is a WIP\n> > > version of a patch adding that.\n> > \n> > How can users take advantage of this information? I think we also need\n> > the ability to set the size of SLRU buffers. (I want to be freed from\n> > the concern about the buffer shortage by setting the buffer size to its\n> > maximum. For example, CLOG would be only 1 GB.)\n> \n> You're right the users can't really take advantage of this - my primary\n> motivation was providing a feedback for devs, benchmarking etc. That\n> might have been done with DEBUG messages or something, but this seems\n> more convenient.\n\nI think the stats are definitely needed if we keep the current code.\nI've researched some specific problems in this code, such as the need\nfor more subtrans SLRU buffers; IIRC it was pretty painful to figure out\nwhat the problem was without counters, and it'd have been trivial with\nthem.\n\n> I think it's unclear how desirable / necessary it is to allow users to\n> tweak those caches. I don't think we should have a GUC for everything,\n> but maybe there's some sort of heuristics to determine the size. 
The\n> assumption is we actually find practical workloads where the size of\n> these SLRUs is a performance issue.\n\nI expect we'll eventually realize the need for changes in this area.\nEither configurability in the buffer pool sizes, or moving them to be\npart of shared_buffers (IIRC Thomas Munro had a patch for this.)\nExample: SLRUs like pg_commit and pg_subtrans have higher buffer\nconsumption as the range of open transactions increases; for many users\nthis is not a concern and they can live with the default values.\n\n(I think when pg_commit (née pg_clog) buffers were increased, we should\nhave increased pg_subtrans buffers to match.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 Jan 2020 15:01:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 03:01:36PM -0300, Alvaro Herrera wrote:\n>On 2020-Jan-20, Tomas Vondra wrote:\n>\n>> On Mon, Jan 20, 2020 at 01:04:33AM +0000, tsunakawa.takay@fujitsu.com wrote:\n>> > From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> > > One of the stats I occasionally wanted to know are stats for the SLRU\n>> > > stats (we have couple of those - clog, subtrans, ...). So here is a WIP\n>> > > version of a patch adding that.\n>> >\n>> > How can users take advantage of this information? I think we also need\n>> > the ability to set the size of SLRU buffers. (I want to be freed from\n>> > the concern about the buffer shortage by setting the buffer size to its\n>> > maximum. For example, CLOG would be only 1 GB.)\n>>\n>> You're right the users can't really take advantage of this - my primary\n>> motivation was providing a feedback for devs, benchmarking etc. That\n>> might have been done with DEBUG messages or something, but this seems\n>> more convenient.\n>\n>I think the stats are definitely needed if we keep the current code.\n>I've researched some specific problems in this code, such as the need\n>for more subtrans SLRU buffers; IIRC it was pretty painful to figure out\n>what the problem was without counters, and it'd have been trivial with\n>them.\n>\n\nRight. Improving our ability to monitor/measure things is the goal of\nthis patch.\n\n>> I think it's unclear how desirable / necessary it is to allow users to\n>> tweak those caches. I don't think we should have a GUC for everything,\n>> but maybe there's some sort of heuristics to determine the size. 
The\n>> assumption is we actually find practical workloads where the size of\n>> these SLRUs is a performance issue.\n>\n>I expect we'll eventually realize the need for changes in this area.\n>Either configurability in the buffer pool sizes, or moving them to be\n>part of shared_buffers (IIRC Thomas Munro had a patch for this.)\n>Example: SLRUs like pg_commit and pg_subtrans have higher buffer\n>consumption as the range of open transactions increases; for many users\n>this is not a concern and they can live with the default values.\n>\n>(I think when pg_commit (née pg_clog) buffers were increased, we should\n>have increased pg_subtrans buffers to match.)\n>\n\nQuite possibly, yes. All I'm saying is that it's not something I intend\nto address with this patch. It's quite possible the solutions will be\ndifferent for each SLRU, and that will require more research.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 Jan 2020 19:45:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> You're right the users can't really take advantage of this - my primary\n> motivation was providing a feedback for devs, benchmarking etc. That\n> might have been done with DEBUG messages or something, but this seems\n> more convenient.\n\nUnderstood. I'm in favor of adding performance information even if it doesn't make sense for users (like other DBMSs sometimes do.) One concern is that all the PostgreSQL performance statistics have been useful so far for tuning in some way, and this may become the first exception. Do we describe the SLRU stats view in the manual, or hide it only for PG devs and support staff?\n\n\n> I think it's unclear how desirable / necessary it is to allow users to\n> tweak those caches. I don't think we should have a GUC for everything,\n> but maybe there's some sort of heuristics to determine the size. The\n> assumption is we actually find practical workloads where the size of\n> these SLRUs is a performance issue.\n\nI understood that the new performance statistics are expected to reveal what SLRUs need to be tunable and/or implemented with a different mechanism like shared buffers.\n\n\nRegards\nTakayuki Tsunakawa\n\n\t\t\n\n\n\n",
"msg_date": "Tue, 21 Jan 2020 06:24:29 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SLRU statistics"
},
{
"msg_contents": "On Tue, 21 Jan 2020 at 01:38, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Mon, Jan 20, 2020 at 01:04:33AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> >From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> >> One of the stats I occasionally wanted to know are stats for the SLRU\n> >> stats (we have couple of those - clog, subtrans, ...). So here is a WIP\n> >> version of a patch adding that.\n\n+1\n\n> >\n> >How can users take advantage of this information? I think we also need\n> >the ability to set the size of SLRU buffers. (I want to be freed from\n> >the concern about the buffer shortage by setting the buffer size to its\n> >maximum. For example, CLOG would be only 1 GB.)\n> >\n>\n> You're right the users can't really take advantage of this - my primary\n> motivation was providing a feedback for devs, benchmarking etc. That\n> might have been done with DEBUG messages or something, but this seems\n> more convenient.\n\nI've not tested the performance impact but perhaps we might want to\ndisable these counters by default and control them with a GUC. And similar\nto buffer statistics it might be better to inline the\npgstat_count_slru_page_xxx functions for better performance.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 17:09:33 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 05:09:33PM +0900, Masahiko Sawada wrote:\n>On Tue, 21 Jan 2020 at 01:38, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Mon, Jan 20, 2020 at 01:04:33AM +0000, tsunakawa.takay@fujitsu.com wrote:\n>> >From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> >> One of the stats I occasionally wanted to know are stats for the SLRU\n>> >> stats (we have couple of those - clog, subtrans, ...). So here is a WIP\n>> >> version of a patch adding that.\n>\n>+1\n>\n>> >\n>> >How can users take advantage of this information? I think we also need\n>> >the ability to set the size of SLRU buffers. (I want to be freed from\n>> >the concern about the buffer shortage by setting the buffer size to its\n>> >maximum. For example, CLOG would be only 1 GB.)\n>> >\n>>\n>> You're right the users can't really take advantage of this - my primary\n>> motivation was providing a feedback for devs, benchmarking etc. That\n>> might have been done with DEBUG messages or something, but this seems\n>> more convenient.\n>\n>I've not tested the performance impact but perhaps we might want to\n>disable these counter by default and controlled by a GUC. And similar\n>to buffer statistics it might be better to inline\n>pgstat_count_slru_page_xxx function for better performance.\n>\n\nHmmm, yeah. Inlining seems like a good idea, and maybe we should have\nsomething like track_slru GUC.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 14:56:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 06:24:29AM +0000, tsunakawa.takay@fujitsu.com\nwrote:\n>From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> You're right the users can't really take advantage of this - my\n>> primary motivation was providing a feedback for devs, benchmarking\n>> etc. That might have been done with DEBUG messages or something, but\n>> this seems more convenient.\n>\n>Understood. I'm in favor of adding performance information even if it\n>doesn't make sense for users (like other DBMSs sometimes do.) One\n>concern is that all the PostgreSQL performance statistics have been\n>useful so far for tuning in some way, and this may become the first\n>exception. Do we describe the SLRU stats view in the manual, or hide\n>it only for PG devs and support staff?\n>\n\nYes, the pg_stat_slru view should be described in a manual. That's\nmissing from the patch.\n\n>\n>> I think it's unclear how desirable / necessary it is to allow users\n>> to tweak those caches. I don't think we should have a GUC for\n>> everything, but maybe there's some sort of heuristics to determine\n>> the size. The assumption is we actually find practical workloads\n>> where the size of these SLRUs is a performance issue.\n>\n>I understood that the new performance statistics are expected to reveal\n>what SLRUs need to be tunable and/or implemented with a different\n>mechanism like shared buffers.\n>\n\nRight. It's certainly meant to provide information for further tuning.\nI'm just saying it's targeted more at developers, at least initially.\nMaybe we'll end up with GUCs, maybe we'll choose other approaches for\nsome SLRUs. I don't have an opinion on that yet.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 14:59:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On 2020-Jan-21, Tomas Vondra wrote:\n\n> On Tue, Jan 21, 2020 at 05:09:33PM +0900, Masahiko Sawada wrote:\n\n> > I've not tested the performance impact but perhaps we might want to\n> > disable these counter by default and controlled by a GUC. And similar\n> > to buffer statistics it might be better to inline\n> > pgstat_count_slru_page_xxx function for better performance.\n> \n> Hmmm, yeah. Inlining seems like a good idea, and maybe we should have\n> something like track_slru GUC.\n\nI disagree with adding a GUC. If a performance impact can be measured\nlet's turn the functions to static inline, as already proposed. My\nguess is that pgstat_count_slru_page_hit() is the only candidate for\nthat; all the other paths involve I/O or lock acquisition or even WAL\ngeneration, so the impact won't be measurable anyhow. We removed\ntrack-enabling GUCs years ago.\n\nBTW, this comment:\n\t\t\t/* update the stats counter of pages found in shared buffers */\n\nis not strictly true, because we don't use what we normally call \"shared\nbuffers\" for SLRUs.\n\nPatch applies cleanly. I suggest to move the page_miss() call until\nafter SlruRecentlyUsed(), for consistency with the other case.\n\nI find SlruType pretty odd, and the accompanying \"if\" list in\npg_stat_get_slru() correspondingly so. Would it be possible to have\neach SLRU enumerate itself somehow? Maybe add the name in SlruCtlData\nand query that, somehow. (I don't think we have an array of SlruCtlData\nanywhere though, so this might be a useless idea.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Feb 2020 20:19:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Fri, Feb 28, 2020 at 08:19:18PM -0300, Alvaro Herrera wrote:\n>On 2020-Jan-21, Tomas Vondra wrote:\n>\n>> On Tue, Jan 21, 2020 at 05:09:33PM +0900, Masahiko Sawada wrote:\n>\n>> > I've not tested the performance impact but perhaps we might want to\n>> > disable these counter by default and controlled by a GUC. And similar\n>> > to buffer statistics it might be better to inline\n>> > pgstat_count_slru_page_xxx function for better performance.\n>>\n>> Hmmm, yeah. Inlining seems like a good idea, and maybe we should have\n>> something like track_slru GUC.\n>\n>I disagree with adding a GUC. If a performance impact can be measured\n>let's turn the functions to static inline, as already proposed. My\n>guess is that pgstat_count_slru_page_hit() is the only candidate for\n>that; all the other paths involve I/O or lock acquisition or even WAL\n>generation, so the impact won't be measurable anyhow. We removed\n>track-enabling GUCs years ago.\n>\n\nDid we actually remove track-enabling GUCs? I think we still have\n\n - track_activities\n - track_counts\n - track_io_timing\n - track_functions\n\nBut maybe I'm missing something?\n\nThat being said, I'm not sure we need to add a GUC. I'll do some\nmeasurements and we'll see. Maybe the statis inline will me enough.\n\n>BTW, this comment:\n>\t\t\t/* update the stats counter of pages found in shared buffers */\n>\n>is not strictly true, because we don't use what we normally call \"shared\n>buffers\" for SLRUs.\n>\n\nOh, right. Will fix.\n\n>Patch applies cleanly. I suggest to move the page_miss() call until\n>after SlruRecentlyUsed(), for consistency with the other case.\n>\n\nOK.\n\n>I find SlruType pretty odd, and the accompanying \"if\" list in\n>pg_stat_get_slru() correspondingly so. Would it be possible to have\n>each SLRU enumerate itself somehow? Maybe add the name in SlruCtlData\n>and query that, somehow. 
(I don't think we have an array of SlruCtlData\n>anywhere though, so this might be a useless idea.)\n>\n\nWell, maybe. We could have a system to register SLRUs dynamically, but\nthe trick here is that having a fixed, predefined number of SLRUs\nsimplifies serialization in pgstat.c and so on. I don't think the \"if\"\nbranches in pg_stat_get_slru() are particularly ugly, but maybe we could\nreplace the enum with a registry of structs, something like rmgrlist.h.\nIt seems like overkill to me, though.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 29 Feb 2020 13:24:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On 2020-Feb-29, Tomas Vondra wrote:\n\n> Did we actually remove track-enabling GUCs? I think we still have\n> \n> - track_activities\n> - track_counts\n> - track_io_timing\n> - track_functions\n> \n> But maybe I'm missing something?\n\nHm I remembered we removed the one for row-level stats\n(track_row_stats), but what we really did is merge it with block-level\nstats (track_block_stats) into track_counts -- commit 48f7e6439568. \nFunnily enough, if you disable that autovacuum won't work, so I'm not\nsure it's a very useful tunable. And it definitely has more overhead\nthan what this new GUC would have.\n\n> > I find SlruType pretty odd, and the accompanying \"if\" list in\n> > pg_stat_get_slru() correspondingly so. Would it be possible to have\n> > each SLRU enumerate itself somehow? Maybe add the name in SlruCtlData\n> > and query that, somehow. (I don't think we have an array of SlruCtlData\n> > anywhere though, so this might be a useless idea.)\n> \n> Well, maybe. We could have a system to register SLRUs dynamically, but\n> the trick here is that by having a fixed predefined number of SLRUs\n> simplifies serialization in pgstat.c and so on. I don't think the \"if\"\n> branches in pg_stat_get_slru() are particularly ugly, but maybe we could\n> replace the enum with a registry of structs, something like rmgrlist.h.\n> It seems like an overkill to me, though.\n\nYeah, maybe we don't have to fix that now.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 29 Feb 2020 11:44:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Sat, Feb 29, 2020 at 11:44:26AM -0300, Alvaro Herrera wrote:\n>On 2020-Feb-29, Tomas Vondra wrote:\n>\n>> Did we actually remove track-enabling GUCs? I think we still have\n>>\n>> - track_activities\n>> - track_counts\n>> - track_io_timing\n>> - track_functions\n>>\n>> But maybe I'm missing something?\n>\n>Hm I remembered we removed the one for row-level stats\n>(track_row_stats), but what we really did is merge it with block-level\n>stats (track_block_stats) into track_counts -- commit 48f7e6439568.\n>Funnily enough, if you disable that autovacuum won't work, so I'm not\n>sure it's a very useful tunable. And it definitely has more overhead\n>than what this new GUC would have.\n>\n\nOK\n\n>> > I find SlruType pretty odd, and the accompanying \"if\" list in\n>> > pg_stat_get_slru() correspondingly so. Would it be possible to have\n>> > each SLRU enumerate itself somehow? Maybe add the name in SlruCtlData\n>> > and query that, somehow. (I don't think we have an array of SlruCtlData\n>> > anywhere though, so this might be a useless idea.)\n>>\n>> Well, maybe. We could have a system to register SLRUs dynamically, but\n>> the trick here is that by having a fixed predefined number of SLRUs\n>> simplifies serialization in pgstat.c and so on. I don't think the \"if\"\n>> branches in pg_stat_get_slru() are particularly ugly, but maybe we could\n>> replace the enum with a registry of structs, something like rmgrlist.h.\n>> It seems like an overkill to me, though.\n>\n>Yeah, maybe we don't have to fix that now.\n>\n\nIMO the current solution is sufficient for the purpose. I guess we could\njust stick a name into the SlruCtlData (and remove SlruType entirely),\nand use that to identify the stats entries. 
That might be enough, and in\nfact we already have that - SimpleLruInit gets a name parameter and\ncopies that to the lwlock_tranche_name.\n\nOne of the main reasons why I opted to use the enum is that it makes\ntracking, lookup and serialization pretty trivial - it's just an index\nlookup, etc. But maybe it wouldn't be much more complex with the name, \nconsidering the name length is limited by SLRU_MAX_NAME_LENGTH. And we\nprobably don't expect many entries, so we could keep them in a simple\nlist, or maybe a simplehash.\n\nI'm not sure what to do with data for SLRUs that might have disappeared\nafter a restart (e.g. because someone removed an extension). Until now\nthose would all be in the \"other\" entry.\n\n\nThe attached v2 fixes the issues in your first message:\n\n- I moved the page_miss() call after SlruRecentlyUsed(), but then I\n realized it's entirely duplicated by the page_read() update done in\n SlruPhysicalReadPage(). I removed the call from SlruPhysicalReadPage()\n and renamed page_miss to page_read - that's more consistent with\n shared buffers stats, which also have buffers_hit and buffer_read.\n\n- I've also implemented the reset. I ended up adding a new option to\n pg_stat_reset_shared, which always resets all SLRU entries. We track\n the reset timestamp for each SLRU entry, but the value is always the\n same. I admit this is a bit weird - I did it like this because (a) I'm\n not sure how to identify the individual entries and (b) the SLRU is\n shared, so pg_stat_reset_shared seems kinda natural.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 29 Feb 2020 21:55:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Hi,\n\nAttached is v3 of the patch with one big change and various small ones.\n\nThe main change is that it gets rid of the SlruType enum and the new\nfield in SlruCtlData. Instead, the patch now uses the name passed to\nSimpleLruInit (which is then stored as LWLock tranche name).\n\nThe counters are still stored in a fixed-sized array, and there's a\nsimple name/index mapping. We don't have any registry of stable SLRU\nIDs, so I can't think of anything better, and I think this is good\nenough for now.\n\nThe other change is that I got rid of the io_error counter. We don't\nhave that for shared buffers etc. either, anyway.\n\nI've also renamed the colunms from \"pages\" to \"blks\" to make it\nconsistent with other similar stats (blks_hit, blks_read). I've\nrenamed the fields to \"blks_written\" and \"blks_zeroed\".\n\nAnd finally, I've added the view to monitoring.sgml.\n\nBarring objections, I'll get this committed in the next few days, after\nreviewing the comments a bit.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Mar 2020 01:41:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Hi,\n\nhere is a bit improved version of the patch - I've been annoyed by how\nthe resetting works (per-entry timestamp, but resetting all entries) so\nI've added a new function pg_stat_reset_slru() that allows resetting\neither all entries or just one entry (identified by name). So\n\n SELECT pg_stat_reset_slru('clog');\n\nresets just \"clog\" SLRU counters, while\n\n SELECT pg_stat_reset_slru(NULL);\n\nresets all entries.\n\nI've also done a bit of benchmarking, to see if this has measurable\nimpact (in which case it might deserve a new GUC), and I think it's not\nmeasurable. I've used a tiny unlogged table (single row).\n\n CREATE UNLOGGED TABLE t (a int);\n INSERT INTO t VALUES (1);\n\nand then short pgbench runs with a single client, updatint the row. I've\nbeen unable to measure any regression, it's all well within 1% so noise.\nBut perhaps there's some other benchmark that I should do?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 28 Mar 2020 01:26:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Hi,\n\nI've pushed this after some minor cleanup and improvements.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Apr 2020 02:41:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Hi, \n\nThank you for developing great features.\nThe attached patch is a small fix to the committed documentation for the data type name of blks_hit column.\n\nBest regards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com] \nSent: Thursday, April 2, 2020 9:42 AM\nTo: Alvaro Herrera <alvherre@2ndquadrant.com>\nCc: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>; tsunakawa.takay@fujitsu.com; pgsql-hackers@postgresql.org\nSubject: Re: SLRU statistics\n\nHi,\n\nI've pushed this after some minor cleanup and improvements.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 2 Apr 2020 02:04:10 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: SLRU statistics"
},
{
"msg_contents": "On Thu, Apr 02, 2020 at 02:04:10AM +0000, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n>Hi,\n>\n>Thank you for developing great features.\n>The attached patch is a small fix to the committed documentation for the data type name of blks_hit column.\n>\n\nThank you for the patch, pushed.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Apr 2020 14:29:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Hello Tomas,\n\nOn Thu, Apr 2, 2020 at 5:59 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Thank you for the patch, pushed.\n>\nIn SimpleLruReadPage_ReadOnly, we first try to find the SLRU page in\nshared buffer under shared lock, then conditionally visit\nSimpleLruReadPage if reading is necessary. IMHO, we should update\nhit_count if we can find the buffer in SimpleLruReadPage_ReadOnly\ndirectly. Am I missing something?\n\nAttached a patch for the same.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 7 Apr 2020 17:01:37 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Tue, Apr 07, 2020 at 05:01:37PM +0530, Kuntal Ghosh wrote:\n>Hello Tomas,\n>\n>On Thu, Apr 2, 2020 at 5:59 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> Thank you for the patch, pushed.\n>>\n>In SimpleLruReadPage_ReadOnly, we first try to find the SLRU page in\n>shared buffer under shared lock, then conditionally visit\n>SimpleLruReadPage if reading is necessary. IMHO, we should update\n>hit_count if we can find the buffer in SimpleLruReadPage_ReadOnly\n>directly. Am I missing something?\n>\n>Attached a patch for the same.\n>\n\nYes, I think that's correct - without this we fail to account for\n(possibly) a quite significant number of hits. Thanks for the report,\nI'll get this pushed later today.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 7 Apr 2020 19:29:06 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On 2020/04/02 9:41, Tomas Vondra wrote:\n> Hi,\n> \n> I've pushed this after some minor cleanup and improvements.\n\n+static char *slru_names[] = {\"async\", \"clog\", \"commit_timestamp\",\n+\t\t\t\t\t\t\t \"multixact_offset\", \"multixact_member\",\n+\t\t\t\t\t\t\t \"oldserxid\", \"pg_xact\", \"subtrans\",\n+\t\t\t\t\t\t\t \"other\" /* has to be last */};\n\nWhen I tried pg_stat_slru, I found that it returns a row for \"pg_xact\".\nBut since there is no \"pg_xact\" slru (\"clog\" slru exists instead),\n\"pg_xact\" should be removed? Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 1 May 2020 03:02:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Fri, May 01, 2020 at 03:02:59AM +0900, Fujii Masao wrote:\n>\n>\n>On 2020/04/02 9:41, Tomas Vondra wrote:\n>>Hi,\n>>\n>>I've pushed this after some minor cleanup and improvements.\n>\n>+static char *slru_names[] = {\"async\", \"clog\", \"commit_timestamp\",\n>+\t\t\t\t\t\t\t \"multixact_offset\", \"multixact_member\",\n>+\t\t\t\t\t\t\t \"oldserxid\", \"pg_xact\", \"subtrans\",\n>+\t\t\t\t\t\t\t \"other\" /* has to be last */};\n>\n>When I tried pg_stat_slru, I found that it returns a row for \"pg_xact\".\n>But since there is no \"pg_xact\" slru (\"clog\" slru exists instead),\n>\"pg_xact\" should be removed? Patch attached.\n>\n\nYeah, I think I got confused and accidentally added both \"clog\" and\n\"pg_xact\". I'll get \"pg_xact\" removed.\n\n>Regards,\n>\n>-- \n>Fujii Masao\n>Advanced Computing Technology Center\n>Research and Development Headquarters\n>NTT DATA CORPORATION\n\n>diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n>index 6562cc400b..ba6d8d2123 100644\n>--- a/doc/src/sgml/monitoring.sgml\n>+++ b/doc/src/sgml/monitoring.sgml\n>@@ -3483,7 +3483,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> predefined list (<literal>async</literal>, <literal>clog</literal>,\n> <literal>commit_timestamp</literal>, <literal>multixact_offset</literal>,\n> <literal>multixact_member</literal>, <literal>oldserxid</literal>,\n>- <literal>pg_xact</literal>, <literal>subtrans</literal> and\n>+ <literal>subtrans</literal> and\n> <literal>other</literal>) resets counters for only that entry.\n> Names not included in this list are treated as <literal>other</literal>.\n> </entry>\n>diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\n>index 50eea2e8a8..2ba3858d31 100644\n>--- a/src/backend/postmaster/pgstat.c\n>+++ b/src/backend/postmaster/pgstat.c\n>@@ -152,7 +152,7 @@ PgStat_MsgBgWriter BgWriterStats;\n> */\n> static char *slru_names[] = {\"async\", \"clog\", 
\"commit_timestamp\",\n> \t\t\t\t\t\t\t \"multixact_offset\", \"multixact_member\",\n>-\t\t\t\t\t\t\t \"oldserxid\", \"pg_xact\", \"subtrans\",\n>+\t\t\t\t\t\t\t \"oldserxid\", \"subtrans\",\n> \t\t\t\t\t\t\t \"other\" /* has to be last */};\n>\n> /* number of elemenents of slru_name array */\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 Apr 2020 20:19:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/01 3:19, Tomas Vondra wrote:\n> On Fri, May 01, 2020 at 03:02:59AM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/04/02 9:41, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> I've pushed this after some minor cleanup and improvements.\n>>\n>> +static char *slru_names[] = {\"async\", \"clog\", \"commit_timestamp\",\n>> + \"multixact_offset\", \"multixact_member\",\n>> + \"oldserxid\", \"pg_xact\", \"subtrans\",\n>> + \"other\" /* has to be last */};\n>>\n>> When I tried pg_stat_slru, I found that it returns a row for \"pg_xact\".\n>> But since there is no \"pg_xact\" slru (\"clog\" slru exists instead),\n>> \"pg_xact\" should be removed? Patch attached.\n>>\n> \n> Yeah, I think I got confused and accidentally added both \"clog\" and\n> \"pg_xact\". I'll get \"pg_xact\" removed.\n\nThanks!\n\nAnother thing I found is; pgstat_send_slru() should be called also by\nother processes than backend? For example, since clog data is flushed\nbasically by checkpointer, checkpointer seems to need to send slru stats.\nOtherwise, pg_stat_slru.flushes would not be updated.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 1 May 2020 11:49:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Fri, May 01, 2020 at 11:49:51AM +0900, Fujii Masao wrote:\n>\n>\n>On 2020/05/01 3:19, Tomas Vondra wrote:\n>>On Fri, May 01, 2020 at 03:02:59AM +0900, Fujii Masao wrote:\n>>>\n>>>\n>>>On 2020/04/02 9:41, Tomas Vondra wrote:\n>>>>Hi,\n>>>>\n>>>>I've pushed this after some minor cleanup and improvements.\n>>>\n>>>+static char *slru_names[] = {\"async\", \"clog\", \"commit_timestamp\",\n>>>+����������������������������� \"multixact_offset\", \"multixact_member\",\n>>>+����������������������������� \"oldserxid\", \"pg_xact\", \"subtrans\",\n>>>+����������������������������� \"other\" /* has to be last */};\n>>>\n>>>When I tried pg_stat_slru, I found that it returns a row for \"pg_xact\".\n>>>But since there is no \"pg_xact\" slru (\"clog\" slru exists instead),\n>>>\"pg_xact\" should be removed? Patch attached.\n>>>\n>>\n>>Yeah, I think I got confused and accidentally added both \"clog\" and\n>>\"pg_xact\". I'll get \"pg_xact\" removed.\n>\n>Thanks!\n>\n\nOK, pushed. Thanks!\n\n>Another thing I found is; pgstat_send_slru() should be called also by\n>other processes than backend? For example, since clog data is flushed\n>basically by checkpointer, checkpointer seems to need to send slru stats.\n>Otherwise, pg_stat_slru.flushes would not be updated.\n>\n\nHmmm, that's a good point. If I understand the issue correctly, the\ncheckpointer accumulates the stats but never really sends them because\nit never calls pgstat_report_stat/pgstat_send_slru. That's only called\nfrom PostgresMain, but not from CheckpointerMain.\n\nI think we could simply add pgstat_send_slru() right after the existing\ncall in CheckpointerMain, right?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 2 May 2020 02:08:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/02 9:08, Tomas Vondra wrote:\n> On Fri, May 01, 2020 at 11:49:51AM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/05/01 3:19, Tomas Vondra wrote:\n>>> On Fri, May 01, 2020 at 03:02:59AM +0900, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2020/04/02 9:41, Tomas Vondra wrote:\n>>>>> Hi,\n>>>>>\n>>>>> I've pushed this after some minor cleanup and improvements.\n>>>>\n>>>> +static char *slru_names[] = {\"async\", \"clog\", \"commit_timestamp\",\n>>>> + \"multixact_offset\", \"multixact_member\",\n>>>> + \"oldserxid\", \"pg_xact\", \"subtrans\",\n>>>> + \"other\" /* has to be last */};\n>>>>\n>>>> When I tried pg_stat_slru, I found that it returns a row for \"pg_xact\".\n>>>> But since there is no \"pg_xact\" slru (\"clog\" slru exists instead),\n>>>> \"pg_xact\" should be removed? Patch attached.\n>>>>\n>>>\n>>> Yeah, I think I got confused and accidentally added both \"clog\" and\n>>> \"pg_xact\". I'll get \"pg_xact\" removed.\n>>\n>> Thanks!\n>>\n> \n> OK, pushed. Thanks!\n\nThanks a lot!\n\nBut, like the patch that I attached in the previous email does,\n\"pg_xact\" should be removed from the description of pg_stat_reset_slru()\nin monitoring.sgml.\n\n>> Another thing I found is; pgstat_send_slru() should be called also by\n>> other processes than backend? For example, since clog data is flushed\n>> basically by checkpointer, checkpointer seems to need to send slru stats.\n>> Otherwise, pg_stat_slru.flushes would not be updated.\n>>\n> \n> Hmmm, that's a good point. If I understand the issue correctly, the\n> checkpointer accumulates the stats but never really sends them because\n> it never calls pgstat_report_stat/pgstat_send_slru. That's only called\n> from PostgresMain, but not from CheckpointerMain.\n\nYes.\n\n> I think we could simply add pgstat_send_slru() right after the existing\n> call in CheckpointerMain, right?\n\nCheckpointer sends off activity statistics to the stats collector in\ntwo places, by calling pgstat_send_bgwriter(). 
What about calling\npgstat_send_slru() just after pgstat_send_bgwriter()?\n\nIn the previous email, I mentioned checkpointer just as an example.\nSo probably we need to investigate what processes should send slru stats,\nother than checkpointer. I guess that at least autovacuum worker,\nlogical replication walsender and parallel query worker (maybe this is\nalready covered by some parallel query mechanism; sorry, I've\nnot checked that) would need to send their slru stats.\n\nAtsushi-san reported another issue in pg_stat_slru.\nYou're planning to work on that?\nhttps://postgr.es/m/CACZ0uYFe16pjZxQYaTn53mspyM7dgMPYL3DJLjjPw69GMCC2Ow@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 2 May 2020 15:56:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote:\n>\n>\n>On 2020/05/02 9:08, Tomas Vondra wrote:\n>>On Fri, May 01, 2020 at 11:49:51AM +0900, Fujii Masao wrote:\n>>>\n>>>\n>>>On 2020/05/01 3:19, Tomas Vondra wrote:\n>>>>On Fri, May 01, 2020 at 03:02:59AM +0900, Fujii Masao wrote:\n>>>>>\n>>>>>\n>>>>>On 2020/04/02 9:41, Tomas Vondra wrote:\n>>>>>>Hi,\n>>>>>>\n>>>>>>I've pushed this after some minor cleanup and improvements.\n>>>>>\n>>>>>+static char *slru_names[] = {\"async\", \"clog\", \"commit_timestamp\",\n>>>>>+����������������������������� \"multixact_offset\", \"multixact_member\",\n>>>>>+����������������������������� \"oldserxid\", \"pg_xact\", \"subtrans\",\n>>>>>+����������������������������� \"other\" /* has to be last */};\n>>>>>\n>>>>>When I tried pg_stat_slru, I found that it returns a row for \"pg_xact\".\n>>>>>But since there is no \"pg_xact\" slru (\"clog\" slru exists instead),\n>>>>>\"pg_xact\" should be removed? Patch attached.\n>>>>>\n>>>>\n>>>>Yeah, I think I got confused and accidentally added both \"clog\" and\n>>>>\"pg_xact\". I'll get \"pg_xact\" removed.\n>>>\n>>>Thanks!\n>>>\n>>\n>>OK, pushed. Thanks!\n>\n>Thanks a lot!\n>\n>But, like the patch that I attached in the previous email does,\n>\"pg_xact\" should be removed from the description of pg_stat_reset_slru()\n>in monitoring.sgml.\n>\n\nWhooops. My bad, will fix.\n\n>>>Another thing I found is; pgstat_send_slru() should be called also by\n>>>other processes than backend? For example, since clog data is flushed\n>>>basically by checkpointer, checkpointer seems to need to send slru stats.\n>>>Otherwise, pg_stat_slru.flushes would not be updated.\n>>>\n>>\n>>Hmmm, that's a good point. If I understand the issue correctly, the\n>>checkpointer accumulates the stats but never really sends them because\n>>it never calls pgstat_report_stat/pgstat_send_slru. 
That's only called\n>>from PostgresMain, but not from CheckpointerMain.\n>\n>Yes.\n>\n>>I think we could simply add pgstat_send_slru() right after the existing\n>>call in CheckpointerMain, right?\n>\n>Checkpointer sends off activity statistics to the stats collector in\n>two places, by calling pgstat_send_bgwriter(). What about calling\n>pgstat_send_slru() just after pgstat_send_bgwriter()?\n>\n\nYep, that's what I proposed.\n\n>In previous email, I mentioned checkpointer just as an example.\n>So probably we need to investigate what process should send slru stats,\n>other than checkpointer. I guess that at least autovacuum worker,\n>logical replication walsender and parallel query worker (maybe this has\n>been already covered by parallel query some mechanisms. Sorry I've\n>not checked that) would need to send its slru stats.\n>\n\nProbably. Do you have any other process type in mind?\n\n>Atsushi-san reported another issue in pg_stat_slru.\n>You're planning to work on that?\n>https://postgr.es/m/CACZ0uYFe16pjZxQYaTn53mspyM7dgMPYL3DJLjjPw69GMCC2Ow@mail.gmail.com\n>\n\nYes, I'll investigate.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 2 May 2020 12:55:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Sat, May 02, 2020 at 12:55:00PM +0200, Tomas Vondra wrote:\n>On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote:\n>>\n>> ...\n>\n>>>>Another thing I found is; pgstat_send_slru() should be called also by\n>>>>other processes than backend? For example, since clog data is flushed\n>>>>basically by checkpointer, checkpointer seems to need to send slru stats.\n>>>>Otherwise, pg_stat_slru.flushes would not be updated.\n>>>>\n>>>\n>>>Hmmm, that's a good point. If I understand the issue correctly, the\n>>>checkpointer accumulates the stats but never really sends them because\n>>>it never calls pgstat_report_stat/pgstat_send_slru. That's only called\n>>>from PostgresMain, but not from CheckpointerMain.\n>>\n>>Yes.\n>>\n>>>I think we could simply add pgstat_send_slru() right after the existing\n>>>call in CheckpointerMain, right?\n>>\n>>Checkpointer sends off activity statistics to the stats collector in\n>>two places, by calling pgstat_send_bgwriter(). What about calling\n>>pgstat_send_slru() just after pgstat_send_bgwriter()?\n>>\n>\n>Yep, that's what I proposed.\n>\n>>In previous email, I mentioned checkpointer just as an example.\n>>So probably we need to investigate what process should send slru stats,\n>>other than checkpointer. I guess that at least autovacuum worker,\n>>logical replication walsender and parallel query worker (maybe this has\n>>been already covered by parallel query some mechanisms. Sorry I've\n>>not checked that) would need to send its slru stats.\n>>\n>\n>Probably. 
Do you have any other process type in mind?\n>\n\nI've looked at places calling pgstat_send_* functions, and I found\nthese places:\n\nsrc/backend/postmaster/bgwriter.c\n\n- AFAIK it merely writes out dirty shared buffers, so likely irrelevant.\n\nsrc/backend/postmaster/checkpointer.c\n\n- This is what we're already discussing here.\n\nsrc/backend/postmaster/pgarch.c\n\n- Seems irrelevant.\n\n\nI'm a bit puzzled why we're not sending any stats from walsender, which\nI suppose could do various stuff during logical decoding.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 2 May 2020 18:59:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On 2020/05/03 1:59, Tomas Vondra wrote:\n> On Sat, May 02, 2020 at 12:55:00PM +0200, Tomas Vondra wrote:\n>> On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote:\n>>>\n>>> ...\n>>\n>>>>> Another thing I found is; pgstat_send_slru() should be called also by\n>>>>> other processes than backend? For example, since clog data is flushed\n>>>>> basically by checkpointer, checkpointer seems to need to send slru stats.\n>>>>> Otherwise, pg_stat_slru.flushes would not be updated.\n>>>>>\n>>>>\n>>>> Hmmm, that's a good point. If I understand the issue correctly, the\n>>>> checkpointer accumulates the stats but never really sends them because\n>>>> it never calls pgstat_report_stat/pgstat_send_slru. That's only called\n>>>> from PostgresMain, but not from CheckpointerMain.\n>>>\n>>> Yes.\n>>>\n>>>> I think we could simply add pgstat_send_slru() right after the existing\n>>>> call in CheckpointerMain, right?\n>>>\n>>> Checkpointer sends off activity statistics to the stats collector in\n>>> two places, by calling pgstat_send_bgwriter(). What about calling\n>>> pgstat_send_slru() just after pgstat_send_bgwriter()?\n>>>\n>>\n>> Yep, that's what I proposed.\n>>\n>>> In previous email, I mentioned checkpointer just as an example.\n>>> So probably we need to investigate what process should send slru stats,\n>>> other than checkpointer. I guess that at least autovacuum worker,\n>>> logical replication walsender and parallel query worker (maybe this has\n>>> been already covered by parallel query some mechanisms. Sorry I've\n>>> not checked that) would need to send its slru stats.\n>>>\n>>\n>> Probably. Do you have any other process type in mind?\n\nNo. For now what I'm in mind are just checkpointer, autovacuum worker,\nlogical replication walsender and parallel query worker. 
Seems logical\nreplication worker and syncer have sent slru stats via pgstat_report_stat().\n\n> I've looked at places calling pgstat_send_* functions, and I found\n> these places:\n> \n> src/backend/postmaster/bgwriter.c\n> \n> - AFAIK it merely writes out dirty shared buffers, so likely irrelevant.\n> \n> src/backend/postmaster/checkpointer.c\n> \n> - This is what we're already discussing here.\n> \n> src/backend/postmaster/pgarch.c\n> \n> - Seems irrelevant.\n> \n> \n> I'm a bit puzzled why we're not sending any stats from walsender, which\n> I suppose could do various stuff during logical decoding.\n\nNot sure why, but that seems an oversight...\n\n\nAlso I found another minor issue; SLRUStats has not been initialized to 0,\nwhich could update the counters unexpectedly. Attached patch fixes\nthis issue.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 7 May 2020 13:47:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On 2020/05/07 13:47, Fujii Masao wrote:\n> \n> \n> On 2020/05/03 1:59, Tomas Vondra wrote:\n>> On Sat, May 02, 2020 at 12:55:00PM +0200, Tomas Vondra wrote:\n>>> On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote:\n>>>>\n>>>> ...\n>>>\n>>>>>> Another thing I found is; pgstat_send_slru() should be called also by\n>>>>>> other processes than backend? For example, since clog data is flushed\n>>>>>> basically by checkpointer, checkpointer seems to need to send slru stats.\n>>>>>> Otherwise, pg_stat_slru.flushes would not be updated.\n>>>>>>\n>>>>>\n>>>>> Hmmm, that's a good point. If I understand the issue correctly, the\n>>>>> checkpointer accumulates the stats but never really sends them because\n>>>>> it never calls pgstat_report_stat/pgstat_send_slru. That's only called\n>>>>> from PostgresMain, but not from CheckpointerMain.\n>>>>\n>>>> Yes.\n>>>>\n>>>>> I think we could simply add pgstat_send_slru() right after the existing\n>>>>> call in CheckpointerMain, right?\n>>>>\n>>>> Checkpointer sends off activity statistics to the stats collector in\n>>>> two places, by calling pgstat_send_bgwriter(). What about calling\n>>>> pgstat_send_slru() just after pgstat_send_bgwriter()?\n>>>>\n>>>\n>>> Yep, that's what I proposed.\n>>>\n>>>> In previous email, I mentioned checkpointer just as an example.\n>>>> So probably we need to investigate what process should send slru stats,\n>>>> other than checkpointer. I guess that at least autovacuum worker,\n>>>> logical replication walsender and parallel query worker (maybe this has\n>>>> been already covered by parallel query some mechanisms. Sorry I've\n>>>> not checked that) would need to send its slru stats.\n>>>>\n>>>\n>>> Probably. Do you have any other process type in mind?\n> \n> No. For now what I'm in mind are just checkpointer, autovacuum worker,\n> logical replication walsender and parallel query worker. 
Seems logical\n> replication worker and syncer have sent slru stats via pgstat_report_stat().\n> \n>> I've looked at places calling pgstat_send_* functions, and I found\n>> thsese places:\n>>\n>> src/backend/postmaster/bgwriter.c\n>>\n>> - AFAIK it merely writes out dirty shared buffers, so likely irrelevant.\n>>\n>> src/backend/postmaster/checkpointer.c\n>>\n>> - This is what we're already discussing here.\n>>\n>> src/backend/postmaster/pgarch.c\n>>\n>> - Seems irrelevant.\n>>\n>>\n>> I'm a bit puzzled why we're not sending any stats from walsender, which\n>> I suppose could do various stuff during logical decoding.\n> \n> Not sure why, but that seems an oversight...\n> \n> \n> Also I found another minor issue; SLRUStats has not been initialized to 0\n> and which could update the counters unexpectedly. Attached patch fixes\n> this issue.\n\nThis is minor issue, but basically it's better to fix that before\nv13 beta1 release. So barring any objection, I will commit the patch.\n\n+\t\tvalues[8] = Int64GetDatum(stat.stat_reset_timestamp);\n\nAlso I found another small issue: pg_stat_get_slru() returns the timestamp\nwhen pg_stat_slru was reset by using Int64GetDatum(). This works maybe\nbecause the timestamp is also int64. But TimestampTzGetDatum() should\nbe used here, instead. Patch attached. Thought?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 13 May 2020 16:10:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Wed, May 13, 2020 at 04:10:30PM +0900, Fujii Masao wrote:\n>\n>\n>On 2020/05/07 13:47, Fujii Masao wrote:\n>>\n>>\n>>On 2020/05/03 1:59, Tomas Vondra wrote:\n>>>On Sat, May 02, 2020 at 12:55:00PM +0200, Tomas Vondra wrote:\n>>>>On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote:\n>>>>>\n>>>>>...\n>>>>\n>>>>>>>Another thing I found is; pgstat_send_slru() should be called also by\n>>>>>>>other processes than backend? For example, since clog data is flushed\n>>>>>>>basically by checkpointer, checkpointer seems to need to send slru stats.\n>>>>>>>Otherwise, pg_stat_slru.flushes would not be updated.\n>>>>>>>\n>>>>>>\n>>>>>>Hmmm, that's a good point. If I understand the issue correctly, the\n>>>>>>checkpointer accumulates the stats but never really sends them because\n>>>>>>it never calls pgstat_report_stat/pgstat_send_slru. That's only called\n>>>>>>from PostgresMain, but not from CheckpointerMain.\n>>>>>\n>>>>>Yes.\n>>>>>\n>>>>>>I think we could simply add pgstat_send_slru() right after the existing\n>>>>>>call in CheckpointerMain, right?\n>>>>>\n>>>>>Checkpointer sends off activity statistics to the stats collector in\n>>>>>two places, by calling pgstat_send_bgwriter(). What about calling\n>>>>>pgstat_send_slru() just after pgstat_send_bgwriter()?\n>>>>>\n>>>>\n>>>>Yep, that's what I proposed.\n>>>>\n>>>>>In previous email, I mentioned checkpointer just as an example.\n>>>>>So probably we need to investigate what process should send slru stats,\n>>>>>other than checkpointer. I guess that at least autovacuum worker,\n>>>>>logical replication walsender and parallel query worker (maybe this has\n>>>>>been already covered by parallel query some mechanisms. Sorry I've\n>>>>>not checked that) would need to send its slru stats.\n>>>>>\n>>>>\n>>>>Probably. Do you have any other process type in mind?\n>>\n>>No. For now what I'm in mind are just checkpointer, autovacuum worker,\n>>logical replication walsender and parallel query worker. 
Seems logical\n>>replication worker and syncer have sent slru stats via pgstat_report_stat().\n>>\n>>>I've looked at places calling pgstat_send_* functions, and I found\n>>>thsese places:\n>>>\n>>>src/backend/postmaster/bgwriter.c\n>>>\n>>>- AFAIK it merely writes out dirty shared buffers, so likely irrelevant.\n>>>\n>>>src/backend/postmaster/checkpointer.c\n>>>\n>>>- This is what we're already discussing here.\n>>>\n>>>src/backend/postmaster/pgarch.c\n>>>\n>>>- Seems irrelevant.\n>>>\n>>>\n>>>I'm a bit puzzled why we're not sending any stats from walsender, which\n>>>I suppose could do various stuff during logical decoding.\n>>\n>>Not sure why, but that seems an oversight...\n>>\n>>\n>>Also I found another minor issue; SLRUStats has not been initialized to 0\n>>and which could update the counters unexpectedly. Attached patch fixes\n>>this issue.\n>\n>This is minor issue, but basically it's better to fix that before\n>v13 beta1 release. So barring any objection, I will commit the patch.\n>\n>+\t\tvalues[8] = Int64GetDatum(stat.stat_reset_timestamp);\n>\n>Also I found another small issue: pg_stat_get_slru() returns the timestamp\n>when pg_stat_slru was reset by using Int64GetDatum(). This works maybe\n>because the timestamp is also int64. But TimestampTzGetDatum() should\n>be used here, instead. Patch attached. Thought?\n>\n\nI agree with both fixes.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 13 May 2020 10:21:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/13 17:21, Tomas Vondra wrote:\n> On Wed, May 13, 2020 at 04:10:30PM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/05/07 13:47, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/05/03 1:59, Tomas Vondra wrote:\n>>>> On Sat, May 02, 2020 at 12:55:00PM +0200, Tomas Vondra wrote:\n>>>>> On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote:\n>>>>>>\n>>>>>> ...\n>>>>>\n>>>>>>>> Another thing I found is; pgstat_send_slru() should be called also by\n>>>>>>>> other processes than backend? For example, since clog data is flushed\n>>>>>>>> basically by checkpointer, checkpointer seems to need to send slru stats.\n>>>>>>>> Otherwise, pg_stat_slru.flushes would not be updated.\n>>>>>>>>\n>>>>>>>\n>>>>>>> Hmmm, that's a good point. If I understand the issue correctly, the\n>>>>>>> checkpointer accumulates the stats but never really sends them because\n>>>>>>> it never calls pgstat_report_stat/pgstat_send_slru. That's only called\n>>>>>>> from PostgresMain, but not from CheckpointerMain.\n>>>>>>\n>>>>>> Yes.\n>>>>>>\n>>>>>>> I think we could simply add pgstat_send_slru() right after the existing\n>>>>>>> call in CheckpointerMain, right?\n>>>>>>\n>>>>>> Checkpointer sends off activity statistics to the stats collector in\n>>>>>> two places, by calling pgstat_send_bgwriter(). What about calling\n>>>>>> pgstat_send_slru() just after pgstat_send_bgwriter()?\n>>>>>>\n>>>>>\n>>>>> Yep, that's what I proposed.\n>>>>>\n>>>>>> In previous email, I mentioned checkpointer just as an example.\n>>>>>> So probably we need to investigate what process should send slru stats,\n>>>>>> other than checkpointer. I guess that at least autovacuum worker,\n>>>>>> logical replication walsender and parallel query worker (maybe this has\n>>>>>> been already covered by parallel query some mechanisms. Sorry I've\n>>>>>> not checked that) would need to send its slru stats.\n>>>>>>\n>>>>>\n>>>>> Probably. Do you have any other process type in mind?\n>>>\n>>> No. 
For now what I'm in mind are just checkpointer, autovacuum worker,\n>>> logical replication walsender and parallel query worker. Seems logical\n>>> replication worker and syncer have sent slru stats via pgstat_report_stat().\n>>>\n>>>> I've looked at places calling pgstat_send_* functions, and I found\n>>>> thsese places:\n>>>>\n>>>> src/backend/postmaster/bgwriter.c\n>>>>\n>>>> - AFAIK it merely writes out dirty shared buffers, so likely irrelevant.\n>>>>\n>>>> src/backend/postmaster/checkpointer.c\n>>>>\n>>>> - This is what we're already discussing here.\n>>>>\n>>>> src/backend/postmaster/pgarch.c\n>>>>\n>>>> - Seems irrelevant.\n>>>>\n>>>>\n>>>> I'm a bit puzzled why we're not sending any stats from walsender, which\n>>>> I suppose could do various stuff during logical decoding.\n>>>\n>>> Not sure why, but that seems an oversight...\n>>>\n>>>\n>>> Also I found another minor issue; SLRUStats has not been initialized to 0\n>>> and which could update the counters unexpectedly. Attached patch fixes\n>>> this issue.\n>>\n>> This is minor issue, but basically it's better to fix that before\n>> v13 beta1 release. So barring any objection, I will commit the patch.\n>>\n>> + values[8] = Int64GetDatum(stat.stat_reset_timestamp);\n>>\n>> Also I found another small issue: pg_stat_get_slru() returns the timestamp\n>> when pg_stat_slru was reset by using Int64GetDatum(). This works maybe\n>> because the timestamp is also int64. But TimestampTzGetDatum() should\n>> be used here, instead. Patch attached. Thought?\n>>\n> \n> I agree with both fixes.\n\nPushed both. Thanks!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 13 May 2020 22:22:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2020/05/13 17:21, Tomas Vondra wrote:\n>> On Wed, May 13, 2020 at 04:10:30PM +0900, Fujii Masao wrote:\n>>> Also I found another minor issue; SLRUStats has not been initialized to 0\n>>> and which could update the counters unexpectedly. Attached patch fixes\n>>> this issue.\n\n> Pushed both. Thanks!\n\nWhy is that necessary? A static variable is defined by C to start off\nas zeroes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 May 2020 10:26:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/13 23:26, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> On 2020/05/13 17:21, Tomas Vondra wrote:\n>>> On Wed, May 13, 2020 at 04:10:30PM +0900, Fujii Masao wrote:\n>>>> Also I found another minor issue; SLRUStats has not been initialized to 0\n>>>> and which could update the counters unexpectedly. Attached patch fixes\n>>>> this issue.\n> \n>> Pushed both. Thanks!\n> \n> Why is that necessary? A static variable is defined by C to start off\n> as zeroes.\n\nBecause SLRUStats is not a static variable. No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 13 May 2020 23:46:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Wed, May 13, 2020 at 10:26:39AM -0400, Tom Lane wrote:\n>Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> On 2020/05/13 17:21, Tomas Vondra wrote:\n>>> On Wed, May 13, 2020 at 04:10:30PM +0900, Fujii Masao wrote:\n>>>> Also I found another minor issue; SLRUStats has not been initialized to 0\n>>>> and which could update the counters unexpectedly. Attached patch fixes\n>>>> this issue.\n>\n>> Pushed both. Thanks!\n>\n>Why is that necessary? A static variable is defined by C to start off\n>as zeroes.\n>\n\nBut is it a static variable? It's not declared as 'static' but maybe we\ncan assume it inits to zeroes anyway? I see we do that for\nBgWriterStats.\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 May 2020 16:57:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, May 13, 2020 at 10:26:39AM -0400, Tom Lane wrote:\n>> Why is that necessary? A static variable is defined by C to start off\n>> as zeroes.\n\n> But is it a static variable? It's not declared as 'static' but maybe we\n> can assume it inits to zeroes anyway? I see we do that for\n> BgWriterStats.\n\nSorry, by \"static\" I meant \"statically allocated\", not \"private to\nthis module\". I'm sure the C standard has some more precise terminology\nfor this distinction, but I forget what it is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 May 2020 11:01:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Wed, May 13, 2020 at 11:01:47AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Wed, May 13, 2020 at 10:26:39AM -0400, Tom Lane wrote:\n>>> Why is that necessary? A static variable is defined by C to start off\n>>> as zeroes.\n>\n>> But is it a static variable? It's not declared as 'static' but maybe we\n>> can assume it inits to zeroes anyway? I see we do that for\n>> BgWriterStats.\n>\n>Sorry, by \"static\" I meant \"statically allocated\", not \"private to\n>this module\". I'm sure the C standard has some more precise terminology\n>for this distinction, but I forget what it is.\n>\n\nAh, I see. I'm no expert in reading C standard (or any other standard),\nbut a quick google search yielded this section of C99 standard:\n\n-------------------------------------------------------------------------\nIf an object that has static storage duration is not initialized\nexplicitly, then:\n\n- if it has pointer type, it is initialized to a null pointer;\n\n- if it has arithmetic type, it is initialized to (positive or unsigned)\n zero;\n\n- if it is an aggregate, every member is initialized (recursively)\n according to these rules;\n\n- if it is a union, the first named member is initialized (recursively)\n according to these rules\n-------------------------------------------------------------------------\n\nI assume the SLRU variable counts as aggregate, with members having\narithmetic types. In which case it really should be initialized to 0.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 May 2020 17:23:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Wed, May 13, 2020 at 11:46:30PM +0900, Fujii Masao wrote:\n>\n>\n>On 2020/05/13 23:26, Tom Lane wrote:\n>>Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>>On 2020/05/13 17:21, Tomas Vondra wrote:\n>>>>On Wed, May 13, 2020 at 04:10:30PM +0900, Fujii Masao wrote:\n>>>>>Also I found another minor issue; SLRUStats has not been initialized to 0\n>>>>>and which could update the counters unexpectedly. Attached patch fixes\n>>>>>this issue.\n>>\n>>>Pushed both. Thanks!\n>>\n>>Why is that necessary? A static variable is defined by C to start off\n>>as zeroes.\n>\n>Because SLRUStats is not a static variable. No?\n>\n\nI think it counts as a variable with \"static storage duration\" per 6.7.8\n(para 10), see [1]. I wasn't aware of this either, but it probably means\nthe memset is unnecessary.\n\nAlso, it seems a bit strange/confusing to handle this differently from\nBgWriterStats. And that worked fine without the init for years ...\n\n\n[1] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 May 2020 17:28:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I think it counts as a variable with \"static storage duration\" per 6.7.8\n> (para 10), see [1]. I wasn't aware of this either, but it probably means\n> the memset is unnecessary.\n> Also, it seems a bit strange/confusing to handle this differently from\n> BgWriterStats. And that worked fine without the init for years ...\n\nYeah, exactly.\n\nThere might be merit in memsetting it if we thought that it could have\nbecome nonzero in the postmaster during a previous shmem cycle-of-life.\nBut the postmaster really shouldn't be accumulating such counts; and\nif it is, then we have a bigger problem, because child processes would\nbe inheriting those counts via fork.\n\nI think this change is unnecessary and should be reverted to avoid\nfuture confusion about whether somehow it is necessary.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 May 2020 11:38:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/14 0:38, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I think it counts as a variable with \"static storage duration\" per 6.7.8\n>> (para 10), see [1]. I wasn't aware of this either, but it probably means\n>> the memset is unnecessary.\n>> Also, it seems a bit strange/confusing to handle this differently from\n>> BgWriterStats. And that worked fine without the init for years ...\n> \n> Yeah, exactly.\n> \n> There might be merit in memsetting it if we thought that it could have\n> become nonzero in the postmaster during a previous shmem cycle-of-life.\n> But the postmaster really shouldn't be accumulating such counts; and\n> if it is, then we have a bigger problem, because child processes would\n> be inheriting those counts via fork.\n\nIn my previous test, I thought I observed that the counters are already\nupdated at the beginning of some processes. So I thought that\nthe counters need to be initialized. Sorry, that's my fault...\n\nSo I tried the similar test again and found that postmaster seems to be\nable to increment the counters unless I'm missing something.\nFor example,\n\n frame #2: 0x000000010d93845f postgres`pgstat_count_slru_page_zeroed(ctl=0x000000010de27320) at pgstat.c:6739:2\n frame #3: 0x000000010d5922ba postgres`SimpleLruZeroPage(ctl=0x000000010de27320, pageno=0) at slru.c:290:2\n frame #4: 0x000000010d6b9ae2 postgres`AsyncShmemInit at async.c:568:12\n frame #5: 0x000000010d9da9a6 postgres`CreateSharedMemoryAndSemaphores at ipci.c:265:2\n frame #6: 0x000000010d93f679 postgres`reset_shared at postmaster.c:2664:2\n frame #7: 0x000000010d93d253 postgres`PostmasterMain(argc=3, argv=0x00007fad56402e00) at postmaster.c:1008:2\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 14 May 2020 01:06:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On 2020-May-14, Fujii Masao wrote:\n\n> So I tried the similar test again and found that postmaster seems to be\n> able to increment the counters unless I'm missing something.\n> For example,\n> \n> frame #2: 0x000000010d93845f postgres`pgstat_count_slru_page_zeroed(ctl=0x000000010de27320) at pgstat.c:6739:2\n> frame #3: 0x000000010d5922ba postgres`SimpleLruZeroPage(ctl=0x000000010de27320, pageno=0) at slru.c:290:2\n> frame #4: 0x000000010d6b9ae2 postgres`AsyncShmemInit at async.c:568:12\n> frame #5: 0x000000010d9da9a6 postgres`CreateSharedMemoryAndSemaphores at ipci.c:265:2\n> frame #6: 0x000000010d93f679 postgres`reset_shared at postmaster.c:2664:2\n> frame #7: 0x000000010d93d253 postgres`PostmasterMain(argc=3, argv=0x00007fad56402e00) at postmaster.c:1008:2\n\nUmm. I have the feeling that we'd rather avoid these updates in\npostmaster, per our general rule that postmaster should not touch shared\nmemory. However, it might be that it's okay in this case, as it only\nhappens just as shmem is being \"created\", so other processes have not\nyet had any time to mess things up. (IIRC only the Async module is\ndoing that.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 13 May 2020 12:14:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/14 1:14, Alvaro Herrera wrote:\n> On 2020-May-14, Fujii Masao wrote:\n> \n>> So I tried the similar test again and found that postmaster seems to be\n>> able to increment the counters unless I'm missing something.\n>> For example,\n>>\n>> frame #2: 0x000000010d93845f postgres`pgstat_count_slru_page_zeroed(ctl=0x000000010de27320) at pgstat.c:6739:2\n>> frame #3: 0x000000010d5922ba postgres`SimpleLruZeroPage(ctl=0x000000010de27320, pageno=0) at slru.c:290:2\n>> frame #4: 0x000000010d6b9ae2 postgres`AsyncShmemInit at async.c:568:12\n>> frame #5: 0x000000010d9da9a6 postgres`CreateSharedMemoryAndSemaphores at ipci.c:265:2\n>> frame #6: 0x000000010d93f679 postgres`reset_shared at postmaster.c:2664:2\n>> frame #7: 0x000000010d93d253 postgres`PostmasterMain(argc=3, argv=0x00007fad56402e00) at postmaster.c:1008:2\n> \n> Umm. I have the feeling that we'd rather avoid these updates in\n> postmaster, per our general rule that postmaster should not touch shared\n> memory. However, it might be that it's okay in this case, as it only\n> happens just as shmem is being \"created\", so other processes have not\n> yet had any time to mess things up.\n\nBut since the counter that postmaster incremented is propagated to\nchild processes via fork, it should be zeroed at postmaster or the\nbeginning of child process? Otherwise that counter always starts\nwith non-zero in child process.\n\n> (IIRC only the Async module is\n> doing that.)\n\nYes, as far as I do the test.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 14 May 2020 01:41:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> But since the counter that postmaster incremented is propagated to\n> child processes via fork, it should be zeroed at postmaster or the\n> beginning of child process? Otherwise that counter always starts\n> with non-zero in child process.\n\nYes, if the postmaster is incrementing these counts then we would\nhave to reset them at the start of each child process. I share\nAlvaro's feeling that that's bad and we don't want to do it.\n\n>> (IIRC only the Async module is doing that.)\n\nHm, maybe we can fix that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 May 2020 12:55:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "I wrote:\n>>> (IIRC only the Async module is doing that.)\n\n> Hm, maybe we can fix that.\n\nYeah, it's quite easy to make async.c postpone its first write to the\nasync SLRU. This seems like a win all around, because many installations\ndon't use NOTIFY and so will never need to do that work at all. In\ninstallations that do use notify, this costs an extra instruction or\ntwo per NOTIFY, but that's down in the noise.\n\nI got through check-world with the assertion shown that we are not\ncounting any SLRU operations in the postmaster. Don't know if we\nwant to commit that or not --- any thoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 13 May 2020 13:44:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Wed, May 13, 2020 at 11:46, Fujii Masao <\nmasao.fujii@oss.nttdata.com> wrote:\n\n>\n>\n> On 2020/05/13 23:26, Tom Lane wrote:\n> > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> >> On 2020/05/13 17:21, Tomas Vondra wrote:\n> >>> On Wed, May 13, 2020 at 04:10:30PM +0900, Fujii Masao wrote:\n> >>>> Also I found another minor issue; SLRUStats has not been initialized\n> to 0\n> >>>> and which could update the counters unexpectedly. Attached patch fixes\n> >>>> this issue.\n> >\n> >> Pushed both. Thanks!\n> >\n> > Why is that necessary? A static variable is defined by C to start off\n> > as zeroes.\n>\n> Because SLRUStats is not a static variable. No?\n>\nIMHO, BgWriterStats has the same problem, shouldn't the same be done?\n\n/* Initialize BgWriterStats to zero */\nMemSet(&BgWriterStats, 0, sizeof(BgWriterStats));\n\n/* Initialize SLRU statistics to zero */\nmemset(&SLRUStats, 0, sizeof(SLRUStats));\n\nregards,\nRanier Vilela\n\n\n",
"msg_date": "Wed, 13 May 2020 17:08:39 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/14 2:44, Tom Lane wrote:\n> I wrote:\n>>>> (IIRC only the Async module is doing that.)\n> \n>> Hm, maybe we can fix that.\n> \n> Yeah, it's quite easy to make async.c postpone its first write to the\n> async SLRU. This seems like a win all around, because many installations\n> don't use NOTIFY and so will never need to do that work at all. In\n> installations that do use notify, this costs an extra instruction or\n> two per NOTIFY, but that's down in the noise.\n\nLooks good to me. Thanks for the patch!\n\n> I got through check-world with the assertion shown that we are not\n> counting any SLRU operations in the postmaster. Don't know if we\n> want to commit that or not --- any thoughts?\n\n+1 to add this assertion because basically it's not a good thing\nto access the SLRU in the postmaster, and we may want to fix that\nwhere found. At least if we get rid of the SLRUStats initialization code,\nIMO it's better to add this assertion and ensure that postmaster\ndoesn't update the SLRU stats counters.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 14 May 2020 11:18:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2020/05/14 2:44, Tom Lane wrote:\n>> I got through check-world with the assertion shown that we are not\n>> counting any SLRU operations in the postmaster. Don't know if we\n>> want to commit that or not --- any thoughts?\n\n> +1 to add this assertion because basically it's not good thing\n> to access to SLRU at postmaster and we may want to fix that if found.\n> At least if we get rid of the SLRUStats initialization code,\n> IMO it's better to add this assertion and ensure that postmaster\n> doesn't update the SLRU stats counters.\n\nSeems reasonable --- I'll include it.\n\nIt might be nice to have similar assertions protecting BgWriterStats.\nBut given that we've made that public to be hacked on directly by several\ndifferent modules, I'm not sure that there's any simple way to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 May 2020 22:24:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "\n\nOn 2020/05/07 13:47, Fujii Masao wrote:\n> \n> \n> On 2020/05/03 1:59, Tomas Vondra wrote:\n>> On Sat, May 02, 2020 at 12:55:00PM +0200, Tomas Vondra wrote:\n>>> On Sat, May 02, 2020 at 03:56:07PM +0900, Fujii Masao wrote:\n>>>>\n>>>> ...\n>>>\n>>>>>> Another thing I found is; pgstat_send_slru() should be called also by\n>>>>>> other processes than backend? For example, since clog data is flushed\n>>>>>> basically by checkpointer, checkpointer seems to need to send slru stats.\n>>>>>> Otherwise, pg_stat_slru.flushes would not be updated.\n>>>>>>\n>>>>>\n>>>>> Hmmm, that's a good point. If I understand the issue correctly, the\n>>>>> checkpointer accumulates the stats but never really sends them because\n>>>>> it never calls pgstat_report_stat/pgstat_send_slru. That's only called\n>>>>> from PostgresMain, but not from CheckpointerMain.\n>>>>\n>>>> Yes.\n>>>>\n>>>>> I think we could simply add pgstat_send_slru() right after the existing\n>>>>> call in CheckpointerMain, right?\n>>>>\n>>>> Checkpointer sends off activity statistics to the stats collector in\n>>>> two places, by calling pgstat_send_bgwriter(). What about calling\n>>>> pgstat_send_slru() just after pgstat_send_bgwriter()?\n>>>>\n>>>\n>>> Yep, that's what I proposed.\n>>>\n>>>> In previous email, I mentioned checkpointer just as an example.\n>>>> So probably we need to investigate what process should send slru stats,\n>>>> other than checkpointer. I guess that at least autovacuum worker,\n>>>> logical replication walsender and parallel query worker (maybe this has\n>>>> been already covered by parallel query some mechanisms. Sorry I've\n>>>> not checked that) would need to send its slru stats.\n>>>>\n>>>\n>>> Probably. Do you have any other process type in mind?\n> \n> No. For now what I'm in mind are just checkpointer, autovacuum worker,\n> logical replication walsender and parallel query worker. 
Seems logical\n> replication worker and syncer have sent slru stats via pgstat_report_stat().\n\nLet me go back to this topic. As far as I read the code again, logical\nwalsender reports the stats at the exit via pgstat_beshutdown_hook()\nprocess-exit callback. But it doesn't report the stats while it's running.\nThis is not a problem only for SLRU stats. We would need to consider\nhow to handle the stats by logical walsender, separately from SLRU stats.\n\nAutovacuum worker reports the stats at the exit via pgstat_beshutdown_hook(),\ntoo. Unlike logical walsender, autovacuum worker is not a process that\nbasically keeps running during the service. It exits after it does vacuum or\nanalyze. So it's not bad to report the stats only at the exit, in the autovacuum\nworker case. There is no need to add extra code for SLRU stats report by\nautovacuum worker.\n\nParallel worker is in the same situation as autovacuum worker. Its lifetime\nis basically short and its stats are reported at the exit via\npgstat_beshutdown_hook().\n\npgstat_beshutdown_hook() reports the stats only when MyDatabaseId is valid.\nCheckpointer calls pgstat_beshutdown_hook() at the exit, but doesn't report\nthe stats because its MyDatabaseId is invalid. Also it doesn't report the SLRU\nstats while it's running. As we discussed upthread, we need to make\ncheckpointer call pgstat_send_slru() just after pgstat_send_bgwriter().\n\nHowever even if we do this, the stats updated during the last checkpointer's\nactivity (e.g., shutdown checkpoint) seem not to be reported because\npgstat_beshutdown_hook() doesn't report the stats in the checkpointer case.\nDo we need to address this issue? If yes, we would need to change\npgstat_beshutdown_hook() or register another checkpointer-exit callback\nthat sends the stats. Thoughts?\n\nStartup process is in the same situation as checkpointer process. It reports\nthe stats neither at the exit nor while it's running. 
But, like logical\nwalsender, this is not a problem only for SLRU stats. We would need to\nconsider how to handle the stats by the startup process, separately from SLRU\nstats.\n\nTherefore what we can do right now seems to be to make checkpointer report the SLRU\nstats while it's running. Other issues need more time to investigate...\nThoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 14 May 2020 15:27:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "On Thu, May 14, 2020 at 2:27 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Therefore what we can do right now seems to make checkpointer report the SLRU\n> stats while it's running. Other issues need more time to investigate...\n> Thought?\n\nI'm confused by why SLRU statistics are reported by messages sent to\nthe stats collector rather than by just directly updating shared\nmemory. For database or table statistics there can be any number of\nobjects and we can't know in advance how many there will be, so we\ncan't set aside shared memory for the stats in advance. For SLRUs,\nthere's no such problem. Just having the individual backends\nperiodically merge their accumulated backend-local counters into the\nshared counters seems like it would be way simpler and more\nperformant.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 14 May 2020 14:52:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm confused by why SLRU statistics are reported by messages sent to\n> the stats collector rather than by just directly updating shared\n> memory.\n\nIt would be better to consider that as an aspect of the WIP stats\ncollector redesign, rather than inventing a bespoke mechanism for\nSLRU stats that's outside the stats collector (and, no doubt,\nwould have its own set of bugs). We don't need to invent even more\npathways for this sort of data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 May 2020 15:45:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SLRU statistics"
}
] |
[
{
"msg_contents": "Hi, greetings everyone.\n\nContinuing the process of improving windows port, I'm trying to fix some\nleaks.\n\nbest regards,\nRanier Vilela",
"msg_date": "Sun, 19 Jan 2020 17:49:08 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On Sun, Jan 19, 2020 at 9:49 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n>\n> Continuing the process of improving windows port, I'm trying to fix some\n> leaks.\n>\n>\nSome of the code this patch touches is not windows port only, so the\nsubject might be misleading reviewers.\n\nIt will be easier to review if you break this patch into smaller and\nindependent committable patches, as one per file.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Sun, Jan 19, 2020 at 9:49 PM Ranier Vilela <ranier.vf@gmail.com> wrote:Continuing the process of improving windows port, I'm trying to fix some leaks.Some of the code this patch touches is not windows port only, so the subject might be misleading reviewers.It will be easier to review if you break this patch into smaller and independent committable patches, as one per file.Regards,Juan José Santamaría Flecha",
"msg_date": "Tue, 21 Jan 2020 10:17:51 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "Em ter., 21 de jan. de 2020 às 06:18, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escreveu:\n\n> Some of the code this patch touches is not windows port only, so the\n> subject might be misleading reviewers.\n>\n True. Some leaks occurs at other platforms.\n\n> It will be easier to review if you break this patch into smaller and\n> independent committable patches, as one per file.\n>\nDone.\n\nI separated the patch, one per file, to facilitate the review according to\nyour suggestion.\nIt looked like this:\n1. /src/backend/postmaster/postmaster.c\nIn case of failure, it was necessary to deallocate the param pointer and\nrelease the handle properly.\n2. /src/backend/port/win32_shmem.c\nIn case of failure, the reserved memory can be released immediately, within\nthe function.\n3. /src/common/restricted_token.c\nIf it is not possible to open the token, better release the dll, we may be\nthe only one to use it.\nIf it is not possible to allocate the SID, it was necessary to release the\nhandle and release the DLL properly.\nThe cmdline variable has yet to be released.\n4. src / backend / regex / rege_dfa.c\nThe free_dfa function must free the entire structure, including itself.\n5. src / backend / regex / regexec.c\nThe use of the NOERR () macro, hides the return, which causes the failure\nto free the memory properly.\n6. src / common / logging.c\nThe strdup function destroys the reference to the old pointer, in case of a\nloop, it is necessary to release it beforehand.\nThe free function with variable NULL, has no effect and can be called\nwithout problems.\n7. /src/backend/libpq/auth.c\nIn case of failure, it was necessary to release the handlers properly.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 21 Jan 2020 11:01:07 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 11:01:07AM -0300, Ranier Vilela wrote:\n> Done.\n\nI would recommend that you also run all the regression tests present\nin the source before sending a patch. If you don't know how to do\nthat, there is some documentation on the matter:\nhttps://www.postgresql.org/docs/current/regress-run.html\n\nIf you send any patches, it is always important to make sure that\nnothing you change breaks the existing coverage (on top of reading the\nsurrounding code and understanding its context, of course)\n\nHint: this crashes at initdb time.\n--\nMichael",
"msg_date": "Wed, 22 Jan 2020 15:12:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "Hi,\nAfter review the patches and build all and run regress checks for each\npatch, those are the ones that don't break.\nNot all leaks detected by Coverity are fixed.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 22 Jan 2020 17:51:51 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 05:51:51PM -0300, Ranier Vilela wrote:\n> After review the patches and build all and run regress checks for each\n> patch, those are the ones that don't break.\n\nThere is some progress. You should be careful about your patches,\nas they generate compiler warnings. Here is one quote from gcc-9:\nlogging.c:87:13: warning: passing argument 1 of ‘free’ discards \n‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]\n 87 | free(sgr_warning);\nBut there are others.\n\n if (strcmp(name, \"error\") == 0)\n+ {\n+ free(sgr_error);\n sgr_error = strdup(value);\n+ }\nI don't see the point of doing that in logging.c. pg_logging_init()\nis called only once per tools, so this cannot happen. Another point\nthat may matter here though is that we do not complain about OOMs.\nThat's really unlikely to happen, and if it happens it leads to\npartially colored output.\n\n- NOERR();\n+ if (ISERR())\n+ {\n+ freedfa(s);\n+ return v->err;\n+ }\nCan you design a query where this is a problem?\n\n pg_log_error(\"could not allocate SIDs: error code %lu\",\n GetLastError());\n+ CloseHandle(origToken);\n+ FreeLibrary(Advapi32Handle);\n[...]\n pg_log_error(\"could not open process token: error code %lu\",\n GetLastError());\n+ FreeLibrary(Advapi32Handle);\n return 0;\nFor those two ones, it looks that you are right. However, I think\nthat it would be safer to check if Advapi32Handle is NULL for both.\n\n@@ -187,6 +190,7 @@ get_restricted_token(void)\n }\n exit(x);\n }\n+ free(cmdline);\nAnything allocated with pg_strdup() should be free'd with pg_free(),\nthat's a matter of consistency.\n\n+++ b/src/backend/postmaster/postmaster.c\n@@ -4719,6 +4719,8 @@ retry:\n if (cmdLine[sizeof(cmdLine) - 2] != '\\0')\n {\n elog(LOG, \"subprocess command line too long\");\n+ UnmapViewOfFile(param);\n+ CloseHandle(paramHandle);\nThe three ones in postmaster.c are correct guesses. 
\n\n+ if (sspictx != NULL)\n+ {\n+ DeleteSecurityContext(sspictx);\n+ free(sspictx);\n+ }\n+ FreeCredentialsHandle(&sspicred);\nThis stuff is correctly free'd after calling AcceptSecurityContext()\nin the SSPI code, but not the two other code paths. Looks right.\nActually, for the first one, wouldn't it be better to free those\nresources *before* ereport(ERROR) on ERRCODE_PROTOCOL_VIOLATION?\nThat's an authentication path so it does not really matter but..\n\n ldap_unbind(*ldap);\n+ FreeLibrary(ldaphandle);\n return STATUS_ERROR;\nYep. That's consistent to clean up.\n\n+ if (VirtualFree(ShmemProtectiveRegion, 0, MEM_RELEASE) == 0)\n+ elog(FATAL, \"failed to release reserved memory region\n(addr=%p): error code %lu\",\n+ ShmemProtectiveRegion, GetLastError());\n return false;\nNo, that's not right. I think that it is possible to loop over\nShmemProtectiveRegion in some cases. And actually, your patch is dead\nwrong because this is some code called by the postmaster and it cannot\nuse FATAL.\n\n> Not all leaks detected by Coverity are fixed.\n\nCoverity is a static analyzer, it misses a lot of things tied to the\ncontext of the code, so you need to take its suggestions with a pinch\nof salt.\n--\nMichael",
"msg_date": "Fri, 24 Jan 2020 16:13:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "Em sex., 24 de jan. de 2020 às 04:13, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, Jan 22, 2020 at 05:51:51PM -0300, Ranier Vilela wrote:\n> > After review the patches and build all and run regress checks for each\n> > patch, those are the ones that don't break.\n>\n> There is some progress. You should be careful about your patches,\n> as they generate compiler warnings. Here is one quote from gcc-9:\n> logging.c:87:13: warning: passing argument 1 of ‘free’ discards\n> ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]\n> 87 | free(sgr_warning);\n>\nWell, in this cases, the solution is cast.\nfree((char *) sgr_warning);\n\nBut there are others.\n>\n> if (strcmp(name, \"error\") == 0)\n> + {\n> + free(sgr_error);\n> sgr_error = strdup(value);\n> + }\n> I don't see the point of doing that in logging.c. pg_logging_init()\n> is called only once per tools, so this cannot happen. Another point\n> that may matter here though is that we do not complain about OOMs.\n> That's really unlikely to happen, and if it happens it leads to\n> partially colored output.\n>\nCoverity show the alert, because he tries all the possibilites.Is inside a\nloop.\nIt seems to me that the only way to happen is by the user, by introducing a\nrepeated and wrong sequence.\nIf ok, we can discard this patch, but free doens't hurt here.\n\n\n> - NOERR();\n> + if (ISERR())\n> + {\n> + freedfa(s);\n> + return v->err;\n> + }\n> Can you design a query where this is a problem?\n>\n I think for now, I’m not able to do it.\nBut, the fix is better do not you think.\nThe macro hides the return and the exchange does not change the final size.\nIf the ISERR() it never occurs here, nor would we need the macro.\n\n pg_log_error(\"could not allocate SIDs: error code %lu\",\n> GetLastError());\n> + CloseHandle(origToken);\n> + FreeLibrary(Advapi32Handle);\n> [...]\n> pg_log_error(\"could not open process token: error code %lu\",\n> GetLastError());\n> + 
FreeLibrary(Advapi32Handle);\n> return 0;\n> For those two ones, it looks that you are right. However, I think\n> that it would be safer to check if Advapi32Handle is NULL for both.\n>\nMichael, I did it differently and modified the function to not need to test\nNULL, I think it was better.\n\n @@ -187,6 +190,7 @@ get_restricted_token(void)\n\n> }\n> exit(x);\n> }\n> + free(cmdline);\n> Anything allocated with pg_strdup() should be free'd with pg_free(),\n> that's a matter of consistency.\n>\nDone.\n\n\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -4719,6 +4719,8 @@ retry:\n> if (cmdLine[sizeof(cmdLine) - 2] != '\\0')\n> {\n> elog(LOG, \"subprocess command line too long\");\n> + UnmapViewOfFile(param);\n> + CloseHandle(paramHandle);\n> The three ones in postmaster.c are correct guesses.\n>\n> Does that mean it is correct?\n\n\n> + if (sspictx != NULL)\n> + {\n> + DeleteSecurityContext(sspictx);\n> + free(sspictx);\n> + }\n> + FreeCredentialsHandle(&sspicred);\n> This stuff is correctly free'd after calling AcceptSecurityContext()\n> in the SSPI code, but not the two other code paths. Looks right.\n> Actually, for the first one, wouldn't it be better to free those\n> resources *before* ereport(ERROR) on ERRCODE_PROTOCOL_VIOLATION?\n> That's an authentication path so it does not really matter but..\n>\n Done.\n\n\n> ldap_unbind(*ldap);\n> + FreeLibrary(ldaphandle);\n> return STATUS_ERROR;\n> Yep. That's consistent to clean up.\n>\nOk.\n\n>\n> + if (VirtualFree(ShmemProtectiveRegion, 0, MEM_RELEASE) == 0)\n> + elog(FATAL, \"failed to release reserved memory region\n> (addr=%p): error code %lu\",\n> + ShmemProtectiveRegion, GetLastError());\n> return false;\n> No, that's not right. I think that it is possible to loop over\n> ShmemProtectiveRegion in some cases. 
And actually, your patch is dead\n> wrong because this is some code called by the postmaster and it cannot\n> use FATAL.\n>\nFATAL changed to LOG, you are right.\nIn case of loop, VirtualAllocEx wouldn't be called again?\n\n\n> > Not all leaks detected by Coverity are fixed.\n>\n> Coverity is a static analyzer, it misses a lot of things tied to the\n> context of the code, so you need to take its suggestions with a pinch\n> of salt.\n>\nOh yes, true.\nI think that all alerts are true, because they test all possibilities, even\nthose that are rarely, or almost impossible to happen.\n\nThank you for the review.\n\nBest regards,\nRanier Vilela",
"msg_date": "Fri, 24 Jan 2020 09:37:25 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "Last time improvement to restricted_token.c\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 24 Jan 2020 09:59:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 09:37:25AM -0300, Ranier Vilela wrote:\n> Em sex., 24 de jan. de 2020 às 04:13, Michael Paquier <michael@paquier.xyz>\n> escreveu:\n>> There is some progress. You should be careful about your patches,\n>> as they generate compiler warnings. Here is one quote from gcc-9:\n>> logging.c:87:13: warning: passing argument 1 of ‘free’ discards\n>> ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]\n>> 87 | free(sgr_warning);\n>\n> Well, in this cases, the solution is cast.\n> free((char *) sgr_warning);\n\nApplying blindly a cast is never a good practice.\n\n>> if (strcmp(name, \"error\") == 0)\n>> + {\n>> + free(sgr_error);\n>> sgr_error = strdup(value);\n>> + }\n>> I don't see the point of doing that in logging.c. pg_logging_init()\n>> is called only once per tools, so this cannot happen. Another point\n>> that may matter here though is that we do not complain about OOMs.\n>> That's really unlikely to happen, and if it happens it leads to\n>> partially colored output.\n>\n> Coverity show the alert, because he tries all the possibilites.Is\n> inside a loop. It seems to me that the only way to happen is by the\n> user, by introducing a repeated and wrong sequence.\n\nAgain, Coverity may say something that does not apply to the reality,\nand sometimes it misses some spots. Here we should be looking at\nquery patterns which involve a memory leak. So I'd rather look at\nthat separately, and actually on a separate thread because that's not\na Windows-only code path. If you'd look at the rest of the regex\ncode, I suspect that there could a couple of ramifications which have\nsimilar problems (I haven't looked at that myself).\n\n>> For those two ones, it looks that you are right. 
However, I think\n>> that it would be safer to check if Advapi32Handle is NULL for both.\n>\n> Michael, I did it differently and modified the function to not need to test\n> NULL, I think it was better.\n\nadvapi32.dll should be present in any modern Windows platform, so\nlogging an error is actually fine by me instead of a warning.\n\nI have shaved from the patch the parts which are not completely\nrelevant to this thread, and committed a version addressing the most\nobvious leaks after doing more tests, including the changes for\nrestricted_token.c as of 10a5252. Thanks.\n--\nMichael",
"msg_date": "Mon, 27 Jan 2020 11:04:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "Em dom., 26 de jan. de 2020 às 23:04, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Fri, Jan 24, 2020 at 09:37:25AM -0300, Ranier Vilela wrote:\n> > Em sex., 24 de jan. de 2020 às 04:13, Michael Paquier <\n> michael@paquier.xyz>\n> > escreveu:\n> >> There is some progress. You should be careful about your patches,\n> >> as they generate compiler warnings. Here is one quote from gcc-9:\n> >> logging.c:87:13: warning: passing argument 1 of ‘free’ discards\n> >> ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]\n> >> 87 | free(sgr_warning);\n> >\n> > Well, in this cases, the solution is cast.\n> > free((char *) sgr_warning);\n>\n> Applying blindly a cast is never a good practice.\n>\nOk.\n\n>\n> >> if (strcmp(name, \"error\") == 0)\n> >> + {\n> >> + free(sgr_error);\n> >> sgr_error = strdup(value);\n> >> + }\n> >> I don't see the point of doing that in logging.c. pg_logging_init()\n> >> is called only once per tools, so this cannot happen. Another point\n> >> that may matter here though is that we do not complain about OOMs.\n> >> That's really unlikely to happen, and if it happens it leads to\n> >> partially colored output.\n> >\n> > Coverity show the alert, because he tries all the possibilites.Is\n> > inside a loop. It seems to me that the only way to happen is by the\n> > user, by introducing a repeated and wrong sequence.\n>\n> Again, Coverity may say something that does not apply to the reality,\n> and sometimes it misses some spots. Here we should be looking at\n> query patterns which involve a memory leak. So I'd rather look at\n> that separately, and actually on a separate thread because that's not\n> a Windows-only code path. If you'd look at the rest of the regex\n> code, I suspect that there could a couple of ramifications which have\n> similar problems (I haven't looked at that myself).\n>\nSure, as soon as I have time, I take another look.\n\n>\n> >> For those two ones, it looks that you are right. 
However, I think\n> >> that it would be safer to check if Advapi32Handle is NULL for both.\n> >\n> > Michael, I did it differently and modified the function to not need to\n> test\n> > NULL, I think it was better.\n>\n> advapi32.dll should be present in any modern Windows platform, so\n> logging an error is actually fine by me instead of a warning.\n>\n> I have shaved from the patch the parts which are not completely\n> relevant to this thread, and committed a version addressing the most\n> obvious leaks after doing more tests, including the changes for\n> restricted_token.c as of 10a5252. Thanks.\n>\nThank you Michael.\n\nbest regards,\nRanier Vilela",
"msg_date": "Mon, 27 Jan 2020 10:39:35 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 2:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n> No, that's not right. I think that it is possible to loop over\n> ShmemProtectiveRegion in some cases. And actually, your patch is dead\n> wrong because this is some code called by the postmaster and it cannot\n> use FATAL.\n\nUh, really? I am not aware of such a rule.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 15:54:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On 2020-Jan-28, Robert Haas wrote:\n\n> On Fri, Jan 24, 2020 at 2:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > No, that's not right. I think that it is possible to loop over\n> > ShmemProtectiveRegion in some cases. And actually, your patch is dead\n> > wrong because this is some code called by the postmaster and it cannot\n> > use FATAL.\n> \n> Uh, really? I am not aware of such a rule.\n\nI don't think we have ever expressed it as such, but certainly we prefer\npostmaster to be super robust ... rather live with a some hundred bytes\nleak rather than have it die and take the whole database service down\nfor what's essentially a fringe bug that has bothered no one in a decade\nand a half.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 28 Jan 2020 18:06:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "Em ter., 28 de jan. de 2020 às 17:54, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Fri, Jan 24, 2020 at 2:13 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> > No, that's not right. I think that it is possible to loop over\n> > ShmemProtectiveRegion in some cases. And actually, your patch is dead\n> > wrong because this is some code called by the postmaster and it cannot\n> > use FATAL.\n>\n> Uh, really? I am not aware of such a rule.\n>\n> Well, in postmaster.c has a structure that makes use of the variable\nShmemProtectiveRegion, I think it is related to the function in\nsrc/backend/port/win32_shmem.c.\nOn line 575 in src / backend / port / win32_shmem.c, there is a comment\nthat tells to not use FATAL.\n\"Don't use FATAL since we're running in the postmaster.\"\n\nregards,\nRanier Vilela\n\nEm ter., 28 de jan. de 2020 às 17:54, Robert Haas <robertmhaas@gmail.com> escreveu:On Fri, Jan 24, 2020 at 2:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n> No, that's not right. I think that it is possible to loop over\n> ShmemProtectiveRegion in some cases. And actually, your patch is dead\n> wrong because this is some code called by the postmaster and it cannot\n> use FATAL.\n\nUh, really? I am not aware of such a rule.\nWell, in postmaster.c has a structure that makes use of the variable ShmemProtectiveRegion, I think it is related to the function insrc/backend/port/win32_shmem.c.On line 575 in src / backend / port / win32_shmem.c, there is a comment that tells to not use FATAL.\"Don't use FATAL since we're running in the postmaster.\"regards,Ranier Vilela",
"msg_date": "Tue, 28 Jan 2020 18:08:17 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 4:06 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I don't think we have ever expressed it as such, but certainly we prefer\n> postmaster to be super robust ... rather live with a some hundred bytes\n> leak rather than have it die and take the whole database service down\n> for what's essentially a fringe bug that has bothered no one in a decade\n> and a half.\n\nWell, yeah. I mean, I'm not saying it's a good idea in this instance\nto FATAL here. I'm just saying that I don't think there is a general\nrule that code which does FATAL in the postmaster is automatically\nwrong, which is what I took Michael to be suggesting.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 16:11:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "Em ter., 28 de jan. de 2020 às 18:06, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Jan-28, Robert Haas wrote:\n>\n> > On Fri, Jan 24, 2020 at 2:13 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> > > No, that's not right. I think that it is possible to loop over\n> > > ShmemProtectiveRegion in some cases. And actually, your patch is dead\n> > > wrong because this is some code called by the postmaster and it cannot\n> > > use FATAL.\n> >\n> > Uh, really? I am not aware of such a rule.\n>\n> I don't think we have ever expressed it as such, but certainly we prefer\n> postmaster to be super robust ... rather live with a some hundred bytes\n> leak rather than have it die and take the whole database service down\n> for what's essentially a fringe bug that has bothered no one in a decade\n> and a half.\n>\nMaybe it didn't bother anyone, because the Windows port is much less used.\nAnyway, I believe that freeing the memory before returning false, will not\nbring down the service, changing the patch to LOG, instead of FATAL.\nThe primary error of the patch was to use FATAL.\n\nregards,\nRanier Vilela\n\nEm ter., 28 de jan. de 2020 às 18:06, Alvaro Herrera <alvherre@2ndquadrant.com> escreveu:On 2020-Jan-28, Robert Haas wrote:\n\n> On Fri, Jan 24, 2020 at 2:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > No, that's not right. I think that it is possible to loop over\n> > ShmemProtectiveRegion in some cases. And actually, your patch is dead\n> > wrong because this is some code called by the postmaster and it cannot\n> > use FATAL.\n> \n> Uh, really? I am not aware of such a rule.\n\nI don't think we have ever expressed it as such, but certainly we prefer\npostmaster to be super robust ... 
rather live with a some hundred bytes\nleak rather than have it die and take the whole database service down\nfor what's essentially a fringe bug that has bothered no one in a decade\nand a half.Maybe it didn't bother anyone, because the Windows port is much less used. Anyway, I believe that freeing the memory before returning false, will not bring down the service, changing the patch to LOG, instead of FATAL.The primary error of the patch was to use FATAL.regards,Ranier Vilela",
"msg_date": "Tue, 28 Jan 2020 18:19:48 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 04:11:47PM -0500, Robert Haas wrote:\n> On Tue, Jan 28, 2020 at 4:06 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> I don't think we have ever expressed it as such, but certainly we prefer\n>> postmaster to be super robust ... rather live with a some hundred bytes\n>> leak rather than have it die and take the whole database service down\n>> for what's essentially a fringe bug that has bothered no one in a decade\n>> and a half.\n> \n> Well, yeah. I mean, I'm not saying it's a good idea in this instance\n> to FATAL here. I'm just saying that I don't think there is a general\n> rule that code which does FATAL in the postmaster is automatically\n> wrong, which is what I took Michael to be suggesting.\n\nRe-reading the thread, I can see your point that my previous email may\nread like a rule applying to the postmaster, so sorry for the\nconfusion.\n\nAnyway, I was referring to the point mentioned in three places of\npgwin32_ReserveSharedMemoryRegion() to not use FATAL for this\nroutine. The issue with the order of DLL loading is hard to miss..\n--\nMichael",
"msg_date": "Wed, 29 Jan 2020 16:24:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port, fix some resources leaks"
}
] |
[
{
"msg_contents": "Hi,\nPossible copy and past error, found in numeric.c.\nI believe I believe that the author's intention was to return const_zero.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 19 Jan 2020 20:11:56 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] src\\backend\\utils\\adt\\numeric.c copy and past error"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Possible copy and past error, found in numeric.c.\n> I believe I believe that the author's intention was to return const_zero.\n\nDid you read the comment just above there?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Jan 2020 19:22:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src\\backend\\utils\\adt\\numeric.c copy and past error"
},
{
"msg_contents": "Hi,\nYes, but the comment it does not clarify that the return of the variable\n\"const_one\" is intentional, instead of \"const_zero\".\nAnybody with reads the source, can think which is a copy and paste mistake.\n\nregards\nRanier Vilela\n\nEm dom., 19 de jan. de 2020 às 21:22, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Possible copy and past error, found in numeric.c.\n> > I believe I believe that the author's intention was to return const_zero.\n>\n> Did you read the comment just above there?\n>\n> regards, tom lane\n>\n\n\nHi,Yes, but the comment it does not clarify that the return of the variable \"const_one\" is intentional, instead of \"const_zero\".Anybody with reads the source, can think which is a copy and paste mistake.regardsRanier Vilela\nEm dom., 19 de jan. de 2020 às 21:22, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Possible copy and past error, found in numeric.c.\n> I believe I believe that the author's intention was to return const_zero.\n\nDid you read the comment just above there?\n\n regards, tom lane",
"msg_date": "Sun, 19 Jan 2020 21:35:44 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] src\\backend\\utils\\adt\\numeric.c copy and past error"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Yes, but the comment it does not clarify that the return of the variable\n> \"const_one\" is intentional, instead of \"const_zero\".\n\nI'm not sure which part of \"NaN ^ 0 = 1\" doesn't clarify for you that\nthe intended result is 1.\n\nEven without the comment, if you'd bothered to run the regression tests\nyou'd have noted a failure of a test case clearly intended to test\nexactly this behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Jan 2020 19:46:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src\\backend\\utils\\adt\\numeric.c copy and past error"
}
] |
[
{
"msg_contents": "Hi,\nTwo possible race condition found.\n1. src\\backend\\port\\win32\\signal.c (line 82)\nThe var \"pg_signal_queue\", is accessed eleswhere with lock.\n\n2. src\\backend\\postmaster\\syslogger.c\nThe var \"rotation_requested\" is accessed elsewhere with lock.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 19 Jan 2020 21:23:13 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Windows port, fix some missing locks"
}
] |
[
{
"msg_contents": "Hi Hackers:\n\n\n\nThis is a patch for unique elimination rewrite for distinct query.\n\nit will cost much for a big result set and some times it is not\n\nnecessary. The basic idea is the unique node like in the following\n\ncan be eliminated.\n\n\n1. select distinct pk, ... from t;\n\n2. select distinct uk-col1, uk-col2, ...\n\n from t where uk-col1 is not null and uk-col2 is not null;\n\n3. select distinct a, b .... from t group by a, b;\n\n4. select distinct t1.pk, t2.pk, ... from t1, t2.\n\n\nThe distinct keyword in above sql is obviously redundant,\n\nBut a SQL may join a lot of tables with tens of columns in target\n\nlist and a number of indexes. Finally the sql is hidden in\n\nhundreds of SQL in system, it will be hard to find it out.\n\nThat's why I want the kernel can keep watching it,\n\nbased on that it will not cost too much. Oracle has similar rewrite\n\nas well.\n\n\nThe rule for single relation is:\n\na). The primary key is choose in target list.\n\nb). The unique key is choose in the target list,\n\n and we can tell the result of the related column is not nullable.\n\n we can tell it by catalog and qual.\n\nc). The group-by columns is choose in target list.\n\nd). The target list in subquery has a distinct already.\n\n (select distinct xxx from (select distinct xxx from t2));\n\n\nThe rule for multi-relations join is:\n\ne). if any relation yield a unique result, then the result of join will be\n\n unique as well\n\n\nIf an sql matches any rule of above, we can remove the unique node.\n\nRule d) is not so common and complex to implement, so it is not\n\nincluded in this patch.\n\n\nImplementation:\n\nf). I choose the target list per table, if there is hasDistinctOn, the\nsource\n\n is the target list intersect distinctClause. or else, the source is\ntarget list only.\n\ng). the pk/uk columns information is gathered\nby RelationGetIndexAttrBitmap.\n\n a new filed RelationData.plain_uk_ukattrs is added and gathered as\nwell.\n\nh). 
As last if any rule matches, Query->distinctClause &\nQuery->hasDistinctOn\n\n will be cleared to avoid generating the related path.\n\n\nThere are also some fast paths to return earlier:\n\ni). If a table in join-list, but no columns is choose in target list.\n\nj). The join-list contains sub-query. (this rewrite happens after\nsub-query pull-up)\n\nk). Based on the cost of the checking, we check group by first and\n\n then PK and then UK + not null.\n\n\nThere is no impact for non-distinct query, as for distinct query, this rule\nwill\n\nincrease the total cost a bit if the distinct can't be removed. The unique\n\ncheck is most expensive, so here is the data to show the impact, a 4\n\ncolumns table, no pk, 1 uk with 2 columns.\n\n\nWith this feature disable: avg plan time: 0.095ms\n\nWith this feature enabled: avg plan time: 0.102ms\n\n\nBasically I think the cost would be ok.\n\n\nConcurrency:\n\nl). When we see a pk or uk index, so we remove the index on another\nsession,\n\nI think this would be ok because of MVCC rules.\n\nm). When we are creating an index in another session but it is not\ncompleted,\n\n suppose we can't get it with RelationGetIndexAttrBitmap. so it should be\nok\n\nas well.\n\n\n\nThe behavior can be changed online with enable_unique_elimination,\n\nit is true by default.\n\n\nThe patch is generated with the latest code on github,\n\nand the current HEAD is 34a0a81bfb388504deaa51b16a8bb531b827e519.\n\n\nThe make installcheck-world & check-world has pass.\n\nTest case join.sql and sysview.sql are impacted by this change\n\nand they are expected, the changed expected.out file is included in this\npatch.\n\n\nPlease let me know if you have any questions.\n\n\nThank you",
"msg_date": "Mon, 20 Jan 2020 11:41:49 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] query rewrite for distinct stage under some cases"
}
] |
[
{
"msg_contents": "Hi Team,\n\nIm looking for the roadmap of the feature - master-mater replication.\n\nIn Postgresql-10, Bi-Directional replication (BDR3) has been embedded in it.\n\nAre there any plans to have built-in master-master replications in future\nversions of Postgresql 13 (or) 14 ?\n\n\nThanks,\n\nPrabhu.R\n\nHi Team,Im looking for the roadmap of the feature - master-mater replication.In Postgresql-10, Bi-Directional replication (BDR3) has been embedded in it.Are there any plans to have built-in master-master replications in future versions of Postgresql 13 (or) 14 ?Thanks,Prabhu.R",
"msg_date": "Mon, 20 Jan 2020 10:05:58 +0530",
"msg_from": "Prabhu R <prabhu.nokia@gmail.com>",
"msg_from_op": true,
"msg_subject": "Master Master replication"
},
{
"msg_contents": "On Mon, 20 Jan 2020 at 12:37, Prabhu R <prabhu.nokia@gmail.com> wrote:\n\n> Hi Team,\n>\n> Im looking for the roadmap of the feature - master-mater replication.\n>\n> In Postgresql-10, Bi-Directional replication (BDR3) has been embedded in\n> it.\n>\n\nIt's available as an extension, it's not part of PostgreSQL 10.\n\nPostgreSQL 10 has single-master publish/subscribe.\n\n\n> Are there any plans to have built-in master-master replications in future\n> versions of Postgresql 13 (or) 14 ?\n>\n\nMy understanding is that in the long term that's probably the case, but I\ndon't speak for 2ndQ or anyone else when saying so.\n\nIt's exceedingly unlikely that nontrivial multimaster logical replication\nwould be possible in PostgreSQL 13 or even 14 given the logical replication\nfacilities that are currently in the tree. A *lot* of things would need to\nbe done, and in many cases the approaches used in the current external MM\nreplication solutions could not be directly adopted without a lot of\nredesign and revision as we already saw with the addition of basic logical\nreplication functionality in PostgreSQL 10.\n\nPersonally I'd be delighted to see people working towards this, but right\nnow I think most contributors are focused elsewhere.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Mon, 20 Jan 2020 at 12:37, Prabhu R <prabhu.nokia@gmail.com> wrote:Hi Team,Im looking for the roadmap of the feature - master-mater replication.In Postgresql-10, Bi-Directional replication (BDR3) has been embedded in it.It's available as an extension, it's not part of PostgreSQL 10.PostgreSQL 10 has single-master publish/subscribe. 
Are there any plans to have built-in master-master replications in future versions of Postgresql 13 (or) 14 ?My understanding is that in the long term that's probably the case, but I don't speak for 2ndQ or anyone else when saying so.It's exceedingly unlikely that nontrivial multimaster logical replication would be possible in PostgreSQL 13 or even 14 given the logical replication facilities that are currently in the tree. A lot of things would need to be done, and in many cases the approaches used in the current external MM replication solutions could not be directly adopted without a lot of redesign and revision as we already saw with the addition of basic logical replication functionality in PostgreSQL 10.Personally I'd be delighted to see people working towards this, but right now I think most contributors are focused elsewhere.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 20 Jan 2020 13:22:38 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Master Master replication"
}
] |
[
{
"msg_contents": "",
"msg_date": "Mon, 20 Jan 2020 14:11:05 +0900",
"msg_from": "=?UTF-8?B?6rmA64yA7Zi4?= <daiho1.kim@samsung.com>",
"msg_from_op": true,
"msg_subject": "Add limit option to copy function"
},
{
"msg_contents": "=?UTF-8?B?6rmA64yA7Zi4?= <daiho1.kim@samsung.com> writes:\n> I suggest adding a limit option to the copy function that limits count of input/output.\n> I think this will be useful for testing with sample data.\n\nI'm quite skeptical of the value of this. On the output side, you\ncan already do it with\n\nCOPY (SELECT ... LIMIT n) TO wherever;\n\nMoreover, that approach allows you to include an ORDER BY, which is\ngenerally good practice in any query that includes LIMIT, in case\nyou'd like deterministic results.\n\nOn the input side, it's true that you'd have to resort to some\noutside features (perhaps applying \"head\" to the input file, or\nsome such), or else copy the data into a temp table and post-process.\nBut that's true for most ways that you might want to adjust or\nfilter the input data; why should this one be different?\n\nWe don't consider that COPY is a general-purpose ETL engine, and\nhave resisted addition of features to it in the past because\nthey'd slow down the primary use-case. That objection applies\nhere too. Yeah, it's (probably) not a big slowdown ... but it's\nhard to justify any cost at all for a feature that is outside\nthe design scope of COPY.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jan 2020 15:29:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add limit option to copy function"
}
] |
[
{
"msg_contents": "Fix crash in BRIN inclusion op functions, due to missing datum copy.\n\nThe BRIN add_value() and union() functions need to make a longer-lived\ncopy of the argument, if they want to store it in the BrinValues struct\nalso passed as argument. The functions for the \"inclusion operator\nclasses\" used with box, range and inet types didn't take into account\nthat the union helper function might return its argument as is, without\nmaking a copy. Check for that case, and make a copy if necessary. That\ncase arises at least with the range_union() function, when one of the\narguments is an 'empty' range:\n\nCREATE TABLE brintest (n numrange);\nCREATE INDEX brinidx ON brintest USING brin (n);\nINSERT INTO brintest VALUES ('empty');\nINSERT INTO brintest VALUES (numrange(0, 2^1000::numeric));\nINSERT INTO brintest VALUES ('(-1, 0)');\n\nSELECT brin_desummarize_range('brinidx', 0);\nSELECT brin_summarize_range('brinidx', 0);\n\nBackpatch down to 9.5, where BRIN was introduced.\n\nDiscussion: https://www.postgresql.org/message-id/e6e1d6eb-0a67-36aa-e779-bcca59167c14%40iki.fi\nReviewed-by: Emre Hasegeli, Tom Lane, Alvaro Herrera\n\nBranch\n------\nREL9_5_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/98f0d283774b68895bc41413d7dd9c19e5608231\n\nModified Files\n--------------\nsrc/backend/access/brin/brin_inclusion.c | 16 ++++++++++++++--\n1 file changed, 14 insertions(+), 2 deletions(-)",
"msg_date": "Mon, 20 Jan 2020 08:41:50 +0000",
"msg_from": "Heikki Linnakangas <heikki.linnakangas@iki.fi>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix crash in BRIN inclusion op functions,\n due to missing datum c"
},
{
"msg_contents": "On 2020-Jan-20, Heikki Linnakangas wrote:\n\n> That case arises at least with the range_union() function, when one of\n> the arguments is an 'empty' range:\n> \n> CREATE TABLE brintest (n numrange);\n> CREATE INDEX brinidx ON brintest USING brin (n);\n> INSERT INTO brintest VALUES ('empty');\n> INSERT INTO brintest VALUES (numrange(0, 2^1000::numeric));\n> INSERT INTO brintest VALUES ('(-1, 0)');\n> \n> SELECT brin_desummarize_range('brinidx', 0);\n> SELECT brin_summarize_range('brinidx', 0);\n\nI noticed that this test increases line-wise coverage of\nbrin_inclusion.c by a few percentage points, so I added it.\n\nAgain, thanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 18:45:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix crash in BRIN inclusion op functions, due to missing\n datum c"
}
] |
[
{
"msg_contents": "Folks,\n\nAt least two cloud providers are now stuffing large amounts of\ninformation into the password field. This change makes it possible to\naccommodate that usage in interactive sessions.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 20 Jan 2020 19:01:16 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Increase psql's password buffer size"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> At least two cloud providers are now stuffing large amounts of\n> information into the password field. This change makes it possible to\n> accommodate that usage in interactive sessions.\n\nLike who? It seems like a completely silly idea. And if 2K is sane,\nwhy not much more?\n\n(I can't say that s/100/2048/ in one place is a particularly evil change;\nwhat bothers me is the likelihood that there are other places that won't\ncope with arbitrarily long passwords. Not all of them are necessarily\nunder our control, either.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jan 2020 13:12:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 01:12:35PM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > At least two cloud providers are now stuffing large amounts of\n> > information into the password field. This change makes it possible to\n> > accommodate that usage in interactive sessions.\n> \n> Like who?\n\nAWS and Azure are two examples I know of.\n\n> It seems like a completely silly idea. And if 2K is sane, why not\n> much more?\n\nGood question. Does it make sense to rearrange these things so they're\nallocated at runtime instead of compile time?\n\n> (I can't say that s/100/2048/ in one place is a particularly evil\n> change; what bothers me is the likelihood that there are other\n> places that won't cope with arbitrarily long passwords. Not all of\n> them are necessarily under our control, either.)\n\nI found one that is, so please find attached the next revision of the\npatch.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 20 Jan 2020 19:44:25 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 07:44:25PM +0100, David Fetter wrote:\n> On Mon, Jan 20, 2020 at 01:12:35PM -0500, Tom Lane wrote:\n> > David Fetter <david@fetter.org> writes:\n> > > At least two cloud providers are now stuffing large amounts of\n> > > information into the password field. This change makes it possible to\n> > > accommodate that usage in interactive sessions.\n> > \n> > Like who?\n> \n> AWS and Azure are two examples I know of.\n> \n> > It seems like a completely silly idea. And if 2K is sane, why not\n> > much more?\n> \n> Good question. Does it make sense to rearrange these things so they're\n> allocated at runtime instead of compile time?\n> \n> > (I can't say that s/100/2048/ in one place is a particularly evil\n> > change; what bothers me is the likelihood that there are other\n> > places that won't cope with arbitrarily long passwords. Not all of\n> > them are necessarily under our control, either.)\n> \n> I found one that is, so please find attached the next revision of the\n> patch.\n\nI found another place that assumes 100 bytes and upped it to 2048.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 20 Jan 2020 20:21:41 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Mon, Jan 20, 2020 at 07:44:25PM +0100, David Fetter wrote:\n>> On Mon, Jan 20, 2020 at 01:12:35PM -0500, Tom Lane wrote:\n>>> (I can't say that s/100/2048/ in one place is a particularly evil\n>>> change; what bothers me is the likelihood that there are other\n>>> places that won't cope with arbitrarily long passwords. Not all of\n>>> them are necessarily under our control, either.)\n\n>> I found one that is, so please find attached the next revision of the\n>> patch.\n\n> I found another place that assumes 100 bytes and upped it to 2048.\n\nSo this is pretty much exactly what I expected. And have you tried\nit with e.g. PAM, or LDAP?\n\nI think the AWS guys are fools to imagine that this will work in very\nmany places, and I don't see why we should be leading the charge to\nmake it work for them. What's the point of having a huge amount of\ndata in a password, anyway? You can't expect to get it back out\nagain, and there's no reason to believe that it adds any security\nafter a certain point. If they want a bunch of different things\ncontributing to the password, OK, but they could just hash those\nthings together and thereby keep their submitted password to a length\nthat will work with most services.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jan 2020 14:38:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "Hi,\n\nI think I should add my two cents.\n\nOn Mon, 20 Jan 2020 at 20:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > I found another place that assumes 100 bytes and upped it to 2048.\n\nThere one more place, in the code which is parsing .pgpass\n\n>\n> So this is pretty much exactly what I expected. And have you tried\n> it with e.g. PAM, or LDAP?\n>\n> I think the AWS guys are fools to imagine that this will work in very\n> many places, and I don't see why we should be leading the charge to\n> make it work for them. What's the point of having a huge amount of\n> data in a password, anyway?\n\nWe at Zalando are using JWT tokens as passwords. JWT tokens are\nself-contained and therefore quite huge (up to 700-800 bytes in our\ncase). Tokens have a limited lifetime (1 hour) and we are using PAM to\nverify them.\nAltogether the whole thing works like a charm. The only problem that\nit is not possible to copy&paste the token into psql password prompt,\nbut there is a workaround, export PGPASSWORD=verylongtokenstring &&\npsql\n\nJWT: https://jwt.io/\nPAM module to verify OAuth tokens: https://github.com/CyberDem0n/pam-oauth2\n\nRegards,\n--\nAlexander Kukushkin\n\n\n",
"msg_date": "Mon, 20 Jan 2020 21:17:47 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "Alexander Kukushkin <cyberdemn@gmail.com> writes:\n> I think I should add my two cents.\n> We at Zalando are using JWT tokens as passwords. JWT tokens are\n> self-contained and therefore quite huge (up to 700-800 bytes in our\n> case). Tokens have a limited lifetime (1 hour) and we are using PAM to\n> verify them.\n> Altogether the whole thing works like a charm. The only problem that\n> it is not possible to copy&paste the token into psql password prompt,\n> but there is a workaround, export PGPASSWORD=verylongtokenstring &&\n> psql\n\nI remain unconvinced that this is a good design, as compared to the\nalternative of hashing $large_secret_data down to a more typical\nlength for a password.\n\nQuite aside from whether or not you run into any implementation\nrestrictions on password length, using externally-sourced secret\ndata directly as a password seems like a lousy idea from a pure\nsecurity standpoint. What happens if somebody compromises your\ndatabase, or even just your connection to the database, and is\nable to read out the raw password? The damage is worse than the\nordinary case of just being able to get into your database account,\nbecause now the attacker has info about a formerly-secure upstream\ndatum, which probably lets him into some other things. It's not\nunlike using the same password across multiple services.\n\nIn the case you describe, you're also exposing that data to wherever\nthe PAM mechanism is keeping its secrets, hence presenting an even\nlarger attack surface. Hashing the data before it goes to any of\nthose places would go a long way towards mitigating the risk.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jan 2020 15:51:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 09:17:47PM +0100, Alexander Kukushkin wrote:\n> Hi,\n> \n> I think I should add my two cents.\n> \n> On Mon, 20 Jan 2020 at 20:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > I found another place that assumes 100 bytes and upped it to 2048.\n> \n> There one more place, in the code which is parsing .pgpass\n\nWhat I found that seems like it might be related was on line 6900 of\nsrc/interfaces/libpq/fe-connect.c (passwordFromFile):\n\n #define LINELEN NAMEDATALEN*5\n\nwhich is 315 (63*5) by default and isn't 100 on any sane setup. What\ndid I miss?\n\nIn any case, having the lengths be different in different places\nseems sub-optimal. PGPASSWORD is just a const char *, so could be\nquite long. The password prompted for by psql can be up to 100 bytes,\nand the one read from .pgpass is bounded from above by \n\n 315\n - 4 (colons)\n - 4 (shortest possible hostname)\n - 4 (usual port number)\n - 1 (shortest db name)\n - 1 (shortest possible username)\n -------------------------------\n 301\n\n> > So this is pretty much exactly what I expected. And have you tried\n> > it with e.g. PAM, or LDAP?\n> >\n> > I think the AWS guys are fools to imagine that this will work in very\n> > many places, and I don't see why we should be leading the charge to\n> > make it work for them. What's the point of having a huge amount of\n> > data in a password, anyway?\n> \n> We at Zalando are using JWT tokens as passwords. JWT tokens are\n> self-contained and therefore quite huge (up to 700-800 bytes in our\n> case). Tokens have a limited lifetime (1 hour) and we are using PAM to\n> verify them.\n> Altogether the whole thing works like a charm. 
The only problem that\n> it is not possible to copy&paste the token into psql password prompt,\n> but there is a workaround, export PGPASSWORD=verylongtokenstring &&\n> psql\n> \n> JWT: https://jwt.io/\n> PAM module to verify OAuth tokens: https://github.com/CyberDem0n/pam-oauth2\n\nThis reminds me of a patch that implemented PGPASSCOMMAND.\nhttps://www.postgresql.org/message-id/flat/CAE35ztOGZqgwae3mBA%3DL97pSg3kvin2xycQh%3Dir%3D5NiwCApiYQ%40mail.gmail.com\n\nDiscussion of that seems to have trailed off, though. My thought on\nthat was that it was making a decision about the presence of both a\n.pgpass file and a PGPASSCOMMAND setting that it shouldn't have made,\ni.e. it decided which took precedence. I think it should fail when\npresented with both, as there's not a single right answer that will\ncover all cases.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 20 Jan 2020 22:39:23 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "\n\nOn 2020/01/21 4:21, David Fetter wrote:\n> On Mon, Jan 20, 2020 at 07:44:25PM +0100, David Fetter wrote:\n>> On Mon, Jan 20, 2020 at 01:12:35PM -0500, Tom Lane wrote:\n>>> David Fetter <david@fetter.org> writes:\n>>>> At least two cloud providers are now stuffing large amounts of\n>>>> information into the password field. This change makes it possible to\n>>>> accommodate that usage in interactive sessions.\n>>>\n>>> Like who?\n>>\n>> AWS and Azure are two examples I know of.\n>>\n>>> It seems like a completely silly idea. And if 2K is sane, why not\n>>> much more?\n>>\n>> Good question. Does it make sense to rearrange these things so they're\n>> allocated at runtime instead of compile time?\n>>\n>>> (I can't say that s/100/2048/ in one place is a particularly evil\n>>> change; what bothers me is the likelihood that there are other\n>>> places that won't cope with arbitrarily long passwords. Not all of\n>>> them are necessarily under our control, either.)\n>>\n>> I found one that is, so please find attached the next revision of the\n>> patch.\n> \n> I found another place that assumes 100 bytes and upped it to 2048.\n\nThere are other places that 100 bytes password length is assumed.\nIt's better to check the 0001 patch that posted in the following thread.\nhttps://www.postgresql.org/message-id/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com\n\nI have no strong opinion about the maximum length of password,\nfor now. But IMO it's worth committing that 0001 patch as the first step\nfor this problem.\n\nAlso IMO the more problematic thing is that psql silently truncates\nthe password specified in the prompt into 99B if its length is\nmore than 99B. I think that psql should emit a warning in this case\nso that users can notice that.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 21 Jan 2020 14:42:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 02:42:07PM +0900, Fujii Masao wrote:\n> I have no strong opinion about the maximum length of password,\n> for now. But IMO it's worth committing that 0001 patch as the first step\n> for this problem.\n> \n> Also IMO the more problematic thing is that psql silently truncates\n> the password specified in the prompt into 99B if its length is\n> more than 99B. I think that psql should emit a warning in this case\n> so that users can notice that.\n\nI think we should be using a macro to define the maximum length, rather\nthan have 100 used in various places.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 Jan 2020 10:12:52 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 10:12:52AM -0500, Bruce Momjian wrote:\n> On Tue, Jan 21, 2020 at 02:42:07PM +0900, Fujii Masao wrote:\n> > I have no strong opinion about the maximum length of password,\n> > for now. But IMO it's worth committing that 0001 patch as the first step\n> > for this problem.\n> > \n> > Also IMO the more problematic thing is that psql silently truncates\n> > the password specified in the prompt into 99B if its length is\n> > more than 99B. I think that psql should emit a warning in this case\n> > so that users can notice that.\n> \n> I think we should be using a macro to define the maximum length, rather\n> than have 100 used in various places.\n\nIt's not just 100 in some places. It's different in different places,\nwhich goes to your point.\n\nHow about using a system that doesn't meaningfully impose a maximum\nlength? The shell variable is a const char *, so why not just\nre(p)alloc as needed?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 21 Jan 2020 16:19:13 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 04:19:13PM +0100, David Fetter wrote:\n> On Tue, Jan 21, 2020 at 10:12:52AM -0500, Bruce Momjian wrote:\n> > I think we should be using a macro to define the maximum length, rather\n> > than have 100 used in various places.\n> \n> It's not just 100 in some places. It's different in different places,\n> which goes to your point.\n> \n> How about using a system that doesn't meaningfully impose a maximum\n> length? The shell variable is a const char *, so why not just\n> re(p)alloc as needed?\n\nUh, how do you know how big to make the buffer that receives the read?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 Jan 2020 10:23:59 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 10:23:59AM -0500, Bruce Momjian wrote:\n> On Tue, Jan 21, 2020 at 04:19:13PM +0100, David Fetter wrote:\n> > On Tue, Jan 21, 2020 at 10:12:52AM -0500, Bruce Momjian wrote:\n> > > I think we should be using a macro to define the maximum length, rather\n> > > than have 100 used in various places.\n> > \n> > It's not just 100 in some places. It's different in different places,\n> > which goes to your point.\n> > \n> > How about using a system that doesn't meaningfully impose a maximum\n> > length? The shell variable is a const char *, so why not just\n> > re(p)alloc as needed?\n> \n> Uh, how do you know how big to make the buffer that receives the read?\n\nYou can start at any size, possibly even 100, and then increase the\nsize in a loop along the lines of (untested)\n\nmy_size = 100;\nmy_buf = char[my_size];\ncurr_size = 0;\nwhile (c = getchar() != '\\0')\n{\n my_buf[curr_size++] = c;\n if (curr_size == my_size) /* If we want an absolute maximum,\n this'd be the place to test for it.\n */\n {\n my_size *= 2;\n repalloc(my_buf, my_size);\n }\n}\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 21 Jan 2020 19:05:47 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 07:05:47PM +0100, David Fetter wrote:\n> On Tue, Jan 21, 2020 at 10:23:59AM -0500, Bruce Momjian wrote:\n> > On Tue, Jan 21, 2020 at 04:19:13PM +0100, David Fetter wrote:\n> > > On Tue, Jan 21, 2020 at 10:12:52AM -0500, Bruce Momjian wrote:\n> > > > I think we should be using a macro to define the maximum length, rather\n> > > > than have 100 used in various places.\n> > > \n> > > It's not just 100 in some places. It's different in different places,\n> > > which goes to your point.\n> > > \n> > > How about using a system that doesn't meaningfully impose a maximum\n> > > length? The shell variable is a const char *, so why not just\n> > > re(p)alloc as needed?\n> > \n> > Uh, how do you know how big to make the buffer that receives the read?\n> \n> You can start at any size, possibly even 100, and then increase the\n> size in a loop along the lines of (untested)\n\n[and unworkable]\n\nI should have tested the code, but my point about using rep?alloc()\nremains.\n\nBest,\nDavid.\n\nWorking code:\n\nint main(int argc, char **argv)\n{\n\tsize_t my_size = 2,\n\t\t curr_size = 0;\n\tchar *buf;\n\tint c;\n\n\tbuf = (char *) malloc(my_size);\n\n\tprintf(\"Enter a nice, long string.\\n\");\n\n\twhile( (c = getchar()) != '\\0' )\n\t{\n\t\tbuf[curr_size++] = c;\n\t\tif (curr_size == my_size)\n\t\t{\n\t\t\tmy_size *= 2;\n\t\t\tbuf = (char *) realloc(buf, my_size);\n\t\t}\n\t}\n\tprintf(\"The string %s is %zu bytes long.\\n\", buf, curr_size);\n}\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 22 Jan 2020 01:41:04 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "\n\nOn 2020/01/22 0:12, Bruce Momjian wrote:\n> On Tue, Jan 21, 2020 at 02:42:07PM +0900, Fujii Masao wrote:\n>> I have no strong opinion about the maximum length of password,\n>> for now. But IMO it's worth committing that 0001 patch as the first step\n>> for this problem.\n>>\n>> Also IMO the more problematic thing is that psql silently truncates\n>> the password specified in the prompt into 99B if its length is\n>> more than 99B. I think that psql should emit a warning in this case\n>> so that users can notice that.\n> \n> I think we should be using a macro to define the maximum length, rather\n> than have 100 used in various places.\n\n+1 as the first step for this issue. The patch that I mentioned\nupthread actually does that.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 22 Jan 2020 11:01:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "\n\nOn 2020/01/22 9:41, David Fetter wrote:\n> On Tue, Jan 21, 2020 at 07:05:47PM +0100, David Fetter wrote:\n>> On Tue, Jan 21, 2020 at 10:23:59AM -0500, Bruce Momjian wrote:\n>>> On Tue, Jan 21, 2020 at 04:19:13PM +0100, David Fetter wrote:\n>>>> On Tue, Jan 21, 2020 at 10:12:52AM -0500, Bruce Momjian wrote:\n>>>>> I think we should be using a macro to define the maximum length, rather\n>>>>> than have 100 used in various places.\n>>>>\n>>>> It's not just 100 in some places. It's different in different places,\n>>>> which goes to your point.\n>>>>\n>>>> How about using a system that doesn't meaningfully impose a maximum\n>>>> length? The shell variable is a const char *, so why not just\n>>>> re(p)alloc as needed?\n>>>\n>>> Uh, how do you know how big to make the buffer that receives the read?\n>>\n>> You can start at any size, possibly even 100, and then increase the\n>> size in a loop along the lines of (untested)\n\nThat's possible, but I like having the (reasonable) upper limit on that\nrather than arbitrary size.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 22 Jan 2020 11:03:42 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "> On 22 Jan 2020, at 01:41, David Fetter <david@fetter.org> wrote:\n> \n> On Tue, Jan 21, 2020 at 07:05:47PM +0100, David Fetter wrote:\n>> On Tue, Jan 21, 2020 at 10:23:59AM -0500, Bruce Momjian wrote:\n>>> On Tue, Jan 21, 2020 at 04:19:13PM +0100, David Fetter wrote:\n>>>> On Tue, Jan 21, 2020 at 10:12:52AM -0500, Bruce Momjian wrote:\n>>>>> I think we should be using a macro to define the maximum length, rather\n>>>>> than have 100 used in various places.\n>>>> \n>>>> It's not just 100 in some places. It's different in different places,\n>>>> which goes to your point.\n>>>> \n>>>> How about using a system that doesn't meaningfully impose a maximum\n>>>> length? The shell variable is a const char *, so why not just\n>>>> re(p)alloc as needed?\n>>> \n>>> Uh, how do you know how big to make the buffer that receives the read?\n>> \n>> You can start at any size, possibly even 100, and then increase the\n>> size in a loop along the lines of (untested)\n\nIt doesn't seem like a terribly safe pattern to have the client decide the read\nbuffer without disclosing the size, and have the server resize the input buffer\nto an arbitrary size as input comes in.\n\n> \t\t\tmy_size *= 2;\n> \t\t\tbuf = (char *) realloc(buf, my_size);\n\nI know it's just example code, but using buf as the input to realloc like this\nrisks a memleak when realloc fails as the original buf pointer is overwritten.\nUsing a temporary pointer for ther returnvalue avoids that, which is how\npg_repalloc and repalloc does it.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 22 Jan 2020 09:20:00 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "On 2020/01/22 11:01, Fujii Masao wrote:\n> \n> \n> On 2020/01/22 0:12, Bruce Momjian wrote:\n>> On Tue, Jan 21, 2020 at 02:42:07PM +0900, Fujii Masao wrote:\n>>> I have no strong opinion about the maximum length of password,\n>>> for now. But IMO it's worth committing that 0001 patch as the first step\n>>> for this problem.\n>>>\n>>> Also IMO the more problematic thing is that psql silently truncates\n>>> the password specified in the prompt into 99B if its length is\n>>> more than 99B. I think that psql should emit a warning in this case\n>>> so that users can notice that.\n>>\n>> I think we should be using a macro to define the maximum length, rather\n>> than have 100 used in various places.\n> \n> +1 as the first step for this issue. The patch that I mentioned\n> upthread actually does that.\n\nAttached is the patch that Nathan proposed at [1] and I think that\nit's worth applying. I'd like to add this to next CommitFest.\nThought?\n\n[1] https://www.postgresql.org/message-id/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 19 Feb 2020 22:16:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Attached is the patch that Nathan proposed at [1] and I think that\n> it's worth applying. I'd like to add this to next CommitFest.\n> Thought?\n\nI can't get excited about this in the least. For any \"normal\" use of\npasswords, 100 bytes is surely far more than sufficient. Furthermore,\nif there is someone out there for whom it isn't sufficient, they're not\ngoing to want to build custom versions of Postgres to change it.\n\nIf we think that longer passwords are actually a thing to be concerned\nabout, then what we need to do is change all these places to support\nexpansible buffers. I'm not taking a position on whether that's worth\nthe trouble ... but I do take the position that just inserting a\n#define is a waste of time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Feb 2020 15:48:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase psql's password buffer size"
}
] |
[
{
"msg_contents": "Debian reports that libxml2 is dropping the xml2-config binary:\n\nDate: Mon, 20 Jan 2020 20:42:47 +0100\nFrom: Mattia Rizzolo <mattia@debian.org>\nReply-To: Mattia Rizzolo <mattia@debian.org>, 949428@bugs.debian.org\nSubject: Bug#949428: postgresql-12: FTBFS with libxml2 2.9.10 (uses xml2-config)\n\nSource: postgresql-12\nVersion: 12.1-2\nSeverity: important\nTags: ftbfs\nUser: libxml2@packages.debian.org\nUsertags: ftbfs-2.9.10 xml2-config\n\n\nDear maintainer,\n\nyour package is using `xml2-config` to detect and use libxml2. I'm\nremoving that script, so please update your build system to use\npkg-config instead.\n\nThe libxml2 package in experimental already doesn't ship the xml2-config\nscript.\n\nAttached is the full build log, hopefully relevant excerpt follows:\n\n\nchecking for xml2-config... no\n...\nconfigure: error: header file <libxml/parser.h> is required for XML support\n\n[...]\n----- End forwarded message -----\n\nLuckily the ./configure script is compatible enough so that this hack\nworks: (tested on master)\n\n./configure --with-libxml XML2_CONFIG='pkg-config libxml-2.0'\n[...]\nchecking for XML2_CONFIG... pkg-config libxml-2.0\n[...]\nchecking for xmlSaveToBuffer in -lxml2... yes\n[...]\nchecking libxml/parser.h usability... yes\nchecking libxml/parser.h presence... yes\nchecking for libxml/parser.h... yes\n\nWe should teach configure.in to recognize that natively as well, I\nguess.\n\nChristoph\n\n\n",
"msg_date": "Mon, 20 Jan 2020 21:47:15 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "libxml2 is dropping xml2-config"
},
{
"msg_contents": "Re: To PostgreSQL Hackers 2020-01-20 <20200120204715.GA73984@msg.df7cb.de>\n> Debian reports that libxml2 is dropping the xml2-config binary:\n\nPlease disregard that, I had assumed this was a change made by libxml2\nupstream. I'm in contact with the libxml2 Debian maintainer to get\nthat change off the table.\n\n> We should teach configure.in to recognize that natively as well, I\n> guess.\n\n(Putting in support for pkg-config still makes sense, though.)\n\nChristoph\n\n\n",
"msg_date": "Mon, 20 Jan 2020 22:16:37 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: libxml2 is dropping xml2-config"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: To PostgreSQL Hackers 2020-01-20 <20200120204715.GA73984@msg.df7cb.de>\n>> Debian reports that libxml2 is dropping the xml2-config binary:\n\n> Please disregard that, I had assumed this was a change made by libxml2\n> upstream. I'm in contact with the libxml2 Debian maintainer to get\n> that change off the table.\n\nI was wondering about that --- I had thought libxml2 upstream was\nnot terribly active anymore.\n\n> (Putting in support for pkg-config still makes sense, though.)\n\nPerhaps. Are there any platforms where libxml2 doesn't install a\npkg-config file? What are we supposed to do if there's no pkg-config?\n(The workaround we have for that with ICU is sufficiently bletcherous\nthat I'm not eager to replicate it for libxml2...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jan 2020 19:51:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libxml2 is dropping xml2-config"
},
{
"msg_contents": "On 1/20/20 5:51 PM, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n>> Re: To PostgreSQL Hackers 2020-01-20 <20200120204715.GA73984@msg.df7cb.de>\n>>> Debian reports that libxml2 is dropping the xml2-config binary:\n> \n> Perhaps. Are there any platforms where libxml2 doesn't install a\n> pkg-config file? What are we supposed to do if there's no pkg-config?\n> (The workaround we have for that with ICU is sufficiently bletcherous\n> that I'm not eager to replicate it for libxml2...)\n\nYes -- at least Ubuntu < 18.04 does not install pkg-config for libxml2. \nI have not checked Debian yet, but I imagine < 8 will have the same issue.\n\nRHEL/CentOS 6/7 look OK and I'm betting 8 is OK as well.\n\nSo, based on my limited testing thus far it is not universally \nsupported, even on non-EOL distros.\n\nChristoph, are you saying we perhaps won't need to make this change?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 21 Jan 2020 09:47:46 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: libxml2 is dropping xml2-config"
},
{
"msg_contents": "Re: Tom Lane 2020-01-21 <6994.1579567876@sss.pgh.pa.us>\n> > (Putting in support for pkg-config still makes sense, though.)\n> \n> Perhaps. Are there any platforms where libxml2 doesn't install a\n> pkg-config file? What are we supposed to do if there's no pkg-config?\n\nI can't comment on the libxml2 part, but making pkg-config a hard\nrequirement to build PG would probably be a safe bet nowadays. (I'm\nstill not arguing that we should.)\n\nRe: David Steele 2020-01-21 <95349047-31dd-c7dc-df17-b488c2d3441f@pgmasters.net>\n> Yes -- at least Ubuntu < 18.04 does not install pkg-config for libxml2. I\n> have not checked Debian yet, but I imagine < 8 will have the same issue.\n\nThat is not true, I just verified that both 16.04 and 14.04 (already\nEOL) have a working `pkg-config libxml-2.0 --libs`.\n\n> Christoph, are you saying we perhaps won't need to make this change?\n\nI'm saying that the Debian libxml2 maintainer shouldn't try to make\nthis change unilaterally without libxml2 upstream.\n\nChristoph\n\n\n",
"msg_date": "Thu, 23 Jan 2020 15:09:53 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: libxml2 is dropping xml2-config"
},
{
"msg_contents": "On 1/23/20 7:09 AM, Christoph Berg wrote:\n > Re: David Steele 2020-01-21 \n<95349047-31dd-c7dc-df17-b488c2d3441f@pgmasters.net>\n >> Yes -- at least Ubuntu < 18.04 does not install pkg-config for \nlibxml2. I\n >> have not checked Debian yet, but I imagine < 8 will have the same issue.\n >\n > That is not true, I just verified that both 16.04 and 14.04 (already\n > EOL) have a working `pkg-config libxml-2.0 --libs`.\n\nYou are correct. My build script was not explicitly installing \npkg-config on <= 16.04. 12.04 also seems to work fine.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 24 Jan 2020 09:10:35 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: libxml2 is dropping xml2-config"
}
] |
[
{
"msg_contents": "Hi,\n\nFound out today that BRIN indexes don't really work for PostGIS and box\ndatatypes.\n\nSince\nhttps://github.com/postgres/postgres/commit/7e534adcdc70866e7be74d626b0ed067c890a251\nPostgres\nrequires datatype to provide correlation statistics. Such statistics wasn't\nprovided by PostGIS and box types.\n\nToday I tried to replace a 200gb gist index with 8mb brin index and queries\ndidn't work as expected - it was never used. set enable_seqscan=off helped\nfor a bit but that's not a permanent solution.\nPlans for context:\nhttps://gist.github.com/Komzpa/2cd396ec9b65e2c93341e9934d974826\n\nDebugging session on #postgis IRC channel leads to this ticket to create a\n(not that meaningful) correlation statistics for geometry datatype:\nhttps://trac.osgeo.org/postgis/ticket/4625#ticket\n\nPostgres Professional mentioned symptoms of the issue in their in-depth\nmanual: https://habr.com/ru/company/postgrespro/blog/346460/ - box datatype\nshowed same unusable BRIN symptoms for them.\n\nA reasonable course of action on Postgres side seems to be to not assume\nselectivity of 1 in absence of correlation statistics, but something that\nwould prefer such an index to a parallel seq scan, but higher than similar\nGIST.\n\nAny other ideas?\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\n",
"msg_date": "Tue, 21 Jan 2020 00:00:53 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "BRIN cost estimate breaks geometric indexes"
},
{
"msg_contents": "On 21.01.2020 0:00, Darafei \"Komяpa\" Praliaskouski wrote:\n> Hi,\n>\n> Found out today that BRIN indexes don't really work for PostGIS and \n> box datatypes.\n>\n> Since \n> https://github.com/postgres/postgres/commit/7e534adcdc70866e7be74d626b0ed067c890a251 Postgres \n> requires datatype to provide correlation statistics. Such statistics \n> wasn't provided by PostGIS and box types.\n>\n> Today I tried to replace a 200gb gist index with 8mb brin index and \n> queries didn't work as expected - it was never used. set \n> enable_seqscan=off helped for a bit but that's not a permanent solution.\n> Plans for context: \n> https://gist.github.com/Komzpa/2cd396ec9b65e2c93341e9934d974826\n>\n> Debugging session on #postgis IRC channel leads to this ticket to \n> create a (not that meaningful) correlation statistics for geometry \n> datatype: https://trac.osgeo.org/postgis/ticket/4625#ticket\n>\n> Postgres Professional mentioned symptoms of the issue in their \n> in-depth manual: \n> https://habr.com/ru/company/postgrespro/blog/346460/ - box datatype \n> showed same unusable BRIN symptoms for them.\n\n\n(Translated to English: \nhttps://habr.com/en/company/postgrespro/blog/452900/)\n\n\n> A reasonable course of action on Postgres side seems to be to not \n> assume selectivity of 1 in absence of correlation statistics, but \n> something that would prefer such an index to a parallel seq scan, but \n> higher than similar GIST.\n>\n> Any other ideas?\n\n\nAs far as I understand, correlation is computed only for sortable types, \nwhich means that the current concept of correlation works as intended \nonly for B-tree indexes.\n\nIdeally, correlation should be computed for (attribute, index) pair, \ntaking into account order of values returned by the index scan. Less \nideal but more easier approach can be to ignore the computed correlation \nfor any index access except B-tree, and just assume some predefined \nconstant.\n\n\n\n\n",
"msg_date": "Tue, 21 Jan 2020 02:07:17 +0300",
"msg_from": "Egor Rogov <e.rogov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: BRIN cost estimate breaks geometric indexes"
},
{
"msg_contents": "Hi,\n\nPatch may look as simple as this one:\nhttps://patch-diff.githubusercontent.com/raw/postgres/postgres/pull/49.diff\n\nPrevious mention in -hackers is available at\nhttps://postgrespro.com/list/id/CAKJS1f9n-Wapop5Xz1dtGdpdqmzeGqQK4sV2MK-zZugfC14Xtw@mail.gmail.com\n-\nseems everyone overlooked that patch there breaks geometric indexing back\nthen.\n\nOn Tue, Jan 21, 2020 at 2:07 AM Egor Rogov <e.rogov@postgrespro.ru> wrote:\n\n> On 21.01.2020 0:00, Darafei \"Komяpa\" Praliaskouski wrote:\n> > Hi,\n> >\n> > Found out today that BRIN indexes don't really work for PostGIS and\n> > box datatypes.\n> >\n> > Since\n> >\n> https://github.com/postgres/postgres/commit/7e534adcdc70866e7be74d626b0ed067c890a251 Postgres\n>\n> > requires datatype to provide correlation statistics. Such statistics\n> > wasn't provided by PostGIS and box types.\n> >\n> > Today I tried to replace a 200gb gist index with 8mb brin index and\n> > queries didn't work as expected - it was never used. set\n> > enable_seqscan=off helped for a bit but that's not a permanent solution.\n> > Plans for context:\n> > https://gist.github.com/Komzpa/2cd396ec9b65e2c93341e9934d974826\n> >\n> > Debugging session on #postgis IRC channel leads to this ticket to\n> > create a (not that meaningful) correlation statistics for geometry\n> > datatype: https://trac.osgeo.org/postgis/ticket/4625#ticket\n> >\n> > Postgres Professional mentioned symptoms of the issue in their\n> > in-depth manual:\n> > https://habr.com/ru/company/postgrespro/blog/346460/ - box datatype\n> > showed same unusable BRIN symptoms for them.\n>\n>\n> (Translated to English:\n> https://habr.com/en/company/postgrespro/blog/452900/)\n>\n>\n> > A reasonable course of action on Postgres side seems to be to not\n> > assume selectivity of 1 in absence of correlation statistics, but\n> > something that would prefer such an index to a parallel seq scan, but\n> > higher than similar GIST.\n> >\n> > Any other ideas?\n>\n>\n> As far as I 
understand, correlation is computed only for sortable types,\n> which means that the current concept of correlation works as intended\n> only for B-tree indexes.\n>\n> Ideally, correlation should be computed for (attribute, index) pair,\n> taking into account order of values returned by the index scan. Less\n> ideal but more easier approach can be to ignore the computed correlation\n> for any index access except B-tree, and just assume some predefined\n> constant.\n>\n>\n>\n>\n>\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nHi,Patch may look as simple as this one:https://patch-diff.githubusercontent.com/raw/postgres/postgres/pull/49.diffPrevious mention in -hackers is available at https://postgrespro.com/list/id/CAKJS1f9n-Wapop5Xz1dtGdpdqmzeGqQK4sV2MK-zZugfC14Xtw@mail.gmail.com - seems everyone overlooked that patch there breaks geometric indexing back then.On Tue, Jan 21, 2020 at 2:07 AM Egor Rogov <e.rogov@postgrespro.ru> wrote:On 21.01.2020 0:00, Darafei \"Komяpa\" Praliaskouski wrote:\n> Hi,\n>\n> Found out today that BRIN indexes don't really work for PostGIS and \n> box datatypes.\n>\n> Since \n> https://github.com/postgres/postgres/commit/7e534adcdc70866e7be74d626b0ed067c890a251 Postgres \n> requires datatype to provide correlation statistics. Such statistics \n> wasn't provided by PostGIS and box types.\n>\n> Today I tried to replace a 200gb gist index with 8mb brin index and \n> queries didn't work as expected - it was never used. 
set \n> enable_seqscan=off helped for a bit but that's not a permanent solution.\n> Plans for context: \n> https://gist.github.com/Komzpa/2cd396ec9b65e2c93341e9934d974826\n>\n> Debugging session on #postgis IRC channel leads to this ticket to \n> create a (not that meaningful) correlation statistics for geometry \n> datatype: https://trac.osgeo.org/postgis/ticket/4625#ticket\n>\n> Postgres Professional mentioned symptoms of the issue in their \n> in-depth manual: \n> https://habr.com/ru/company/postgrespro/blog/346460/ - box datatype \n> showed same unusable BRIN symptoms for them.\n\n\n(Translated to English: \nhttps://habr.com/en/company/postgrespro/blog/452900/)\n\n\n> A reasonable course of action on Postgres side seems to be to not \n> assume selectivity of 1 in absence of correlation statistics, but \n> something that would prefer such an index to a parallel seq scan, but \n> higher than similar GIST.\n>\n> Any other ideas?\n\n\nAs far as I understand, correlation is computed only for sortable types, \nwhich means that the current concept of correlation works as intended \nonly for B-tree indexes.\n\nIdeally, correlation should be computed for (attribute, index) pair, \ntaking into account order of values returned by the index scan. Less \nideal but more easier approach can be to ignore the computed correlation \nfor any index access except B-tree, and just assume some predefined \nconstant.\n\n\n\n\n-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa",
"msg_date": "Fri, 14 Feb 2020 18:20:50 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: BRIN cost estimate breaks geometric indexes"
}
] |
[
{
"msg_contents": "Hi,\n\nHere are some patches to get rid of frequent system calls.\n\n0001 changes all qualifying WaitLatch() calls to use a new function\nWaitMyLatch() that reuses a common WaitEventSet. That's for callers\nthat only want to wait for their own latch or an optional timeout,\nwith automatic exit-on-postmaster-death.\n\n0002 changes condition_variable.c to use WaitMyLatch(), instead of its\nown local thing like that. Perhaps this makes up for the use of the\nextra fd consumed by 0001.\n\n0003 changes pgstat.c to use its own local reusable WaitEventSet.\n\nTo see what I'm talking about, try tracing a whole cluster with eg\nstrace/truss/dtruss -f postgres -D pgdata. This applies to Linux\nsystems, or BSD/macOS systems with the nearby kqueue patch applied.\nOn systems that fall back to poll(), there aren't any setup/teardown\nsyscalls.",
"msg_date": "Tue, 21 Jan 2020 13:45:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 1:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here are some patches to get rid of frequent system calls.\n\nHere is one more case that I was sitting on because I wasn't sure how\nto do it: walreceiver.c. To make that work, libpq needs to be able to\ntell you when the socket has changed, which I did with a counter that\nis exposed to client code in patch 0004. The walreceiver change in\n0005 works (trace the system calls on walreceiver to see the\ndifference), but perhaps we can come up with a better way to code it\nso that eg logical/worker.c doesn't finish up duplicating the logic.\nThoughts?",
"msg_date": "Sat, 8 Feb 2020 10:00:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Sat, Feb 8, 2020 at 10:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Jan 21, 2020 at 1:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here are some patches to get rid of frequent system calls.\n>\n> Here is one more case that I was sitting on because I wasn't sure how\n> to do it: walreceiver.c. To make that work, libpq needs to be able to\n> tell you when the socket has changed, which I did with a counter that\n> is exposed to client code in patch 0004. The walreceiver change in\n> 0005 works (trace the system calls on walreceiver to see the\n> difference), but perhaps we can come up with a better way to code it\n> so that eg logical/worker.c doesn't finish up duplicating the logic.\n> Thoughts?\n\n(To be clear: I know the 0005 patch doesn't clean up after itself in\nvarious cases, it's for discussion only to see if others have ideas\nabout how to structure things to suit various potential users of\nlibpqwalreceiver.so.)\n\n\n",
"msg_date": "Sat, 8 Feb 2020 10:15:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Sat, Feb 8, 2020 at 10:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Here are some patches to get rid of frequent system calls.\n\nHere's a new version of this patch set. It gets rid of all temporary\nWaitEventSets except a couple I mentioned in another thread[1].\nWaitLatch() uses CommonWaitSet, and calls to WaitLatchOrSocket() are\nreplaced by either the existing FeBeWaitSet (walsender, gssapi/openssl\nauth are also candidates) or a special purpose long lived WaitEventSet\n(replication, postgres_fdw, pgstats). It passes make check-world with\nWAIT_USE_POLL, WAIT_USE_KQUEUE, WAIT_USE_EPOLL, all with and without\n-DEXEC_BACKEND, and make check with WAIT_USE_WIN32 (Appveyor).\n\n0001: \"Don't use EV_CLEAR for kqueue events.\"\n\nThis fixes a problem in the kqueue implementation that only shows up\nonce you switch to long lived WaitEventSets. It needs to be\nlevel-triggered like the other implementations, for example because\nthere's a place in the recovery tests where we wait twice in a row\nwithout trying to do I/O in between. (This is a bit like commit\n3b790d256f8 that fixed a similar problem on Windows.)\n\n0002: \"Use a long lived WaitEventSet for WaitLatch().\"\n\nIn the last version, I had a new function WaitMyLatch(), but that\ndidn't help with recoveryWakeupLatch. Switching between latches\ndoesn't require a syscall, so I didn't want to have a separate WES and\nfunction just for that. So I went back to using plain old\nWaitLatch(), and made it \"modify\" the latch every time before waiting\non CommonWaitSet. An alternative would be to get rid of the concept\nof latches other than MyLatch, and change the function to\nWaitMyLatch(). A similar thing happens for exit_on_postmaster_death,\nfor which I didn't want to have a separate WES, so I just set that\nflag every time. 
Thoughts?\n\n0003: \"Use regular WaitLatch() for condition variables.\"\n\nThat mechanism doesn't need its own WES anymore.\n\n0004: \"Introduce RemoveWaitEvent().\"\n\nWe'll need that to be able to handle connections that are closed and\nreopened under the covers by libpq (replication, postgres_fdw). We\nalso wanted this on a couple of other threads for multiplexing FDWs,\nto be able to adjust the wait set dynamically for a proposed async\nAppend feature.\n\nThe implementation is a little naive, and leaves holes in the\n\"pollfds\" and \"handles\" arrays (for poll() and win32 implementations).\nThat could be improved with a bit more footwork in a later patch.\n\nXXX The Windows version is based on reading documentation. I'd be\nvery interested to know if check-world passes (especially\ncontrib/postgres_fdw and src/test/recovery). Unfortunately my\nappveyor.yml fu is not yet strong enough.\n\n0005: \"libpq: Add PQsocketChangeCount to advertise socket changes.\"\n\nTo support a long lived WES, libpq needs a way to tell us when the socket\nchanges underneath our feet. This is the simplest thing I could think\nof; better ideas welcome.\n\n0006: \"Reuse a WaitEventSet in libpqwalreceiver.c.\"\n\nRather than having all users of libpqwalreceiver.c deal with the\ncomplicated details of wait set management, have libpqwalreceiver\nexpose a waiting interface that understands socket changes.\n\nUnfortunately, I couldn't figure out how to use CommonWaitSet for this\n(ie adding and removing sockets to that as required), due to\ncomplications with the bookkeeping required to provide the fd_closed\nflag to RemoveWaitEvent(). So it creates its own internal long lived\nWaitEventSet.\n\n0007: \"Use a WaitEventSet for postgres_fdw.\"\n\nCreate a single WaitEventSet and use it for all FDW connections. 
By\nhaving our own dedicated WES, we can do the bookkeeping required to\nknow when sockets have been closed or need to be removed from kernel wait\nsets explicitly (which would be much harder to achieve if\nCommonWaitSet were to be used like that; you need to know when sockets\nare closed by other code, so you can provide fd_closed to\nRemoveWaitEvent()).\n\nConcretely, if you use just one postgres_fdw connection, you'll see\njust epoll_wait()/kevent() calls for waits, but whenever you switch\nbetween different connections, you'll see eg EPOLL_DEL/EV_DELETE\nfollowed by EPOLL_ADD/EV_ADD when the set is adjusted (in the kqueue\nimplementation these could be collapsed into the following wait, but I\nhaven't done the work for that). An alternative would be to have one\nWES per FDW connection, but that seemed wasteful of file descriptors.\n\n0008: \"Use WL_EXIT_ON_PM_DEATH in FeBeWaitSet.\"\n\nThe FATAL message you get if you happen to be waiting for IO rather\nthan waiting somewhere else seems arbitrarily different. By switching\nto a standard automatic exit, it opens the possibility of using\nFeBeWaitSet in a couple more places that would otherwise need to\ncreate their own WES (see also [1]). Thoughts?\n\n0009: \"Use FeBeWaitSet for walsender.c.\"\n\nEnabled by 0008.\n\n0010: \"Introduce a WaitEventSet for the stats collector.\"\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK%3Dm9dLrq42oWQ4XfK9iDjGiZVwpQ1HkHrAPfG7Kh681g%40mail.gmail.com",
"msg_date": "Thu, 27 Feb 2020 12:17:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "Hello.\n\nI looked at this.\n\nAt Thu, 27 Feb 2020 12:17:45 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Sat, Feb 8, 2020 at 10:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > Here are some patches to get rid of frequent system calls.\n> \n> Here's a new version of this patch set. It gets rid of all temporary\n> WaitEventSets except a couple I mentioned in another thread[1].\n> WaitLatch() uses CommonWaitSet, and calls to WaitLatchOrSocket() are\n> replaced by either the existing FeBeWaitSet (walsender, gssapi/openssl\n> auth are also candidates) or a special purpose long lived WaitEventSet\n> (replication, postgres_fdw, pgstats). It passes make check-world with\n> WAIT_USE_POLL, WAIT_USE_KQUEUE, WAIT_USE_EPOLL, all with and without\n> -DEXEC_BACKEND, and make check with WAIT_USE_WIN32 (Appveyor).\n> \n> 0001: \"Don't use EV_CLEAR for kqueue events.\"\n> \n> This fixes a problem in the kqueue implementation that only shows up\n> once you switch to long lived WaitEventSets. It needs to be\n> level-triggered like the other implementations, for example because\n> there's a place in the recovery tests where we wait twice in a row\n> without trying to do I/O in between. (This is a bit like commit\n> 3b790d256f8 that fixed a similar problem on Windows.)\n\nIt looks fine in light of the kqueue documentation.\n\n> 0002: \"Use a long lived WaitEventSet for WaitLatch().\"\n> \n> In the last version, I had a new function WaitMyLatch(), but that\n> didn't help with recoveryWakeupLatch. Switching between latches\n> doesn't require a syscall, so I didn't want to have a separate WES and\n> function just for that. So I went back to using plain old\n> WaitLatch(), and made it \"modify\" the latch every time before waiting\n> on CommonWaitSet. An alternative would be to get rid of the concept\n> of latches other than MyLatch, and change the function to\n> WaitMyLatch(). 
A similar thing happens for exit_on_postmaster_death,\n> for which I didn't want to have a separate WES, so I just set that\n> flag every time. Thoughts?\n\nIt is surely an improvement over creating a full-fledged WES\nevery time. The name CommonWaitSet gives an impression as if it is\nused for a variety of waitevent sets, but it is used only by\nWaitLatch. So I would name it LatchWaitSet. With that name I won't be\nsurprised that the function points at WL_LATCH_SET by index 0\nwithout any explanation when calling ModifyWaitSet.\n\n@@ -700,7 +739,11 @@ FreeWaitEventSet(WaitEventSet *set)\n \tReleaseExternalFD();\n #elif defined(WAIT_USE_KQUEUE)\n \tclose(set->kqueue_fd);\n-\tReleaseExternalFD();\n+\tif (set->kqueue_fd >= 0)\n+\t{\n+\t\tclose(set->kqueue_fd);\n+\t\tReleaseExternalFD();\n+\t}\n\nDid you forget to remove the close() outside the if block?\nDon't we need the same amendment for epoll_fd with kqueue_fd?\n\nWaitLatch is defined as \"If the latch is already set (and WL_LATCH_SET\nis given), the function returns immediately.\". But now the function\nreacts to the latch even if WL_LATCH_SET is not set. I think it is\nactually always set, so we need to modify the Assert and function comment\nto follow the change.\n\n> 0003: \"Use regular WaitLatch() for condition variables.\"\n> \n> That mechanism doesn't need its own WES anymore.\n\nLooks fine.\n\n> 0004: \"Introduce RemoveWaitEvent().\"\n> \n> We'll need that to be able to handle connections that are closed and\n> reopened under the covers by libpq (replication, postgres_fdw). 
We\n> also wanted this on a couple of other threads for multiplexing FDWs,\n> to be able to adjust the wait set dynamically for a proposed async\n> Append feature.\n> \n> The implementation is a little naive, and leaves holes in the\n> \"pollfds\" and \"handles\" arrays (for poll() and win32 implementations).\n> That could be improved with a bit more footwork in a later patch.\n> \n> XXX The Windows version is based on reading documentation. I'd be\n> very interested to know if check-world passes (especially\n> contrib/postgres_fdw and src/test/recovery). Unfortunately my\n> appveyor.yml fu is not yet strong enough.\n\nI didn't find the documentation about INVALID_HANDLE_VALUE in\nlpHandles. Could you give me the URL for that?\n\nI didn't run recoverycheck because I couldn't install IPC::Run\nfor ActivePerl. But contribcheck succeeded.\n\n+\tfor (int i = 0; i < nevents; ++i)\n+\t\tset->handles[i + 1] = INVALID_HANDLE_VALUE;\n\nIt accesses set->handles[nevents], which overruns the array.\n\n+\t/* Set up the free list. */\n+\tfor (int i = 0; i < nevents; ++i)\n+\t\tset->events[i].next_free = i + 1;\n+\tset->events[nevents - 1].next_free = -1;\n\nIt sets the last element twice. (harmless but useless).\n\n \tset->handles = (HANDLE) data;\n \tdata += MAXALIGN(sizeof(HANDLE) * nevents);\n\nIt is not an issue of this patch, but does handles need nevents + 1\nelements?\n\nWaitEventSetSize does not check its parameter range.\n\nI'll continue reviewing in a later mail.\n\n> 0005: \"libpq: Add PQsocketChangeCount to advertise socket changes.\"\n.... \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 10 Mar 2020 08:19:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "Hello.\n\nAt Tue, 10 Mar 2020 08:19:24 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> I'l continue reviewing in later mail.\nme> \nme> > 0005: \"libpq: Add PQsocketChangeCount to advertise socket changes.\"\nme> .... \n\nAt Thu, 27 Feb 2020 12:17:45 +1300, Thomas Munro <thomas.munro@gmail.com> wrote\nin \n> On Sat, Feb 8, 2020 at 10:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 0005: \"libpq: Add PQsocketChangeCount to advertise socket changes.\"\n> \n> To support a long lived WES, libpq needs a way tell us when the socket\n> changes underneath our feet. This is the simplest thing I could think\n> of; better ideas welcome.\n\nI think Windows is at least part of the reason for not just detecting a\nchange in the fd value. Instead of counting disconnections, we\ncould use a libpq event.\n\nPQregisterEventProc returns false for all of bad-parameter,\nalready-registered, out-of-memory and proc-rejection. I don't think that\nis a usable interface, so the attached 0005 patch fixes it. (I later\nfound it is not strictly needed after making 0007, but I included it\nas a proposal separate from this patch set. It does not include the\ncorresponding doc fix.)\n\n> 0006: \"Reuse a WaitEventSet in libpqwalreceiver.c.\"\n> \n> Rather than having all users of libpqwalreceiver.c deal with the\n> complicated details of wait set management, have libpqwalreceiver\n> expose a waiting interface that understands socket changes.\n\nLooks reasonable. The attached 0006 and 0007 are a possible\nreplacement if we use libpq-event.\n\n> Unfortunately, I couldn't figure out how to use CommonWaitSet for this\n> (ie adding and removing sockets to that as required), due to\n> complications with the bookkeeping required to provide the fd_closed\n> flag to RemoveWaitEvent(). So it creates its own internal long lived\n> WaitEventSet.\n\nAgreed, since they are used in different ways. 
But with the attached patch, a closed\nconnection is marked with wes_socket_position = -1.\n\n> 0007: \"Use a WaitEventSet for postgres_fdw.\"\n\nContinues..\n\nThe attached are:\n0001-0004 Not changed\n0005 Fix interface of PQregisterEventProc\n0006 Add new libpq event for this use.\n0007 Another version of \"0006 Reuse a WaitEventSet in\n libpqwalreceiver.c\" based on libpq event.\n0008-0011 Not changed (old 0007-0010, blindly appended)\n\nPassed the regression tests (including the TAP recovery test) up to here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 13 Mar 2020 16:21:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "At Fri, 13 Mar 2020 16:21:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > 0007: \"Use a WaitEventSet for postgres_fdw.\"\n> \n> Continues..\n\nAt Thu, 27 Feb 2020 12:17:45 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> 0007: \"Use a WaitEventSet for postgres_fdw.\"\n> \n> Create a single WaitEventSet and use it for all FDW connections. By\n> having our own dedicated WES, we can do the bookkeeping required to\n> know when sockets have been closed or need to removed from kernel wait\n> sets explicitly (which would be much harder to achieve if\n> CommonWaitSet were to be used like that; you need to know when sockets\n> are closed by other code, so you can provide fd_closed to\n> RemoveWaitEvent()).\n\nIt is straightforward and looks fine. If we use the libpq-event-based\nsolution instead of using the changecount, pgfdw_wait_for_socket and\ndisconnect_pg_server would be a bit simpler.\n\n> Concretely, if you use just one postgres_fdw connection, you'll see\n> just epoll_wait()/kevent() calls for waits, but whever you switch\n> between different connections, you'll see eg EPOLL_DEL/EV_DELETE\n> followed by EPOLL_ADD/EV_ADD when the set is adjusted (in the kqueue\n> implementation these could be collapse into the following wait, but I\n\nSuch syscall sequences are shorter than or equal to what we issue now,\nso this patch is still an improvement.\n\n> haven't done the work for that). An alternative would be to have one\n> WES per FDW connection, but that seemed wasteful of file descriptors.\n\nIn the multi-connection case, we can save both fds and syscalls with one\nWaitEventSet containing all connection fds together. WaitEventSetWait\nshould take an event mask parameter, or ModifyWaitEvent should set the\nmask, in that case. 
The event is now completely level-triggered so we\ncan safely ignore unwanted events.\n\n> 0008: \"Use WL_EXIT_ON_PM_DEATH in FeBeWaitSet.\"\n> \n> The FATAL message you get if you happen to be waiting for IO rather\n> than waiting somewhere else seems arbitrarily different. By switching\n> to a standard automatic exit, it opens the possibility of using\n> FeBeWaitSet in a couple more places that would otherwise need to\n> create their own WES (see also [1]). Thoughts?\n\nDo we really not need any message on backend disconnection due to\npostmaster death? As for the authentication code, it seems to me the\nrationale for not writing a log message is that the connection has not been\nestablished at the time. (That depends on what we think the\n\"connection\" is.)\n\nAnd I suppose that the reason for the authentication logic ignoring\nsignals is that clients expect that authentication should be completed\nas far as possible.\n\n> 0009: \"Use FeBeWaitSet for walsender.c.\"\n> \n> Enabled by 0008.\n\nIt works and doesn't change behavior. But I found it a bit difficult\nto see what events the WaitEventSetWait call waits for. Maybe a comment at\nthe caller sites would be sufficient. I think a comment about the\nbare number \"0\" as the event position of ModifyWaitEvent is also\nneeded.\n\n> 0010: \"Introduce a WaitEventSet for the stats collector.\"\n\nThis looks fine. The variable event is defined outside its effective\nscope, but almost all of the function's variables are defined the same way.\n\n\n\nBy the way I found that pqcomm.c uses -1 instead of PGINVALID_SOCKET\nfor AddWaitEventToSet.\n\n> [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK%3Dm9dLrq42oWQ4XfK9iDjGiZVwpQ1HkHrAPfG7Kh681g%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 30 Mar 2020 14:14:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "> On 13 Mar 2020, at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> The attached are:\n> 0001-0004 Not changed\n> 0005 Fix interface of PQregisterEventProc\n> 0006 Add new libpq event for this use.\n> 0007 Another version of \"0006 Reuse a WaitEventSet in\n> libpqwalreceiver.c\" based on libpq event.\n> 0008-0011 Not changed (old 0007-0010, blindly appended)\n\nSince 0001 has been applied already in 9b8aa0929390a, the patchtester is unable\nto make heads or tails of this patchset. Can you please submit a\nrebased version without the already applied changes?\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 2 Jul 2020 15:29:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 13 Mar 2020, at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> The attached are:\n>> 0001-0004 Not changed\n>> 0005 Fix interface of PQregisterEventProc\n>> 0006 Add new libpq event for this use.\n>> 0007 Another version of \"0006 Reuse a WaitEventSet in\n>> libpqwalreceiver.c\" based on libpq event.\n>> 0008-0011 Not changed (old 0007-0010, blindly appended)\n\n> Since 0001 has been applied already in 9b8aa0929390a, the patchtester is unable\n> to make heads or tails with trying this patchset. Can you please submit a\n> rebased version without the already applied changes?\n\nWhile I've not really looked at this patchset, I did happen to notice\n0005, and I think that's utterly unacceptable as written. You can't\nbreak API/ABI on a released libpq function. I suppose the patch is\nassuming that an enum return value is ABI-equivalent to int, but that\nassumption is faulty. Moreover, what's the point? None of the later\npatches require this AFAICS. (The patch could probably be salvaged\nABI-wise by making the error codes be integer #define's not an enum,\nbut I fail to see the point of changing this at all. I also don't\nmuch like the idea of allowing callers to assume that there is a fixed\nset of possible failure conditions for PQregisterEventProc. If we\nwanted to return error details, it'd likely be better to say that an\nerror message is left in conn->errorMessage.)\n\n0006 is an even more egregious ABI break; you can't renumber existing\nenum values without breaking applications. That in itself could be\nfixed by putting the new value at the end. 
But I'd still object very\nstrongly to 0006, because I do not think that it's acceptable to\nhave pqDropConnection correspond to an application-visible event.\nWe use that all over the place for cases that should not be\napplication-visible, for example when deciding that a connection attempt\nhas failed and moving on to the next candidate host. We already have\nPGEVT_CONNRESET and PGEVT_CONNDESTROY as application-visible connection\nstate change events, and I don't see why those aren't sufficient.\n\n(BTW, even if these weren't ABI breaks, where are the documentation\nchanges to go with them?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jul 2020 15:22:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Sun, Jul 12, 2020 at 7:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [...complaints about 0005 and 0006...] We already have\n> PGEVT_CONNRESET and PGEVT_CONNDESTROY as application-visible connection\n> state change events, and I don't see why those aren't sufficient.\n\nI think Horiguchi-san's general idea of using event callbacks for this\nsounds much more promising than my earlier idea of exposing a change\ncounter (that really was terrible). If it can be done with existing\nevents then that's even better. Perhaps he and/or I can look into\nthat for the next CF.\n\nIn the meantime, here's a rebase of the more straightforward patches\nin the stack. These are the ones that deal only with fixed sets of\nfile descriptors, and they survive check-world on Linux,\nLinux+EXEC_BACKEND (with ASLR disabled) and FreeBSD, and at least\ncheck on macOS and Windows (my CI recipes need more work to get\ncheck-world working on those two). There's one user-visible change\nthat I'd appreciate feedback on: I propose to drop the FATAL error\nwhen the postmaster goes away, to make things more consistent. See\nbelow for more on that.\n\nResponding to earlier review from Horiguchi-san:\n\nOn Tue, Mar 10, 2020 at 12:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > 0002: \"Use a long lived WaitEventSet for WaitLatch().\"\n> >\n> > In the last version, I had a new function WaitMyLatch(), but that\n> > didn't help with recoveryWakeupLatch. Switching between latches\n> > doesn't require a syscall, so I didn't want to have a separate WES and\n> > function just for that. So I went back to using plain old\n> > WaitLatch(), and made it \"modify\" the latch every time before waiting\n> > on CommonWaitSet. An alternative would be to get rid of the concept\n> > of latches other than MyLatch, and change the function to\n> > WaitMyLatch(). 
A similar thing happens for exit_on_postmaster_death,\n> > for which I didn't want to have a separate WES, so I just set that\n> > flag every time. Thoughts?\n>\n> It is surely an improvement from that we create a full-fledged WES\n> every time. The name CommonWaitSet gives an impression as if it is\n> used for a variety of waitevent sets, but it is used only by\n> WaitLatch. So I would name it LatchWaitSet. With that name I won't be\n> surprised by that the function is pointing WL_LATCH_SET by index 0\n> without any explanation when calling ModifyWaitSet.\n\nOk, I changed it to LatchWaitSet. I also replaced the index 0 with a\nsymbolic name LatchWaitSetLatchPos, to make that clearer.\n\n> @@ -700,7 +739,11 @@ FreeWaitEventSet(WaitEventSet *set)\n> ReleaseExternalFD();\n> #elif defined(WAIT_USE_KQUEUE)\n> close(set->kqueue_fd);\n> - ReleaseExternalFD();\n> + if (set->kqueue_fd >= 0)\n> + {\n> + close(set->kqueue_fd);\n> + ReleaseExternalFD();\n> + }\n>\n> Did you forget to remove the close() outside the if block?\n> Don't we need the same amendment for epoll_fd with kqueue_fd?\n\nHmm, maybe I screwed that up when resolving a conflict with the\nReleaseExternalFD() stuff. Fixed.\n\n> WaitLatch is defined as \"If the latch is already set (and WL_LATCH_SET\n> is given), the function returns immediately.\". But now the function\n> reacts to latch even if WL_LATCH_SET is not set. I think actually it\n> is alwys set so I think we need to modify Assert and function comment\n> following the change.\n\nIt seems a bit silly to call WaitLatch() if you don't want to wait for\na latch, but I think we can keep that comment and logic by assigning\nset->latch = NULL when you wait without WL_LATCH_SET. I tried that in\nthe attached.\n\n> > 0004: \"Introduce RemoveWaitEvent().\"\n> >\n> > We'll need that to be able to handle connections that are closed and\n> > reopened under the covers by libpq (replication, postgres_fdw). 
We\n> > also wanted this on a couple of other threads for multiplexing FDWs,\n> > to be able to adjust the wait set dynamically for a proposed async\n> > Append feature.\n> >\n> > The implementation is a little naive, and leaves holes in the\n> > \"pollfds\" and \"handles\" arrays (for poll() and win32 implementations).\n> > That could be improved with a bit more footwork in a later patch.\n> >\n> > XXX The Windows version is based on reading documentation. I'd be\n> > very interested to know if check-world passes (especially\n> > contrib/postgres_fdw and src/test/recovery). Unfortunately my\n> > appveyor.yml fu is not yet strong enough.\n>\n> I didn't find the documentation about INVALID_HANDLE_VALUE in\n> lpHandles. Could you give me the URL for that?\n\nI was looking for how you do the equivalent of Unix file descriptor -1\nin a call to poll(), and somewhere I read that INVALID_HANDLE_VALUE\nhas the right effect. I can't find that reference now. Apparently it\nworks because that's the pseudo-handle value -1 that is returned by\nGetCurrentProcess(), and waiting for your own process waits forever so\nit's a suitable value for holes in an array of event handles. We\nshould probably call GetCurrentProcess() instead, but really that is\njust stand-in code: we should rewrite it so that we don't need holes!\nThat might require a second array for use by the poll and win32\nimplementations. Let's return to that in a later CF?\n\nOn Mon, Mar 30, 2020 at 6:15 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > 0008: \"Use WL_EXIT_ON_PM_DEATH in FeBeWaitSet.\"\n> >\n> > The FATAL message you get if you happen to be waiting for IO rather\n> > than waiting somewhere else seems arbitrarily different. By switching\n> > to a standard automatic exit, it opens the possibility of using\n> > FeBeWaitSet in a couple more places that would otherwise need to\n> > create their own WES (see also [1]). 
Thoughts?\n>\n> Don't we really need any message on backend disconnection due to\n> postmaster death? As for authentication code, it seems to me the\n> rationale for not writing the log is that the connection has not been\n> established at the time. (That depends on what we think the\n> \"connection\" is.)\n\nTo me, it looks like this variation is just from a time when\npostmaster death handling was less standardised. The comment (removed\nby this patch) even introduces the topic of postmaster exit as if\nyou've never heard of it, in this arbitrary location in the tree, one\nwait point among many (admittedly one that is often reached). I don't\nthink there is any good reason for random timing to affect whether or\nnot you get a hand crafted FATAL message.\n\nHowever, if others don't like this change, we could drop this patch\nand still use FeBeWaitSet for walsender.c. It'd just need to handle\npostmaster death explicitly. It felt weird to be adding new code that\nhas to handle postmaster death explicitly, which is what led me to\nnotice that the existing FeBeWaitSet coding (and resulting user\nexperience) is different from other code.\n\n> > 0009: \"Use FeBeWaitSet for walsender.c.\"\n> >\n> > Enabled by 0008.\n>\n> It works and doesn't change behavior. But I found it a bit difficult\n> to find what events the WaitEventSetWait waits. Maybe a comment at\n> the caller sites would be sufficient. I think any comment about the\n> bare number \"0\" as the event position of ModifyWaitEvent is also\n> needed.\n\nAgreed. I changed it to use symbolic names\nFeBeWaitSet{Socket,Latch}Pos in the new code and also in the\npre-existing code like this.\n\n> By the way I found that pqcomm.c uses -1 instead of PGINVALID_SOCKET\n> for AddWaitEventToSet.\n\nOh yeah, that's a pre-existing problem. Fixed, since that code was\nchanged by the patch anyway.",
"msg_date": "Tue, 14 Jul 2020 18:51:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 6:51 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> In the meantime, here's a rebase of the more straightforward patches\n> in the stack. These are the ones that deal only with fixed sets of\n> file descriptors, and they survive check-world on Linux,\n> Linux+EXEC_BACKEND (with ASLR disabled) and FreeBSD, and at least\n> check on macOS and Windows (my CI recipes need more work to get\n> check-world working on those two). There's one user-visible change\n> that I'd appreciate feedback on: I propose to drop the FATAL error\n> when the postmaster goes away, to make things more consistent. See\n> below for more on that.\n\nHere's the effect of patches 0001-0003 on the number of relevant\nsystem calls generated by \"make check\" on Linux and FreeBSD, according\nto strace/truss -f -c:\n\nepoll_create1: 4,825 -> 865\nepoll_ctl: 12,454 -> 2,721\nepoll_wait: ~45k -> ~45k\nclose: ~81k -> ~77k\n\nkqueue: 4,618 -> 866\nkevent: ~54k -> ~46k\nclose: ~65k -> ~61k\n\nI pushed those three patches, but will wait for more discussion on the rest.\n\n\n",
"msg_date": "Thu, 30 Jul 2020 17:50:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Thu, Jul 30, 2020 at 5:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I pushed those three patches, but will wait for more discussion on the rest.\n\nAnd here's a rebase, to make cfbot happy.",
"msg_date": "Fri, 31 Jul 2020 09:42:03 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Fri, Jul 31, 2020 at 9:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jul 30, 2020 at 5:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I pushed those three patches, but will wait for more discussion on the rest.\n>\n> And here's a rebase, to make cfbot happy.\n\nAnd again.\n\nTo restate the two goals of the remaining patches:\n\n1. Use FeBeWaitSet for walsender, instead of setting up and tearing\ndown temporary WaitEventSets all the time.\n2. Use the standard WL_EXIT_ON_PM_DEATH flag for FeBeWaitSet, instead\nof the special case ereport(FATAL, ... \"terminating connection due to\nunexpected postmaster exit\" ...).\n\nFor point 2, the question I am raising is: why should users get a\nspecial FATAL message in some places and not others, for PM death?\nHowever, if people are attached to that behaviour, we could still\nachieve goal 1 without goal 2 by handling PM death explicitly in\nwalsender.c and I'd be happy to post an alternative patch set like\nthat.",
"msg_date": "Tue, 5 Jan 2021 18:10:09 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 6:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> For point 2, the question I am raising is: why should users get a\n> special FATAL message in some places and not others, for PM death?\n> However, if people are attached to that behaviour, we could still\n> achieve goal 1 without goal 2 by handling PM death explicitly in\n> walsender.c and I'd be happy to post an alternative patch set like\n> that.\n\nHere's the alternative patch set, with no change to existing error\nmessage behaviour. I'm going to commit this version and close this CF\nitem soon if there are no objections.",
"msg_date": "Sat, 27 Feb 2021 14:48:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
},
{
"msg_contents": "On Sat, Feb 27, 2021 at 2:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's the alternative patch set, with no change to existing error\n> message behaviour. I'm going to commit this version and close this CF\n> item soon if there are no objections.\n\nPushed.\n\nThat leaves just walreceiver and postgres_fdw in need of WaitEventSet\nimprovements along these lines. The raw ingredients for that were\npresent in earlier patches, and I think Horiguchi-san had the right\nidea: we should use the libpq event system to adjust our WES as\nappropriate when it changes the socket underneath us. I will leave\nthis CF entry open a bit longer in case he would like to post an\nupdated version of that part (considering Tom's feedback[1]). If not,\nwe can close this and try again in the next cycle.\n\n[1] https://www.postgresql.org/message-id/2446176.1594495351%40sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 1 Mar 2021 16:21:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing WaitEventSet syscall churn"
}
] |
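The syscall churn this thread eliminates can be sketched outside PostgreSQL. The snippet below is an illustrative Python analogy (not PostgreSQL code; the function names are invented): `wait_with_churn` builds and tears down an event set for every wait, the pattern walsender used before the fix, while `wait_with_reuse` registers once on a long-lived set, analogous to the FeBeWaitSet approach.

```python
import os
import selectors

# Churn pattern: create, register, and close the event set on every
# wait -- one full setup/teardown (and its syscalls) per cycle.
def wait_with_churn(fd, cycles):
    ready = 0
    for _ in range(cycles):
        sel = selectors.DefaultSelector()
        sel.register(fd, selectors.EVENT_READ)
        ready += len(sel.select(timeout=0))
        sel.close()
    return ready

# Reuse pattern: one long-lived event set registered up front; each
# wait is just a poll on the already-built set.
def wait_with_reuse(fd, cycles):
    sel = selectors.DefaultSelector()
    sel.register(fd, selectors.EVENT_READ)
    ready = sum(len(sel.select(timeout=0)) for _ in range(cycles))
    sel.close()
    return ready

r, w = os.pipe()
os.write(w, b"x")  # make the read end readable
assert wait_with_churn(r, 5) == wait_with_reuse(r, 5) == 5
```

On Linux, `selectors.DefaultSelector` is epoll-backed, so the churn variant pays an `epoll_create1`/`epoll_ctl`/`close` round trip per wait that the reuse variant avoids — the same shape of saving the patches aim for.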
[
{
"msg_contents": "In TODO wiki:\nhttps://wiki.postgresql.org/wiki/TODO\n\n- Allow a stalled COPY to exit if the backend is terminated\n\n Re: possible bug not in open items\n https://www.postgresql.org/message-id/flat/200904091648.n39GmMJ07139%40momjian.us#d86f9ba37b4b34d3931c7152a028fe45\n\nHasn't this been fixed?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 21 Jan 2020 15:06:11 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "TODO: Allow a stalled COPY to exit if the backend is terminated"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen I was researching the maximum length of password in PostgreSQL\nto answer the question from my customer, I found that there are two\nminor issues in .pgpass file.\n\n(1) If the length of a line in .pgpass file is larger than 319B,\n libpq silently treats each 319B in the line as a separate\n setting line.\n\n(2) The document explains that a line beginning with # is treated\n as a comment in .pgpass. But as far as I read the code,\n there is no code doing such special handling. Whether a line\n begins with # or not, libpq just checks that the first token\n in the line match with the host. That is, if you try to connect\n to the host with the hostname beginning with #,\n it can match to the line beginning with # in .pgpass.\n\n Also if the length of that \"comment\" line is larger than 319B,\n the latter part of the line can be treated as valid setting.\n\nYou may think that these unexpected behaviors are not so harmful\nin practice because \"usually\" the length of password setting line is\nless than 319B and the hostname beginning with # is less likely to be\nused. But the problem exists. And there are people who want to use\nlarge password or to write a long comment (e.g., with multibyte\ncharacters like Japanese) in .pgass, so these may be more harmful\nin the near future.\n\nFor (1), I think that we should make libpq warn if the length of a line\nis larger than 319B, and throw away the remaining part beginning from\n320B position. Whether to enlarge the length of a line should be\na separate discussion, I think.\n\nFor (2), libpq should treat any lines beginning with # as comments.\n\nI've not created the patch yet, but will do if we reach to\nthe consensus.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 21 Jan 2020 15:27:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Minor issues in .pgpass"
},
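Issue (2) above can be illustrated with a toy model of the matching logic (hypothetical Python for illustration only; libpq's real parser is C in fe-connect.c and also handles `:` escaping, omitted here): because nothing skips lines whose first byte is `#`, a host literally named `#example` matches a line that was intended as a comment.

```python
def first_token(line):
    # Toy tokenizer: real libpq splits on unescaped ':' characters.
    return line.split(":", 1)[0]

pgpass_lines = [
    "# production servers below",
    "#example:5432:db:user:secret",
]
host = "#example"

# With no special-casing of '#', the "comment" line matches as a
# real password entry for this hostname.
matches = [l for l in pgpass_lines if first_token(l) == host]
assert matches == ["#example:5432:db:user:secret"]
```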
{
"msg_contents": "> On 21 Jan 2020, at 07:27, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> For (2), libpq should treat any lines beginning with # as comments.\n\nI haven't read the code to confirm that it really isn't, but +1 on making it\nso. I can't see a reason for allowing a hostname to start with #, but allowing\ncomments does seem useful.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 21 Jan 2020 22:59:50 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 03:27:50PM +0900, Fujii Masao wrote:\n> Hi,\n> \n> When I was researching the maximum length of password in PostgreSQL\n> to answer the question from my customer, I found that there are two\n> minor issues in .pgpass file.\n> \n> (1) If the length of a line in .pgpass file is larger than 319B,\n> libpq silently treats each 319B in the line as a separate\n> setting line.\n\nThis seems like a potentially serious bug. For example, a truncated\npassword could get retried enough times to raise intruder alarms, and\nit wouldn't be easy to track down.\n\n> (2) The document explains that a line beginning with # is treated\n> as a comment in .pgpass. But as far as I read the code,\n> there is no code doing such special handling.\n\nThis is a flat-out bug, as it violates a promise the documentation has\nmade.\n\n> Also if the length of that \"comment\" line is larger than 319B,\n> the latter part of the line can be treated as valid setting.\n\n> You may think that these unexpected behaviors are not so harmful\n> in practice because \"usually\" the length of password setting line is\n> less than 319B and the hostname beginning with # is less likely to be\n> used. But the problem exists. And there are people who want to use\n> large password or to write a long comment (e.g., with multibyte\n> characters like Japanese) in .pgass, so these may be more harmful\n> in the near future.\n> \n> For (1), I think that we should make libpq warn if the length of a line\n> is larger than 319B, and throw away the remaining part beginning from\n> 320B position. 
Whether to enlarge the length of a line should be\n> a separate discussion, I think.\n\nAgreed.\n\n> For (2), libpq should treat any lines beginning with # as comments.\n\nWould it make sense for lines starting with whitespace and then # to\nbe treated as comments, too, e.g.:\n\n # Please don't treat this as a parameter\n\n?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 22 Jan 2020 01:06:21 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "On 2020/01/22 9:06, David Fetter wrote:\n> On Tue, Jan 21, 2020 at 03:27:50PM +0900, Fujii Masao wrote:\n>> Hi,\n>>\n>> When I was researching the maximum length of password in PostgreSQL\n>> to answer the question from my customer, I found that there are two\n>> minor issues in .pgpass file.\n>>\n>> (1) If the length of a line in .pgpass file is larger than 319B,\n>> libpq silently treats each 319B in the line as a separate\n>> setting line.\n> \n> This seems like a potentially serious bug. For example, a truncated\n> password could get retried enough times to raise intruder alarms, and\n> it wouldn't be easy to track down.\n> \n>> (2) The document explains that a line beginning with # is treated\n>> as a comment in .pgpass. But as far as I read the code,\n>> there is no code doing such special handling.\n> \n> This is a flat-out bug, as it violates a promise the documentation has\n> made.\n> \n>> Also if the length of that \"comment\" line is larger than 319B,\n>> the latter part of the line can be treated as valid setting.\n> \n>> You may think that these unexpected behaviors are not so harmful\n>> in practice because \"usually\" the length of password setting line is\n>> less than 319B and the hostname beginning with # is less likely to be\n>> used. But the problem exists. And there are people who want to use\n>> large password or to write a long comment (e.g., with multibyte\n>> characters like Japanese) in .pgass, so these may be more harmful\n>> in the near future.\n>>\n>> For (1), I think that we should make libpq warn if the length of a line\n>> is larger than 319B, and throw away the remaining part beginning from\n>> 320B position. Whether to enlarge the length of a line should be\n>> a separate discussion, I think.\n> \n> Agreed.\n> \n>> For (2), libpq should treat any lines beginning with # as comments.\n\nPatch attached. 
This patch does the above (1) and (2).\n\n> Would it make sense for lines starting with whitespace and then # to\n> be treated as comments, too, e.g.:\n\nCould you tell me why you want to treat such a line as a comment?\nBasically I don't want to change the existing rules for parsing\n.pgpass file more than necessary.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Thu, 13 Feb 2020 02:01:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nFirst of all, this seems like fixing a valid issue, albeit, the probability of somebody messing is low, but it is still better to fix this problem.\r\n\r\nI've not tested the patch in any detail, however, there are a couple of comments I have before I proceed on with detailed testing.\r\n\r\n1. pgindent is showing a few issues with formatting. Please have a look and resolve those.\r\n2. I think you can potentially use \"len\" variable instead of introducing \"buflen\" and \"tmplen\" variables. Also, I would choose a more appropriate name for \"tmp\" variable.\r\n\r\nI believe if you move the following lines before the conditional statement and simply and change the if statement to \"if (len >= sizeof(buf) - 1)\", it will serve the purpose.\r\n========================================\r\n/* strip trailing newline and carriage return */\r\nlen = pg_strip_crlf(buf);\r\n\r\nif (len == 0)\r\n continue;\r\n========================================\r\n\r\nSo, the patch should look like this in my opinion (ignore the formatting issues as this is just to give you an idea of what I mean):\r\n\r\ndiff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c\r\nindex 408000a..6ca262f 100644\r\n--- a/src/interfaces/libpq/fe-connect.c\r\n+++ b/src/interfaces/libpq/fe-connect.c\r\n@@ -6949,6 +6949,7 @@ passwordFromFile(const char *hostname, const char *port, const char *dbname,\r\n {\r\n FILE *fp;\r\n struct stat stat_buf;\r\n+ int line_number = 0;\r\n \r\n #define LINELEN NAMEDATALEN*5\r\n char buf[LINELEN];\r\n@@ -7018,10 +7019,40 @@ passwordFromFile(const char *hostname, const char *port, const char *dbname,\r\n if (fgets(buf, sizeof(buf), fp) == NULL)\r\n break;\r\n \r\n- /* strip trailing newline and carriage return */\r\n- len = 
pg_strip_crlf(buf);\r\n+ line_number++;\r\n \r\n- if (len == 0)\r\n+ /* strip trailing newline and carriage return */\r\n+ len = pg_strip_crlf(buf);\r\n+\r\n+ if (len == 0)\r\n+ continue;\r\n+\r\n+ if (len >= sizeof(buf) - 1)\r\n+ {\r\n+ char tmp[LINELEN];\r\n+\r\n+ /*\r\n+ * Warn if this password setting line is too long,\r\n+ * because it's unexpectedly truncated.\r\n+ */\r\n+ if (buf[0] != '#')\r\n+ fprintf(stderr,\r\n+ libpq_gettext(\"WARNING: line %d too long in password file \\\"%s\\\"\\n\"),\r\n+ line_number, pgpassfile);\r\n+\r\n+ /* eat rest of the line */\r\n+ while (!feof(fp) && !ferror(fp))\r\n+ {\r\n+ if (fgets(tmp, sizeof(tmp), fp) == NULL)\r\n+ break;\r\n+ len = strlen(tmp);\r\n+ if (len < sizeof(tmp) -1 || tmp[len - 1] == '\\n')\r\n+ break;\r\n+ }\r\n+ }\r\n+\r\n+ /* ignore comments */\r\n+ if (buf[0] == '#')\r\n\r\n---\r\nHighgo Software (Canada/China/Pakistan)\r\nURL : www.highgo.ca\r\nADDR: 10318 WHALLEY BLVD, Surrey, BC\r\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\r\nSKYPE: engineeredvirus\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Fri, 28 Feb 2020 15:46:18 +0000",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
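The behaviour the patch implements can be sketched in Python (a hypothetical re-implementation for illustration only; libpq's version is C in passwordFromFile(), where LINELEN is NAMEDATALEN*5): detect that a read filled the buffer without reaching a newline, warn, and discard the rest of the over-long line instead of parsing it as a fresh entry.

```python
import io

LINELEN = 320  # mirrors libpq's buffer size; exact value is an assumption

def read_pgpass_lines(fp):
    """Yield complete, non-comment lines; warn on over-long ones and
    discard their tails rather than treating them as new entries."""
    line_number = 0
    while True:
        buf = fp.readline(LINELEN - 1)   # mimics fgets(buf, LINELEN, fp)
        if buf == "":
            break
        line_number += 1
        truncated = len(buf) == LINELEN - 1 and not buf.endswith("\n")
        buf = buf.rstrip("\r\n")          # pg_strip_crlf()
        if truncated:
            if not buf.startswith("#"):
                print(f"WARNING: line {line_number} too long")
            # eat the rest of the over-long line
            while True:
                rest = fp.readline(LINELEN - 1)
                if rest == "" or rest.endswith("\n"):
                    break
        if buf == "" or buf.startswith("#"):
            continue                      # blank line or comment
        yield buf

# Demo: an over-long comment no longer spills into a bogus entry.
data = "#" + "c" * 400 + "\nlocalhost:5432:db:user:" + "p" * 400 + "\n"
entries = list(read_pgpass_lines(io.StringIO(data)))
assert len(entries) == 1 and entries[0].startswith("localhost")
```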
{
"msg_contents": "\n\nOn 2020/02/29 0:46, Hamid Akhtar wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> First of all, this seems like fixing a valid issue, albeit, the probability of somebody messing is low, but it is still better to fix this problem.\n> \n> I've not tested the patch in any detail, however, there are a couple of comments I have before I proceed on with detailed testing.\n\nThanks for the review and comments!\n\n> 1. pgindent is showing a few issues with formatting. Please have a look and resolve those.\n\nYes.\n\n> 2. I think you can potentially use \"len\" variable instead of introducing \"buflen\" and \"tmplen\" variables.\n\nBasically I don't want to use the same variable for several purposes\nbecause which would decrease the code readability.\n\n> Also, I would choose a more appropriate name for \"tmp\" variable.\n\nYeah, so what about \"rest\" as the variable name?\n\n> I believe if you move the following lines before the conditional statement and simply and change the if statement to \"if (len >= sizeof(buf) - 1)\", it will serve the purpose.\n\nISTM that this doesn't work correctly when the \"buf\" contains\ntrailing carriage returns but not newlines (i.e., this line is too long\nso the \"buf\" doesn't include newline). In this case, pg_strip_crlf()\nshorten the \"buf\" and then its return value \"len\" should become\nless than sizeof(buf). So the following condition always becomes\nfalse unexpectedly in that case even though there is still rest of\nthe line to eat.\n\n> + if (len >= sizeof(buf) - 1)\n> + {\n> + char tmp[LINELEN];\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 2 Mar 2020 22:07:14 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/02/29 0:46, Hamid Akhtar wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: not tested\n> > Implements feature: not tested\n> > Spec compliant: not tested\n> > Documentation: not tested\n> >\n> > First of all, this seems like fixing a valid issue, albeit, the\n> probability of somebody messing is low, but it is still better to fix this\n> problem.\n> >\n> > I've not tested the patch in any detail, however, there are a couple of\n> comments I have before I proceed on with detailed testing.\n>\n> Thanks for the review and comments!\n>\n> > 1. pgindent is showing a few issues with formatting. Please have a look\n> and resolve those.\n>\n> Yes.\n>\n> > 2. I think you can potentially use \"len\" variable instead of introducing\n> \"buflen\" and \"tmplen\" variables.\n>\n> Basically I don't want to use the same variable for several purposes\n> because which would decrease the code readability.\n>\n> > Also, I would choose a more appropriate name for \"tmp\" variable.\n>\n> Yeah, so what about \"rest\" as the variable name?\n>\n> > I believe if you move the following lines before the conditional\n> statement and simply and change the if statement to \"if (len >= sizeof(buf)\n> - 1)\", it will serve the purpose.\n>\n> ISTM that this doesn't work correctly when the \"buf\" contains\n> trailing carriage returns but not newlines (i.e., this line is too long\n> so the \"buf\" doesn't include newline). In this case, pg_strip_crlf()\n> shorten the \"buf\" and then its return value \"len\" should become\n> less than sizeof(buf). 
So the following condition always becomes\n> false unexpectedly in that case even though there is still rest of\n> the line to eat.\n>\n\nPer code comments for pg_strip_crlf:\n\"pg_strip_crlf -- Remove any trailing newline and carriage return\"\n\nIf the buf read contains a newline or a carriage return at the end, then\nclearly the line\nis not exceeding the sizeof(buf). If alternatively, it doesn't, then\npg_strip_crlf will have\nno effect on string length and for any lines exceeding sizeof(buf), the\nfollowing conditional\nstatement becomes true.\n\n\n> > + if (len >= sizeof(buf) - 1)\n> > + {\n> > + char tmp[LINELEN];\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> NTT DATA CORPORATION\n> Advanced Platform Technology Group\n> Research and Development Headquarters\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Tue, 3 Mar 2020 17:38:14 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "On Tue, Mar 3, 2020 at 5:38 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n>\n>\n> On Mon, Mar 2, 2020 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n>\n>>\n>>\n>> On 2020/02/29 0:46, Hamid Akhtar wrote:\n>> > The following review has been posted through the commitfest application:\n>> > make installcheck-world: not tested\n>> > Implements feature: not tested\n>> > Spec compliant: not tested\n>> > Documentation: not tested\n>> >\n>> > First of all, this seems like fixing a valid issue, albeit, the\n>> probability of somebody messing is low, but it is still better to fix this\n>> problem.\n>> >\n>> > I've not tested the patch in any detail, however, there are a couple of\n>> comments I have before I proceed on with detailed testing.\n>>\n>> Thanks for the review and comments!\n>>\n>> > 1. pgindent is showing a few issues with formatting. Please have a look\n>> and resolve those.\n>>\n>> Yes.\n>>\n>> > 2. I think you can potentially use \"len\" variable instead of\n>> introducing \"buflen\" and \"tmplen\" variables.\n>>\n>> Basically I don't want to use the same variable for several purposes\n>> because which would decrease the code readability.\n>>\n>\nThat is fine.\n\n\n>\n>> > Also, I would choose a more appropriate name for \"tmp\" variable.\n>>\n>> Yeah, so what about \"rest\" as the variable name?\n>>\n>\nMay be something like \"excess_buf\" or any other one that describes that\nthese bytes are to be discarded.\n\n\n>\n>> > I believe if you move the following lines before the conditional\n>> statement and simply and change the if statement to \"if (len >= sizeof(buf)\n>> - 1)\", it will serve the purpose.\n>>\n>> ISTM that this doesn't work correctly when the \"buf\" contains\n>> trailing carriage returns but not newlines (i.e., this line is too long\n>> so the \"buf\" doesn't include newline). In this case, pg_strip_crlf()\n>> shorten the \"buf\" and then its return value \"len\" should become\n>> less than sizeof(buf). 
So the following condition always becomes\n>> false unexpectedly in that case even though there is still rest of\n>> the line to eat.\n>>\n>\n> Per code comments for pg_strip_crlf:\n> \"pg_strip_crlf -- Remove any trailing newline and carriage return\"\n>\n> If the buf read contains a newline or a carriage return at the end, then\n> clearly the line\n> is not exceeding the sizeof(buf). If alternatively, it doesn't, then\n> pg_strip_crlf will have\n> no effect on string length and for any lines exceeding sizeof(buf), the\n> following conditional\n> statement becomes true.\n>\n>\n>> > + if (len >= sizeof(buf) - 1)\n>> > + {\n>> > + char tmp[LINELEN];\n>>\n>> Regards,\n>>\n>> --\n>> Fujii Masao\n>> NTT DATA CORPORATION\n>> Advanced Platform Technology Group\n>> Research and Development Headquarters\n>>\n>\n>\n> --\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n> CELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\n> SKYPE: engineeredvirus\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Tue, 3 Mar 2020 18:07:10 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "\n\nOn 2020/03/03 21:38, Hamid Akhtar wrote:\n> \n> \n> On Mon, Mar 2, 2020 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> \n> On 2020/02/29 0:46, Hamid Akhtar wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: not tested\n> > Implements feature: not tested\n> > Spec compliant: not tested\n> > Documentation: not tested\n> >\n> > First of all, this seems like fixing a valid issue, albeit, the probability of somebody messing is low, but it is still better to fix this problem.\n> >\n> > I've not tested the patch in any detail, however, there are a couple of comments I have before I proceed on with detailed testing.\n> \n> Thanks for the review and comments!\n> \n> > 1. pgindent is showing a few issues with formatting. Please have a look and resolve those.\n> \n> Yes.\n> \n> > 2. I think you can potentially use \"len\" variable instead of introducing \"buflen\" and \"tmplen\" variables.\n> \n> Basically I don't want to use the same variable for several purposes\n> because which would decrease the code readability.\n> \n> > Also, I would choose a more appropriate name for \"tmp\" variable.\n> \n> Yeah, so what about \"rest\" as the variable name?\n> \n> > I believe if you move the following lines before the conditional statement and simply and change the if statement to \"if (len >= sizeof(buf) - 1)\", it will serve the purpose.\n> \n> ISTM that this doesn't work correctly when the \"buf\" contains\n> trailing carriage returns but not newlines (i.e., this line is too long\n> so the \"buf\" doesn't include newline). In this case, pg_strip_crlf()\n> shorten the \"buf\" and then its return value \"len\" should become\n> less than sizeof(buf). 
So the following condition always becomes\n> false unexpectedly in that case even though there is still rest of\n> the line to eat.\n> \n> \n> Per code comments for pg_strip_crlf:\n> \"pg_strip_crlf -- Remove any trailing newline and carriage return\"\n> If the buf read contains a newline or a carriage return at the end, then clearly the line\n> is not exceeding the sizeof(buf).\n\nNo if the length of the setting line exceeds sizeof(buf) and\nthe buf contains only a carriage return at the end and not newline.\nThis case can happen because fgets() stops reading when a newline\n(not a carriage return) is found. Normal users are very unlikely to\nadd a carriage return into the middle of the pgpass setting line\nin practice, though. But IMO the code should handle even this\ncase because it *can* happen, if the code is not so complicated.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 4 Mar 2020 00:57:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "On 2020/03/03 22:07, Hamid Akhtar wrote:\n> On Tue, Mar 3, 2020 at 5:38 PM Hamid Akhtar <hamid.akhtar@gmail.com <mailto:hamid.akhtar@gmail.com>> wrote:\n> \n> \n> \n> On Mon, Mar 2, 2020 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> \n> On 2020/02/29 0:46, Hamid Akhtar wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: not tested\n> > Implements feature: not tested\n> > Spec compliant: not tested\n> > Documentation: not tested\n> >\n> > First of all, this seems like fixing a valid issue, albeit, the probability of somebody messing is low, but it is still better to fix this problem.\n> >\n> > I've not tested the patch in any detail, however, there are a couple of comments I have before I proceed on with detailed testing.\n> \n> Thanks for the review and comments!\n> \n> > 1. pgindent is showing a few issues with formatting. Please have a look and resolve those.\n> \n> Yes.\n\nFixed. Attached is the updated version of the patch.\nI marked this CF entry as \"Needs Review\" again.\n\n> > 2. I think you can potentially use \"len\" variable instead of introducing \"buflen\" and \"tmplen\" variables.\n> \n> Basically I don't want to use the same variable for several purposes\n> because which would decrease the code readability.\n> \n> \n> That is fine.\n> \n> \n> > Also, I would choose a more appropriate name for \"tmp\" variable.\n> \n> Yeah, so what about \"rest\" as the variable name?\n> \n> \n> May be something like \"excess_buf\" or any other one that describes that these bytes are to be discarded.\n\nThanks for the comment! But IMO that \"rest\" is not\nso bad choice, so for now I used \"rest\" in the latest patch.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 4 Mar 2020 19:04:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "On Tue, Mar 3, 2020 at 8:57 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/03/03 21:38, Hamid Akhtar wrote:\n> >\n> >\n> > On Mon, Mar 2, 2020 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com\n> <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >\n> >\n> >\n> > On 2020/02/29 0:46, Hamid Akhtar wrote:\n> > > The following review has been posted through the commitfest\n> application:\n> > > make installcheck-world: not tested\n> > > Implements feature: not tested\n> > > Spec compliant: not tested\n> > > Documentation: not tested\n> > >\n> > > First of all, this seems like fixing a valid issue, albeit, the\n> probability of somebody messing is low, but it is still better to fix this\n> problem.\n> > >\n> > > I've not tested the patch in any detail, however, there are a\n> couple of comments I have before I proceed on with detailed testing.\n> >\n> > Thanks for the review and comments!\n> >\n> > > 1. pgindent is showing a few issues with formatting. Please have\n> a look and resolve those.\n> >\n> > Yes.\n> >\n> > > 2. I think you can potentially use \"len\" variable instead of\n> introducing \"buflen\" and \"tmplen\" variables.\n> >\n> > Basically I don't want to use the same variable for several purposes\n> > because which would decrease the code readability.\n> >\n> > > Also, I would choose a more appropriate name for \"tmp\" variable.\n> >\n> > Yeah, so what about \"rest\" as the variable name?\n> >\n> > > I believe if you move the following lines before the conditional\n> statement and simply and change the if statement to \"if (len >= sizeof(buf)\n> - 1)\", it will serve the purpose.\n> >\n> > ISTM that this doesn't work correctly when the \"buf\" contains\n> > trailing carriage returns but not newlines (i.e., this line is too\n> long\n> > so the \"buf\" doesn't include newline). In this case, pg_strip_crlf()\n> > shorten the \"buf\" and then its return value \"len\" should become\n> > less than sizeof(buf). 
So the following condition always becomes\n> > false unexpectedly in that case even though there is still rest of\n> > the line to eat.\n> >\n> >\n> > Per code comments for pg_strip_crlf:\n> > \"pg_strip_crlf -- Remove any trailing newline and carriage return\"\n> > If the buf read contains a newline or a carriage return at the end, then\n> clearly the line\n> > is not exceeding the sizeof(buf).\n>\n> No if the length of the setting line exceeds sizeof(buf) and\n> the buf contains only a carriage return at the end and not newline.\n> This case can happen because fgets() stops reading when a newline\n> (not a carriage return) is found. Normal users are very unlikely to\n> add a carriage return into the middle of the pgpass setting line\n> in practice, though. But IMO the code should handle even this\n> case because it *can* happen, if the code is not so complicated.\n>\n\nI'm not sure if I understand your comment here. From the code of\npg_strip_crlf\nI see that it is handling both carriage return and/or new line at the end\nof a\nstring:\n=============\nsrc/common/string.c\n=============\nwhile (len > 0 && (str[len - 1] == '\\n' || str[len - 1] == '\\r'))\n str[--len] = '\\0';\n=============\n\n\n> Regards,\n>\n>\n> --\n> Fujii Masao\n> NTT DATA CORPORATION\n> Advanced Platform Technology Group\n> Research and Development Headquarters\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Wed, 4 Mar 2020 16:39:55 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "\n\nOn 2020/03/04 20:39, Hamid Akhtar wrote:\n> \n> \n> On Tue, Mar 3, 2020 at 8:57 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> \n> On 2020/03/03 21:38, Hamid Akhtar wrote:\n> >\n> >\n> > On Mon, Mar 2, 2020 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> wrote:\n> >\n> >\n> >\n> > On 2020/02/29 0:46, Hamid Akhtar wrote:\n> > > The following review has been posted through the commitfest application:\n> > > make installcheck-world: not tested\n> > > Implements feature: not tested\n> > > Spec compliant: not tested\n> > > Documentation: not tested\n> > >\n> > > First of all, this seems like fixing a valid issue, albeit, the probability of somebody messing is low, but it is still better to fix this problem.\n> > >\n> > > I've not tested the patch in any detail, however, there are a couple of comments I have before I proceed on with detailed testing.\n> >\n> > Thanks for the review and comments!\n> >\n> > > 1. pgindent is showing a few issues with formatting. Please have a look and resolve those.\n> >\n> > Yes.\n> >\n> > > 2. I think you can potentially use \"len\" variable instead of introducing \"buflen\" and \"tmplen\" variables.\n> >\n> > Basically I don't want to use the same variable for several purposes\n> > because which would decrease the code readability.\n> >\n> > > Also, I would choose a more appropriate name for \"tmp\" variable.\n> >\n> > Yeah, so what about \"rest\" as the variable name?\n> >\n> > > I believe if you move the following lines before the conditional statement and simply and change the if statement to \"if (len >= sizeof(buf) - 1)\", it will serve the purpose.\n> >\n> > ISTM that this doesn't work correctly when the \"buf\" contains\n> > trailing carriage returns but not newlines (i.e., this line is too long\n> > so the \"buf\" doesn't include newline). 
In this case, pg_strip_crlf()\n> > shorten the \"buf\" and then its return value \"len\" should become\n> > less than sizeof(buf). So the following condition always becomes\n> > false unexpectedly in that case even though there is still rest of\n> > the line to eat.\n> >\n> >\n> > Per code comments for pg_strip_crlf:\n> > \"pg_strip_crlf -- Remove any trailing newline and carriage return\"\n> > If the buf read contains a newline or a carriage return at the end, then clearly the line\n> > is not exceeding the sizeof(buf).\n> \n> No if the length of the setting line exceeds sizeof(buf) and\n> the buf contains only a carriage return at the end and not newline.\n> This case can happen because fgets() stops reading when a newline\n> (not a carriage return) is found. Normal users are very unlikely to\n> add a carriage return into the middle of the pgpass setting line\n> in practice, though. But IMO the code should handle even this\n> case because it *can* happen, if the code is not so complicated.\n> \n> \n> I'm not sure if I understand your comment here. From the code of pg_strip_crlf\n> I see that it is handling both carriage return and/or new line at the end of a\n> string:\n\nSo if \"buf\" contains a carriage return at the end, it's removed and\nthe \"len\" that pg_strip_crlf() returns obviously should be smaller\nthan sizeof(buf). This causes the following condition that you\nproposed as follows to always be false (i.e., len < sizeof(buf) - 1)\neven when there are still rest of line. So we cannot eat rest of\nthe line even though it exists. 
I'm missing something?\n\n+ if (len >= sizeof(buf) - 1)\n+ {\n+ char tmp[LINELEN];\n+\n+ /*\n+ * Warn if this password setting line is too long,\n+ * because it's unexpectedly truncated.\n+ */\n+ if (buf[0] != '#')\n+ fprintf(stderr,\n+ libpq_gettext(\"WARNING: line %d too long in password file \\\"%s\\\"\\n\"),\n+ line_number, pgpassfile);\n+\n+ /* eat rest of the line */\n+ while (!feof(fp) && !ferror(fp))\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 4 Mar 2020 20:54:06 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "On Wed, Mar 4, 2020 at 4:54 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/03/04 20:39, Hamid Akhtar wrote:\n> >\n> >\n> > On Tue, Mar 3, 2020 at 8:57 PM Fujii Masao <masao.fujii@oss.nttdata.com\n> <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >\n> >\n> >\n> > On 2020/03/03 21:38, Hamid Akhtar wrote:\n> > >\n> > >\n> > > On Mon, Mar 2, 2020 at 6:07 PM Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> wrote:\n> > >\n> > >\n> > >\n> > > On 2020/02/29 0:46, Hamid Akhtar wrote:\n> > > > The following review has been posted through the\n> commitfest application:\n> > > > make installcheck-world: not tested\n> > > > Implements feature: not tested\n> > > > Spec compliant: not tested\n> > > > Documentation: not tested\n> > > >\n> > > > First of all, this seems like fixing a valid issue,\n> albeit, the probability of somebody messing is low, but it is still better\n> to fix this problem.\n> > > >\n> > > > I've not tested the patch in any detail, however, there\n> are a couple of comments I have before I proceed on with detailed testing.\n> > >\n> > > Thanks for the review and comments!\n> > >\n> > > > 1. pgindent is showing a few issues with formatting.\n> Please have a look and resolve those.\n> > >\n> > > Yes.\n> > >\n> > > > 2. 
I think you can potentially use \"len\" variable instead\n> of introducing \"buflen\" and \"tmplen\" variables.\n> > >\n> > > Basically I don't want to use the same variable for several\n> purposes\n> > > because which would decrease the code readability.\n> > >\n> > > > Also, I would choose a more appropriate name for \"tmp\"\n> variable.\n> > >\n> > > Yeah, so what about \"rest\" as the variable name?\n> > >\n> > > > I believe if you move the following lines before the\n> conditional statement and simply and change the if statement to \"if (len >=\n> sizeof(buf) - 1)\", it will serve the purpose.\n> > >\n> > > ISTM that this doesn't work correctly when the \"buf\" contains\n> > > trailing carriage returns but not newlines (i.e., this line\n> is too long\n> > > so the \"buf\" doesn't include newline). In this case,\n> pg_strip_crlf()\n> > > shorten the \"buf\" and then its return value \"len\" should\n> become\n> > > less than sizeof(buf). So the following condition always\n> becomes\n> > > false unexpectedly in that case even though there is still\n> rest of\n> > > the line to eat.\n> > >\n> > >\n> > > Per code comments for pg_strip_crlf:\n> > > \"pg_strip_crlf -- Remove any trailing newline and carriage return\"\n> > > If the buf read contains a newline or a carriage return at the\n> end, then clearly the line\n> > > is not exceeding the sizeof(buf).\n> >\n> > No if the length of the setting line exceeds sizeof(buf) and\n> > the buf contains only a carriage return at the end and not newline.\n> > This case can happen because fgets() stops reading when a newline\n> > (not a carriage return) is found. Normal users are very unlikely to\n> > add a carriage return into the middle of the pgpass setting line\n> > in practice, though. But IMO the code should handle even this\n> > case because it *can* happen, if the code is not so complicated.\n> >\n> >\n> > I'm not sure if I understand your comment here. 
From the code of\n> pg_strip_crlf\n> > I see that it is handling both carriage return and/or new line at the\n> end of a\n> > string:\n>\n> So if \"buf\" contains a carriage return at the end, it's removed and\n> the \"len\" that pg_strip_crlf() returns obviously should be smaller\n> than sizeof(buf). This causes the following condition that you\n> proposed as follows to always be false (i.e., len < sizeof(buf) - 1)\n> even when there are still rest of line. So we cannot eat rest of\n> the line even though it exists. I'm missing something?\n>\n\nNo, you are perfectly fine. I now understand where you are coming from. So,\nall good now.\n\n\n>\n> + if (len >= sizeof(buf) - 1)\n> + {\n> + char tmp[LINELEN];\n> +\n> + /*\n> + * Warn if this password setting line is too long,\n> + * because it's unexpectedly truncated.\n> + */\n> + if (buf[0] != '#')\n> + fprintf(stderr,\n> + libpq_gettext(\"WARNING:\n> line %d too long in password file \\\"%s\\\"\\n\"),\n> + line_number, pgpassfile);\n> +\n> + /* eat rest of the line */\n> + while (!feof(fp) && !ferror(fp))\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> NTT DATA CORPORATION\n> Advanced Platform Technology Group\n> Research and Development Headquarters\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Wed, 4 Mar 2020 17:45:38 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nTested and looks fine to me.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Wed, 04 Mar 2020 14:01:16 +0000",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor issues in .pgpass"
},
{
"msg_contents": "\n\nOn 2020/03/04 23:01, Hamid Akhtar wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> Tested and looks fine to me.\n> \n> The new status of this patch is: Ready for Committer\n\nMany thanks for testing and reviewing the patch!\nI pushed it.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 5 Mar 2020 13:07:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor issues in .pgpass"
}
] |
[
{
"msg_contents": "Hi,\n\nWe're about 10 days from the end of the 2020-01 CF. The current status\nof the CF is this:\n\n - Needs review: 118\n - Waiting on Author: 42\n - Ready for Committer: 10\n - Committed: 36\n - Moved to next CF: 3\n - Returned with Feedback: 1\n - Rejected: 3\n - Withdrawn: 2\n\nAbout half of the WoA patches are inactive for a long time, i.e. have\nbeen marked like that before 2020-01 (and sometimes long before that)\nand there have been no substantive updates. The chance of that changing\n(i.e. getting a new patch version and a meaningful review) in the last\ncouple of days of the CF seem slim, and there's plenty of patches that\nare being actively discussed, so I plan to start moving those inactive\npatches to 2020-03 over the weekend/early next week.\n\nSo maybe check if this applies to one of your patches, and try to move\nthe patch forward.\n\nTo some extent this applies to patches in \"needs review\" state too,\nalthough it's less clear who to ping for those. Maybe if one of your\npatches is waiting for a review, try pinging people who already did a\nreview in the past.\n\n\n\nsincerely, your commitfest dictator\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 15:49:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "We're getting close to the end of 2020-01 CF"
},
{
"msg_contents": "On 2020-Jan-21, Tomas Vondra wrote:\n\n> About half of the WoA patches are inactive for a long time, i.e. have\n> been marked like that before 2020-01 (and sometimes long before that)\n> and there have been no substantive updates. The chance of that changing\n> (i.e. getting a new patch version and a meaningful review) in the last\n> couple of days of the CF seem slim, and there's plenty of patches that\n> are being actively discussed, so I plan to start moving those inactive\n> patches to 2020-03 over the weekend/early next week.\n\nIn the previous commitfest that I ran, I closed as returned-with-feedback\nany patches that were waiting-on-author and had not changed for a very\nlong time. My rationale was that preserving those dead entries serves\nno useful purpose. If the author or somebody else wants to move the\npatch forward, they can post a new version now, or submit a new CF entry\nlater. We don't need zombies.\n\nI did provide a list of such patches, so that any interested onlookers\ncan act.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 12:25:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: We're getting close to the end of 2020-01 CF"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 12:25:14PM -0300, Alvaro Herrera wrote:\n>On 2020-Jan-21, Tomas Vondra wrote:\n>\n>> About half of the WoA patches are inactive for a long time, i.e. have\n>> been marked like that before 2020-01 (and sometimes long before that)\n>> and there have been no substantive updates. The chance of that changing\n>> (i.e. getting a new patch version and a meaningful review) in the last\n>> couple of days of the CF seem slim, and there's plenty of patches that\n>> are being actively discussed, so I plan to start moving those inactive\n>> patches to 2020-03 over the weekend/early next week.\n>\n>In the previous commitfest that I ran, I closed as returned-with-feedback\n>any patches that were waiting-on-author and had not changed for a very\n>long time. My rationale was that preserving those dead entries serves\n>no useful purpose. If the author or somebody else wants to move the\n>patch forward, they can post a new version now, or submit a new CF entry\n>later. We don't need zombies.\n>\n\nYeah, you're right returning them with feedback seems more appropriate,\ngiven the long inactivity. Plus, the CF app apparently does not allow\nmoving WoA patches to the next CF anyway.\n\n>I did provide a list of such patches, so that any interested onlookers\n>can act.\n>\n\nMakes sense. 
I think the patches this would apply to are:\n\n- fix for BUG #3720: wrong results at using ltree\n https://commitfest.postgresql.org/26/2054/\n (WoA since 2019/11/25)\n\n- Fix Deadlock Issue in Single User Mode When IO Failure Occurs\n https://commitfest.postgresql.org/26/2003/\n (WoA since 2019/11/25)\n \n- Use heap_multi_insert for catalog relations\n https://commitfest.postgresql.org/26/2125/\n (WoA since 2019/11/26)\n\n- Expose queryid in pg_stat_activity in log_line_prefix\n https://commitfest.postgresql.org/26/2069/\n (WoA since 2019/11/29)\n\n- Shared Memory Context\n https://commitfest.postgresql.org/26/2325/\n (WoA since 2019/12/01)\n \n- Shared system catalog cache\n https://commitfest.postgresql.org/26/2326/\n (WoA since 2019/12/01)\n\n- Transactions involving multiple postgres foreign servers\n https://commitfest.postgresql.org/26/1574/\n (WoA since 2019/12/01)\n\n- Speed up transaction completion faster after many relations are\n accessed in a transaction\n https://commitfest.postgresql.org/26/1993/\n (WoA since 2019/12/01)\n\n- [WIP] Temporal query processing with range types - Temporal\n Normalization\n https://commitfest.postgresql.org/26/2045/\n (WoA since 2019/12/01)\n\n- Autoprepare: implicitly replace literals with parameters and store\n generalized plan\n https://commitfest.postgresql.org/26/1747/\n (WoA since 2019/12/01)\n\n- Shared-memory based stats collector\n https://commitfest.postgresql.org/26/1708/\n (WoA since 2019/12/01)\n\n- Report all I/O errors in buffile.c\n https://commitfest.postgresql.org/26/2365/\n (WoA since 2019/12/10)\n\n- Preserve versions of initdb-created collations in pg_upgrade\n https://commitfest.postgresql.org/26/2328/\n (WoA since 2019/11/29)\n\n- Global temporary tables\n https://commitfest.postgresql.org/26/2233/\n (WoA since 2019/12/01)\n\n- Add more compile-time asserts\n https://commitfest.postgresql.org/26/2286/\n (WoA since 2019/12/24)\n\n- Invalid permission check in pg_stats for functional indexes\n 
https://commitfest.postgresql.org/26/2274/\n (WoA since 2019/11/28)\n \n- Fix PostgreSQL server build and install problems under MSYS2\n https://commitfest.postgresql.org/26/2366/\n (WoA since 2019/12/31)\n\n- FETCH FIRST clause WITH TIES option\n https://commitfest.postgresql.org/26/1844/\n (WoA since 2019/11/28)\n \n- Ltree, lquery, and ltxtquery binary protocol support\n https://commitfest.postgresql.org/26/2242/\n (WoA since 2019/11/29)\n\n- Improve search for missing parent downlinks in amcheck\n https://commitfest.postgresql.org/26/2140/\n (WoA since 2019/11/29)\n\n- Run-time pruning for ModifyTable\n https://commitfest.postgresql.org/26/2173/\n (WoA since 2019/11/27)\n\n- Connection string usage for Core Postgresql client applications\n https://commitfest.postgresql.org/26/2354/\n (WoA since 2019/11/25)\n\n- Psql patch to show access methods info\n https://commitfest.postgresql.org/26/1689/\n (WoA since 2019/11/27)\n\n- Row filtering for logical replication\n https://commitfest.postgresql.org/26/2270/\n (WoA since 2019/11/28)\n\nThose are the patches that have been set as WoA before this CF, and have\nnot been updated since. It's quite possible the state is stale for some\nof those patches, although I've tried to check if there were any\nmessages on the list.\n\nI'll ping the authors off-list too.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 17:20:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: We're getting close to the end of 2020-01 CF"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 05:20:17PM +0100, Tomas Vondra wrote:\n> Yeah, you're right returning them with feedback seems more appropriate,\n> given the long inactivity. Plus, the CF app apparently does not allow\n> moving WoA patches to the next CF anyway.\n\nFWIW, I tend to take a base of two weeks as a sensible period of\ntime as that's half the CF period when I do the classification job.\n\n> Those are the patches that have been set as WoA before this CF, and have\n> not been updated since. It's quite possible the state is stale for some\n> of those patches, although I've tried to check if there were any\n> messages on the list.\n\nYou need to be careful about bug fixes, as these are things that we\ndon't want to lose track of. Another thing that I noticed in the past\nis that some patches are registered as bug fixes, but they actually\nimplement a new feature. So there can be tricky cases.\n--\nMichael",
"msg_date": "Wed, 22 Jan 2020 14:09:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: We're getting close to the end of 2020-01 CF"
},
{
"msg_contents": "On 2020-Jan-22, Michael Paquier wrote:\n\n> On Tue, Jan 21, 2020 at 05:20:17PM +0100, Tomas Vondra wrote:\n\n> > Those are the patches that have been set as WoA before this CF, and have\n> > not been updated since. It's quite possible the state is stale for some\n> > of those patches, although I've tried to check if there were any\n> > messages on the list.\n> \n> You need to be careful about bug fixes, as these are things that we\n> don't want to lose track of.\n\nOh yeah, I forgot to mention the exception for bug fixes. I agree these\nshould almost never be RwF (or in any way closed other than Committed,\nreally, except in, err, exceptional cases).\n\n> Another thing that I noticed in the past\n> is that some patches are registered as bug fixes, but they actually\n> implement a new feature. So there can be tricky cases.\n\nOh yeah, I think the CFM should exercise judgement and reclassify.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:19:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: We're getting close to the end of 2020-01 CF"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 02:09:39PM +0900, Michael Paquier wrote:\n>On Tue, Jan 21, 2020 at 05:20:17PM +0100, Tomas Vondra wrote:\n>> Yeah, you're right returning them with feedback seems more appropriate,\n>> given the long inactivity. Plus, the CF app apparently does not allow\n>> moving WoA patches to the next CF anyway.\n>\n>FWIW, I tend to take a base of two weeks as a sensible period of\n>time as that's half the CF period when I do the classification job.\n>\n\nYeah. I've only nagged about patches that have been set to WoA before\nthe CF began, so far.\n\n>> Those are the patches that have been set as WoA before this CF, and have\n>> not been updated since. It's quite possible the state is stale for some\n>> of those patches, although I've tried to check if there were any\n>> messages on the list.\n>\n>You need to be careful about bug fixes, as these are things that we\n>don't want to lose track of. Another thing that I noticed in the past\n>is that some patches are registered as bug fixes, but they actually\n>implement a new feature. So there can be tricky cases.\n>--\n\nMakes sense.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 15:26:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: We're getting close to the end of 2020-01 CF"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nI've attached a patch for a couple of new options for VACUUM:\r\nMAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP.  The motive\r\nbehind these options is to allow table owners to easily vacuum only\r\nthe TOAST table or only the main relation.  This is especially useful\r\nfor TOAST tables since roles do not have access to the pg_toast schema\r\nby default and some users may find it difficult to discover the name\r\nof a relation's TOAST table.  Next, I will explain a couple of the\r\nmain design decisions.\r\n\r\nI chose to call the option SECONDARY_RELATION_CLEANUP instead of\r\nsomething like TOAST_TABLE_CLEANUP for two reasons.  First, other\r\ntypes of secondary relations may be added in the future, and it may be\r\nconvenient to put them under the umbrella of this option.  Second, it\r\nseemed like it could be outside of the project's style to use the name\r\nof internal storage mechanisms in a user-facing VACUUM option.\r\nHowever, I am not wedded to the chosen name, as I am sure there are\r\ngood arguments for something like TOAST_TABLE_CLEANUP.\r\n\r\nI chose to implement MAIN_RELATION_CLEANUP within vacuum_rel() instead\r\nof expand_vacuum_rel()/get_all_vacuum_rels().  This allows us to reuse\r\nmost of the existing code with minimal changes, and it avoids adding\r\ncomplexity to the lookups and ownership checks in expand_vacuum_rel()\r\nand get_all_vacuum_rels() (especially the partition lookup logic).\r\nThe main tradeoffs of this approach are that we will still create a\r\ntransaction for the main relation and that we will still lock the main\r\nrelation.\r\n\r\nI reused the existing VACOPT_SKIPTOAST option to implement\r\nSECONDARY_RELATION_CLEANUP.  This option is currently only used for\r\nautovacuum.\r\n\r\nI chose to disallow disabling both *_RELATION_CLEANUP options\r\ntogether, as this would essentially cause the VACUUM command to take\r\nno action.  I disallowed using FULL when SECONDARY_RELATION_CLEANUP is\r\ndisabled, as the TOAST table is automatically rebuilt by\r\ncluster_rel().  I do allow using FULL when MAIN_RELATION_CLEANUP is\r\ndisabled, which is taken to mean that cluster_rel() should be run on\r\nthe TOAST table.  Finally, I disallowed using ANALYZE when\r\nMAIN_RELATION_CLEANUP is disabled, as it is not presently possible to\r\nanalyze TOAST tables.\r\n\r\nI will add this patch to the next commitfest.  I look forward to your\r\nfeedback.\r\n\r\nNathan",
"msg_date": "Tue, 21 Jan 2020 21:21:46 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options to\n VACUUM"
},
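The option-compatibility rules described in the proposal above (at least one of the two *_RELATION_CLEANUP options must stay enabled, FULL requires SECONDARY_RELATION_CLEANUP, and ANALYZE requires MAIN_RELATION_CLEANUP) can be sketched as a small validity check. This is a standalone illustration with made-up flag names and values, not the patch's actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical option bits mirroring the rules in the v1 proposal;
 * these names and values are illustrative, not PostgreSQL's VACOPT_* ones. */
enum
{
	OPT_FULL          = 1 << 0,
	OPT_ANALYZE       = 1 << 1,
	OPT_MAIN_CLEANUP  = 1 << 2,	/* MAIN_RELATION_CLEANUP */
	OPT_TOAST_CLEANUP = 1 << 3	/* SECONDARY_RELATION_CLEANUP */
};

/*
 * Returns true if the combination is allowed under the proposed rules:
 * - at least one of the two *_CLEANUP options must be enabled, otherwise
 *   the VACUUM command would take no action at all;
 * - FULL requires SECONDARY_RELATION_CLEANUP, because cluster_rel()
 *   rebuilds the TOAST table automatically;
 * - ANALYZE requires MAIN_RELATION_CLEANUP, because TOAST tables cannot
 *   presently be analyzed.
 */
static bool
options_valid(int opts)
{
	if ((opts & (OPT_MAIN_CLEANUP | OPT_TOAST_CLEANUP)) == 0)
		return false;
	if ((opts & OPT_FULL) && !(opts & OPT_TOAST_CLEANUP))
		return false;
	if ((opts & OPT_ANALYZE) && !(opts & OPT_MAIN_CLEANUP))
		return false;
	return true;
}
```

Note that FULL with MAIN_RELATION_CLEANUP disabled passes the check, matching the proposal's reading that cluster_rel() should then be run on the TOAST table alone.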
{
"msg_contents": "On 21/01/2020 22:21, Bossart, Nathan wrote:\n> I've attached a patch for a couple of new options for VACUUM:\n> MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP. The motive\n> behind these options is to allow table owners to easily vacuum only\n> the TOAST table or only the main relation. This is especially useful\n> for TOAST tables since roles do not have access to the pg_toast schema\n> by default and some users may find it difficult to discover the name\n> of a relation's TOAST table.\n\n\nCould you explain why one would want to do this? Autovacuum will\nalready deal with the tables separately as needed, but I don't see when\na manual vacuum would want to make this distinction.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Tue, 21 Jan 2020 22:38:20 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 09:21:46PM +0000, Bossart, Nathan wrote:\n> I've attached a patch for a couple of new options for VACUUM:\n> MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP.  The motive\n> behind these options is to allow table owners to easily vacuum only\n> the TOAST table or only the main relation.  This is especially useful\n> for TOAST tables since roles do not have access to the pg_toast schema\n> by default and some users may find it difficult to discover the name\n> of a relation's TOAST table.  Next, I will explain a couple of the\n> main design decisions.\n\nSo that's similar to the autovacuum reloptions, but to be able to\nenforce one policy or another manually.  Any issues with autovacuum\nnot able to keep up the bloat pace and where you need to issue manual\nVACUUMs in periods of low activity, like nightly VACUUMs?\n\n> I chose to call the option SECONDARY_RELATION_CLEANUP instead of\n> something like TOAST_TABLE_CLEANUP for two reasons.  First, other\n> types of secondary relations may be added in the future, and it may be\n> convenient to put them under the umbrella of this option.  Second, it\n> seemed like it could be outside of the project's style to use the name\n> of internal storage mechanisms in a user-facing VACUUM option.\n> However, I am not wedded to the chosen name, as I am sure there are\n> good arguments for something like TOAST_TABLE_CLEANUP.\n\nIf other types of relations are added in the future, wouldn't it make\nsense to have one switch for each one of those types then?  A relation\ncould have a toast relation associated to it, as much as a foo\nrelation or a hoge relation, in which case SECONDARY brings little\ncontrol.\n\n> I chose to implement MAIN_RELATION_CLEANUP within vacuum_rel() instead\n> of expand_vacuum_rel()/get_all_vacuum_rels().  This allows us to reuse\n> most of the existing code with minimal changes, and it avoids adding\n> complexity to the lookups and ownership checks in expand_vacuum_rel()\n> and get_all_vacuum_rels() (especially the partition lookup logic).\n> The main tradeoffs of this approach are that we will still create a\n> transaction for the main relation and that we will still lock the main\n> relation.\n\nYeah, likely we should not make things more confusing in this area.\nThis was tricky enough to deal with the recent VACUUM\nrefactoring for multiple relations.\n\n> I reused the existing VACOPT_SKIPTOAST option to implement\n> SECONDARY_RELATION_CLEANUP.  This option is currently only used for\n> autovacuum.\n\nMy take would be to rename this option, and reuse it for consistency.\n\n> I chose to disallow disabling both *_RELATION_CLEANUP options\n> together, as this would essentially cause the VACUUM command to take\n> no action.\n\nMy first reaction is why?  Agreed that it is a bit crazy to combine\nboth options, but if you add the argument related to more relation\ntypes like toast..\n--\nMichael",
"msg_date": "Wed, 22 Jan 2020 14:01:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 1/21/20, 1:39 PM, \"Vik Fearing\" <vik.fearing@2ndquadrant.com> wrote:\r\n> On 21/01/2020 22:21, Bossart, Nathan wrote:\r\n>> I've attached a patch for a couple of new options for VACUUM:\r\n>> MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP. The motive\r\n>> behind these options is to allow table owners to easily vacuum only\r\n>> the TOAST table or only the main relation. This is especially useful\r\n>> for TOAST tables since roles do not have access to the pg_toast schema\r\n>> by default and some users may find it difficult to discover the name\r\n>> of a relation's TOAST table.\r\n>\r\n>\r\n> Could you explain why one would want to do this? Autovacuum will\r\n> already deal with the tables separately as needed, but I don't see when\r\n> a manual vacuum would want to make this distinction.\r\n\r\nThe main use case I'm targeting is when the level of bloat or\r\ntransaction ages of a relation and its TOAST table have significantly\r\ndiverged. In these scenarios, it could be beneficial to be able to\r\nvacuum just one or the other, especially if the tables are large.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 24 Jan 2020 21:24:45 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "Hi Michael,\r\n\r\nThanks for taking a look.\r\n\r\nOn 1/21/20, 9:02 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Tue, Jan 21, 2020 at 09:21:46PM +0000, Bossart, Nathan wrote:\r\n>> I've attached a patch for a couple of new options for VACUUM:\r\n>> MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP.  The motive\r\n>> behind these options is to allow table owners to easily vacuum only\r\n>> the TOAST table or only the main relation.  This is especially useful\r\n>> for TOAST tables since roles do not have access to the pg_toast schema\r\n>> by default and some users may find it difficult to discover the name\r\n>> of a relation's TOAST table.  Next, I will explain a couple of the\r\n>> main design decisions.\r\n>\r\n> So that's similar to the autovacuum reloptions, but to be able to\r\n> enforce one policy or another manually.  Any issues with autovacuum\r\n> not able to keep up the bloat pace and where you need to issue manual\r\n> VACUUMs in periods of low activity, like nightly VACUUMs?\r\n\r\nThere have been a couple of occasions where I have seen the TOAST\r\ntable become the most bloated part of the relation.  When this\r\nhappens, it would be handy to be able to avoid scanning the heap and\r\nindexes.  I am not aware of any concrete problems with autovacuum\r\nother than needing to tune the parameters for certain workloads.\r\n\r\n>> I chose to call the option SECONDARY_RELATION_CLEANUP instead of\r\n>> something like TOAST_TABLE_CLEANUP for two reasons.  First, other\r\n>> types of secondary relations may be added in the future, and it may be\r\n>> convenient to put them under the umbrella of this option.  Second, it\r\n>> seemed like it could be outside of the project's style to use the name\r\n>> of internal storage mechanisms in a user-facing VACUUM option.\r\n>> However, I am not wedded to the chosen name, as I am sure there are\r\n>> good arguments for something like TOAST_TABLE_CLEANUP.\r\n>\r\n> If other types of relations are added in the future, wouldn't it make\r\n> sense to have one switch for each one of those types then?  A relation\r\n> could have a toast relation associated to it, as much as a foo\r\n> relation or a hoge relation, in which case SECONDARY brings little\r\n> control.\r\n\r\nThis is a good point.  I've renamed the option to TOAST_TABLE_CLEANUP\r\nin v2.\r\n\r\n>> I chose to implement MAIN_RELATION_CLEANUP within vacuum_rel() instead\r\n>> of expand_vacuum_rel()/get_all_vacuum_rels().  This allows us to reuse\r\n>> most of the existing code with minimal changes, and it avoids adding\r\n>> complexity to the lookups and ownership checks in expand_vacuum_rel()\r\n>> and get_all_vacuum_rels() (especially the partition lookup logic).\r\n>> The main tradeoffs of this approach are that we will still create a\r\n>> transaction for the main relation and that we will still lock the main\r\n>> relation.\r\n>\r\n> Yeah, likely we should not make things more confusing in this area.\r\n> This was tricky enough to deal with the recent VACUUM\r\n> refactoring for multiple relations.\r\n\r\nFinding a way to avoid the lock on the main relation could be a future\r\nimprovement, as that would allow you to manually vacuum both the main\r\nrelation and its TOAST table in parallel.\r\n\r\n>> I reused the existing VACOPT_SKIPTOAST option to implement\r\n>> SECONDARY_RELATION_CLEANUP.  This option is currently only used for\r\n>> autovacuum.\r\n>\r\n> My take would be to rename this option, and reuse it for consistency.\r\n\r\nDone.\r\n\r\n>> I chose to disallow disabling both *_RELATION_CLEANUP options\r\n>> together, as this would essentially cause the VACUUM command to take\r\n>> no action.\r\n>\r\n> My first reaction is why?  Agreed that it is a bit crazy to combine\r\n> both options, but if you add the argument related to more relation\r\n> types like toast..\r\n\r\nYes, I suppose we have the same problem if you disable\r\nMAIN_RELATION_CLEANUP and the relation has no TOAST table.  In any\r\ncase, allowing both options to be disabled shouldn't hurt anything.\r\n\r\nNathan",
"msg_date": "Fri, 24 Jan 2020 21:31:26 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 09:31:26PM +0000, Bossart, Nathan wrote:\n> On 1/21/20, 9:02 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>> On Tue, Jan 21, 2020 at 09:21:46PM +0000, Bossart, Nathan wrote:\n>>> I've attached a patch for a couple of new options for VACUUM:\n>>> MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP.  The motive\n>>> behind these options is to allow table owners to easily vacuum only\n>>> the TOAST table or only the main relation.  This is especially useful\n>>> for TOAST tables since roles do not have access to the pg_toast schema\n>>> by default and some users may find it difficult to discover the name\n>>> of a relation's TOAST table.  Next, I will explain a couple of the\n>>> main design decisions.\n>>\n>> So that's similar to the autovacuum reloptions, but to be able to\n>> enforce one policy or another manually.  Any issues with autovacuum\n>> not able to keep up the bloat pace and where you need to issue manual\n>> VACUUMs in periods of low activity, like nightly VACUUMs?\n> \n> There have been a couple of occasions where I have seen the TOAST\n> table become the most bloated part of the relation.  When this\n> happens, it would be handy to be able to avoid scanning the heap and\n> indexes.  I am not aware of any concrete problems with autovacuum\n> other than needing to tune the parameters for certain workloads.\n\nThat's something I have faced as well.  I have some applications\naround here where toast tables were the most bloated, and the\nvacuuming of the main relation ate time, putting more pressure on the\nvacuuming of the toast relation.  So that's a fair argument in my\nopinion. \n\n>>> I chose to implement MAIN_RELATION_CLEANUP within vacuum_rel() instead\n>>> of expand_vacuum_rel()/get_all_vacuum_rels().  This allows us to reuse\n>>> most of the existing code with minimal changes, and it avoids adding\n>>> complexity to the lookups and ownership checks in expand_vacuum_rel()\n>>> and get_all_vacuum_rels() (especially the partition lookup logic).\n>>> The main tradeoffs of this approach are that we will still create a\n>>> transaction for the main relation and that we will still lock the main\n>>> relation.\n>>\n>> Yeah, likely we should not make things more confusing in this area.\n>> This was tricky enough to deal with the recent VACUUM\n>> refactoring for multiple relations.\n> \n> Finding a way to avoid the lock on the main relation could be a future\n> improvement, as that would allow you to manually vacuum both the main\n> relation and its TOAST table in parallel.\n\nI am not sure that we actually need that at all, any catalog changes\ntake a lock on the parent relation first, and that's the conflicts we\nare looking at here with a share update exclusive lock.\n--\nMichael",
"msg_date": "Mon, 27 Jan 2020 11:28:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 1/24/20, 2:14 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n>> Yes, I suppose we have the same problem if you disable\r\n>> MAIN_RELATION_CLEANUP and the relation has no TOAST table. In any\r\n>> case, allowing both options to be disabled shouldn't hurt anything.\r\n>\r\n> I've been thinking further in this area, and I'm wondering if it also\r\n> makes sense to remove the restriction on ANALYZE with\r\n> MAIN_RELATION_CLEANUP disabled. A command like\r\n>\r\n> VACUUM (ANALYZE, MAIN_RELATION_CLEANUP FALSE) test;\r\n>\r\n> could be interpreted as meaning we should vacuum the TOAST table and\r\n> analyze the main relation. Since the word \"cleanup\" is present in the\r\n> option name, this might not be too confusing.\r\n\r\nI've attached v3 of the patch, which removes the restriction on\r\nANALYZE with MAIN_RELATION_CLEANUP disabled.\r\n\r\nNathan",
"msg_date": "Wed, 5 Feb 2020 21:29:27 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "Here is a rebased version of the patch.\r\n\r\nNathan",
"msg_date": "Sun, 31 May 2020 22:13:39 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On Sun, May 31, 2020 at 10:13:39PM +0000, Bossart, Nathan wrote:\n> Here is a rebased version of the patch.\n\nShould bin/vacuumdb support this?\n\nShould vacuumdb have a way to pass an arbitrary option to the server, instead\nof tacking on options (which are frequently forgotten on the initial commit to\nthe backend VACUUM command) ? That has the advantage that vacuumdb could use\nnew options even when connecting to a new server version than client. I think\nit would be safe as long as it avoided characters like ')' and ';'. Maybe\nall that's needed is isdigit() || isalpha() || isspace() || c=='_'\n\n+ MAIN_RELATION_CLEANUP [ <replaceable class=\"parameter\">boolean</replaceable> ]\n+ TOAST_TABLE_CLEANUP [ <replaceable class=\"parameter\">boolean</replaceable> ]\n\nMaybe should be called TOAST_RELATION_CLEANUP \n\nSee attached.\n\n-- \nJustin",
"msg_date": "Mon, 13 Jul 2020 13:01:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
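The character whitelist Justin suggests above for passing arbitrary options through vacuumdb could look roughly like the sketch below. The function name is made up for illustration; it is not part of vacuumdb. The idea is simply that a string containing only digits, letters, whitespace, and underscores can never close the generated `VACUUM (...)` parenthesis or start a new statement:

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>

/*
 * Sketch of the suggested pass-through filter: accept an arbitrary option
 * string only if every character is a digit, letter, whitespace, or
 * underscore, so characters such as ')' and ';' can never break out of the
 * generated VACUUM (...) command.  The cast to unsigned char avoids
 * undefined behavior in the <ctype.h> classifiers for negative chars.
 */
static bool
vacuum_option_is_safe(const char *opt)
{
	for (const unsigned char *p = (const unsigned char *) opt; *p; p++)
	{
		if (!isdigit(*p) && !isalpha(*p) && !isspace(*p) && *p != '_')
			return false;
	}
	return true;
}
```

A filter this strict would also reject legitimate syntax such as `PARALLEL(4)`, so a real implementation would need to decide how much punctuation to admit.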
{
"msg_contents": "Hi,\r\n\r\nThanks for taking a look.\r\n\r\nOn 7/13/20, 11:02 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> Should bin/vacuumdb support this?\r\n\r\nYes, it should.  I've added it in v5 of the patch.\r\n\r\n> Should vacuumdb have a way to pass an arbitrary option to the server, instead\r\n> of tacking on options (which are frequently forgotten on the initial commit to\r\n> the backend VACUUM command) ?  That has the advantage that vacuumdb could use\r\n> new options even when connecting to a new server version than client.  I think\r\n> it would be safe as long as it avoided characters like ')' and ';'.  Maybe\r\n> all that's needed is isdigit() || isalpha() || isspace() || c=='_'\r\n\r\nI like the idea of allowing users to specify arbitrary options so that\r\nthey are not constrained to the options in the version of vacuumdb\r\nthey are using.  I suspect we will still want to keep the vacuumdb\r\noptions updated for consistency and ease-of-use, though.  IMO this\r\ndeserves its own thread.\r\n\r\n> + MAIN_RELATION_CLEANUP [ <replaceable class=\"parameter\">boolean</replaceable> ]\r\n> + TOAST_TABLE_CLEANUP [ <replaceable class=\"parameter\">boolean</replaceable> ]\r\n>\r\n> Maybe should be called TOAST_RELATION_CLEANUP\r\n\r\nWhile using \"relation\" would be more consistent with the\r\nMAIN_RELATION_CLEANUP option, I initially chose \"table\" for\r\nconsistency with most of the documentation [0].  Thinking further, I\r\nbelieve this is still the right choice.  While the term \"relation\"\r\nrefers to any type of object tracked in pg_class [1], a TOAST table\r\ncan only ever be a TOAST table.  There are no other special TOAST\r\nrelation types (e.g. sequences, materialized views).  On the other\r\nhand, it is possible to vacuum other types of \"main relations\" besides\r\nregular tables (e.g. materialized views), so MAIN_RELATION_CLEANUP\r\nalso seems right to me.  Thoughts?\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/storage-toast.html\r\n[1] https://www.postgresql.org/docs/devel/catalog-pg-class.html",
"msg_date": "Tue, 14 Jul 2020 00:25:13 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to\n VACUUM"
},
{
"msg_contents": "On Tuesday, July 14, 2020 3:01 AM (GMT+9), Bossart, Nathan wrote:\r\n\r\nHi Nathan,\r\n\r\n>On 7/13/20, 11:02 AM, \"Justin Pryzby\" <pryzby(at)telsasoft(dot)com> wrote:\r\n>> Should bin/vacuumdb support this?\r\n>\r\n>Yes, it should.  I've added it in v5 of the patch.\r\n\r\nThank you for the updated patch. I've joined as a reviewer.\r\nI've also noticed that you have incorporated Justin's suggested vacuumdb support\r\nin the recent patch, but in my opinion it'd be better to split them for better readability.\r\nAccording to the cfbot, patch applies cleanly and passes all the tests.\r\n\r\n[Use Case]\r\n>The main use case I'm targeting is when the level of bloat or\r\n>transaction ages of a relation and its TOAST table have significantly\r\n>diverged.  In these scenarios, it could be beneficial to be able to\r\n>vacuum just one or the other, especially if the tables are large.\r\n>...\r\n>I reused the existing VACOPT_SKIPTOAST option to implement\r\n>SECONDARY_RELATION_CLEANUP.  This option is currently only used for\r\n>autovacuum.\r\n\r\nPerhaps this has not gathered much attention yet because it's not experienced\r\nby many, but I don't see any problem with the additional options on manual\r\nVACUUM on top of existing autovacuum cleanups. And I think this is useful\r\nfor the special use case mentioned, especially that toast table access is not\r\nin public as per role limitation.\r\n\r\n[Docs]\r\nI also agree with \"TOAST_TABLE_CLEANUP\" and just name the options after the\r\nrespective proposed relation types in the future.\r\n\r\n+      <term><literal>MAIN_RELATION_CLEANUP</literal></term>\r\n+      <listitem>\r\n+       <para>\r\n+        Specifies that <command>VACUUM</command> should attempt to process the\r\n+        main relation.  This is normally the desired behavior and is the default.\r\n+        Setting this option to false may be useful when it is necessary to only\r\n+        vacuum a relation's corresponding <literal>TOAST</literal> table.\r\n\r\nPerhaps it's just my own opinion, but I think the word \"process\" is vague for\r\na beginner in postgres reading the documents. OTOH, I know it's also used\r\nin the source code, so I guess it's just the convention. And \"process\" is\r\nintuitive as \"processing tables\". Anyway, just my 2 cents & isn't a strong\r\nopinion.\r\n\r\nAlso, there's an extra space between the 1st and 2nd sentences.\r\n\r\n\r\n+      <term><literal>TOAST_TABLE_CLEANUP</literal></term>\r\n+      <listitem>\r\n+       <para>\r\n+        Specifies that <command>VACUUM</command> should attempt to process the\r\n+        corresponding <literal>TOAST</literal> table for each relation, if one\r\n+        exists.  This is normally the desired behavior and is the default.\r\n+        Setting this option to false may be useful when it is necessary to only\r\n+        vacuum the main relation.  This option cannot be disabled when the\r\n+        <literal>FULL</literal> option is used.\r\n\r\nSame comments as above, & extra spaces in between the sentences. \r\n\r\n@@ -1841,9 +1865,16 @@ vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params)\r\n \t/*\r\n \t * Remember the relation's TOAST relation for later, if the caller asked\r\n \t * us to process it.  In VACUUM FULL, though, the toast table is\r\n-\t * automatically rebuilt by cluster_rel so we shouldn't recurse to it.\r\n+\t * automatically rebuilt by cluster_rel, so we shouldn't recurse to it\r\n+\t * unless MAIN_RELATION_CLEANUP is disabled.\r\n\r\nThe additional last line is a bit confusing (and may be unnecessary/unrelated).\r\nTo clarify this thread on VACUUM FULL and my understanding of revised vacuum_rel below,\r\nwe allow MAIN_RELATION_CLEANUP option to be disabled (skip processing main relation)\r\nand TOAST_TABLE_CLEANUP should be disabled because cluster_rel() will process the \r\ntoast table anyway.\r\nIs my understanding correct? If yes, then maybe \"unless\" should be \"even if\" instead,\r\nor we can just remove the line.\r\n\r\n static bool\r\n-vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params)\r\n+vacuum_rel(Oid relid,\r\n+\t\t   RangeVar *relation,\r\n+\t\t   VacuumParams *params,\r\n+\t\t   bool processing_toast_table)\r\n{\r\n...\r\n+\tbool\t\tprocess_toast;\r\n...\r\n\r\n-\tif (!(params->options & VACOPT_SKIPTOAST) && !(params->options & VACOPT_FULL))\r\n+\tprocess_toast = (params->options & VACOPT_TOAST_CLEANUP) != 0;\r\n+\r\n+\tif ((params->options & VACOPT_FULL) != 0 &&\r\n+\t\t(params->options & VACOPT_MAIN_REL_CLEANUP) != 0)\r\n+\t\tprocess_toast = false;\r\n+\r\n+\tif (process_toast)\r\n \t\ttoast_relid = onerel->rd_rel->reltoastrelid;\r\n \telse\r\n \t\ttoast_relid = InvalidOid;\r\n...\r\n\r\n \t * Do the actual work --- either FULL or \"lazy\" vacuum\r\n+\t *\r\n+\t * We skip this part if we're processing the main relation and\r\n+\t * MAIN_RELATION_CLEANUP has been disabled.\r\n \t */\r\n-\tif (params->options & VACOPT_FULL)\r\n+\tif ((params->options & VACOPT_MAIN_REL_CLEANUP) != 0 ||\r\n+\t\tprocessing_toast_table)\r\n...\r\n \tif (toast_relid != InvalidOid)\r\n-\t\tvacuum_rel(toast_relid, NULL, params);\r\n+\t\tvacuum_rel(toast_relid, NULL, params, true);\r\n\r\n\r\n\r\n>I've attached v3 of the patch, which removes the restriction on\r\n>ANALYZE with MAIN_RELATION_CLEANUP disabled.\r\n\r\nI've also confirmed those through regression + tap test in my own env\r\nand they've passed. I'll look into deeply again if I find problems.\r\n\r\nI think this follows the similar course of previously added VACUUM and\r\nvacuumdb options (for using and skipping truncate, index cleanup, etc.),\r\nso the patch seems almost plausible enough for me.\r\n\r\nRegards,\r\nKirk\r\n",
"msg_date": "Tue, 14 Jul 2020 05:34:01 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
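The vacuum_rel() hunk quoted in the review above boils down to one decision: recurse to the TOAST table only when TOAST_TABLE_CLEANUP is enabled, except under FULL with main-relation cleanup enabled, where cluster_rel() rebuilds the TOAST table itself. The standalone sketch below reimplements just that decision; the two cleanup flag values are taken from the patch's quoted enum, while VACOPT_FULL's value is assumed to match PostgreSQL's existing VacuumOption, and none of this is the patch's actual source file:

```c
#include <assert.h>
#include <stdbool.h>

/* Flag values: the two *_CLEANUP bits are from the quoted patch enum;
 * VACOPT_FULL is assumed to match PostgreSQL's existing VacuumOption. */
#define VACOPT_FULL             (1 << 4)
#define VACOPT_TOAST_CLEANUP    (1 << 6)	/* process TOAST table, if any */
#define VACOPT_MAIN_REL_CLEANUP (1 << 8)	/* process main relation */

/*
 * Mirror of the quoted logic: start from the TOAST_TABLE_CLEANUP setting,
 * then suppress the recursion under FULL when the main relation is being
 * processed, because cluster_rel() rebuilds the TOAST table as a side
 * effect of rewriting the main relation.
 */
static bool
should_process_toast(int options)
{
	bool		process_toast = (options & VACOPT_TOAST_CLEANUP) != 0;

	if ((options & VACOPT_FULL) != 0 &&
		(options & VACOPT_MAIN_REL_CLEANUP) != 0)
		process_toast = false;

	return process_toast;
}
```

Under FULL with MAIN_RELATION_CLEANUP disabled, the function still returns true, which matches the thread's reading that cluster_rel() should then be run on the TOAST table via the normal recursion.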
{
"msg_contents": "On Tue, Jul 14, 2020 at 05:34:01AM +0000, k.jamison@fujitsu.com wrote:\n> I've also confirmed those through regression + tap test in my own env\n> and they've passed. I'll look into deeply again if I find problems.\n\n+ VACOPT_TOAST_CLEANUP = 1 << 6, /* process TOAST table, if any */\n+ VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7, /* don't skip any pages */\n+ VACOPT_MAIN_REL_CLEANUP = 1 << 8 /* process main relation */\n } VacuumOption;\n\nDo we actually need this much complication in the option set? It is\npossible to vacuum directly a toast table by passing directly its\nrelation name, with pg_toast as schema, so you can already vacuum a\ntoast relation without the main part. And I would guess that users\ncaring about the toast table specifically would know already how to do\nthat, even if it requires a simple script and a query on pg_class.\nNow there is a second part, where we'd like to vacuum the main\nrelation but not its toast table. My feeling by looking at this patch\ntoday is that we could just make VACOPT_SKIPTOAST an option available\nat user-level, and support all the cases discussed on this thread.\nAnd we have already all the code in place to support that in the\nbackend for autovacuum as relations are processed individually,\nwithout their toast tables if they have one.\n\n> I think this follows the similar course of previously added VACUUM and\n> vacuummdb options (for using and skipping truncate, index cleanup, etc.),\n> so the patch seems almost plausible enough for me.\n\n-static bool vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params);\n+static bool vacuum_rel(Oid relid,\n+ RangeVar *relation,\n+ VacuumParams *params,\n+ bool processing_toast_table);\n\nNot much a fan of the addition of this parameter on this routine to\ntrack down if the call should process a toast relation or not.\nCouldn't you just prevent the call to vacuum_rel() to happen at all?\n--\nMichael",
"msg_date": "Mon, 3 Aug 2020 15:47:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 8/2/20, 11:47 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> +    VACOPT_TOAST_CLEANUP = 1 << 6,  /* process TOAST table, if any */\r\n> +    VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,  /* don't skip any pages */\r\n> +    VACOPT_MAIN_REL_CLEANUP = 1 << 8    /* process main relation */\r\n>  } VacuumOption;\r\n>\r\n> Do we actually need this much complication in the option set?  It is\r\n> possible to vacuum directly a toast table by passing directly its\r\n> relation name, with pg_toast as schema, so you can already vacuum a\r\n> toast relation without the main part.  And I would guess that users\r\n> caring about the toast table specifically would know already how to do\r\n> that, even if it requires a simple script and a query on pg_class.\r\n> Now there is a second part, where we'd like to vacuum the main\r\n> relation but not its toast table.  My feeling by looking at this patch\r\n> today is that we could just make VACOPT_SKIPTOAST an option available\r\n> at user-level, and support all the cases discussed on this thread.\r\n> And we have already all the code in place to support that in the\r\n> backend for autovacuum as relations are processed individually,\r\n> without their toast tables if they have one.\r\n\r\nMy main motive for adding the MAIN_RELATION_CLEANUP option is to allow\r\ntable owners to easily vacuum only a relation's TOAST table.  Roles do\r\nnot have access to the pg_toast schema by default, so they might be\r\nrestricted from vacuuming their TOAST tables directly.\r\n\r\n> -static bool vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params);\r\n> +static bool vacuum_rel(Oid relid,\r\n> +                       RangeVar *relation,\r\n> +                       VacuumParams *params,\r\n> +                       bool processing_toast_table);\r\n>\r\n> Not much a fan of the addition of this parameter on this routine to\r\n> track down if the call should process a toast relation or not.\r\n> Couldn't you just prevent the call to vacuum_rel() to happen at all?\r\n\r\nI think it would be possible to skip calling vacuum_rel() from\r\nexpand_vacuum_rel()/get_all_vacuum_rels() as appropriate, but when I\r\nlooked into that approach originally, I was concerned that it would\r\nadd complexity to the lookups and ownership checks (especially the\r\npartition lookup logic).  The main tradeoffs of the approach I went\r\nwith are that we still create a transaction for the main relation and\r\nthat we still lock the main relation.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 5 Aug 2020 00:56:48 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to\n VACUUM"
},
{
"msg_contents": "On Mon, 3 Aug 2020 at 15:47, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 14, 2020 at 05:34:01AM +0000, k.jamison@fujitsu.com wrote:\n> > I've also confirmed those through regression + tap test in my own env\n> > and they've passed. I'll look into deeply again if I find problems.\n>\n> + VACOPT_TOAST_CLEANUP = 1 << 6, /* process TOAST table, if any */\n> + VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7, /* don't skip any pages */\n> + VACOPT_MAIN_REL_CLEANUP = 1 << 8 /* process main relation */\n> } VacuumOption;\n>\n> Do we actually need this much complication in the option set? It is\n> possible to vacuum directly a toast table by passing directly its\n> relation name, with pg_toast as schema, so you can already vacuum a\n> toast relation without the main part. And I would guess that users\n> caring about the toast table specifically would know already how to do\n> that, even if it requires a simple script and a query on pg_class.\n\nYeah, I also doubt we really need to have this option in the core just\nfor the purpose of easily specifying toast relation to VACUUM command.\nIf the user doesn't know how to search the toast relation, I think we\ncan provide a script or an SQL function executes vacuum() C function\nwith the toast relation fetched by using the main relation. I\npersonally think VACUUM option basically should be present to control\nthe vacuum internal behavior granularly that the user cannot control\nfrom outside, although there are some exceptions: FREEZE and ANALYZE.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Aug 2020 18:56:35 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On Wed, Aug 05, 2020 at 12:56:48AM +0000, Bossart, Nathan wrote:\n> My main motive for adding the MAIN_RELATION_CLEANUP option is to allow\n> table owners to easily vacuum only a relation's TOAST table. Roles do\n> not have access to the pg_toast schema by default, so they might be\n> restricted from vacuuming their TOAST tables directly.\n\nTrue that you need an extra GRANT USAGE ON pg_toast to achieve that\nfor users with no privileges, but that's not impossible now either. I\nam not sure that this use-case justifies a new option and more\ncomplications in the code paths of vacuum though. So let's see first\nif others have an opinion to offer.\n--\nMichael",
"msg_date": "Thu, 6 Aug 2020 11:50:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On Thu, Aug 06, 2020 at 11:50:06AM +0900, Michael Paquier wrote:\n> True that you need an extra GRANT USAGE ON pg_toast to achieve that\n> for users with no privileges, but that's not impossible now either. I\n> am not sure that this use-case justifies a new option and more\n> complications in the code paths of vacuum though. So let's see first\n> if others have an opinion to offer.\n\nSeeing nothing happening here, I am marking the CF entry as returned\nwith feedback. FWIW, I still tend to think that we could call this\nstuff a day if we had an option to skip a toast relation when willing\nto vacuum the parent relation.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 14:04:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 09:24:45PM +0000, Bossart, Nathan wrote:\n> On 1/21/20, 1:39 PM, \"Vik Fearing\" <vik.fearing@2ndquadrant.com> wrote:\n> > On 21/01/2020 22:21, Bossart, Nathan wrote:\n> >> I've attached a patch for a couple of new options for VACUUM:\n> >> MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP. The motive\n> >> behind these options is to allow table owners to easily vacuum only\n> >> the TOAST table or only the main relation. This is especially useful\n> >> for TOAST tables since roles do not have access to the pg_toast schema\n> >> by default and some users may find it difficult to discover the name\n> >> of a relation's TOAST table.\n> >\n> >\n> > Could you explain why one would want to do this? Autovacuum will\n> > already deal with the tables separately as needed, but I don't see when\n> > a manual vacuum would want to make this distinction.\n> \n> The main use case I'm targeting is when the level of bloat or\n> transaction ages of a relation and its TOAST table have significantly\n> diverged. In these scenarios, it could be beneficial to be able to\n> vacuum just one or the other, especially if the tables are large.\n\nThis just came up for me:\n\nI have a daily maintenance script which pro-actively vacuums tables: freezing\nhistoric partitions, vacuuming current tables if the table's relfrozenxid is\nold, and to encourage indexonly scan.\n\nI'm checking the greatest(age(toast,main)) and vacuum the table (and implicitly\nits toast) whenever either is getting old.\n\nBut it'd be more ideal if I could independently vacuum the main table if it's\nold, but not the toast table.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 27 Jan 2021 13:06:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 1/27/21, 11:07 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> This just came up for me:\r\n>\r\n> I have a daily maintenance script which pro-actively vacuums tables: freezing\r\n> historic partitions, vacuuming current tables if the table's relfrozenxid is\r\n> old, and to encourage indexonly scan.\r\n>\r\n> I'm checking the greatest(age(toast,main)) and vacuum the table (and implicitly\r\n> its toast) whenever either is getting old.\r\n>\r\n> But it'd be more ideal if I could independently vacuum the main table if it's\r\n> old, but not the toast table.\r\n\r\nThanks for chiming in.\r\n\r\nIt looks like we were leaning towards only adding the\r\nTOAST_TABLE_CLEANUP option, which is already implemented internally\r\nwith VACOPT_SKIPTOAST. It's already possible to vacuum a TOAST table\r\ndirectly, so we can probably do without the MAIN_RELATION_CLEANUP\r\noption.\r\n\r\nI've attached a new patch that only adds TOAST_TABLE_CLEANUP.\r\n\r\nNathan",
"msg_date": "Wed, 27 Jan 2021 23:16:26 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to\n VACUUM"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 11:16:26PM +0000, Bossart, Nathan wrote:\n> On 1/27/21, 11:07 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\n> > This just came up for me:\n> >\n> > I have a daily maintenance script which pro-actively vacuums tables: freezing\n> > historic partitions, vacuuming current tables if the table's relfrozenxid is\n> > old, and to encourage indexonly scan.\n> >\n> > I'm checking the greatest(age(toast,main)) and vacuum the table (and implicitly\n> > its toast) whenever either is getting old.\n> >\n> > But it'd be more ideal if I could independently vacuum the main table if it's\n> > old, but not the toast table.\n> \n> Thanks for chiming in.\n> \n> It looks like we were leaning towards only adding the\n> TOAST_TABLE_CLEANUP option, which is already implemented internally\n> with VACOPT_SKIPTOAST. It's already possible to vacuum a TOAST table\n> directly, so we can probably do without the MAIN_RELATION_CLEANUP\n> option.\n> \n> I've attached a new patch that only adds TOAST_TABLE_CLEANUP.\n\nThanks, I wrote my message after running into the issue and remembered this\nthread. I didn't necessarily mean to send another patch :)\n\nMy only comment is on the name: TOAST_TABLE_CLEANUP. \"Cleanup\" suggests that\nthe (main or only) purpose is to \"clean\" dead tuples to avoid bloat. But in my\nuse case, the main purpose is to avoid XID wraparound (or its warnings).\n\nOkay, my second only comment is that this:\n\n| This option cannot be disabled when the <literal>FULL</literal> option is\n| used.\n\nShould it instead be ignored if FULL is also specified ? Currently only\nPARALLEL and DISABLE_PAGE_SKIPPING cause an error when used with FULL. That's\ndocumented for PARALLEL, but I think it should also be documented for\nDISABLE_PAGE_SKIPPING (which is however an advanced option).\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 27 Jan 2021 19:08:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 1/27/21, 5:08 PM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> Thanks, I wrote my message after running into the issue and remembered this\r\n> thread. I didn't necessarily mean to send another patch :)\r\n\r\nNo worries. I lost track of this thread, but I don't mind picking it\r\nup again.\r\n\r\n> My only comment is on the name: TOAST_TABLE_CLEANUP. \"Cleanup\" suggests that\r\n> the (main or only) purpose is to \"clean\" dead tuples to avoid bloat. But in my\r\n> use case, the main purpose is to avoid XID wraparound (or its warnings).\r\n\r\nI chose TOAST_TABLE_CLEANUP to match the INDEX_CLEANUP option, but I'm\r\nnot wedded to that name. What do you think about PROCESS_TOAST_TABLE?\r\n\r\n> Okay, my second only comment is that this:\r\n>\r\n> | This option cannot be disabled when the <literal>FULL</literal> option is\r\n> | used.\r\n>\r\n> Should it instead be ignored if FULL is also specified ? Currently only\r\n> PARALLEL and DISABLE_PAGE_SKIPPING cause an error when used with FULL. That's\r\n> documented for PARALLEL, but I think it should also be documented for\r\n> DISABLE_PAGE_SKIPPING (which is however an advanced option).\r\n\r\nIMO we should emit an ERROR in this case. If we ignored it, we'd end\r\nup processing the TOAST table even though the user asked us to skip\r\nit.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 28 Jan 2021 18:16:09 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to\n VACUUM"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 06:16:09PM +0000, Bossart, Nathan wrote:\n> I chose TOAST_TABLE_CLEANUP to match the INDEX_CLEANUP option, but I'm\n> not wedded to that name. What do you think about PROCESS_TOAST_TABLE?\n\nMost of the other options use a verb, so using PROCESS, or even SKIP\nsounds like a good idea. More ideas: PROCESS_TOAST, SKIP_TOAST. I\ndon't like much the term CLEANUP here, as it may imply, at least to\nme, that the toast relation is getting partially processed.\n\n> IMO we should emit an ERROR in this case. If we ignored it, we'd end\n> up processing the TOAST table even though the user asked us to skip\n> it.\n\nIssuing an error makes the most sense to me per the argument based on\ncluster_rel() and copy_table_data(). Silently ignoring options can be \nconfusing for the end-user.\n\n+ <para>\n+ Do not clean up the TOAST table.\n+ </para>\nIs that enough? I would say instead: \"Skip the TOAST table associated\nto the table to vacuum, if any.\"\n--\nMichael",
"msg_date": "Fri, 29 Jan 2021 16:14:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 1/28/21, 11:15 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Thu, Jan 28, 2021 at 06:16:09PM +0000, Bossart, Nathan wrote:\r\n>> I chose TOAST_TABLE_CLEANUP to match the INDEX_CLEANUP option, but I'm\r\n>> not wedded to that name. What do you think about PROCESS_TOAST_TABLE?\r\n>\r\n> Most of the other options use a verb, so using PROCESS, or even SKIP\r\n> sounds like a good idea. More ideas: PROCESS_TOAST, SKIP_TOAST. I\r\n> don't like much the term CLEANUP here, as it may imply, at least to\r\n> me, that the toast relation is getting partially processed.\r\n\r\nI changed it to PROCESS_TOAST.\r\n\r\n> + <para>\r\n> + Do not clean up the TOAST table.\r\n> + </para>\r\n> Is that enough? I would say instead: \"Skip the TOAST table associated\r\n> to the table to vacuum, if any.\"\r\n\r\nDone.\r\n\r\nNathan",
"msg_date": "Fri, 29 Jan 2021 18:43:44 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to\n VACUUM"
},
{
"msg_contents": "On Fri, Jan 29, 2021 at 06:43:44PM +0000, Bossart, Nathan wrote:\n> I changed it to PROCESS_TOAST.\n\nThanks. PROCESS_TOAST sounds good to me at the end for the option\nname, so let's just go with that.\n\n> Done.\n\nWhile on it, I could not resist with changing VACOPT_SKIPTOAST to\nVACOPT_PROCESS_TOAST on consistency grounds. This is used only in\nfour places in the code, so that's not invasive.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Mon, 8 Feb 2021 16:35:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On Mon, Feb 08, 2021 at 04:35:19PM +0900, Michael Paquier wrote:\n> On Fri, Jan 29, 2021 at 06:43:44PM +0000, Bossart, Nathan wrote:\n> > I changed it to PROCESS_TOAST.\n> \n> Thanks. PROCESS_TOAST sounds good to me at the end for the option\n> name, so let's just go with that.\n> \n> > Done.\n> \n> While on it, I could not resist with changing VACOPT_SKIPTOAST to\n> VACOPT_PROCESS_TOAST on consistency grounds. This is used only in\n> four places in the code, so that's not invasive.\n\n+1\n\n> @@ -971,6 +998,7 @@ help(const char *progname)\n> \tprintf(_(\" --min-mxid-age=MXID_AGE minimum multixact ID age of tables to vacuum\\n\"));\n> \tprintf(_(\" --min-xid-age=XID_AGE minimum transaction ID age of tables to vacuum\\n\"));\n> \tprintf(_(\" --no-index-cleanup don't remove index entries that point to dead tuples\\n\"));\n> +\tprintf(_(\" --no-process-toast skip the TOAST table associated to the table to vacuum, if any\\n\"));\n\nsay \"associated WITH\"\n\n> + corresponding <literal>TOAST</literal> table for each relation, if one\n> + exists. This is normally the desired behavior and is the default.\n> + Setting this option to false may be useful when it is necessary to only\n\nMaybe it should say \"when it is only necessary to\"\nBut what you've written isn't wrong, depending on what you mean.\n\n> @@ -244,6 +244,21 @@ PostgreSQL documentation\n> + Skip the TOAST table associated to the table to vacuum, if any.\n\nassociatd with\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 8 Feb 2021 02:46:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 2/8/21, 12:47 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> On Mon, Feb 08, 2021 at 04:35:19PM +0900, Michael Paquier wrote:\r\n>> On Fri, Jan 29, 2021 at 06:43:44PM +0000, Bossart, Nathan wrote:\r\n>> > I changed it to PROCESS_TOAST.\r\n>>\r\n>> Thanks. PROCESS_TOAST sounds good to me at the end for the option\r\n>> name, so let's just go with that.\r\n>>\r\n>> > Done.\r\n>>\r\n>> While on it, I could not resist with changing VACOPT_SKIPTOAST to\r\n>> VACOPT_PROCESS_TOAST on consistency grounds. This is used only in\r\n>> four places in the code, so that's not invasive.\r\n>\r\n> +1\r\n\r\n+1\r\n\r\n>> @@ -971,6 +998,7 @@ help(const char *progname)\r\n>> printf(_(\" --min-mxid-age=MXID_AGE minimum multixact ID age of tables to vacuum\\n\"));\r\n>> printf(_(\" --min-xid-age=XID_AGE minimum transaction ID age of tables to vacuum\\n\"));\r\n>> printf(_(\" --no-index-cleanup don't remove index entries that point to dead tuples\\n\"));\r\n>> + printf(_(\" --no-process-toast skip the TOAST table associated to the table to vacuum, if any\\n\"));\r\n>\r\n> say \"associated WITH\"\r\n>\r\n>> + corresponding <literal>TOAST</literal> table for each relation, if one\r\n>> + exists. This is normally the desired behavior and is the default.\r\n>> + Setting this option to false may be useful when it is necessary to only\r\n>\r\n> Maybe it should say \"when it is only necessary to\"\r\n> But what you've written isn't wrong, depending on what you mean.\r\n>\r\n>> @@ -244,6 +244,21 @@ PostgreSQL documentation\r\n>> + Skip the TOAST table associated to the table to vacuum, if any.\r\n>\r\n> associatd with\r\n\r\nThese suggestions seem reasonable to me. I've applied them in v9.\r\n\r\nNathan",
"msg_date": "Mon, 8 Feb 2021 18:59:45 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to\n VACUUM"
},
{
"msg_contents": "On Mon, Feb 08, 2021 at 06:59:45PM +0000, Bossart, Nathan wrote:\n> These suggestions seem reasonable to me. I've applied them in v9.\n\nSounds good to me, so applied.\n--\nMichael",
"msg_date": "Tue, 9 Feb 2021 14:18:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to VACUUM"
},
{
"msg_contents": "On 2/8/21, 9:19 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Mon, Feb 08, 2021 at 06:59:45PM +0000, Bossart, Nathan wrote:\r\n>> These suggestions seem reasonable to me. I've applied them in v9.\r\n>\r\n> Sounds good to me, so applied.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 9 Feb 2021 17:43:55 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add MAIN_RELATION_CLEANUP and SECONDARY_RELATION_CLEANUP options\n to\n VACUUM"
}
] |
[
{
"msg_contents": "On Mon, Jan 06, 2020 at 04:33:46AM +0000, Simon Riggs wrote:\n> On Mon, 6 Jan 2020 at 04:13, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > I agree with the sentiment of the third doc change, but your patch removes\n> > > the mention of n_distinct, which isn't appropriate.\n> >\n> > I think it's correct to remove n_distinct there, as it's documented previously,\n> > since e5550d5f. That's a per-attribute option (not storage) and can't be\n> > specified there.\n> \n> OK, then agreed.\n\nAttached minimal patch with just this hunk.\n\nhttps://commitfest.postgresql.org/27/2417/\n=> RFC\n\nJustin\n\n(I'm resending in a new thread since it looks like the first message was\nsomehow sent as a reply to an unrelated thread.)",
"msg_date": "Tue, 21 Jan 2020 19:27:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 07:27:47PM -0600, Justin Pryzby wrote:\n> Attached minimal patch with just this hunk.\n> \n> https://commitfest.postgresql.org/27/2417/\n> => RFC\n\nI think that you should be more careful when you think that you create\na new thread. On my client for example, I can see that this message\nis part of its last thread and still holds references to the previous\nthread. My guess is that as you are using gmail you just changed the\nsubject, thinking that it actually created a new thread.\n\n> @@ -714,9 +714,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> <para>\n> <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for\n> fillfactor, toast and autovacuum storage parameters, as well as the\n> - following planner related parameters:\n> - <varname>effective_io_concurrency</varname>, <varname>parallel_workers</varname>, <varname>seq_page_cost</varname>,\n> - <varname>random_page_cost</varname>, <varname>n_distinct</varname> and <varname>n_distinct_inherited</varname>.\n> + <varname>parallel_workers</varname> planner parameter.\n\nRight. n_distinct_inherited and n_distinct can only be set for\nattribute, and the docs are clear about that.\n\neffective_io_concurrency, seq_page_cost and random_page_cost apply to\na tablespace. There is one other thing that this this paragraph\nmisses though: vacuum_index_cleanup is a parameter dedicated to\nvacuum, and not autovacuum. So to be clear I think that the first\nsentence should mention \"vacuum\" as much as \"autovacuum\" in the list\nof storage parameter types impacted by the lower-level lock taken.\n--\nMichael",
"msg_date": "Wed, 22 Jan 2020 13:53:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 01:53:32PM +0900, Michael Paquier wrote:\n>> @@ -714,9 +714,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n>> <para>\n>> <literal>SHARE UPDATE EXCLUSIVE</literal> lock will be taken for\n>> fillfactor, toast and autovacuum storage parameters, as well as the\n>> - following planner related parameters:\n>> - <varname>effective_io_concurrency</varname>, <varname>parallel_workers</varname>, <varname>seq_page_cost</varname>,\n>> - <varname>random_page_cost</varname>, <varname>n_distinct</varname> and <varname>n_distinct_inherited</varname>.\n>> + <varname>parallel_workers</varname> planner parameter.\n> \n> Right. n_distinct_inherited and n_distinct can only be set for\n> attribute, and the docs are clear about that.\n\nOkay, fixed that list and backpatched down to 10.\n--\nMichael",
"msg_date": "Fri, 24 Jan 2020 09:59:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc: alter table references bogus table-specific planner\n parameters"
}
] |
[
{
"msg_contents": "Hello Everyone,\n\nI have been thinking about the orphaned prepared transaction problem in\nPostgreSQL and pondering on ways for handling it.\n\nA prepared transaction can be left unfinished (neither committed nor\nrollbacked) if the client has disappeared. It can happen for various\nreasons including a client crash, or a server crash leading to client's\nconnection getting terminated and never returning back. Another way a\nprepared transaction can be left unfinished is if a backup is restored that\ncarried the preparation steps, but not the steps closing the transaction.\n\nNeedless to mention that this does hamper maintenance work including\nvacuuming of dead tuples.\n\nFirst and foremost is to define what an orphaned transaction is. At this\nstage, I believe any prepared transaction that has been there for more than\nX time may be considered as an orphan. X may be defined as an integer in\nseconds (a GUC perhaps). May be there are better ways to define this.\nPlease feel free to chime in.\n\nThis leads to a question whether at server level, we need to be better at\nmanaging these orphaned prepared transactions. 
There are obviously other\nways of identifying such transactions by simply querying the\npg_prepared_xacts and checking transaction start date, which begs the\nquestion if there is a real need here to make a change in the server to\neither terminate these transactions (perhaps something similar to\nidle_in_transaction_session_timeout) or notify an administrator (less\npreferred as I believe notifications should be handled by some external\ntools, not by server).\n\nI see 3 potential solutions for solving this:\n(1) Only check for any prepared transactions when server is starting or\nrestarting (may be after a crash)\n(2) Have a background process that is checking on an idle timeout of\nprepared transactions\n(3) Do not make any change in the server and let the administrator handle\nthis by a client or an external tool\n\nOption (1) IMHO seems to be the least suitable one as I'd expect that when\na server is being started (or restarted) perhaps after a crash, it is done\nmanually and user can see the server startup logs. So it is very likely\nthat user will notice any prepared transactions that were created when the\nserver was previously running and take any necessary actions.\n\nOption (3) is let user manage it on their own, however they wish. This is\nthe simplest and the easiest way as we don't need to do anything here.\n\nOption (2) is probably the best solution IMHO. Though, it does require\nchanges in the server which might not be an undertaking we wish to not\npursue for this problem.\n\nSo in case we wish to move forward with Option (2), this will require a\nchange in the server. One potential place is in autovacuum by adding a\nsimilar change as it was done for idle_in_transaction_session_timeout, but\nrather than terminating the connection in this case, we simply abort/roll\nback the transaction. We could have a similar GUC for a prepared\ntransaction timeout. 
Though in this case, to be able to do that, we\nobviously need a backend process that can monitor the timer which will add\noverhead to any existing background process like the autovacuum, or\ncreation of a new background process (which is not such a good idea IMHO)\nwhich will add even more overhead.\n\nAt this stage, I'm not sure of the scale of changes this will require,\nhowever, I wanted to get an understanding and consensus on whether (a) this\nis something we should work on, and (b) whether an approach to implementing\na timeout makes sense.\n\nPlease feel free to share your thoughts here.\n\nRegards.\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Wed, 22 Jan 2020 12:01:44 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Do we need to handle orphaned prepared transactions in the server?"
},
{
"msg_contents": "> First and foremost is to define what an orphaned transaction is. At\n> this stage, I believe any prepared transaction that has been there\n> for more than X time may be considered as an orphan. X may be defined\n> as an integer in seconds (a GUC perhaps). May be there are better\n> ways to define this. Please feel free to chime in.\n\n\nWhat about specifying a timeout when starting the prepared transaction?\n\nI can imagine situations where a timeout of hours might be needed/anticipated\n(e.g. really slow external systems) and situations where the developer\nknows that the other systems are never slower than a few seconds.\n\nSomething like:\n\n prepare transaction 42 timeout interval '2 days';\n\nor\n\n prepare transaction 42 timeout interval '30 second';\n\nOr maybe even with a fixed timestamp instead of an interval?\n\n prepare transaction 42 timeout timestamp '2020-01-30 14:00:00';\n\nThomas\n\n\n",
"msg_date": "Wed, 22 Jan 2020 08:10:30 +0100",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Wed, 22 Jan 2020 at 09:02, Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n>\n> At this stage, I'm not sure of the scale of changes this will require, however, I wanted to get an understanding and consensus on whether (a) this is something we should work on, and (b) whether an approach to implementing a timeout makes sense.\n>\n> Please feel free to share your thoughts here.\n\nThe intended use case of two phase transactions is ensuring atomic\ndurability of transactions across multiple database systems. This\nnecessarily means that there needs to be a failure tolerant agent that\nensures there is consensus about the status of the transaction and\nthen executes that consensus across all systems. In other words, there\nneeds to be a transaction manager for prepared statements to actually\nfulfil their purpose. Therefore I think that unilaterally timing out\nprepared statements is just shifting the consequences of a broken\nclient from availability to durability. But if durability was never a\nconcern, why is the client even using prepared statements?\n\nCiting the documentation:\n\n> PREPARE TRANSACTION is not intended for use in applications or interactive sessions. Its purpose is to allow an external transaction manager to perform atomic global transactions across multiple databases or other transactional resources. Unless you're writing a transaction manager, you probably shouldn't be using PREPARE TRANSACTION.\n\nRegards,\nAnts Aasma\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:45:21 +0200",
"msg_from": "Ants Aasma <ants@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Wed, 22 Jan 2020 at 16:45, Ants Aasma <ants@cybertec.at> wrote:\n\n> The intended use case of two phase transactions is ensuring atomic\n> durability of transactions across multiple database systems.\n\nExactly. I was trying to find a good way to say this.\n\nIt doesn't make much sense to embed a 2PC resolver in Pg unless it's\nan XA coordinator or similar. And generally it doesn't make sense for\nthe distributed transaction coordinator to reside alongside one of the\ndatasources being managed anyway, especially where failover and HA are\nin the picture.\n\nI *can* see it being useful, albeit rather heavyweight, to implement\nan XA coordinator on top of PostgreSQL. Mostly for HA and replication\nreasons. But generally you'd use postgres instances for the HA\ncoordinator and the DB(s) in which 2PC txns are being managed. While\nyou could run them in the same instance it'd probably mostly be for\ntoy-scale PoC/demo/testing use.\n\nSo I don't really see the point of doing anything with 2PC xacts\nwithin Pg proper. It's the job of the app that prepares the 2PC xacts,\nand if that app is unable to resolve them for some reason there's no\ngenerally-correct action to take without administrator action.\n\n\n",
"msg_date": "Wed, 22 Jan 2020 18:12:29 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Craig Ringer <craig@2ndquadrant.com> writes:\n> So I don't really see the point of doing anything with 2PC xacts\n> within Pg proper. It's the job of the app that prepares the 2PC xacts,\n> and if that app is unable to resolve them for some reason there's no\n> generally-correct action to take without administrator action.\n\nRight. It's the XA transaction manager's job not to forget uncommitted\ntransactions. Reasoning as though no TM exists is not only not very\nrelevant, but it might lead you to put in features that actually\nmake the TM's job harder. In particular, a timeout (or any other\nmechanism that leads PG to abort or commit a prepared transaction\nof its own accord) does that.\n\nOr another way to put it: the fundamental premise of a prepared\ntransaction is that it will be possible to commit it on-demand with\nextremely low chance of failure. Designing in a reason why we'd\nfail to be able to do that would be an anti-feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:05:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Tom Lane schrieb am 22.01.2020 um 16:05:\n> Craig Ringer <craig@2ndquadrant.com> writes:\n>> So I don't really see the point of doing anything with 2PC xacts\n>> within Pg proper. It's the job of the app that prepares the 2PC xacts,\n>> and if that app is unable to resolve them for some reason there's no\n>> generally-correct action to take without administrator action.\n>\n> Right. It's the XA transaction manager's job not to forget uncommitted\n> transactions. Reasoning as though no TM exists is not only not very\n> relevant, but it might lead you to put in features that actually\n> make the TM's job harder. In particular, a timeout (or any other\n> mechanism that leads PG to abort or commit a prepared transaction\n> of its own accord) does that.\n>\n> Or another way to put it: the fundamental premise of a prepared\n> transaction is that it will be possible to commit it on-demand with\n> extremely low chance of failure. Designing in a reason why we'd\n> fail to be able to do that would be an anti-feature.\n\nThat's a fair point, but the reality is that not all XA transaction managers\ndo a good job with that.\n\nHaving somthing on the database side that can handle that in\nexceptional cases would be very welcome.\n\n(In Oracle you can't sometimes even run DML on tables where you have orphaned\nXA transactions - which is extremely annoying, because by default\nonly the DBA can clean that up)\n\nThomas\n\n\n\n\n",
"msg_date": "Wed, 22 Jan 2020 16:16:19 +0100",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Thomas Kellerer <shammat@gmx.net> writes:\n> Tom Lane schrieb am 22.01.2020 um 16:05:\n>> Right. It's the XA transaction manager's job not to forget uncommitted\n>> transactions. Reasoning as though no TM exists is not only not very\n>> relevant, but it might lead you to put in features that actually\n>> make the TM's job harder. In particular, a timeout (or any other\n>> mechanism that leads PG to abort or commit a prepared transaction\n>> of its own accord) does that.\n\n> That's a fair point, but the reality is that not all XA transaction managers\n> do a good job with that.\n\nIf you've got a crappy XA manager, you should get a better one, not\nask us to put in features that make PG unsafe to use with well-designed\nXA managers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:22:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 10:22:21AM -0500, Tom Lane wrote:\n> Thomas Kellerer <shammat@gmx.net> writes:\n> > Tom Lane schrieb am 22.01.2020 um 16:05:\n> >> Right. It's the XA transaction manager's job not to forget uncommitted\n> >> transactions. Reasoning as though no TM exists is not only not very\n> >> relevant, but it might lead you to put in features that actually\n> >> make the TM's job harder. In particular, a timeout (or any other\n> >> mechanism that leads PG to abort or commit a prepared transaction\n> >> of its own accord) does that.\n> \n> > That's a fair point, but the reality is that not all XA transaction managers\n> > do a good job with that.\n> \n> If you've got a crappy XA manager, you should get a better one, not\n> ask us to put in features that make PG unsafe to use with well-designed\n> XA managers.\n\nI think the big question is whether we want to make active prepared\ntransactions more visible to administrators, either during server start\nor idle duration.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 22 Jan 2020 12:15:59 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I think the big question is whether we want to make active prepared\n> transactions more visible to administrators, either during server start\n> or idle duration.\n\nThere's already the pg_prepared_xacts view ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jan 2020 12:20:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Wed, 22 Jan 2020 at 23:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Kellerer <shammat@gmx.net> writes:\n> > Tom Lane schrieb am 22.01.2020 um 16:05:\n> >> Right. It's the XA transaction manager's job not to forget uncommitted\n> >> transactions. Reasoning as though no TM exists is not only not very\n> >> relevant, but it might lead you to put in features that actually\n> >> make the TM's job harder. In particular, a timeout (or any other\n> >> mechanism that leads PG to abort or commit a prepared transaction\n> >> of its own accord) does that.\n>\n> > That's a fair point, but the reality is that not all XA transaction managers\n> > do a good job with that.\n>\n> If you've got a crappy XA manager, you should get a better one, not\n> ask us to put in features that make PG unsafe to use with well-designed\n> XA managers.\n\nAgreed. Or use some bespoke script that does the cleanup that you\nthink is appropriate for your particular environment and set of bugs.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Thu, 23 Jan 2020 12:33:33 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, 23 Jan 2020 at 01:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I think the big question is whether we want to make active prepared\n> > transactions more visible to administrators, either during server start\n> > or idle duration.\n>\n> There's already the pg_prepared_xacts view ...\n\nI think Bruce has a point here. We shouldn't go around \"resolving\"\nprepared xacts, but the visibility of them is a problem for users.\nI've seen that myself quite enough times, even now that they cannot be\nused by default.\n\nOur monitoring and admin views are not keeping up with Pg's\ncomplexity. Resource retention is one area where that's becoming a\nusability and admin challenge. If a user has growing bloat (and have\nmanaged to figure that out, since we don't make it easy to do that\neither) or unexpected WAL retention they may find it hard to quickly\nwork out why.\n\nWe could definitely improve on that by exposing a view that integrates\neverything that holds down xmin and catalog_xmin. It'd show\n\n* the datfrozenxid and datminmxid for the oldest database\n * if that database is the current database, the relation(s) with the\noldest relfrozenxid and relminmxd\n * ... and the catalog relation(s) with the oldest relfrozenxid and\nrelminmxid if greater\n* the absolute xid and xid-age positions of entries in pg_replication_slots\n* pg_stat_replication connections (joined to pg_stat_replication if\nconnected) with their feedback xmin\n* pg_stat_activity backend_xid and backend_xmin for the backend(s)\nwith oldest values; this may be different sets of backends\n* pg_prepared_xacts entries by oldest xid\n\n... 
probably sorted by xid age.\n\nIt'd be good to expose some internal state too, which would usually\ncorrespond to the oldest values found in the above, but is useful for\ncross-checking:\n\n* RecentGlobalXmin and RecentGlobalDataXmin to show the xmin and\ncatalog_xmin actually used\n* procArray->replication_slot_xmin and procArray->replication_slot_catalog_xmin\n\nI'm not sure whether WAL retention (lsn tracking) should be in the\nsame view or a different one, but I lean toward different.\n\nI already have another TODO kicking around for me to write a view that\ngenerates a blocking locks graph, since pg_locks is really more of a\nbuilding block than a directly useful view for admins to understand\nthe system's state. And if that's not enough I also want to write a\ndecent bloat-checking view to include in the system views, since IMO\nlock-blocking, bloat, and resource retention are real monitoring pain\npoints right now.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Thu, 23 Jan 2020 12:56:41 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 12:56:41PM +0800, Craig Ringer wrote:\n> We could definitely improve on that by exposing a view that integrates\n> everything that holds down xmin and catalog_xmin. It'd show\n> \n> * the datfrozenxid and datminmxid for the oldest database\n> * if that database is the current database, the relation(s) with the\n> oldest relfrozenxid and relminmxd\n> * ... and the catalog relation(s) with the oldest relfrozenxid and\n> relminmxid if greater\n> * the absolute xid and xid-age positions of entries in pg_replication_slots\n> * pg_stat_replication connections (joined to pg_stat_replication if\n> connected) with their feedback xmin\n> * pg_stat_activity backend_xid and backend_xmin for the backend(s)\n> with oldest values; this may be different sets of backends\n> * pg_prepared_xacts entries by oldest xid\n\nIt seems to me that what you are describing here is a set of\nproperties good for a monitoring tool that we don't necessarily need\nto maintain in core. There are already tools able to do that in ways\nI think are better than what we could ever design, like\ncheck_pgactivity and such. And there are years of experience behind\nthat from the maintainers of such scripts and/or extensions.\n\nThe argument about Postgres getting more and more complex is true as\nthe code grows, but I am not really convinced that we need to make it\ngrow more with more layers that we think are good, because we may\nfinish by piling up stuff which are not actually that good in the long\nterm. I'd rather just focus in the core code on the basics with views\nthat map directly to what we have in memory and/or disk.\n--\nMichael",
"msg_date": "Thu, 23 Jan 2020 16:04:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, 23 Jan 2020 at 15:04, Michael Paquier <michael@paquier.xyz> wrote:\n\n> It seems to me that what you are describing here is a set of\n> properties good for a monitoring tool that we don't necessarily need\n> to maintain in core. There are already tools able to do that in ways\n> I think are better than what we could ever design, like\n> check_pgactivity and such.\n\nI really have to disagree here.\n\nRelying on external tools gives users who already have to piece\ntogether a lot of fragments even more moving parts to keep track of.\nIt introduces more places where new server releases may not be\nsupported in a timely manner by various tools users rely on. More\nplaces where users may get wrong or incomplete information from\noutdated or incorrect tools. I cite the monstrosity that\n\"check_postgres.pl\" has become as a specific example of why pushing\nour complexity onto external tools is not always the right answer.\n\nWe already have a number of views that prettify information to help\nadministrators operate the server. You could argue that\npg_stat_activity and pg_stat_replication are unnecessary for example;\nusers should use external tools to query pg_stat_get_activity(),\npg_stat_get_wal_senders(), pg_authid and pg_database directly to get\nthe information they need. Similarly, we could do away with\npg_stat_user_indexes and the like, as they're just convenience views\nover lower level information exposed by the server.\n\nBut can you really imagine using postgres day to day without pg_stat_activity?\n\nIt is my firm opinion that visibility into locking behaviour and lock\nwaits is of a similar level of importance. So is giving users some way\nto get insight into table and index bloat on our MVCC database. 
With\nthe enormous uptake of various forms of replication and HA it's also\nimportant that users also be able to see what's affecting resource\nretention - holding down vacuum, retaining WAL, etc.\n\nThe server knows more than any tools. Views in the server can also be\nmaintained along with the server to address changes in how it manages\nthings like resource retention, so external tools get a more\nconsistent insight into server behaviour.\n\n> I'd rather just focus in the core code on the basics with views\n> that map directly to what we have in memory and/or disk.\n\nPer above, I just can't agree with this. PostgreSQL is a system with\nend users who need to interact with it, most of whom will not know how\nits innards work. If we're going to position it even more as a\ncomponent in some larger stack such that it's not expected to really\nbe used standalone, then we should make some effort to guide users\ntoward the other components they will need *in our own documentation*\nand ensure they're tested and maintained.\n\nProposals to do that with HA and failover tooling, backup tooling etc\nhave never got off the ground. I think we do users a great disservice\nthere personally. I don't expect any proposal to bless specific\nmonitoring tools to be any more successful.\n\nMore importantly, I fail to see why every monitoring tool should\nreinvent the same information collection queries and views, each with\ntheir own unique bugs and quirks, when we can provide information\nusers need directly from the server.\n\nIn any case I guess it's all hot air unless I pony up a patch to show\nhow I think it should work.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Tue, 28 Jan 2020 12:04:00 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "So having seen the feedback on this thread, and I tend to agree with most\nof what has been said here, I also agree that the server core isn't really\nthe ideal place to handle the orphan prepared transactions.\n\nIdeally, these must be handled by a transaction manager, however, I do\nbelieve that we cannot let database suffer for failing of an external\nsoftware, and we did a similar change through introduction of idle in\ntransaction timeout behavior. That said, implementing something similar for\nthis feature is too much of an overhead both in terms of code complexity\nand resources utilisation (if the feature is implemented).\n\nI'm currently working on other options to tackle this problem.\n\n\nOn Tue, 28 Jan 2020 at 9:04 AM, Craig Ringer <craig@2ndquadrant.com> wrote:\n\n> On Thu, 23 Jan 2020 at 15:04, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > It seems to me that what you are describing here is a set of\n> > properties good for a monitoring tool that we don't necessarily need\n> > to maintain in core. There are already tools able to do that in ways\n> > I think are better than what we could ever design, like\n> > check_pgactivity and such.\n>\n> I really have to disagree here.\n>\n> Relying on external tools gives users who already have to piece\n> together a lot of fragments even more moving parts to keep track of.\n> It introduces more places where new server releases may not be\n> supported in a timely manner by various tools users rely on. More\n> places where users may get wrong or incomplete information from\n> outdated or incorrect tools. I cite the monstrosity that\n> \"check_postgres.pl\" has become as a specific example of why pushing\n> our complexity onto external tools is not always the right answer.\n>\n> We already have a number of views that prettify information to help\n> administrators operate the server. 
You could argue that\n> pg_stat_activity and pg_stat_replication are unnecessary for example;\n> users should use external tools to query pg_stat_get_activity(),\n> pg_stat_get_wal_senders(), pg_authid and pg_database directly to get\n> the information they need. Similarly, we could do away with\n> pg_stat_user_indexes and the like, as they're just convenience views\n> over lower level information exposed by the server.\n>\n> But can you really imagine using postgres day to day without\n> pg_stat_activity?\n>\n> It is my firm opinion that visibility into locking behaviour and lock\n> waits is of a similar level of importance. So is giving users some way\n> to get insight into table and index bloat on our MVCC database. With\n> the enormous uptake of various forms of replication and HA it's also\n> important that users also be able to see what's affecting resource\n> retention - holding down vacuum, retaining WAL, etc.\n>\n> The server knows more than any tools. Views in the server can also be\n> maintained along with the server to address changes in how it manages\n> things like resource retention, so external tools get a more\n> consistent insight into server behaviour.\n>\n> > I'd rather just focus in the core code on the basics with views\n> > that map directly to what we have in memory and/or disk.\n>\n> Per above, I just can't agree with this. PostgreSQL is a system with\n> end users who need to interact with it, most of whom will not know how\n> its innards work. If we're going to position it even more as a\n> component in some larger stack such that it's not expected to really\n> be used standalone, then we should make some effort to guide users\n> toward the other components they will need *in our own documentation*\n> and ensure they're tested and maintained.\n>\n> Proposals to do that with HA and failover tooling, backup tooling etc\n> have never got off the ground. I think we do users a great disservice\n> there personally. 
I don't expect any proposal to bless specific\n> monitoring tools to be any more successful.\n>\n> More importantly, I fail to see why every monitoring tool should\n> reinvent the same information collection queries and views, each with\n> their own unique bugs and quirks, when we can provide information\n> users need directly from the server.\n>\n> In any case I guess it's all hot air unless I pony up a patch to show\n> how I think it should work.\n>\n> --\n> Craig Ringer http://www.2ndQuadrant.com/\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n>\n>\n> --\nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Wed, 29 Jan 2020 23:04:10 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, 30 Jan 2020 at 02:04, Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n>\n> So having seen the feedback on this thread, and I tend to agree with most of what has been said here, I also agree that the server core isn't really the ideal place to handle the orphan prepared transactions.\n>\n> Ideally, these must be handled by a transaction manager, however, I do believe that we cannot let database suffer for failing of an external software, and we did a similar change through introduction of idle in transaction timeout behavior.\n\nThe difference, IMO, is that idle-in-transaction aborts don't affect\nanything we've promised to be durable.\n\nOnce you PREPARE TRANSACTION the DB has made a promise that that txn\nis durable. We don't have any consistent feedback channel to back to\napplications and say \"Hey, if you're not going to finish this up we\nneed to get rid of it soon, ok?\". If a distributed transaction manager\ngets consensus for commit and goes to COMMIT PREPARED a previously\nprepared txn only to find that it has vanished, that's a major\nproblem, and one that may bring the entire DTM to a halt until the\nadmin can intervene.\n\nThis isn't like idle-in-transaction aborts. It's closer to something\nlike uncommitting a previously committed transaction.\n\nI do think it'd make sense to ensure that the documentation clearly\nhighlights the impact of abandoned prepared xacts on server resource\nretention and performance, preferably with pointers to appropriate\nviews. I haven't reviewed the docs to see how clear that is already.\n\nI can also see an argument for a periodic log message (maybe from\nvacuum?) warning when old prepared xacts hold xmin down. Including one\nsent to the client application when an explicit VACUUM is executed.\n(In fact, it'd make sense to generalise that for all xmin-retention).\n\nBut I'm really not a fan of aborting such txns. 
If you operate with\nsome kind of broken global transaction manager that can forget or\nabandon prepared xacts, then fix it, or adopt site-local periodic\ncleanup tasks that understand your site's needs.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:28:34 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 8:28 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n\n> On Thu, 30 Jan 2020 at 02:04, Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> >\n> > So having seen the feedback on this thread, and I tend to agree with\n> most of what has been said here, I also agree that the server core isn't\n> really the ideal place to handle the orphan prepared transactions.\n> >\n> > Ideally, these must be handled by a transaction manager, however, I do\n> believe that we cannot let database suffer for failing of an external\n> software, and we did a similar change through introduction of idle in\n> transaction timeout behavior.\n>\n> The difference, IMO, is that idle-in-transaction aborts don't affect\n> anything we've promised to be durable.\n>\n> Once you PREPARE TRANSACTION the DB has made a promise that that txn\n> is durable. We don't have any consistent feedback channel to back to\n> applications and say \"Hey, if you're not going to finish this up we\n> need to get rid of it soon, ok?\". If a distributed transaction manager\n> gets consensus for commit and goes to COMMIT PREPARED a previously\n> prepared txn only to find that it has vanished, that's a major\n> problem, and one that may bring the entire DTM to a halt until the\n> admin can intervene.\n>\n> This isn't like idle-in-transaction aborts. It's closer to something\n> like uncommitting a previously committed transaction.\n>\n> I do think it'd make sense to ensure that the documentation clearly\n> highlights the impact of abandoned prepared xacts on server resource\n> retention and performance, preferably with pointers to appropriate\n> views. I haven't reviewed the docs to see how clear that is already.\n>\n\nHaving seen the documentation, IMHO the document does contain enough\ninformation for users to understand what issues can be caused by these\norphaned prepared transactions.\n\n\n>\n> I can also see an argument for a periodic log message (maybe from\n> vacuum?) 
warning when old prepared xacts hold xmin down. Including one\n> sent to the client application when an explicit VACUUM is executed.\n> (In fact, it'd make sense to generalise that for all xmin-retention).\n>\n\nI think that opens up the debate on what we really mean by \"old\" and\nwhether that requires a syntax change when creating a prepared\ntransactions as Thomas Kellerer suggested earlier?\n\nI agree that vacuum should periodically throw warnings for any prepared\nxacts that are holding xmin down.\n\nGeneralising it for all xmin-retention is a fair idea IMHO, though that\ndoes increase the overall scope here. A vacuum process should (ideally)\nperiodically throw out warnings for anything that is preventing it\n(including\norphaned prepared transactions) from doing its routine work so that\nsomebody can take necessary actions.\n\n\n> But I'm really not a fan of aborting such txns. If you operate with\n> some kind of broken global transaction manager that can forget or\n> abandon prepared xacts, then fix it, or adopt site-local periodic\n> cleanup tasks that understand your site's needs.\n>\n> --\n> Craig Ringer http://www.2ndQuadrant.com/\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Fri, 31 Jan 2020 19:02:27 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "All,\n\nAttached is version 1 of POC patch for notifying of orphaned\nprepared transactions via warnings emitted to a client\napplication and/or log file. It applies to PostgreSQL branch\n\"master\" on top of \"e2e02191\" commit.\n\nI've tried to keep the patch as less invasive as I could with\nminimal impact on vacuum processes, so the performance impact\nand the changes are minimal in that area of PostgreSQL core.\n\n\n- What's in this Patch:\n\nThis patch throws warnings when an autovacuum worker encounters\nan orphaned prepared transaction. It also throws warnings to a\nclient when a vacuum command is issued. This patch also\nintroduces two new GUCs:\n\n(1) max_age_prepared_xacts\n- The age after creation of a prepared transaction after which it\nwill be considered an orphan.\n\n(2) prepared_xacts_vacuum_warn_timeout\n- The timeout period for an autovacuum (essentially any of its\nworker) to check for orphaned prepared transactions and throw\nwarnings if any are found.\n\n\n- What This Patch Does:\n\nIf the GUCs are enabled (set to a value higher than -1), an\nautovacuum worker running in the background checks if the\ntimeout has expired. If so, it checks if there are any orphaned\nprepared transactions (i.e. their age has exceeded\nmax_age_prepared_xacts). If it finds any, it throws a warning for\nevery such transaction. It also emits the total number of orphaned\nprepared transactions if one or more are found.\n\nWhen a vacuum command is issued from within a client, say psql,\nin that case, we skip the vacuum timeout check and simply scan\nfor any orphaned prepared transactions. Warnings are emitted to\nthe client and log file if any are found.\n\n\n- About the New GUCs:\n\n= max_age_prepared_xacts:\nSets maximum age after which a prepared transaction is considered an\norphan. It applies when \"prepared transactions\" are enabled. The\nage for a transaction is calculated from the time it was created to\nthe current time. 
If this value is specified without units, it is taken\nas milliseconds. The default value is -1, which allows prepared\ntransactions to live forever.\n\n= prepared_xacts_vacuum_warn_timeout:\nSets the timeout after which vacuum starts throwing warnings for every\nprepared transaction that has exceeded the maximum age defined by\n\"max_age_prepared_xacts\". If this value is specified without units,\nit is taken as milliseconds. The default value of -1 will disable\nthis warning mechanism. Setting too small a value could potentially fill\nup the log with orphaned prepared transaction warnings, so this\nparameter must be set to a value that is reasonably large to not\nfill up the log file, but small enough to notify of long-running and\npotentially orphaned prepared transactions. There is no additional\ntimer or worker introduced with this change. Whenever a vacuum\nworker runs, it first checks for any orphaned prepared transactions.\nSo at best, this GUC serves as a guideline for a vacuum worker\non whether a warning should be thrown to the log file or to a client\nissuing a vacuum command.\n\n\n- What this Patch Does Not Cover:\n\nThe warning is not thrown when the user either runs vacuumdb or passes\nindividual relations to be vacuumed. Specifically, in the case of vacuumdb,\nit breaks down a vacuum command into attribute-wise vacuum commands.\nSo the vacuum command is indirectly run many times. Considering that\nwe want to emit warnings for every manual vacuum command, this simply\nfloods the terminal and log with orphaned prepared transaction\nwarnings. We could potentially handle that, but the overhead of\nthat seemed too much to me (and I've not invested any time looking\nto fix that either). 
Hence, warnings are not thrown when user runs\nvacuumdb and relation specific vacuum.\n\n\n\nOn Fri, Jan 31, 2020 at 7:02 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n>\n>\n> On Thu, Jan 30, 2020 at 8:28 AM Craig Ringer <craig@2ndquadrant.com>\n> wrote:\n>\n>> On Thu, 30 Jan 2020 at 02:04, Hamid Akhtar <hamid.akhtar@gmail.com>\n>> wrote:\n>> >\n>> > So having seen the feedback on this thread, and I tend to agree with\n>> most of what has been said here, I also agree that the server core isn't\n>> really the ideal place to handle the orphan prepared transactions.\n>> >\n>> > Ideally, these must be handled by a transaction manager, however, I do\n>> believe that we cannot let database suffer for failing of an external\n>> software, and we did a similar change through introduction of idle in\n>> transaction timeout behavior.\n>>\n>> The difference, IMO, is that idle-in-transaction aborts don't affect\n>> anything we've promised to be durable.\n>>\n>> Once you PREPARE TRANSACTION the DB has made a promise that that txn\n>> is durable. We don't have any consistent feedback channel to back to\n>> applications and say \"Hey, if you're not going to finish this up we\n>> need to get rid of it soon, ok?\". If a distributed transaction manager\n>> gets consensus for commit and goes to COMMIT PREPARED a previously\n>> prepared txn only to find that it has vanished, that's a major\n>> problem, and one that may bring the entire DTM to a halt until the\n>> admin can intervene.\n>>\n>> This isn't like idle-in-transaction aborts. It's closer to something\n>> like uncommitting a previously committed transaction.\n>>\n>> I do think it'd make sense to ensure that the documentation clearly\n>> highlights the impact of abandoned prepared xacts on server resource\n>> retention and performance, preferably with pointers to appropriate\n>> views. 
I haven't reviewed the docs to see how clear that is already.\n>>\n>\n> Having seen the documentation, IMHO the document does contain enough\n> information for users to understand what issues can be caused by these\n> orphaned prepared transactions.\n>\n>\n>>\n>> I can also see an argument for a periodic log message (maybe from\n>> vacuum?) warning when old prepared xacts hold xmin down. Including one\n>> sent to the client application when an explicit VACUUM is executed.\n>> (In fact, it'd make sense to generalise that for all xmin-retention).\n>>\n>\n> I think that opens up the debate on what we really mean by \"old\" and\n> whether that requires a syntax change when creating a prepared\n> transactions as Thomas Kellerer suggested earlier?\n>\n> I agree that vacuum should periodically throw warnings for any prepared\n> xacts that are holding xmin down.\n>\n> Generalising it for all xmin-retention is a fair idea IMHO, though that\n> does increase the overall scope here. A vacuum process should (ideally)\n> periodically throw out warnings for anything that is preventing it\n> (including\n> orphaned prepared transactions) from doing its routine work so that\n> somebody can take necessary actions.\n>\n>\n>> But I'm really not a fan of aborting such txns. 
If you operate with\n>> some kind of broken global transaction manager that can forget or\n>> abandon prepared xacts, then fix it, or adopt site-local periodic\n>> cleanup tasks that understand your site's needs.\n>>\n>> --\n>> Craig Ringer http://www.2ndQuadrant.com/\n>> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n>>\n>\n>\n> --\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n> CELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\n> SKYPE: engineeredvirus\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Wed, 19 Feb 2020 20:04:50 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Here is the v2 of the same patch after rebasing it and running it through\npgindent. There are no other code changes.\n\n\nOn Wed, Feb 19, 2020 at 8:04 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n> All,\n>\n> Attached is version 1 of POC patch for notifying of orphaned\n> prepared transactions via warnings emitted to a client\n> application and/or log file. It applies to PostgreSQL branch\n> \"master\" on top of \"e2e02191\" commit.\n>\n> I've tried to keep the patch as less invasive as I could with\n> minimal impact on vacuum processes, so the performance impact\n> and the changes are minimal in that area of PostgreSQL core.\n>\n>\n> - What's in this Patch:\n>\n> This patch throws warnings when an autovacuum worker encounters\n> an orphaned prepared transaction. It also throws warnings to a\n> client when a vacuum command is issued. This patch also\n> introduces two new GUCs:\n>\n> (1) max_age_prepared_xacts\n> - The age after creation of a prepared transaction after which it\n> will be considered an orphan.\n>\n> (2) prepared_xacts_vacuum_warn_timeout\n> - The timeout period for an autovacuum (essentially any of its\n> worker) to check for orphaned prepared transactions and throw\n> warnings if any are found.\n>\n>\n> - What This Patch Does:\n>\n> If the GUCs are enabled (set to a value higher than -1), an\n> autovacuum worker running in the background checks if the\n> timeout has expired. If so, it checks if there are any orphaned\n> prepared transactions (i.e. their age has exceeded\n> max_age_prepared_xacts). If it finds any, it throws a warning for\n> every such transaction. It also emits the total number of orphaned\n> prepared transactions if one or more are found.\n>\n> When a vacuum command is issued from within a client, say psql,\n> in that case, we skip the vacuum timeout check and simply scan\n> for any orphaned prepared transactions. 
Warnings are emitted to\n> the client and log file if any are found.\n>\n>\n> - About the New GUCs:\n>\n> = max_age_prepared_xacts:\n> Sets maximum age after which a prepared transaction is considered an\n> orphan. It applies when \"prepared transactions\" are enabled. The\n> age for a transaction is calculated from the time it was created to\n> the current time. If this value is specified without units, it is taken\n> as milliseconds. The default value is -1 which allows prepared\n> transactions to live forever.\n>\n> = prepared_xacts_vacuum_warn_timeout:\n> Sets timeout after which vacuum starts throwing warnings for every\n> prepared transactions that has exceeded maximum age defined by\n> \"max_age_prepared_xacts\". If this value is specified without units,\n> it is taken as milliseconds. The default value of -1 will disable\n> this warning mechanism. Setting a too value could potentially fill\n> up log with orphaned prepared transaction warnings, so this\n> parameter must be set to a value that is reasonably large to not\n> fill up log file, but small enough to notify of long running and\n> potential orphaned prepared transactions. There is no additional\n> timer or worker introduced with this change. Whenever a vacuum\n> worker runs, it first checks for any orphaned prepared transactions.\n> So at best, this GUC serves as a guideline for a vacuum worker\n> if a warning should be thrown to log file or a client issuing\n> vacuum command.\n>\n>\n> - What this Patch Does Not Cover:\n>\n> The warning is not thrown when user either runs vacuumdb or passes\n> individual relations to be vacuum. Specifically in case of vacuumdb,\n> it breaks down a vacuum command to an attribute-wise vacuum command.\n> So the vacuum command is indirectly run many times. Considering that\n> we want to emit warnings for every manual vacuum command, this simply\n> floods the terminal and log with orphaned prepared transactions\n> warnings. 
We could potentially handle that, but the overhead of\n> that seemed too much to me (and I've not invested any time looking\n> to fix that either). Hence, warnings are not thrown when user runs\n> vacuumdb and relation specific vacuum.\n>\n>\n>\n> On Fri, Jan 31, 2020 at 7:02 PM Hamid Akhtar <hamid.akhtar@gmail.com>\n> wrote:\n>\n>>\n>>\n>> On Thu, Jan 30, 2020 at 8:28 AM Craig Ringer <craig@2ndquadrant.com>\n>> wrote:\n>>\n>>> On Thu, 30 Jan 2020 at 02:04, Hamid Akhtar <hamid.akhtar@gmail.com>\n>>> wrote:\n>>> >\n>>> > So having seen the feedback on this thread, and I tend to agree with\n>>> most of what has been said here, I also agree that the server core isn't\n>>> really the ideal place to handle the orphan prepared transactions.\n>>> >\n>>> > Ideally, these must be handled by a transaction manager, however, I do\n>>> believe that we cannot let database suffer for failing of an external\n>>> software, and we did a similar change through introduction of idle in\n>>> transaction timeout behavior.\n>>>\n>>> The difference, IMO, is that idle-in-transaction aborts don't affect\n>>> anything we've promised to be durable.\n>>>\n>>> Once you PREPARE TRANSACTION the DB has made a promise that that txn\n>>> is durable. We don't have any consistent feedback channel to back to\n>>> applications and say \"Hey, if you're not going to finish this up we\n>>> need to get rid of it soon, ok?\". If a distributed transaction manager\n>>> gets consensus for commit and goes to COMMIT PREPARED a previously\n>>> prepared txn only to find that it has vanished, that's a major\n>>> problem, and one that may bring the entire DTM to a halt until the\n>>> admin can intervene.\n>>>\n>>> This isn't like idle-in-transaction aborts. 
It's closer to something\n>>> like uncommitting a previously committed transaction.\n>>>\n>>> I do think it'd make sense to ensure that the documentation clearly\n>>> highlights the impact of abandoned prepared xacts on server resource\n>>> retention and performance, preferably with pointers to appropriate\n>>> views. I haven't reviewed the docs to see how clear that is already.\n>>>\n>>\n>> Having seen the documentation, IMHO the document does contain enough\n>> information for users to understand what issues can be caused by these\n>> orphaned prepared transactions.\n>>\n>>\n>>>\n>>> I can also see an argument for a periodic log message (maybe from\n>>> vacuum?) warning when old prepared xacts hold xmin down. Including one\n>>> sent to the client application when an explicit VACUUM is executed.\n>>> (In fact, it'd make sense to generalise that for all xmin-retention).\n>>>\n>>\n>> I think that opens up the debate on what we really mean by \"old\" and\n>> whether that requires a syntax change when creating a prepared\n>> transactions as Thomas Kellerer suggested earlier?\n>>\n>> I agree that vacuum should periodically throw warnings for any prepared\n>> xacts that are holding xmin down.\n>>\n>> Generalising it for all xmin-retention is a fair idea IMHO, though that\n>> does increase the overall scope here. A vacuum process should (ideally)\n>> periodically throw out warnings for anything that is preventing it\n>> (including\n>> orphaned prepared transactions) from doing its routine work so that\n>> somebody can take necessary actions.\n>>\n>>\n>>> But I'm really not a fan of aborting such txns. 
If you operate with\n>>> some kind of broken global transaction manager that can forget or\n>>> abandon prepared xacts, then fix it, or adopt site-local periodic\n>>> cleanup tasks that understand your site's needs.\n>>>\n>>> --\n>>> Craig Ringer http://www.2ndQuadrant.com/\n>>> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n>>>\n>>\n>>\n>> --\n>> Highgo Software (Canada/China/Pakistan)\n>> URL : www.highgo.ca\n>> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n>> CELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\n>> SKYPE: engineeredvirus\n>>\n>\n>\n> --\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n> CELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\n> SKYPE: engineeredvirus\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Mon, 2 Mar 2020 17:42:11 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 05:42:11PM +0500, Hamid Akhtar wrote:\n> Here is the v2 of the same patch after rebasing it and running it through\n> pgindent. There are no other code changes.\n\nThe paragraph about max_age_prepared_xacts doesn't define the\neffect of treating a transaction as orphaned.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 5 Mar 2020 20:24:04 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Mon, 2 Mar 2020 at 21:42, Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n>\n> Here is the v2 of the same patch after rebasing it and running it through pgindent. There are no other code changes.\n>\n\nThank you for working on this. I think what this patch tries to\nachieve would be helpful to inform users of orphaned prepared\ntransactions that can be a cause of inefficient vacuum.\n\nAs far as I read the patch, configuring this feature using the newly\nadded parameters seems complicated to me. IIUC, even if a prepared\ntransaction is much older than max_age_prepared_xacts, we don't\nwarn if prepared_xacts_vacuum_warn_timeout hasn't elapsed since\nthe \"first\" prepared transaction was created. And the first\nprepared transaction means the first entry of\nTwoPhaseStateData->prepXacts. Therefore, if there is always more than\none prepared transaction, we don't warn for at least\nprepared_xacts_vacuum_warn_timeout seconds even if the first added\nprepared transaction is already removed. So I'm not sure how we can\ndecide the setting value of prepared_xacts_vacuum_warn_timeout.\n\nRegarding the warning message, I wonder if the current message is too\ndetailed. If we want to inform users that there are orphaned prepared\ntransactions, it seems to me that it's enough to report their\nexistence (and possibly the number of orphaned prepared transactions),\nrather than individual details.\n\nGiven the above, we can simplify this feature: we can\nhave only max_age_prepared_xacts, and autovacuum checks the minimum\nprepared_at of prepared transactions and compares it to\nmax_age_prepared_xacts. We can warn if (CurrentTimestamp -\nmin(prepared_at)) > max_age_prepared_xacts. 
In addition, if we also\nwant to control this behavior by the age of xid, we can have another\nGUC parameter for comparing the age(min(xid of prepared transactions))\nto that value.\n\nFinally, regarding the names of the parameters, when we mention the age of\na transaction it means the age of the xid of the transaction, not the time.\nPlease refer to other GUC parameters having \"age\" in their names, such as\nautovacuum_freeze_max_age and vacuum_freeze_min_age. The patch adds\nmax_age_prepared_xacts but I think it should be renamed. For example,\nprepared_xact_warn_min_duration is for time and\nprepared_xact_warn_max_age for age.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Mar 2020 14:42:34 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Thank you so much Bruce and Sawada.\n\nBruce:\nI'll update the documentation with more details for max_age_prepared_xacts\n\nSawada:\nYou have raised 4 very valid points. Here are my thoughts.\n\n(1)\nI think your concern is whether we can reduce the need for 2 GUCs to one.\n\nThe 2 different GUCs serve two different purposes.\n- max_age_prepared_xacts; this defines the maximum age of a prepared\ntransaction after which it may be considered an orphan.\n- prepared_xacts_vacuum_warn_timeout; since we are throwing warnings in the\nlog, we need a way of controlling the behaviour to prevent flooding\nthe log file with our messages. This timeout defines that. Maybe there is\na better way of handling this, but maybe not.\n\n(2)\nYour point is that when there is more than one prepared transaction (and\neven if the first one is removed), the timeout\nprepared_xacts_vacuum_warn_timeout isn't always accurate.\n\nYes, I agree. However, for us to hit the exact timeout for each prepared\ntransaction, we would need to set up timers and callback functions. That's\ntoo much of an overhead IMHO. The alternative design that I took (the current\ndesign) is based on the assumption that we don't need a precise timeout for\nthese transactions or for vacuum to report these issues to the log. So, a\ndecent enough way of setting a timeout should be good enough considering\nthat it doesn't add any real overhead to the vacuum process. This does mean\nthat an orphaned prepared transaction may only be notified after as much as\nprepared_xacts_vacuum_warn_timeout * 2. This, IMHO, should be acceptable\nbehavior.\n\n(3)\nMessage is too detailed.\n\nYes, I agree. I'll review this and update the patch.\n\n(4)\nGUCs should be renamed.\n\nYes, I agree. The names you have suggested make more sense. 
I'll send an\nupdated version of the patch with the new names and other suggested changes.\n\nOn Wed, Mar 11, 2020 at 10:43 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Mon, 2 Mar 2020 at 21:42, Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> >\n> > Here is the v2 of the same patch after rebasing it and running it\n> through pgindent. There are no other code changes.\n> >\n>\n> Thank you for working on this. I think what this patch tries to\n> achieve would be helpful to inform orphaned prepared transactions that\n> can be cause of inefficient vacuum to users.\n>\n> As far as I read the patch, the setting this feature using newly added\n> parameters seems to be complicated to me. IIUC, even if a prepared\n> transactions is enough older than max_age_prepared_xacts, we don't\n> warn if it doesn't elapsed prepared_xacts_vacuum_warn_timeout since\n> when the \"first\" prepared transaction is created. And the first\n> prepared transaction means that the first entry for\n> TwoPhaseStateData->prepXacts. Therefore, if there is always more than\n> one prepared transaction, we don't warn for at least\n> prepared_xacts_vacuum_warn_timeout seconds even if the first added\n> prepared transaction is already removed. So I'm not sure how we can\n> think the setting value of prepared_xacts_vacuum_warn_timeout.\n>\n> Regarding the warning message, I wonder if the current message is too\n> detailed. If we want to inform that there is orphaned prepared\n> transactions to users, it seems to me that it's enough to report the\n> existence (and possibly the number of orphaned prepared transactions),\n> rather than individual details.\n>\n> Given that the above things, we can simply think this feature; we can\n> have only max_age_prepared_xacts, and autovacuum checks the minimum of\n> prepared_at of prepared transactions, and compare it to\n> max_age_prepared_xacts. We can warn if (CurrentTimestamp -\n> min(prepared_at)) > max_age_prepared_xacts. 
In addition, if we also\n> want to control this behavior by the age of xid, we can have another\n> GUC parameter for comparing the age(min(xid of prepared transactions))\n> to that value.\n>\n> Finally, regarding the name of parameters, when we mention the age of\n> transaction it means the age of xid of the transaction, not the time.\n> Please refer to other GUC parameter having \"age\" in its name such as\n> autovacuum_freeze_max_age, vacuum_freeze_min_age. The patch adds\n> max_age_prepared_xacts but I think it should be renamed. For example,\n> prepared_xact_warn_min_duration is for time and\n> prepared_xact_warn_max_age for age.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Wed, 15 Apr 2020 00:00:35 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 10:05 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> Attached is version 1 of POC patch for notifying of orphaned\n> prepared transactions via warnings emitted to a client\n> application and/or log file. It applies to PostgreSQL branch\n> \"master\" on top of \"e2e02191\" commit.\n\nI think this is a bad idea and that we should reject the patch. It's\ntrue that forgotten prepared transactions are a problem, but it's also\ntrue that you can monitor for that yourself using the\npg_prepared_xacts view. If you do, you will have a lot more\nflexibility than this patch gives you, or than any similar patch ever\ncan give you.\n\nGenerally, people don't pay attention to warnings in logs, so they're\njust clutter. Moreover, there are tons of different things for which\nyou should probably monitor (wraparound perils, slow checkpoints,\nbloated tables, etc.) and so the real solution is to run some\nmonitoring software. So even if you do pay attention to your logs, and\neven if the GUCs this provides give you sufficient flexibility for your\nneeds in this one area, you still need to run some monitoring\nsoftware. At which point, you don't also need this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 Apr 2020 15:19:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Wed, 15 Apr 2020 at 03:19, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Feb 19, 2020 at 10:05 AM Hamid Akhtar <hamid.akhtar@gmail.com>\n> wrote:\n> > Attached is version 1 of POC patch for notifying of orphaned\n> > prepared transactions via warnings emitted to a client\n> > application and/or log file. It applies to PostgreSQL branch\n> > \"master\" on top of \"e2e02191\" commit.\n>\n> I think this is a bad idea and that we should reject the patch. It's\n> true that forgotten prepared transactions are a problem, but it's also\n> true that you can monitor for that yourself using the\n> pg_prepared_xacts view. If you do, you will have a lot more\n> flexibility than this patch gives you, or than any similar patch ever\n> can give you.\n>\n\nI agree. It's going to cause nothing but problems.\n\nI am generally a fan of improvements that make PostgreSQL easier to use,\neasier to monitor and understand, harder to break accidentally, etc. But\nnot when those improvements come at the price of correct behaviour for\nstandard, externally specified interfaces.\n\nNothing allows us to just throw away prepared xacts. Doing so violates the\nvery definition of what a prepared xact is. It's like saying \"hey, this\ntable is bloated, lets discard all rows with xmin < foo because we figure\nthe user probably doesn't care about them; though they're visible to some\nstill-running xacts, but those xacts haven't accessed the table.\". Um. No.\nWe can't do that.\n\nIf you want this, write an extension that does it as a background worker.\nYou can munge the prepared xacts state in any manner you please from there.\n\nI advocated for visibility / monitoring improvements upthread that might\nhelp mitigate the operational issues. 
Because I do agree that there's a\nproblem with users having to watch the logs or query obscure state to\nunderstand what the system is doing and why bloat is being created by\nabandoned prepared xacts.\n\nJust discarding the prepared xacts is not the answer though.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 16 Apr 2020 13:23:36 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, 16 Apr 2020 at 13:23, Craig Ringer <craig@2ndquadrant.com> wrote:\n\n>\n> Just discarding the prepared xacts is not the answer though.\n>\n\n... however, I have wondered a few times about making vacuum smarter about\ncases where the xmin is held down by prepared xacts or by replication\nslots. If we could record the oldest *and newest* xid needed by such\nresource retention markers we could potentially teach vacuum to remove\nintermediate dead rows. For high-churn workloads like workqueue\napplications that could be a really big win.\n\nWe wouldn't need to track a fine-grained snapshot with an in-progress list\n(or inverted in-progress list like historic snapshots) for these. We'd just\nremember the needed xid range in [xmin,xmax] form. And we could even do the\nsame for live backends' PGXACT - it might not be worth the price there, but\nif you have workloads that have batch xacts + high churn rate xacts it'd be\npretty appealing.\n\nIt wouldn't help with xid wraparound concerns, but it could help a lot with\nbloat caused by old snapshots for some very common workloads.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 16 Apr 2020 13:32:24 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "This patch actually does not discard any prepared transactions and only\nthrows a warning for each orphaned one. So, there is no behaviour change\nexcept for getting some warnings in the log or emitting some warning to a\nclient executing a vacuum command.\n\nI hear all the criticism which I don't disagree with. Obviously, scripts\nand other solutions could provide a lot more flexibility.\n\nAlso, I believe most of us agree that vacuum needs to be smarter.\n\nsrc/backend/commands/vacuum.c does throw warnings for upcoming wraparound\nissues with one warning in particular mentioning prepared transactions and\nstale replication slots. So, throwing warnings is not unprecedented. There\nare 3 warnings in this file which I believe can also be handled by external\ntools. I'm not debating the merit of these warnings, nor am I trying to\njustify the addition of new warnings based on these.\n\nMy real question is whether vacuum should be preemptively complaining about\nprepared transactions or stale replication slots rather than waiting for\ntransaction id to exceed the safe limit. I presume by the time safe limit\nis exceeded, vacuum's work would already have been significantly impacted.\n\nAFAICT, my patch actually doesn't break anything and doesn't add any\nsignificant overhead to the vacuum process. It does supplement the current\nwarnings though which might be useful.\n\nOn Thu, Apr 16, 2020 at 10:32 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n\n> On Thu, 16 Apr 2020 at 13:23, Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n>>\n>> Just discarding the prepared xacts is not the answer though.\n>>\n>\n> ... however, I have wondered a few times about making vacuum smarter about\n> cases where the xmin is held down by prepared xacts or by replication\n> slots. If we could record the oldest *and newest* xid needed by such\n> resource retention markers we could potentially teach vacuum to remove\n> intermediate dead rows. 
For high-churn workloads like workqueue\n> applications that could be a really big win.\n>\n> We wouldn't need to track a fine-grained snapshot with an in-progress list\n> (or inverted in-progress list like historic snapshots) for these. We'd just\n> remember the needed xid range in [xmin,xmax] form. And we could even do the\n> same for live backends' PGXACT - it might not be worth the price there, but\n> if you have workloads that have batch xacts + high churn rate xacts it'd be\n> pretty appealing.\n>\n> It wouldn't help with xid wraparound concerns, but it could help a lot\n> with bloat caused by old snapshots for some very common workloads.\n>\n> --\n> Craig Ringer http://www.2ndQuadrant.com/\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Thu, 16 Apr 2020 13:42:36 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 4:43 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> My real question is whether vacuum should be preemptively complaining about prepared transactions or stale replication slots rather than waiting for transaction id to exceed the safe limit. I presume by the time safe limit is exceeded, vacuum's work would already have been significantly impacted.\n\nYeah, for my part, I agree that letting things go until the point\nwhere VACUUM starts to complain is usually bad. Generally, you want to\nknow a lot sooner. That being said, I think the solution to that is to\nrun a monitoring tool, not to overload the autovacuum worker with\nadditional duties.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Apr 2020 08:20:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 5:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Apr 16, 2020 at 4:43 AM Hamid Akhtar <hamid.akhtar@gmail.com>\n> wrote:\n> > My real question is whether vacuum should be preemptively complaining\n> about prepared transactions or stale replication slots rather than waiting\n> for transaction id to exceed the safe limit. I presume by the time safe\n> limit is exceeded, vacuum's work would already have been significantly\n> impacted.\n>\n> Yeah, for my part, I agree that letting things go until the point\n> where VACUUM starts to complain is usually bad. Generally, you want to\n> know a lot sooner. That being said, I think the solution to that is to\n> run a monitoring tool, not to overload the autovacuum worker with\n> additional duties.\n>\n\nSo is the concern performance overhead rather than the need for such a\nfeature?\n\nAny server running with prepared transactions enabled, more likely than\nnot, requires a monitoring tool for tracking orphaned prepared\ntransactions. For such environments, surely the overhead created by such a\nfeature implemented in the server will create a lower overhead than their\nmonitoring tool.\n\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Thu, 16 Apr 2020 23:16:29 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 2:17 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n> So is the concern performance overhead rather than the need for such a feature?\n\nNo, I don't think this would have any significant overhead. My concern\nis that I think it's the wrong way to solve the problem. If you need\nto check for prepared transactions that got missed, the right way to\ndo that is to use a monitoring tool that runs an appropriate query\nagainst the server on a regular basis and alerts based on the output.\nSuch a tool can be used for many things, of which this is just one,\nand the queries can be customized to the needs of a particular\nenvironment, whereas this feature is much less flexible in that way\nbecause it is hard-coded into the server.\n\nTo put that another way, any problem you can solve with this feature,\nyou can also solve without this feature. And you can solve it any\nreleased branch, without waiting for a release that would\nhypothetically contain this patch, and you can solve it in a more\nflexible way than this patch allows, because you can tailor the query\nany way you like. The right place for a feature like this in something\nlike check_postgres.pl, not the server. It looks like they added it in\n2009:\n\nhttps://bucardo.org/pipermail/check_postgres/2009-April/000349.html\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Apr 2020 14:26:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Apr 16, 2020 at 2:17 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n>> So is the concern performance overhead rather than the need for such a feature?\n\n> No, I don't think this would have any significant overhead. My concern\n> is that I think it's the wrong way to solve the problem.\n\nFWIW, I agree with Robert that this patch is a bad idea. His\nrecommendation is to use an external monitoring tool, which is not a\nself-contained solution, but this isn't either: you'd need to add an\nexternal log-scraping tool to spot the warnings.\n\nEven if I liked the core idea, loading the functionality onto VACUUM seems\nlike a fairly horrid design choice. It's quite unrelated to what that\ncommand does. In the autovac code path, it's going to lead to multiple\nautovac workers all complaining simultaneously about the same problem.\nBut having manual vacuums complain about issues unrelated to the task at\nhand is also a seriously poor bit of UX design. Moreover, that won't do\nall that much to surface problems, since most(?) installations never run\nmanual vacuums; or if they do, the \"manual\" runs are really done by a cron\njob or the like, which is not going to notice the warnings. So you still\nneed a log-scraping tool.\n\nIf we were going to go down the path of periodically logging warnings\nabout old prepared transactions, some single-instance background task\nlike the checkpointer would be a better place to do the work in. But\nI'm not really recommending that, because I agree with Robert that\nwe just plain don't want this functionality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Apr 2020 15:11:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 03:11:51PM -0400, Tom Lane wrote:\n> Even if I liked the core idea, loading the functionality onto VACUUM seems\n> like a fairly horrid design choice. It's quite unrelated to what that\n> command does. In the autovac code path, it's going to lead to multiple\n> autovac workers all complaining simultaneously about the same problem.\n> But having manual vacuums complain about issues unrelated to the task at\n> hand is also a seriously poor bit of UX design. Moreover, that won't do\n> all that much to surface problems, since most(?) installations never run\n> manual vacuums; or if they do, the \"manual\" runs are really done by a cron\n> job or the like, which is not going to notice the warnings. So you still\n> need a log-scraping tool.\n\n+1.\n\n> If we were going to go down the path of periodically logging warnings\n> about old prepared transactions, some single-instance background task\n> like the checkpointer would be a better place to do the work in. But\n> I'm not really recommending that, because I agree with Robert that\n> we just plain don't want this functionality.\n\nI am not sure that the checkpointer is a good place to do that either,\njoining back with your argument in the first paragraph of this email\nrelated to vacuum. One potential approach would be a contrib module\nthat works as a background worker? However, I would think that\nfinding a minimum set of requirements that we think are generic enough\nfor most users would be something hard to draft a list of. If we had\na small, minimal contrib/ module in core that people could easily\nextend for their own needs and that we would intentionally keep as\nminimal, in the same spirit as say passwordcheck, perhaps..\n--\nMichael",
"msg_date": "Fri, 17 Apr 2020 09:40:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Thank you everyone for the detailed feedback.\n\nOn Fri, Apr 17, 2020 at 5:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Apr 16, 2020 at 03:11:51PM -0400, Tom Lane wrote:\n> > Even if I liked the core idea, loading the functionality onto VACUUM\n> seems\n> > like a fairly horrid design choice. It's quite unrelated to what that\n> > command does. In the autovac code path, it's going to lead to multiple\n> > autovac workers all complaining simultaneously about the same problem.\n> > But having manual vacuums complain about issues unrelated to the task at\n> > hand is also a seriously poor bit of UX design. Moreover, that won't do\n> > all that much to surface problems, since most(?) installations never run\n> > manual vacuums; or if they do, the \"manual\" runs are really done by a\n> cron\n> > job or the like, which is not going to notice the warnings. So you still\n> > need a log-scraping tool.\n>\n> +1.\n>\n> > If we were going to go down the path of periodically logging warnings\n> > about old prepared transactions, some single-instance background task\n> > like the checkpointer would be a better place to do the work in. But\n> > I'm not really recommending that, because I agree with Robert that\n> > we just plain don't want this functionality.\n>\n> I am not sure that the checkpointer is a good place to do that either,\n> joining back with your argument in the first paragraph of this email\n> related to vacuum. One potential approach would be a contrib module\n> that works as a background worker? However, I would think that\n> finding a minimum set of requirements that we think are generic enough\n> for most users would be something hard to draft a list of. 
If we had\n> a small, minimal contrib/ module in core that people could easily\n> extend for their own needs and that we would intentionally keep as\n> minimal, in the same spirit as say passwordcheck, perhaps..\n> --\n> Michael\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus",
"msg_date": "Fri, 17 Apr 2020 09:07:23 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 03:11:51PM -0400, Tom Lane wrote:\n> If we were going to go down the path of periodically logging warnings\n> about old prepared transactions, some single-instance background task\n> like the checkpointer would be a better place to do the work in. But\n> I'm not really recommending that, because I agree with Robert that\n> we just plain don't want this functionality.\n\nI thought we would just emit a warning at boot time.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 20 Apr 2020 22:35:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Mon, Apr 20, 2020 at 10:35:15PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 16, 2020 at 03:11:51PM -0400, Tom Lane wrote:\n>> If we were going to go down the path of periodically logging warnings\n>> about old prepared transactions, some single-instance background task\n>> like the checkpointer would be a better place to do the work in. But\n>> I'm not really recommending that, because I agree with Robert that\n>> we just plain don't want this functionality.\n> \n> I thought we would just emit a warning at boot time.\n\nThat's more tricky than boot time (did you mean postmaster context?),\nespecially if you are starting a cluster from a base backup as you\nhave no guarantee that the 2PC information is consistent by just\nlooking at what's on disk (some of the 2PC files may still be in WAL\nrecords to-be-replayed), so a natural candidate to gather the\ninformation wanted here would be RecoverPreparedTransactions() for a\nprimary, and StandbyRecoverPreparedTransactions() for a standby.\n--\nMichael",
"msg_date": "Tue, 21 Apr 2020 13:52:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Tue, Apr 21, 2020 at 01:52:46PM +0900, Michael Paquier wrote:\n> On Mon, Apr 20, 2020 at 10:35:15PM -0400, Bruce Momjian wrote:\n> > On Thu, Apr 16, 2020 at 03:11:51PM -0400, Tom Lane wrote:\n> >> If we were going to go down the path of periodically logging warnings\n> >> about old prepared transactions, some single-instance background task\n> >> like the checkpointer would be a better place to do the work in. But\n> >> I'm not really recommending that, because I agree with Robert that\n> >> we just plain don't want this functionality.\n> > \n> > I thought we would just emit a warning at boot time.\n> \n> That's more tricky than boot time (did you mean postmaster context?),\n> especially if you are starting a cluster from a base backup as you\n> have no guarantee that the 2PC information is consistent by just\n> looking at what's on disk (some of the 2PC files may still be in WAL\n> records to-be-replayed), so a natural candidate to gather the\n> information wanted here would be RecoverPreparedTransactions() for a\n> primary, and StandbyRecoverPreparedTransactions() for a standby.\n\nSorry, I meant something in the Postgres logs at postmaster start.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 Apr 2020 14:54:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Apr 21, 2020 at 01:52:46PM +0900, Michael Paquier wrote:\n>> On Mon, Apr 20, 2020 at 10:35:15PM -0400, Bruce Momjian wrote:\n>>> On Thu, Apr 16, 2020 at 03:11:51PM -0400, Tom Lane wrote:\n>>>> If we were going to go down the path of periodically logging warnings\n>>>> about old prepared transactions, some single-instance background task\n>>>> like the checkpointer would be a better place to do the work in. But\n>>>> I'm not really recommending that, because I agree with Robert that\n>>>> we just plain don't want this functionality.\n\n> Sorry, I meant something in the Postgres logs at postmaster start.\n\nThat seems strictly worse than periodic logging as far as the probability\nthat somebody will notice the log entry goes. In any case it would only\nhelp people when they restart their postmaster, which ought to be pretty\ninfrequent in a production situation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Apr 2020 16:03:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Tue, Apr 21, 2020 at 04:03:53PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Apr 21, 2020 at 01:52:46PM +0900, Michael Paquier wrote:\n> >> On Mon, Apr 20, 2020 at 10:35:15PM -0400, Bruce Momjian wrote:\n> >>> On Thu, Apr 16, 2020 at 03:11:51PM -0400, Tom Lane wrote:\n> >>>> If we were going to go down the path of periodically logging warnings\n> >>>> about old prepared transactions, some single-instance background task\n> >>>> like the checkpointer would be a better place to do the work in. But\n> >>>> I'm not really recommending that, because I agree with Robert that\n> >>>> we just plain don't want this functionality.\n> \n> > Sorry, I meant something in the Postgres logs at postmaster start.\n> \n> That seems strictly worse than periodic logging as far as the probability\n> that somebody will notice the log entry goes. In any case it would only\n> help people when they restart their postmaster, which ought to be pretty\n> infrequent in a production situation.\n\nI thought if something was wrong, they might look at the server logs\nafter a restart, or they might have a higher probability of having\norphaned prepared transactions after a restart.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 Apr 2020 18:10:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Tue, Apr 21, 2020 at 6:10 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I thought if something was wrong, they might look at the server logs\n> after a restart, or they might have a higher probability of having\n> orphaned prepared transactions after a restart.\n\nMaybe slightly, but having a monitoring tool like check_postgres.pl\nfor this sort of thing still seems like a vastly better solution.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 22 Apr 2020 13:05:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
},
{
"msg_contents": "On Wed, Apr 22, 2020 at 01:05:17PM -0400, Robert Haas wrote:\n> On Tue, Apr 21, 2020 at 6:10 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I thought if something was wrong, they might look at the server logs\n> > after a restart, or they might have a higher probability of having\n> > orphaned prepared transactions after a restart.\n> \n> Maybe slightly, but having a monitoring tool like check_postgres.pl\n> for this sort of thing still seems like a vastly better solution.\n\nIt is --- I was just thinking we should have a minimal native warning.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 22 Apr 2020 14:06:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we need to handle orphaned prepared transactions in the\n server?"
}
]
[
{
"msg_contents": "Referencing the example given in the documentation for index-only\nscans [0], we consider an index:\n\nCREATE INDEX tab_f_x ON tab (f(x));\n\nThis index currently will not be used for an index-scan for the\nfollowing query since the planner isn't smart enough to know that \"x\"\nis not needed:\n\nSELECT f(x) FROM tab WHERE f(x) < 1;\n\nHowever, any function applied to a column for an index expression is\nrequired to be immutable so as far as I can tell the planner doesn't\nhave to be very smart to know that the index can indeed be used for an\nindex-only scan (without having \"x\" included).\n\nOne interesting use-case for this is to be able to create\nspace-efficient indexes for raw log data. For example, for each type\nof message (which might be encoded as JSON), one could create a\npartial index with the relevant fields extracted and converted into\nnative data types and use index-only scanning to query. This is not\nparticularly attractive today because the message itself would need to\nbe added to the index effectively duplicating the log data.\n\nIn the same vein, being able to add this auxiliary data (which is\nbasically immutable expressions on one or more columns) explicitly\nusing INCLUDE would make the technique actually reliable. This is not\npossible right now since expression are not supported as included\ncolumns.\n\nWhat's required in order to move forward on these capabilities?\n\n[0] https://www.postgresql.org/docs/current/indexes-index-only-scans.html\n\n\n",
"msg_date": "Wed, 22 Jan 2020 22:34:52 +0000",
"msg_from": "Malthe <mborch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Index-only scan for \"f(x)\" without \"x\""
},
{
"msg_contents": "Malthe <mborch@gmail.com> writes:\n> Referencing the example given in the documentation for index-only\n> scans [0], we consider an index:\n> CREATE INDEX tab_f_x ON tab (f(x));\n> This index currently will not be used for an index-scan for the\n> following query since the planner isn't smart enough to know that \"x\"\n> is not needed:\n> SELECT f(x) FROM tab WHERE f(x) < 1;\n> However, any function applied to a column for an index expression is\n> required to be immutable so as far as I can tell the planner doesn't\n> have to be very smart to know that the index can indeed be used for an\n> index-only scan (without having \"x\" included).\n\nThe problem is that the planner's initial analysis of the query tree\nconcludes that the scan of \"tab\" has to return \"x\", because it looks\nthrough the tree for plain Vars, and \"x\" is what it's going to find.\nThis conclusion is in fact true for any plan that doesn't involve\nscanning a suitable index on f(x), so it's not something we can just\ndispense with.\n\nTo back up from that and conclude that the indexscan doesn't really\nneed to return \"x\" because every use of it is in the context \"f(x)\"\nseems like a pretty expensive proposition, especially once you start\nconsidering index expressions that are more complex than a single\nfunction call. A brute-force matching search could easily be O(N^2)\nor worse in the size of the query tree, which is a lot to pay when\nthere's a pretty high chance of failure. (IOW, we have to expect\nthat most queries on \"tab\" are in fact not going to be able to use\nthat index, so we can't afford to spend a lot of planning time just\nbecause the index exists.)\n\n> What's required in order to move forward on these capabilities?\n\nSome non-brute-force solution to the above problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jan 2020 18:00:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for \"f(x)\" without \"x\""
},
{
"msg_contents": "On Thu, 23 Jan 2020 at 00:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The problem is that the planner's initial analysis of the query tree\n> concludes that the scan of \"tab\" has to return \"x\", because it looks\n> through the tree for plain Vars, and \"x\" is what it's going to find.\n> This conclusion is in fact true for any plan that doesn't involve\n> scanning a suitable index on f(x), so it's not something we can just\n> dispense with.\n\nIf f(x) were actually added to the table as a virtual generated column\nV, then the planner could perhaps reasonably find that a particular\nexpression depends in part on V (rather than the set of real columns\nunderlying the virtual generated column).\n\nThat is, the planner now knows that it needs to return V = \"f(x)\"\nrather than \"x\" and so it can match the requirement to our index and\ndecide to use an index-only scan.\n\n> To back up from that and conclude that the indexscan doesn't really\n> need to return \"x\" because every use of it is in the context \"f(x)\"\n> seems like a pretty expensive proposition, especially once you start\n> considering index expressions that are more complex than a single\n> function call. A brute-force matching search could easily be O(N^2)\n> or worse in the size of the query tree, which is a lot to pay when\n> there's a pretty high chance of failure. (IOW, we have to expect\n> that most queries on \"tab\" are in fact not going to be able to use\n> that index, so we can't afford to spend a lot of planning time just\n> because the index exists.)\n\nIn the approach outlined above, the set of expressions to match on\nwould be limited by the set of virtual generated columns defined on\nthe table — perhaps some sort of prefix tree that allows efficient\nmatching on a given expression syntax tree. There would be no cost to\nthis on a table that had no virtual generated columns.\n\nThe question is whether this is a reasonable take on virtual generated columns.\n\n --- regards\n\n\n",
"msg_date": "Thu, 23 Jan 2020 08:39:21 +0100",
"msg_from": "Malthe <mborch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index-only scan for \"f(x)\" without \"x\""
}
] |
[
{
"msg_contents": "Dear PgSQL-Hackers,\n\nI would like to propose a new feature which is missing in PgSQL but quite useful and nice to have (and which exists in Oracle and probably in some other RDBMS): \"Database Level\" triggers: BeforePgStart, AfterPgStarted, OnLogin, OnSuccessfulLogin, BeforePGshutdown, OnLogOut - I just mentioned some of them, but the final events could be different.\n\nThese DB Level triggers are quite useful, for example if somebody wants to set some PG env. variables depending on the user belonging to one or another role, or wants to track who/when logged in/out, start a stored procedure AfterPgStarted, and so on.\n\nThanks!\n\nThe information in this email is confidential and may be legally privileged. It is intended solely for the addressee. Any opinions expressed are mine and do not necessarily represent the opinions of the Company. Emails are susceptible to interference. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is strictly prohibited and may be unlawful. If you have received this message in error, do not open any attachments but please notify the Endava Service Desk on (+44 (0)870 423 0187), and delete this message from your system. The sender accepts no responsibility for information, errors or omissions in this email, or for its use or misuse, or for any act committed or omitted in connection with this communication. If in doubt, please verify the authenticity of the contents with the sender. Please rely on your own virus checkers as no responsibility is taken by the sender for any damage rising out of any bug or virus infection.\n\nEndava plc is a company registered in England under company number 5722669 whose registered office is at 125 Old Broad Street, London, EC2N 1AR, United Kingdom. Endava plc is the Endava group holding company and does not provide any services to clients. Each of Endava plc and its subsidiaries is a separate legal entity and has no liability for another such entity's acts or omissions.",
"msg_date": "Thu, 23 Jan 2020 08:45:15 +0000",
"msg_from": "Sergiu Velescu <Sergiu.Velescu@endava.com>",
"msg_from_op": true,
"msg_subject": "New feature proposal (trigger)"
},
{
"msg_contents": "čt 23. 1. 2020 v 17:26 odesílatel Sergiu Velescu <Sergiu.Velescu@endava.com>\nnapsal:\n\n> Dear PgSQL-Hackers,\n>\n> I would like to propose a new feature which is missing in PgSQL but quite\n> useful and nice to have (and exists in Oracle and probably in some other\n> RDBMS), I speak about “Database Level” triggers: BeforePgStart,\n> AfterPgStarted, OnLogin, OnSuccessfulLogin, BeforePGshutdown, OnLogOut – I\n> just mentioned some of them but the final events could be different.\n>\n> These DB Level triggers are quite useful, for example if somebody wants to\n> set some PG env. variables depending on the user belonging to one or\n> another role, or wants to track who/when logged in/out, start a stored\n> procedure AfterPgStarted, and so on.\n\nDo you have some examples of these useful triggers?\n\nI don't know of any.\n\nRegards\n\nPavel",
"msg_date": "Thu, 23 Jan 2020 17:39:01 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New feature proposal (trigger)"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> čt 23. 1. 2020 v 17:26 odesílatel Sergiu Velescu <Sergiu.Velescu@endava.com>\n> napsal:\n>> I would like to propose a new feature which is missing in PgSQL but quite\n>> useful and nice to have (and exists in Oracle and probably in some other\n>> RDBMS), I speak about “Database Level” triggers: BeforePgStart,\n>> AfterPgStarted, OnLogin, OnSuccessfulLogin, BeforePGshutdown, OnLogOut – I\n>> just mentioned some of it but the final events could be different.\n\n> Do you have some examples of these useful triggers?\n> I don't know any one.\n\nSee also the fate of commit e788bd924, which proposed to add\non-session-start and on-session-end hooks. Getting that sort of thing\nto work safely is a LOT harder than it sounds. There are all sorts of\ndefinitional and implementation problems, at least if you'd like the\nhook or trigger to do anything interesting (like run a transaction).\n\nI rather suspect that exposing such a thing at SQL level would also add\na pile of security considerations (i.e. who's allowed to do what to whom).\nThe hook proposal didn't have to address that, but a trigger feature\ncertainly would.\n\nMaybe it's all do-able, but the work to benefit ratio doesn't look\nvery appetizing to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Jan 2020 14:01:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New feature proposal (trigger)"
},
{
"msg_contents": "Hi,\r\n\r\nYes, please find below a few examples.\r\n\r\nOnLogin/Logout.\r\nI want to log/audit each attempt to log in (successful and/or not).\r\nWho/how long was logged in to the DB (who logged in out of business hours (maybe deny access)).\r\nSet session variables based on username (or maybe IP address) - for example DATE format.\r\n\r\nOnStartup (or AfterStarted)\r\nI want to start a procedure which checks for a specific event in a loop and sends an email.\r\n\r\nOnDDL\r\nLog every DDL in a DB log table (who/when altered/created/dropped/truncated a specific object) and send an email.\r\n\r\nOut of this topic, nice to have (I could elaborate on any of the topics below if you are interested):\r\nStorage quota per user (or schema).\r\nAudit – I know about the existence of the pgaudit extension but it is far from ideal (I compare it to Oracle Fine Grained Audit).\r\nDuplicate WAL (to have WAL in 2 different places – for example I take a backup on a separate disk and I want to have a copy of WAL on that disk).\r\nTo have something like Oracle SQL Tuning Advisor (for example I have a “big” SQL which takes longer than it should (probably the optimizer didn’t find the best execution plan in the time allocated to it) – this tool provides the possibility to comprehensively analyze the SQL and offer solutions (maybe a different execution plan, maybe a suggestion to create a specific index…)).\r\nBest regards.\r\n\r\nFrom: Pavel Stehule <pavel.stehule@gmail.com>\r\nSent: Thursday, January 23, 2020 18:39\r\nTo: Sergiu Velescu <Sergiu.Velescu@endava.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: New feature proposal (trigger)",
"msg_date": "Fri, 24 Jan 2020 07:55:11 +0000",
"msg_from": "Sergiu Velescu <Sergiu.Velescu@endava.com>",
"msg_from_op": true,
"msg_subject": "RE: New feature proposal (trigger)"
},
{
"msg_contents": "pá 24. 1. 2020 v 8:55 odesílatel Sergiu Velescu <Sergiu.Velescu@endava.com>\nnapsal:\n\n> Hi,\n>\n> Yes, please find below a few examples.\n>\n> OnLogin/Logout.\n> I want to log/audit each attempt to log in (successful and/or not).\n> Who/how long was logged in to the DB (who logged in out of business hours\n> (maybe deny access)).\n> Set session variables based on username (or maybe IP address) - for\n> example DATE format.\n>\n> OnStartup (or AfterStarted)\n> I want to start a procedure which checks for a specific event in a loop\n> and sends an email.\n>\n> OnDDL\n> Log every DDL in a DB log table (who/when\n> altered/created/dropped/truncated a specific object) and send an email.\n\nYou can do almost all of these things today with C extensions or just with\nthe Postgres log.\n\nPersonally, I don't think doing these things just from Postgres PL\nprocedures is a good thing.\n\nPavel\n\n> Out of this topic, nice to have (I could elaborate on any of the topics\n> below if you are interested):\n>\n> Storage quota per user (or schema).\n>\n> Audit – I know about the existence of the pgaudit extension but it is far\n> from ideal (I compare it to Oracle Fine Grained Audit).\n>\n> Duplicate WAL (to have WAL in 2 different places – for example I take a\n> backup on a separate disk and I want to have a copy of WAL on that disk).\n>\n> To have something like Oracle SQL Tuning Advisor (for example I have a\n> “big” SQL which takes longer than it should (probably the optimizer didn’t\n> find the best execution plan in the time allocated to it) – this tool\n> provides the possibility to comprehensively analyze the SQL and offer\n> solutions (maybe a different execution plan, maybe a suggestion to create\n> a specific index…)).\n>\n> Best regards.",
"msg_date": "Fri, 24 Jan 2020 09:02:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New feature proposal (trigger)"
},
{
"msg_contents": "Hi,\r\n\r\nCould you please elaborate – what do you mean by “…you can do almost all things today by C extensions…” – do these extensions already exist or do I have to develop them?\r\nIf these extensions exist and are developed by somebody else (not in PG core) then nobody will install them where sensitive information exists (at least you will not be able to pass a PCI-DSS audit).\r\nIf I have to develop them – then I have 2 options: 1) to develop them or 2) to use another RDBMS which already has this implemented.\r\n\r\nFor enterprise class solutions it is vital to have the possibility to keep track of actions in the DB (who/when logged in/out, which statements were run and so on); this is even more important than performance, because if I need more performance I probably could increase the hardware processing power (CPU/RAM/IOPS), but if I have no audit I have no choice…\r\n\r\nI know PostgreSQL is a free solution and I can’t expect it to have everything a commercial RDBMS has, but at least we should start to think about implementing this!\r\n\r\nHave a nice day!\r\n\r\nFrom: Pavel Stehule <pavel.stehule@gmail.com>\r\nSent: Friday, January 24, 2020 10:03\r\nTo: Sergiu Velescu <Sergiu.Velescu@endava.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: New feature proposal (trigger)\r\n\r\n\r\n\r\npá 24. 1. 
2020 v 8:55 odesílatel Sergiu Velescu <Sergiu.Velescu@endava.com<mailto:Sergiu.Velescu@endava.com>> napsal:\r\nHi,\r\n\r\nYes, please find below few examples.\r\n\r\nOnLogin/Logout.\r\nI want to log/audit each attempt to login (successful and/or not).\r\nWho/how long was logged in DB (who logged in out of business hours (maybe deny access)).\r\nSet session variable based on username (or maybe IP address) - for example DATE format.\r\n\r\nOnStartup (or AfterStarted)\r\nI want to start a procedure which check for a specific event in a loop and send an email.\r\n\r\nOnDDL\r\nLog every DDL in a DB log table (who/when altered/created/dropped/truncated a specific object) and send an email.\r\n\r\nyou can do almost all things today by C extensions or just with Postgres log\r\n\r\nPersonally I don't thing so doing these things just from Postgres, PL procedures is good thing\r\n\r\nPavel\r\n\r\n\r\nOut of this topic nice to have (I could elaborate any of below topic if you are interested in):\r\nStorage quota per user (or schema).\r\nAudit – I know about existence of pgaudit extension but it is far from ideal (I compare to Oracle Fine Grained Audit).\r\nDuplicate WAL (to have WAL in 2 different places – for example I take backup on separate disk and I want to have a copy of WAL on that disk)\r\nTo have something like Oracle SQL Tuning Advisor (for example I have a “big” SQL which take longer than it should (probably the optimizer didn’t find the pest execution plan in the tame allocated to this) – this tool provide the possibility to analyze comprehensive the SQL and offer solutions (maybe different execution plan, maybe offer suggestion to create a specific index…)).\r\nBest regards.\r\n\r\nFrom: Pavel Stehule <pavel.stehule@gmail.com<mailto:pavel.stehule@gmail.com>>\r\nSent: Thursday, January 23, 2020 18:39\r\nTo: Sergiu Velescu <Sergiu.Velescu@endava.com<mailto:Sergiu.Velescu@endava.com>>\r\nCc: 
pgsql-hackers@postgresql.org<mailto:pgsql-hackers@postgresql.org>\r\nSubject: Re: New feature proposal (trigger)\r\n\r\n\r\n\r\nčt 23. 1. 2020 v 17:26 odesílatel Sergiu Velescu <Sergiu.Velescu@endava.com<mailto:Sergiu.Velescu@endava.com>> napsal:\r\nDear PgSQL-Hackers,\r\n\r\nI would like to propose a new feature which is missing in PgSQL but quite useful and nice to have (and exists in Oracle and probably in some other RDBMS), I speak about “Database Level” triggers: BeforePgStart, AfterPgStarted, OnLogin, OnSuccessfulLogin, BeforePGshutdown, OnLogOut – I just mentioned some of it but the final events could be different.\r\n\r\nThese DB Level triggers are quite useful, for example if somebody wants to set some PG env. variables depending on the user belonging to one or another role, or wants to track who/when logged in/out, start a stored procedure AfterPgStarted and so on.\r\n\r\nDo you have some examples of these useful triggers?\r\n\r\nI don't know any one.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\nThanks!",
"msg_date": "Fri, 24 Jan 2020 09:08:47 +0000",
"msg_from": "Sergiu Velescu <Sergiu.Velescu@endava.com>",
"msg_from_op": true,
"msg_subject": "RE: New feature proposal (trigger)"
},
{
"msg_contents": "pá 24. 1. 2020 v 10:08 odesílatel Sergiu Velescu <Sergiu.Velescu@endava.com>\nnapsal:\n\n> Hi,\n>\n>\n>\n> Could you please elaborate – what do you mean by “…you can do almost all\n> things today by C extensions…” – does these extensions already exists or I\n> have to develop it?\n>\n> If these extensions exists and developed by somebody else (not in PG core)\n> then nobody will install it where sensitive information exists (at least\n> you will not be able to pass the PCI-DSS audit).\n>\n> If I have to develop it – then I have 2 option 1) to develop it or 2) to\n> use other RDBMS which already have this implemented.\n>\n>\n>\n> For enterprise class solutions it is vital to have the possibility to keep\n> track of actions in DB (who/when logged-in/out, which statement run and so\n> on), this is even more important than performance because if I need more\n> performance I probably could increase the hardware procession power\n> (CPU/RAM/IOPS) but if I have no audit I have no choice…\n>\n>\n>\n> I know PostgreSQL is free solution and I can’t expect it to have\n> everything a commercial RDBMS have but at least we should start to think to\n> implement this!\n>\n\nlot of this does pg_audit https://www.pgaudit.org/\n\nthese is a possibility to log - loging/logout, using DDL. - you can process\npostgresql log.\n\nregards\n\nPavel\n\n\n\n>\n> Have a nice day!\n>\n>\n>\n> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n> *Sent:* Friday, January 24, 2020 10:03\n> *To:* Sergiu Velescu <Sergiu.Velescu@endava.com>\n> *Cc:* pgsql-hackers@postgresql.org\n> *Subject:* Re: New feature proposal (trigger)\n>\n>\n>\n>\n>\n>\n>\n> pá 24. 1. 
2020 v 8:55 odesílatel Sergiu Velescu <Sergiu.Velescu@endava.com>\n> napsal:\n>\n> Hi,\n>\n>\n>\n> Yes, please find below few examples.\n>\n>\n>\n> OnLogin/Logout.\n>\n> I want to log/audit each attempt to login (successful and/or not).\n>\n> Who/how long was logged in DB (who logged in out of business hours (maybe\n> deny access)).\n>\n> Set session variable based on username (or maybe IP address) - for\n> example DATE format.\n>\n>\n>\n> OnStartup (or AfterStarted)\n>\n> I want to start a procedure which check for a specific event in a loop and\n> send an email.\n>\n>\n>\n> OnDDL\n>\n> Log every DDL in a DB log table (who/when\n> altered/created/dropped/truncated a specific object) and send an email.\n>\n>\n>\n> you can do almost all things today by C extensions or just with Postgres\n> log\n>\n>\n>\n> Personally I don't thing so doing these things just from Postgres, PL\n> procedures is good thing\n>\n>\n>\n> Pavel\n>\n>\n>\n>\n>\n> Out of this topic nice to have (I could elaborate any of below topic if\n> you are interested in):\n>\n> Storage quota per user (or schema).\n>\n> Audit – I know about existence of pgaudit extension but it is far from\n> ideal (I compare to Oracle Fine Grained Audit).\n>\n> Duplicate WAL (to have WAL in 2 different places – for example I take\n> backup on separate disk and I want to have a copy of WAL on that disk)\n>\n> To have something like Oracle SQL Tuning Advisor (for example I have a\n> “big” SQL which take longer than it should (probably the optimizer didn’t\n> find the pest execution plan in the tame allocated to this) – this tool\n> provide the possibility to analyze comprehensive the SQL and offer\n> solutions (maybe different execution plan, maybe offer suggestion to create\n> a specific index…)).\n>\n> Best regards.\n>\n>\n>\n> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n> *Sent:* Thursday, January 23, 2020 18:39\n> *To:* Sergiu Velescu <Sergiu.Velescu@endava.com>\n> *Cc:* pgsql-hackers@postgresql.org\n> *Subject:* 
Re: New feature proposal (trigger)\n>\n>\n>\n>\n>\n>\n> čt 23. 1. 2020 v 17:26 odesílatel Sergiu Velescu <\n> Sergiu.Velescu@endava.com> napsal:\n>\n> Dear PgSQL-Hackers,\n>\n>\n>\n> I would like to propose a new feature which is missing in PgSQL but quite\n> useful and nice to have (and exists in Oracle and probably in some other\n> RDBMS), I speak about “Database Level” triggers: BeforePgStart,\n> AfterPgStarted, OnLogin, OnSuccessfulLogin, BeforePGshutdown, OnLogOut – I\n> just mentioned some of it but the final events could be different.\n>\n>\n>\n> These DB Level triggers are quite useful, for example if somebody wants to\n> set some PG env. variables depending on the user belonging to one or another\n> role, or wants to track who/when logged in/out, start a stored procedure\n> AfterPgStarted and so on.\n>\n>\n>\n> Do you have some examples of these useful triggers?\n>\n>\n>\n> I don't know any one.\n>\n>\n>\n> Regards\n>\n>\n>\n> Pavel\n>\n>\n>\n>\n>\n> Thanks!",
"msg_date": "Fri, 24 Jan 2020 10:14:07 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New feature proposal (trigger)"
},
{
"msg_contents": "## Sergiu Velescu (Sergiu.Velescu@endava.com):\n\n> OnLogin/Logout.\n> I want to log/audit each attempt to login (successful and/or not).\n\nlog_connections/log_disconnections\n\n> Who/how long was logged in DB (who logged in out of business hours\n> (maybe deny access)).\n\nUse PAM authentication.\n\n> Set session variable based on username (or maybe IP address) -\n> for example DATE format.\n\n\"Based on user name\": ALTER ROLE\n\n> OnStartup (or AfterStarted)\n> I want to start a procedure which check for a specific event in a loop\n> and send an email.\n\nThat sounds like \"problematic architecture\" right from the start:\n- sending emails in a database transaction is not a good idea\n- active-waiting for events (\"in a loop\") is inefficient, try writing\n to a queue table and have a daemon read from that.\n\n> OnDDL\n> Log every DDL in a DB log table (who/when altered/created/dropped/\n> truncated a specific object) and send an email.\n\nEvent Triggers\nhttps://www.postgresql.org/docs/current/event-triggers.html\n\n> Duplicate WAL (to have WAL in 2 different places – for example I take\n> backup on separate disk and I want to have a copy of WAL on that disk)\n\nWe have streaming replication/pg_receivewal or file based archiving,\nboth also wrapped in practical products like barman, pgbackrest, ...\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Fri, 24 Jan 2020 10:29:37 +0100",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: New feature proposal (trigger)"
}
] |
[
{
"msg_contents": "Hi all,\nWhile reviewing one patch, I found that if we give any non-integer string\nto atoi (say aa), then it is returning zero (0) as output, so we are not\ngiving any error (assuming 0 is a valid argument) and continuing our\noperations.\n\nEx:\nLet's say we gave \"-P aa\" (patch is in review[1]); then it will disable\nparallel vacuum because atoi is returning zero as the parallel degree, but\nideally it should give an error, or at least it should not disable parallel\nvacuum.\n\nI think, in place of atoi, we should use a different function (defGetInt32,\nstrtoint) or we can write our own function.\n\nThoughts?\n\n[1]:\nhttps://www.postgresql.org/message-id/CA%2Bfd4k6DgwtQSr4%3DUeY%2BWbGuF7-oD%3Dm-ypHPy%2BsYHiXZc%2BhTUQ%40mail.gmail.com\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Jan 2020 18:25:43 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "can we use different function in place of atoi in vacuumdb.c file"
},
{
"msg_contents": "Hi,\nOn Thu, Jan 23, 2020 at 3:56 PM Mahendra Singh Thalor <mahi6run@gmail.com>\nwrote:\n\n> Hi all,\n> While reviewing one patch, I found that if we give any non-integer string\n> to atoi (say aa), then it is returning zero(0) as output so we are not\n> giving any error(assuming 0 as valid argument) and continuing our\n> operations.\n>\n> Ex:\n> Let say, we gave \"-P aa\" (patch is in review[1]), then it will disable\n> parallel vacuum because atoi is returning zero as parallel degree but\n> ideally it should give error or at least it should not disable parallel\n> vacuum.\n>\n> I think, in-place of atoi, we should use different function ( defGetInt32,\n> strtoint) or we can write own function.\n>\n> Thoughts?\n>\n> [1]:\n> https://www.postgresql.org/message-id/CA%2Bfd4k6DgwtQSr4%3DUeY%2BWbGuF7-oD%3Dm-ypHPy%2BsYHiXZc%2BhTUQ%40mail.gmail.com\n> --\n>\n\n\nFor the server side there is also the scanint8 function for string parsing, and on the\nclient side we can use the strtoint function, which doesn't have the above issue\nand accepts a char pointer, which I think is suitable for the usage in [1].\n\nregards\n\nSurafel",
"msg_date": "Mon, 27 Jan 2020 10:15:23 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we use different function in place of atoi in vacuumdb.c file"
}
] |
[
{
"msg_contents": "Greetings,\n\nEnclosed find a documentation patch that clarifies the behavior of ALTER SUBSCRIPTION … REFRESH PUBLICATION with new tables; I ran into a situation today where the docs were not clear that existing tables would not be re-copied, so remedying this situation.\n\nShould apply back to Pg 10.\n\nBest,\n\nDavid\n\n\n\n\n--\nDavid Christensen\nSenior Software and Database Engineer\nEnd Point Corporation\ndavid@endpoint.com\n785-727-1171",
"msg_date": "Thu, 23 Jan 2020 14:35:11 -0600",
"msg_from": "David Christensen <david@endpoint.com>",
"msg_from_op": true,
"msg_subject": "Documentation patch for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 2:05 AM David Christensen <david@endpoint.com> wrote:\n>\n> Greetings,\n>\n> Enclosed find a documentation patch that clarifies the behavior of ALTER SUBSCRIPTION … REFRESH PUBLICATION with new tables; I ran into a situation today where the docs were not clear that existing tables would not be re-copied, so remedying this situation.\n>\n\nIt seems this is already covered in REFRESH PUBLICATION, see \"This\nwill start replication of tables that were added to the subscribed-to\npublications since the last invocation of REFRESH PUBLICATION or since\nCREATE SUBSCRIPTION.\". As far as I understand, this text explains the\nsituation you were facing. Can you explain why the text quoted by me\nis not sufficient?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Feb 2020 08:15:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for ALTER SUBSCRIPTION"
},
{
"msg_contents": "\n>> On Feb 4, 2020, at 8:45 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> \n>> On Fri, Jan 24, 2020 at 2:05 AM David Christensen <david@endpoint.com> wrote:\n>> Greetings,\n>> Enclosed find a documentation patch that clarifies the behavior of ALTER SUBSCRIPTION … REFRESH PUBLICATION with new tables; I ran into a situation today where the docs were not clear that existing tables would not be re-copied, so remedying this situation.\n> \n> It seems this is already covered in REFRESH PUBLICATION, see \"This\n> will start replication of tables that were added to the subscribed-to\n> publications since the last invocation of REFRESH PUBLICATION or since\n> CREATE SUBSCRIPTION.\". As far as I understand, this text explains the\n> situation you were facing. Can you explain why the text quoted by me\n> is not sufficient?\n\nHi Amit,\n\nFrom several reads of the text it was not explicitly clear to me that when you issued the copy_data that it would not effectively recopy existing tables in the existing publication, which I had been trying to confirm was not the case prior to running a refresh operation. I had to resort to reviewing the source code to get the answer I was looking for.\n\nIf you are already familiar with the operation under the hood I am sure the ambiguity is not there but since I was recently confused by this I wanted to be more explicit in a way that would have helped me answer my original question. \n\nBest,\n\nDavid\n\n",
"msg_date": "Tue, 4 Feb 2020 21:14:30 -0600",
"msg_from": "David Christensen <david@endpoint.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation patch for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 8:44 AM David Christensen <david@endpoint.com> wrote:\n>\n> >> On Feb 4, 2020, at 8:45 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Fri, Jan 24, 2020 at 2:05 AM David Christensen <david@endpoint.com> wrote:\n> >> Greetings,\n> >> Enclosed find a documentation patch that clarifies the behavior of ALTER SUBSCRIPTION … REFRESH PUBLICATION with new tables; I ran into a situation today where the docs were not clear that existing tables would not be re-copied, so remedying this situation.\n> >\n> > It seems this is already covered in REFRESH PUBLICATION, see \"This\n> > will start replication of tables that were added to the subscribed-to\n> > publications since the last invocation of REFRESH PUBLICATION or since\n> > CREATE SUBSCRIPTION.\". As far as I understand, this text explains the\n> > situation you were facing. Can you explain why the text quoted by me\n> > is not sufficient?\n>\n> Hi Amit,\n>\n> From several reads of the text it was not explicitly clear to me that when you issued the copy_data that it would not effectively recopy existing tables in the existing publication, which I had been trying to confirm was not the case prior to running a refresh operation. I had to resort to reviewing the source code to get the answer I was looking for.\n>\n> If you are already familiar with the operation under the hood I am sure the ambiguity is not there but since I was recently confused by this I wanted to be more explicit in a way that would have helped me answer my original question.\n>\n\nIt is possible that one might not understand how this option works by\nreading the already existing text in docs, but I think writing in a\ndifferent language the same thing also doesn't seem advisable. I\nthink if we want to explain it better, then maybe a succinct example\nat the end of the page might be helpful.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Feb 2020 13:49:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On 2020-Feb-05, Amit Kapila wrote:\n\n> It is possible that one might not understand how this option works by\n> reading the already existing text in docs, but I think writing in a\n> different language the same thing also doesn't seem advisable. I\n> think if we want to explain it better, then maybe a succinct example\n> at the end of the page might be helpful.\n\nFor reference, the complete varlistentry is:\n\n <term><literal>REFRESH PUBLICATION</literal></term>\n <listitem>\n <para>\n Fetch missing table information from publisher. This will start\n replication of tables that were added to the subscribed-to publications\n since the last invocation of <command>REFRESH PUBLICATION</command> or\n since <command>CREATE SUBSCRIPTION</command>. <!-- [2] -->\n </para>\n\n <para>\n <replaceable>refresh_option</replaceable> specifies additional options for the\n refresh operation. The supported options are:\n\n <variablelist>\n <varlistentry>\n <term><literal>copy_data</literal> (<type>boolean</type>)</term>\n <listitem>\n <para>\n Specifies whether the existing data in the publications that are\n being subscribed to should be copied once the replication starts.\n The default is <literal>true</literal>. <!-- [1] -->\n </para>\n </listitem>\n </varlistentry>\n </variablelist>\n </para>\n </listitem>\n\nI tend to agree with David that this is ambiguous enough to warrant a\nfew words. Maybe his proposed wording is too verbose; how about just\nadding \"(Previously subscribed tables are not copied.)\" where the [1]\nappears? Alternatively, we could add \"Tables that were already present\nin the subscription are not modified in any way.\" where [2] appears, but\nthat seems less clear to me.\n\nAn example would not be bad if it showed that existing data is not\ncopied. But examples are actually just syntactical examples, so you'd\nhave to resort to a comment explaining that existing tables are not\ncopied by the shown syntax. 
You might as well just add the words in the\nreference docs ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Feb 2020 10:56:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for ALTER SUBSCRIPTION"
},
{
"msg_contents": "> On Feb 5, 2020, at 7:56 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Feb-05, Amit Kapila wrote:\n> \n>> It is possible that one might not understand how this option works by\n>> reading the already existing text in docs, but I think writing in a\n>> different language the same thing also doesn't seem advisable. I\n>> think if we want to explain it better, then maybe a succinct example\n>> at the end of the page might be helpful.\n> \n> For reference, the complete varlistentry is:\n> \n> <term><literal>REFRESH PUBLICATION</literal></term>\n> <listitem>\n> <para>\n> Fetch missing table information from publisher. This will start\n> replication of tables that were added to the subscribed-to publications\n> since the last invocation of <command>REFRESH PUBLICATION</command> or\n> since <command>CREATE SUBSCRIPTION</command>. <!-- [2] -->\n> </para>\n> \n> <para>\n> <replaceable>refresh_option</replaceable> specifies additional options for the\n> refresh operation. The supported options are:\n> \n> <variablelist>\n> <varlistentry>\n> <term><literal>copy_data</literal> (<type>boolean</type>)</term>\n> <listitem>\n> <para>\n> Specifies whether the existing data in the publications that are\n> being subscribed to should be copied once the replication starts.\n> The default is <literal>true</literal>. <!-- [1] -->\n> </para>\n> </listitem>\n> </varlistentry>\n> </variablelist>\n> </para>\n> </listitem>\n> \n> I tend to agree with David that this is ambiguous enough to warrant a\n> few words. Maybe his proposed wording is too verbose; how about just\n> adding \"(Previously subscribed tables are not copied.)\" where the [1]\n> appears? Alternatively, we could add \"Tables that were already present\n> in the subscription are not modified in any way.\" where [2] appears, but\n> that seems less clear to me.\n> \n> An example would not be bad if it showed that existing data is not\n> copied. 
But examples are actually just syntactical examples, so you'd\n> have to resort to a comment explaining that existing tables are not\n> copied by the shown syntax. You might as well just add the words in the\n> reference docs …\n\nI would be happy with the suggestion [1]; it would have clarified my specific question.\n\nThanks,\n\nDavid",
"msg_date": "Wed, 5 Feb 2020 08:35:30 -0600",
"msg_from": "David Christensen <david@endpoint.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation patch for ALTER SUBSCRIPTION"
},
{
"msg_contents": "OK, pushed that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Feb 2020 15:08:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for ALTER SUBSCRIPTION"
}
] |
[
{
"msg_contents": "I happened to notice this comment in the logic in\nATAddForeignKeyConstraint that tries to decide if it can skip\nrevalidating a foreign-key constraint after a DDL change:\n\n * Since we require that all collations share the same notion of\n * equality (which they do, because texteq reduces to bitwise\n * equality), we don't compare collation here.\n\nHasn't this been broken by the introduction of nondeterministic\ncollations?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Jan 2020 17:11:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Busted(?) optimization in ATAddForeignKeyConstraint"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 11:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I happened to notice this comment in the logic in\n> ATAddForeignKeyConstraint that tries to decide if it can skip\n> revalidating a foreign-key constraint after a DDL change:\n>\n> * Since we require that all collations share the same notion of\n> * equality (which they do, because texteq reduces to bitwise\n> * equality), we don't compare collation here.\n>\n> Hasn't this been broken by the introduction of nondeterministic\n> collations?\n\nSimilar words appear in the comment for ri_GenerateQualCollation().\n\n\n",
"msg_date": "Fri, 24 Jan 2020 13:21:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Busted(?) optimization in ATAddForeignKeyConstraint"
},
{
"msg_contents": "On 2020-01-23 23:11, Tom Lane wrote:\n> I happened to notice this comment in the logic in\n> ATAddForeignKeyConstraint that tries to decide if it can skip\n> revalidating a foreign-key constraint after a DDL change:\n> \n> * Since we require that all collations share the same notion of\n> * equality (which they do, because texteq reduces to bitwise\n> * equality), we don't compare collation here.\n> \n> Hasn't this been broken by the introduction of nondeterministic\n> collations?\n\nI'm not very familiar with the logic in this function, but I think this \nmight be okay because the foreign-key equality comparisons are done with \nthe collation of the primary key, which doesn't change here AFAICT.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 11:15:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Busted(?) optimization in ATAddForeignKeyConstraint"
},
{
"msg_contents": "On 2020-01-24 01:21, Thomas Munro wrote:\n> On Fri, Jan 24, 2020 at 11:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I happened to notice this comment in the logic in\n>> ATAddForeignKeyConstraint that tries to decide if it can skip\n>> revalidating a foreign-key constraint after a DDL change:\n>>\n>> * Since we require that all collations share the same notion of\n>> * equality (which they do, because texteq reduces to bitwise\n>> * equality), we don't compare collation here.\n>>\n>> Hasn't this been broken by the introduction of nondeterministic\n>> collations?\n> \n> Similar words appear in the comment for ri_GenerateQualCollation().\n\nThe calls to this function are all conditional on \n!get_collation_isdeterministic(). The comment should perhaps be changed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 11:17:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Busted(?) optimization in ATAddForeignKeyConstraint"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-01-23 23:11, Tom Lane wrote:\n>> I happened to notice this comment in the logic in\n>> ATAddForeignKeyConstraint that tries to decide if it can skip\n>> revalidating a foreign-key constraint after a DDL change:\n>> \t* Since we require that all collations share the same notion of\n>> \t* equality (which they do, because texteq reduces to bitwise\n>> \t* equality), we don't compare collation here.\n>> Hasn't this been broken by the introduction of nondeterministic\n>> collations?\n\n> I'm not very familiar with the logic in this function, but I think this \n> might be okay because the foreign-key equality comparisons are done with \n> the collation of the primary key, which doesn't change here AFAICT.\n\nIf we're depending on that, we should just remove the comment and compare\nthe collations. Seems far less likely to break.\n\nEven if there's a reason not to do the comparison, the comment needs\nan update.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Jan 2020 10:24:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Busted(?) optimization in ATAddForeignKeyConstraint"
}
] |
[
{
"msg_contents": "Hi, first patch here and first post to pgsql-hackers. Here goes.\n\nEnclosed please find a patch to tweak the documentation of the ALTER TABLE\npage. I believe this patch is ready to be applied to master and backported\nall the way to 9.2.\n\nOn the ALTER TABLE page, it currently notes that if you change the type of\na column, even to a binary coercible type:\n\n> any indexes on the affected columns must still be rebuilt.\n\nIt appears this hasn't been true for about eight years, since 367bc426a.\n\nHere's the discussion of the topic from earlier today and yesterday:\n\nhttps://www.postgresql.org/message-id/flat/CAMp9%3DExXtH0NeF%2BLTsNrew_oXycAJTNVKbRYnqgoEAT01t%3D67A%40mail.gmail.com\n\nI haven't run tests, but I presume they'll be unaffected by a documentation\nchange.\n\nI've made an effort to follow the example of other people's patches I\nlooked at, but I haven't contributed here before. Happy to take another\nstab at this if this doesn't hit the mark — though I hope it does. I love\nand appreciate Postgresql and hope that I can do my little part to make it\nbetter.\n\nFor the moment, I haven't added this to commitfest. I don't know what it\nis, but I suspect this is small enough somebody will just pick it up.\n\nMike",
"msg_date": "Thu, 23 Jan 2020 23:01:36 -0800",
"msg_from": "Mike Lissner <mlissner@michaeljaylissner.com>",
"msg_from_op": true,
"msg_subject": "Patching documentation of ALTER TABLE re column type changes on\n binary-coercible fields"
},
{
"msg_contents": "Hi all, I didn't get any replies to this. Is this the right way to send in\na patch to the docs?\n\nThanks,\n\n\nMike\n\nOn Thu, Jan 23, 2020 at 11:01 PM Mike Lissner <\nmlissner@michaeljaylissner.com> wrote:\n\n> Hi, first patch here and first post to pgsql-hackers. Here goes.\n>\n> Enclosed please find a patch to tweak the documentation of the ALTER TABLE\n> page. I believe this patch is ready to be applied to master and backported\n> all the way to 9.2.\n>\n> On the ALTER TABLE page, it currently notes that if you change the type of\n> a column, even to a binary coercible type:\n>\n> > any indexes on the affected columns must still be rebuilt.\n>\n> It appears this hasn't been true for about eight years, since 367bc426a.\n>\n> Here's the discussion of the topic from earlier today and yesterday:\n>\n>\n> https://www.postgresql.org/message-id/flat/CAMp9%3DExXtH0NeF%2BLTsNrew_oXycAJTNVKbRYnqgoEAT01t%3D67A%40mail.gmail.com\n>\n> I haven't run tests, but I presume they'll be unaffected by a\n> documentation change.\n>\n> I've made an effort to follow the example of other people's patches I\n> looked at, but I haven't contributed here before. Happy to take another\n> stab at this if this doesn't hit the mark — though I hope it does. I love\n> and appreciate Postgresql and hope that I can do my little part to make it\n> better.\n>\n> For the moment, I haven't added this to commitfest. I don't know what it\n> is, but I suspect this is small enough somebody will just pick it up.\n>\n> Mike\n>",
"msg_date": "Tue, 28 Jan 2020 10:55:47 -0800",
"msg_from": "Mike Lissner <mlissner@michaeljaylissner.com>",
"msg_from_op": true,
"msg_subject": "[Patch]: Documentation of ALTER TABLE re column type changes on\n binary-coercible fields"
},
{
"msg_contents": "\n\n> On Jan 28, 2020, at 10:55 AM, Mike Lissner <mlissner@michaeljaylissner.com> wrote:\n> \n> Hi all, I didn't get any replies to this. Is this the right way to send in a patch to the docs?\n> \n\nYes, your patch has been received, thanks. I don’t know if anybody is reviewing it, but typically you don’t hear back on a patch until somebody has reviewed it and has an opinion to share with you.\n\nThe feeling, “Hey, I submitted something, and for all I know it got lost in the mail” is perhaps common for first time submitters. We’d like to see you around the project again, so please don’t take the silence as a cold shoulder.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:51:22 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch]: Documentation of ALTER TABLE re column type changes on\n binary-coercible fields"
},
{
"msg_contents": "On Wednesday, January 29, 2020 3:56 AM (GMT+9), Mike Lissner wrote:\r\n> Hi all, I didn't get any replies to this. Is this the right way to send in a patch to the\r\n> docs?\r\n\r\nHello,\r\nYes, although your current patch does not apply as I tried it in my machine.\r\nBut you can still rebase it.\r\nFor the reviewers/committers to keep track of this, I think it might be better to\r\nregister your patch to the commitfest app: https://commitfest.postgresql.org/27/,\r\nand you may put it under the \"Documentation\" topic. \r\n\r\nThere's also a CFbot to check online whether your patch still applies cleanly\r\nand passes the tests, especially after several commits in the source code.\r\nCurrent CF: http://commitfest.cputube.org/index.html\r\nNext CF: http://commitfest.cputube.org/next.html\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n\r\n> On Thu, Jan 23, 2020 at 11:01 PM Mike Lissner <mlissner@michaeljaylissner.com\r\n> <mailto:mlissner@michaeljaylissner.com> > wrote:\r\n> \r\n> \r\n> \tHi, first patch here and first post to pgsql-hackers. Here goes.\r\n> \r\n> \r\n> \tEnclosed please find a patch to tweak the documentation of the ALTER TABLE\r\n> page. 
I believe this patch is ready to be applied to master and backported all the way\r\n> to 9.2.\r\n> \r\n> \r\n> \tOn the ALTER TABLE page, it currently notes that if you change the type of a\r\n> column, even to a binary coercible type:\r\n> \r\n> \t> any indexes on the affected columns must still be rebuilt.\r\n> \r\n> \r\n> \tIt appears this hasn't been true for about eight years, since 367bc426a.\r\n> \r\n> \tHere's the discussion of the topic from earlier today and yesterday:\r\n> \r\n> \thttps://www.postgresql.org/message-\r\n> id/flat/CAMp9%3DExXtH0NeF%2BLTsNrew_oXycAJTNVKbRYnqgoEAT01t%3D67A%40\r\n> mail.gmail.com\r\n> \r\n> \tI haven't run tests, but I presume they'll be unaffected by a documentation\r\n> change.\r\n> \r\n> \r\n> \tI've made an effort to follow the example of other people's patches I looked\r\n> at, but I haven't contributed here before. Happy to take another stab at this if this\r\n> doesn't hit the mark — though I hope it does. I love and appreciate Postgresql and\r\n> hope that I can do my little part to make it better.\r\n> \r\n> \tFor the moment, I haven't added this to commitfest. I don't know what it is,\r\n> but I suspect this is small enough somebody will just pick it up.\r\n> \r\n> \r\n> \tMike\r\n> \r\n\r\n",
"msg_date": "Wed, 29 Jan 2020 01:21:38 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [Patch]: Documentation of ALTER TABLE re column type changes on\n binary-coercible fields"
},
{
"msg_contents": "Hi,\n\nI've stumbled over this topic today, and found your patch.\n\nOn Thu, Jan 23, 2020 at 11:01:36PM -0800, Mike Lissner wrote:\n> Enclosed please find a patch to tweak the documentation of the ALTER TABLE\n> page. I believe this patch is ready to be applied to master and backported\n> all the way to 9.2.\n> \n> On the ALTER TABLE page, it currently notes that if you change the type of\n> a column, even to a binary coercible type:\n> \n> > any indexes on the affected columns must still be rebuilt.\n> \n> It appears this hasn't been true for about eight years, since 367bc426a.\n> \n> Here's the discussion of the topic from earlier today and yesterday:\n> \n> https://www.postgresql.org/message-id/flat/CAMp9%3DExXtH0NeF%2BLTsNrew_oXycAJTNVKbRYnqgoEAT01t%3D67A%40mail.gmail.com\n> \n> I haven't run tests, but I presume they'll be unaffected by a documentation\n> change.\n> \n> I've made an effort to follow the example of other people's patches I\n> looked at, but I haven't contributed here before. Happy to take another\n> stab at this if this doesn't hit the mark — though I hope it does. I love\n> and appreciate Postgresql and hope that I can do my little part to make it\n> better.\n> \n> For the moment, I haven't added this to commitfest. 
I don't know what it\n> is, but I suspect this is small enough somebody will just pick it up.\n> \n> Mike\n\n> Index: doc/src/sgml/ref/alter_table.sgml\n> IDEA additional info:\n> Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP\n> <+>UTF-8\n> ===================================================================\n> --- doc/src/sgml/ref/alter_table.sgml\t(revision 6de7bcb76f6593dcd107a6bfed645f2142bf3225)\n> +++ doc/src/sgml/ref/alter_table.sgml\t(revision 9a813e0896e828900739d95f78b5e4be10dac365)\n> @@ -1225,10 +1225,9 @@\n> existing column, if the <literal>USING</literal> clause does not change\n> the column contents and the old type is either binary coercible to the new\n> type or an unconstrained domain over the new type, a table rewrite is not\n> - needed; but any indexes on the affected columns must still be rebuilt.\n> - Table and/or index rebuilds may take a\n> - significant amount of time for a large table; and will temporarily require\n> - as much as double the disk space.\n> + needed. Table and/or index rebuilds may take a significant amount of time\n> + for a large table; and will temporarily require as much as double the disk\n> + space.\n> </para>\n\nIn general, I find the USING part in that paragraph a bit confusing; I\nthink the main use case for ALTER COLUMN ... TYPE is without it. So I\nwould suggest separating the two and making the general case (table and\nindex rewrites are not needed if the type is binary coercible) without\nhaving USING in the same sentence.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Sascha Heuer, Geoff Richardson,\nPeter Lilley\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n",
"msg_date": "Tue, 7 Sep 2021 12:35:40 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: Patching documentation of ALTER TABLE re column type changes on\n binary-coercible fields"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nI propose \"non-volatile WAL buffer,\" a proof-of-concept new feature. It\nenables WAL records to be durable without output to WAL segment files by\nresiding on persistent memory (PMEM) instead of DRAM. It improves database\nperformance by reducing copies of WAL and shortening the time of write\ntransactions.\n\nI attach the first patchset that can be applied to PostgreSQL 12.0 (refs/\ntags/REL_12_0). Please see README.nvwal (added by the patch 0003) to use\nthe new feature.\n\nPMEM [1] is fast, non-volatile, and byte-addressable memory installed into\nDIMM slots. Such products are already available. For example, an\nNVDIMM-N is a type of PMEM module that contains both DRAM and NAND flash.\nIt can be accessed like regular DRAM, but on power loss, it can save its\ncontents into the flash area. On power restore, it performs the reverse, that\nis, the contents are copied back into DRAM. PMEM is also already\nsupported by major operating systems such as Linux and Windows, and by new\nopen-source libraries such as the Persistent Memory Development Kit (PMDK) [2].\nFurthermore, several DBMSes have started to support PMEM.\n\nIt's time for PostgreSQL. PMEM is faster than a solid state disk and\ncan naively be used as block storage. However, we cannot gain much\nperformance in that way, because PMEM is so fast that the overhead of\ntraditional software stacks, such as user buffers,\nfilesystems, and block layers, now becomes unignorable. Non-volatile WAL buffer is a work to make\nPostgreSQL PMEM-aware, that is, to access PMEM directly as RAM to\nbypass such overhead and achieve the maximum possible benefit. I believe\nWAL is one of the most important modules to be redesigned for PMEM because\nit assumes slow disks such as HDDs and SSDs, which PMEM is not.\n\nThis work is inspired by the \"Non-volatile Memory Logging\" talk at PGCon\n2016 [3], to gain more benefit from PMEM than my and Yoshimi's previous\nwork did [4][5].
I submitted a talk proposal for PGCon this year, and\nhave measured and analyzed the performance of my PostgreSQL with non-volatile\nWAL buffer, comparing it with the original one that uses PMEM as \"a faster-\nthan-SSD storage.\" I will talk about the results if accepted.\n\nBest regards,\nTakashi Menjo\n\n[1] Persistent Memory (SNIA)\n https://www.snia.org/PM\n[2] Persistent Memory Development Kit (pmem.io)\n https://pmem.io/pmdk/ \n[3] Non-volatile Memory Logging (PGCon 2016)\n https://www.pgcon.org/2016/schedule/track/Performance/945.en.html\n[4] Introducing PMDK into PostgreSQL (PGCon 2018)\n https://www.pgcon.org/2018/schedule/events/1154.en.html\n[5] Applying PMDK to WAL operations for persistent memory (pgsql-hackers)\n https://www.postgresql.org/message-id/C20D38E97BCB33DAD59E3A1@lab.ntt.co.jp\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center",
"msg_date": "Fri, 24 Jan 2020 17:06:10 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "[PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "\nHello,\n\n+1 on the idea.\n\nBy quickly looking at the patch, I notice that there are no tests.\n\nIs it possible to emulate something without the actual hardware, at least \nfor testing purposes?\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 24 Jan 2020 11:56:27 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On 24/01/2020 10:06, Takashi Menjo wrote:\n> I propose \"non-volatile WAL buffer,\" a proof-of-concept new feature. It\n> enables WAL records to be durable without output to WAL segment files by\n> residing on persistent memory (PMEM) instead of DRAM. It improves database\n> performance by reducing copies of WAL and shortening the time of write\n> transactions.\n> \n> I attach the first patchset that can be applied to PostgreSQL 12.0 (refs/\n> tags/REL_12_0). Please see README.nvwal (added by the patch 0003) to use\n> the new feature.\n\nI have the same comments on this that I had on the previous patch, see:\n\nhttps://www.postgresql.org/message-id/2aec6e2a-6a32-0c39-e4e2-aad854543aa8%40iki.fi\n\n- Heikki\n\n\n",
"msg_date": "Fri, 24 Jan 2020 15:14:50 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hello Fabien,\n\nThank you for your +1 :)\n\n> Is it possible to emulate something without the actual hardware, at least\n> for testing purposes?\n\nYes, you can emulate PMEM using DRAM on Linux, via the \"memmap=nnG!ssG\" kernel\nparameter. Please see [1] and [2] for emulation details. If your emulation\ndoes not work well, please check if the kernel configuration options (like\nCONFIG_FOOBAR) for PMEM and DAX (in [1] and [3]) are set up properly.\n\nBest regards,\nTakashi\n\n[1] How to Emulate Persistent Memory Using Dynamic Random-access Memory (DRAM)\n https://software.intel.com/en-us/articles/how-to-emulate-persistent-memory-on-an-intel-architecture-server\n[2] how_to_choose_the_correct_memmap_kernel_parameter_for_pmem_on_your_system\n https://nvdimm.wiki.kernel.org/how_to_choose_the_correct_memmap_kernel_parameter_for_pmem_on_your_system\n[3] Persistent Memory Wiki\n https://nvdimm.wiki.kernel.org/\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n\n\n\n\n",
"msg_date": "Mon, 27 Jan 2020 11:25:09 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hello Heikki,\n\n> I have the same comments on this that I had on the previous patch, see:\n> \n> https://www.postgresql.org/message-id/2aec6e2a-6a32-0c39-e4e2-aad854543aa8%40iki.fi\n\nThanks. I re-read your messages [1][2]. What you meant, AFAIU, is how\nabout using memory-mapped WAL segment files as WAL buffers, and switching\nCPU instructions or msync() depending on whether the segment files are on\nPMEM or not, to sync inserted WAL records. \n\nIt sounds reasonable, but I'm sorry that I haven't tested such a program\nyet. I'll try it to compare with my non-volatile WAL buffer. For now, I'm\na little worried about the overhead of mmap()/munmap() for each WAL segment\nfile.\n\nYou also told a SIGBUS problem of memory-mapped I/O. I think it's true for\nreading from bad memory blocks, as you mentioned, and also true for writing\nto such blocks [3]. Handling SIGBUS properly or working around it is future\nwork.\n\nBest regards,\nTakashi\n\n[1] https://www.postgresql.org/message-id/83eafbfd-d9c5-6623-2423-7cab1be3888c%40iki.fi\n[2] https://www.postgresql.org/message-id/2aec6e2a-6a32-0c39-e4e2-aad854543aa8%40iki.fi\n[3] https://pmem.io/2018/11/26/bad-blocks.htm\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n\n\n\n\n",
"msg_date": "Mon, 27 Jan 2020 16:01:15 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 2:01 AM Takashi Menjo\n<takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> It sounds reasonable, but I'm sorry that I haven't tested such a program\n> yet. I'll try it to compare with my non-volatile WAL buffer. For now, I'm\n> a little worried about the overhead of mmap()/munmap() for each WAL segment\n> file.\n\nI guess the question here is how the cost of one mmap() and munmap()\npair per WAL segment (normally 16MB) compares to the cost of one\nwrite() per block (normally 8kB). It could be that mmap() is a more\nexpensive call than read(), but by a small enough margin that the\nvastly reduced number of system calls makes it a winner. But that's\njust speculation, because I don't know how heavy mmap() actually is.\n\nI have a different concern. I think that, right now, when we reuse a\nWAL segment, we write entire blocks at a time, so the old contents of\nthe WAL segment are overwritten without ever being read. But that\nbehavior might not be maintained when using mmap(). It might be that\nas soon as we write the first byte to a mapped page, the old contents\nhave to be faulted into memory. Indeed, it's unclear how it could be\notherwise, since the VM page must be made read-write at that point and\nthe system cannot know that we will overwrite the whole page. But\nreading in the old contents of a recycled WAL file just to overwrite\nthem seems like it would be disastrously expensive.\n\nA related, but more minor, concern is whether there are any\ndifferences in in the write-back behavior when modifying a mapped\nregion vs. using write(). Either way, the same pages of the same file\nwill get dirtied, but the kernel might not have the same idea in\neither case about when the changed pages should be written back down\nto disk, and that could make a big difference to performance.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 13:54:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hello Robert,\n\nI think our concerns are roughly classified into two:\n\n (1) Performance\n (2) Consistency\n\nAnd your \"different concern\" falls rather under (2), I think.\n\nI'm also worried about it, but I have no good answer for now. I suppose mmap(flags|=MAP_SHARED) called by multiple backend processes for the same file works consistently for both PMEM and non-PMEM devices. However, I have not found any evidence such as specification documents yet.\n\nI also made a tiny program calling memcpy() and msync() in parallel on the same mmap()-ed file but on mutually distinct address ranges, and found that there was no corrupted data. However, that result does not ensure the consistency I'm worried about. I could give it up if there *were* corrupted data...\n\nSo I will go to (1) first. I will test the way Heikki told us, to answer whether the cost of mmap() and munmap() per WAL segment, etc, is reasonable or not. If it really is, then I will go to (2).\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 17:26:38 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 3:28 AM Takashi Menjo\n<takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> I think our concerns are roughly classified into two:\n>\n> (1) Performance\n> (2) Consistency\n>\n> And your \"different concern\" is rather into (2), I think.\n\nActually, I think it was mostly a performance concern (writes\ntriggering lots of reading) but there might be a consistency issue as\nwell.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 15:59:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-27 13:54:38 -0500, Robert Haas wrote:\n> On Mon, Jan 27, 2020 at 2:01 AM Takashi Menjo\n> <takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > It sounds reasonable, but I'm sorry that I haven't tested such a program\n> > yet. I'll try it to compare with my non-volatile WAL buffer. For now, I'm\n> > a little worried about the overhead of mmap()/munmap() for each WAL segment\n> > file.\n> \n> I guess the question here is how the cost of one mmap() and munmap()\n> pair per WAL segment (normally 16MB) compares to the cost of one\n> write() per block (normally 8kB). It could be that mmap() is a more\n> expensive call than read(), but by a small enough margin that the\n> vastly reduced number of system calls makes it a winner. But that's\n> just speculation, because I don't know how heavy mmap() actually is.\n\nmmap()/munmap() on a regular basis does have pretty bad scalability\nimpacts. I don't think they'd fully hit us, because we're not in a\nthreaded world however.\n\n\nMy issue with the proposal to go towards mmap()/munmap() is that I think\ndoing so forcloses a lot of improvements. Even today, on fast storage,\nusing the open_datasync is faster (at least when somehow hitting the\nO_DIRECT path, which isn't that easy these days) - and that's despite it\nbeing really unoptimized. I think our WAL scalability is a serious\nissue. There's a fair bit that we can improve by just fix without really\nchanging the way we do IO:\n\n- Split WALWriteLock into one lock for writing and one for flushing the\n WAL. Right now we prevent other sessions from writing out WAL - even\n to other segments - when one session is doing a WAL flush. But there's\n absolutely no need for that.\n- Stop increasing the size of the flush request to the max when flushing\n WAL (cf \"try to write/flush later additions to XLOG as well\" in\n XLogFlush()) - that currently reduces throughput in OLTP workloads\n quite noticably. 
It made some sense in the spinning disk times, but I\n don't think it does for a halfway decent SSD. By writing the maximum\n ready to write, we hold the lock for longer, increasing latency for\n the committing transaction *and* preventing more WAL from being written.\n- We should immediately ask the OS to flush writes for full XLOG pages\n back to the OS. Right now the IO for that will never be started before\n the commit comes around in an OLTP workload, which means that we just\n waste the time between the XLogWrite() and the commit.\n\nThat'll gain us 2-3x, I think. But after that I think we're going to\nhave to actually change more fundamentally how we do IO for WAL\nwrites. Using async IO I can do like 18k individual durable 8kb writes\n(using O_DSYNC) a second, at a queue depth of 32. On my laptop. If I\nmake it 4k writes, it's 22k.\n\nThat's not directly comparable with postgres WAL flushes, of course, as\nit's all separate blocks, whereas WAL will often end up overwriting the\nlast block. But it doesn't at all account for group commits either,\nwhich we *constantly* end up doing.\n\nPostgres manages somewhere between ~450 (multiple users) ~800 (single\nuser) individually durable WAL writes / sec on the same hardware. Yes,\nthat's more than an order of magnitude less. Of course some of that is\njust that postgres does more than just IO - but that's not effect on the\norder of a magnitude.\n\nSo, why am I bringing this up in this thread? Only because I do not see\na way to actually utilize non-pmem hardware to a much higher degree than\nwe are doing now by using mmap(). Doing so requires using direct IO,\nwhich is fundamentally incompatible with using mmap().\n\n\n\n> I have a different concern. I think that, right now, when we reuse a\n> WAL segment, we write entire blocks at a time, so the old contents of\n> the WAL segment are overwritten without ever being read. But that\n> behavior might not be maintained when using mmap(). 
It might be that\n> as soon as we write the first byte to a mapped page, the old contents\n> have to be faulted into memory. Indeed, it's unclear how it could be\n> otherwise, since the VM page must be made read-write at that point and\n> the system cannot know that we will overwrite the whole page. But\n> reading in the old contents of a recycled WAL file just to overwrite\n> them seems like it would be disastrously expensive.\n\nYea, that's a serious concern.\n\n\n> A related, but more minor, concern is whether there are any\n> differences in in the write-back behavior when modifying a mapped\n> region vs. using write(). Either way, the same pages of the same file\n> will get dirtied, but the kernel might not have the same idea in\n> either case about when the changed pages should be written back down\n> to disk, and that could make a big difference to performance.\n\nI don't think there's a significant difference in case of linux - no\nidea about others. And either way we probably should force the kernels\nhand to start flushing much sooner.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Feb 2020 07:59:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear hackers,\n\nI made another WIP patchset to mmap WAL segments as WAL buffers. Note that this is not a non-volatile WAL buffer patchset but its competitor. I am measuring and analyzing the performance of this patchset to compare with my N.V.WAL buffer.\n\nPlease wait for a several more days for the result report...\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> -----Original Message-----\n> From: Robert Haas <robertmhaas@gmail.com>\n> Sent: Wednesday, January 29, 2020 6:00 AM\n> To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Cc: Heikki Linnakangas <hlinnaka@iki.fi>; pgsql-hackers@postgresql.org\n> Subject: Re: [PoC] Non-volatile WAL buffer\n> \n> On Tue, Jan 28, 2020 at 3:28 AM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > I think our concerns are roughly classified into two:\n> >\n> > (1) Performance\n> > (2) Consistency\n> >\n> > And your \"different concern\" is rather into (2), I think.\n> \n> Actually, I think it was mostly a performance concern (writes triggering lots of reading) but there might be a\n> consistency issue as well.\n> \n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company",
"msg_date": "Mon, 10 Feb 2020 18:29:31 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear hackers,\n\nI applied my patchset that mmap()-s WAL segments as WAL buffers to refs/tags/REL_12_0, and measured and analyzed its performance with pgbench. Roughly speaking, When I used *SSD and ext4* to store WAL, it was \"obviously worse\" than the original REL_12_0. VTune told me that the CPU time of memcpy() called by CopyXLogRecordToWAL() got larger than before. When I used *NVDIMM-N and ext4 with filesystem DAX* to store WAL, however, it achieved \"not bad\" performance compared with our previous patchset and non-volatile WAL buffer. Each CPU time of XLogInsert() and XLogFlush() was reduced like as non-volatile WAL buffer.\n\nSo I think mmap()-ing WAL segments as WAL buffers is not such a bad idea as long as we use PMEM, at least NVDIMM-N.\n\nExcuse me but for now I'd keep myself not talking about how much the performance was, because the mmap()-ing patchset is WIP so there might be bugs which wrongfully \"improve\" or \"degrade\" performance. Also we need to know persistent memory programming and related features such as filesystem DAX, huge page faults, and WAL persistence with cache flush and memory barrier instructions to explain why the performance improved. I'd talk about all the details at the appropriate time and place. (The conference, or here later...)\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> -----Original Message-----\n> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Sent: Monday, February 10, 2020 6:30 PM\n> To: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <hlinnaka@iki.fi>\n> Cc: 'pgsql-hackers@postgresql.org' <pgsql-hackers@postgresql.org>\n> Subject: RE: [PoC] Non-volatile WAL buffer\n> \n> Dear hackers,\n> \n> I made another WIP patchset to mmap WAL segments as WAL buffers. Note that this is not a non-volatile WAL\n> buffer patchset but its competitor. 
I am measuring and analyzing the performance of this patchset to compare\n> with my N.V.WAL buffer.\n> \n> Please wait for a several more days for the result report...\n> \n> Best regards,\n> Takashi\n> \n> --\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation Center\n> \n> > -----Original Message-----\n> > From: Robert Haas <robertmhaas@gmail.com>\n> > Sent: Wednesday, January 29, 2020 6:00 AM\n> > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > Cc: Heikki Linnakangas <hlinnaka@iki.fi>; pgsql-hackers@postgresql.org\n> > Subject: Re: [PoC] Non-volatile WAL buffer\n> >\n> > On Tue, Jan 28, 2020 at 3:28 AM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > > I think our concerns are roughly classified into two:\n> > >\n> > > (1) Performance\n> > > (2) Consistency\n> > >\n> > > And your \"different concern\" is rather into (2), I think.\n> >\n> > Actually, I think it was mostly a performance concern (writes\n> > triggering lots of reading) but there might be a consistency issue as well.\n> >\n> > --\n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL\n> > Company\n\n\n\n\n",
"msg_date": "Mon, 17 Feb 2020 13:12:37 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
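The approach being measured in the message above can be caricatured as follows: a segment file is mapped once, records are memcpy()ed into it, and persistence is forced either by a user-space cache flush when the mapping is DAX (e.g. PMDK's pmem_persist(), i.e. CLWB plus a fence) or by msync() when the mapping is page-cache-backed (the SSD/ext4 case). The struct, function names, and page size here are illustrative only; the actual patchset is structured differently.

```c
#include <string.h>
#include <sys/mman.h>

typedef struct WalSeg
{
    char *base;     /* start of the mmap()-ed WAL segment */
    int   is_dax;   /* mapped with filesystem DAX? */
} WalSeg;

/* Round an offset down to its containing page, as msync() requires a
 * page-aligned starting address. A 4 KiB page is assumed here. */
static size_t page_align_down(size_t off)
{
    return off & ~(size_t) 4095;
}

/* Copy one record into the mapped segment and make it durable. */
static void wal_append(WalSeg *seg, size_t off, const void *rec, size_t len)
{
    memcpy(seg->base + off, rec, len);

    if (seg->is_dax)
    {
        /* DAX mapping: stores go straight to the device, so a
         * user-space flush (pmem_persist() in PMDK terms) would
         * suffice here -- no syscall needed. Elided in this sketch. */
    }
    else
    {
        /* Page-cache-backed mapping: fall back to msync(). */
        size_t start = page_align_down(off);
        msync(seg->base + start, off + len - start, MS_SYNC);
    }
}
```

The split between the two branches is the crux of the reported results: the DAX path avoids both the syscall and the page cache, while the SSD path pays for page faults and writeback on top of the memcpy().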
{
"msg_contents": "Menjo-san,\n\nOn Mon, Feb 17, 2020 at 1:13 PM Takashi Menjo\n<takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> I applied my patchset that mmap()-s WAL segments as WAL buffers to refs/tags/REL_12_0, and measured and analyzed its performance with pgbench. Roughly speaking, When I used *SSD and ext4* to store WAL, it was \"obviously worse\" than the original REL_12_0.\n\nI apologize for not having any opinion on the patches themselves, but\nlet me point out that it's better to base these patches on HEAD\n(master branch) than REL_12_0, because all new code is committed to\nthe master branch, whereas stable branches such as REL_12_0 only\nreceive bug fixes. Do you have any specific reason to be working on\nREL_12_0?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 17 Feb 2020 13:39:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hello Amit,\n\n> I apologize for not having any opinion on the patches themselves, but let me point out that it's better to base these\n> patches on HEAD (master branch) than REL_12_0, because all new code is committed to the master branch,\n> whereas stable branches such as REL_12_0 only receive bug fixes. Do you have any specific reason to be working\n> on REL_12_0?\n\nYes, because I think it's human-friendly to reproduce and discuss performance measurement. Of course I know all new accepted patches are merged into master's HEAD, not stable branches and not even release tags, so I'm aware of rebasing my patchset onto master sooner or later. However, if someone, including me, says that s/he applies my patchset to \"master\" and measures its performance, we have to pay attention to which commit the \"master\" really points to. Although we have sha1 hashes to specify which commit, we should check whether the specific commit on master has patches affecting performance or not because master's HEAD gets new patches day by day. On the other hand, a release tag clearly points the commit all we probably know. Also we can check more easily the features and improvements by using release notes and user manuals.\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n> -----Original Message-----\n> From: Amit Langote <amitlangote09@gmail.com>\n> Sent: Monday, February 17, 2020 1:39 PM\n> To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Cc: Robert Haas <robertmhaas@gmail.com>; Heikki Linnakangas <hlinnaka@iki.fi>; PostgreSQL-development\n> <pgsql-hackers@postgresql.org>\n> Subject: Re: [PoC] Non-volatile WAL buffer\n> \n> Menjo-san,\n> \n> On Mon, Feb 17, 2020 at 1:13 PM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > I applied my patchset that mmap()-s WAL segments as WAL buffers to refs/tags/REL_12_0, and measured and\n> analyzed its performance with pgbench. 
Roughly speaking, When I used *SSD and ext4* to store WAL, it was\n> \"obviously worse\" than the original REL_12_0.\n> \n> I apologize for not having any opinion on the patches themselves, but let me point out that it's better to base these\n> patches on HEAD (master branch) than REL_12_0, because all new code is committed to the master branch,\n> whereas stable branches such as REL_12_0 only receive bug fixes. Do you have any specific reason to be working\n> on REL_12_0?\n> \n> Thanks,\n> Amit\n\n\n\n\n",
"msg_date": "Mon, 17 Feb 2020 16:15:48 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hello,\n\nOn Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo\n<takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> Hello Amit,\n>\n> > I apologize for not having any opinion on the patches themselves, but let me point out that it's better to base these\n> > patches on HEAD (master branch) than REL_12_0, because all new code is committed to the master branch,\n> > whereas stable branches such as REL_12_0 only receive bug fixes. Do you have any specific reason to be working\n> > on REL_12_0?\n>\n> Yes, because I think it's human-friendly to reproduce and discuss performance measurement. Of course I know all new accepted patches are merged into master's HEAD, not stable branches and not even release tags, so I'm aware of rebasing my patchset onto master sooner or later. However, if someone, including me, says that s/he applies my patchset to \"master\" and measures its performance, we have to pay attention to which commit the \"master\" really points to. Although we have sha1 hashes to specify which commit, we should check whether the specific commit on master has patches affecting performance or not because master's HEAD gets new patches day by day. On the other hand, a release tag clearly points the commit all we probably know. Also we can check more easily the features and improvements by using release notes and user manuals.\n\nThanks for clarifying. I see where you're coming from.\n\nWhile I do sometimes see people reporting numbers with the latest\nstable release' branch, that's normally just one of the baselines.\nThe more important baseline for ongoing development is the master\nbranch's HEAD, which is also what people volunteering to test your\npatches would use. Anyone who reports would have to give at least two\nnumbers -- performance with a branch's HEAD without patch applied and\nthat with patch applied -- which can be enough in most cases to see\nthe difference the patch makes. Sure, the numbers might change on\neach report, but that's fine I'd think. 
If you continue to develop\nagainst the stable branch, you might miss to notice impact from any\nrelevant developments in the master branch, even developments which\npossibly require rethinking the architecture of your own changes,\nalthough maybe that rarely occurs.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 17 Feb 2020 17:21:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-17 13:12:37 +0900, Takashi Menjo wrote:\n> I applied my patchset that mmap()-s WAL segments as WAL buffers to\n> refs/tags/REL_12_0, and measured and analyzed its performance with\n> pgbench. Roughly speaking, When I used *SSD and ext4* to store WAL,\n> it was \"obviously worse\" than the original REL_12_0. VTune told me\n> that the CPU time of memcpy() called by CopyXLogRecordToWAL() got\n> larger than before.\n\nFWIW, this might largely be because of page faults. In contrast to\nbefore we wouldn't reuse the same pages (because they've been\nmunmap()/mmap()ed), so the first time they're touched, we'll incur page\nfaults. Did you try mmap()ing with MAP_POPULATE? It's probably also\nworthwhile to try to use MAP_HUGETLB.\n\nStill doubtful it's the right direction, but I'd rather have good\nnumbers to back me up :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Feb 2020 21:04:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear Amit,\n\nThank you for your advice. Exactly, it's so to speak \"do as the hackers do when in pgsql\"...\n\nI'm rebasing my branch onto master. I'll submit an updated patchset and performance report later.\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> -----Original Message-----\n> From: Amit Langote <amitlangote09@gmail.com>\n> Sent: Monday, February 17, 2020 5:21 PM\n> To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Cc: Robert Haas <robertmhaas@gmail.com>; Heikki Linnakangas <hlinnaka@iki.fi>; PostgreSQL-development\n> <pgsql-hackers@postgresql.org>\n> Subject: Re: [PoC] Non-volatile WAL buffer\n> \n> Hello,\n> \n> On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > Hello Amit,\n> >\n> > > I apologize for not having any opinion on the patches themselves,\n> > > but let me point out that it's better to base these patches on HEAD\n> > > (master branch) than REL_12_0, because all new code is committed to\n> > > the master branch, whereas stable branches such as REL_12_0 only receive bug fixes. Do you have any\n> specific reason to be working on REL_12_0?\n> >\n> > Yes, because I think it's human-friendly to reproduce and discuss performance measurement. Of course I know\n> all new accepted patches are merged into master's HEAD, not stable branches and not even release tags, so I'm\n> aware of rebasing my patchset onto master sooner or later. However, if someone, including me, says that s/he\n> applies my patchset to \"master\" and measures its performance, we have to pay attention to which commit the\n> \"master\" really points to. Although we have sha1 hashes to specify which commit, we should check whether the\n> specific commit on master has patches affecting performance or not because master's HEAD gets new patches day\n> by day. On the other hand, a release tag clearly points the commit all we probably know. 
Also we can check more\n> easily the features and improvements by using release notes and user manuals.\n> \n> Thanks for clarifying. I see where you're coming from.\n> \n> While I do sometimes see people reporting numbers with the latest stable release' branch, that's normally just one\n> of the baselines.\n> The more important baseline for ongoing development is the master branch's HEAD, which is also what people\n> volunteering to test your patches would use. Anyone who reports would have to give at least two numbers --\n> performance with a branch's HEAD without patch applied and that with patch applied -- which can be enough in\n> most cases to see the difference the patch makes. Sure, the numbers might change on each report, but that's fine\n> I'd think. If you continue to develop against the stable branch, you might miss to notice impact from any relevant\n> developments in the master branch, even developments which possibly require rethinking the architecture of your\n> own changes, although maybe that rarely occurs.\n> \n> Thanks,\n> Amit\n\n\n\n\n",
"msg_date": "Thu, 20 Feb 2020 18:30:09 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear hackers,\n\nI rebased my non-volatile WAL buffer's patchset onto master. A new v2 patchset is attached to this mail.\n\nI also measured performance before and after patchset, varying -c/--client and -j/--jobs options of pgbench, for each scaling factor s = 50 or 1000. The results are presented in the following tables and the attached charts. Conditions, steps, and other details will be shown later.\n\n\nResults (s=50)\n==============\n Throughput [10^3 TPS] Average latency [ms]\n( c, j) before after before after\n------- --------------------- ---------------------\n( 8, 8) 35.7 37.1 (+3.9%) 0.224 0.216 (-3.6%)\n(18,18) 70.9 74.7 (+5.3%) 0.254 0.241 (-5.1%)\n(36,18) 76.0 80.8 (+6.3%) 0.473 0.446 (-5.7%)\n(54,18) 75.5 81.8 (+8.3%) 0.715 0.660 (-7.7%)\n\n\nResults (s=1000)\n================\n Throughput [10^3 TPS] Average latency [ms]\n( c, j) before after before after\t\n------- --------------------- ---------------------\n( 8, 8) 37.4 40.1 (+7.3%) 0.214 0.199 (-7.0%)\n(18,18) 79.3 86.7 (+9.3%) 0.227 0.208 (-8.4%)\n(36,18) 87.2 95.5 (+9.5%) 0.413 0.377 (-8.7%)\n(54,18) 86.8 94.8 (+9.3%) 0.622 0.569 (-8.5%)\n\n\nBoth throughput and average latency are improved for each scaling factor. Throughput seemed to almost reach the upper limit when (c,j)=(36,18).\n\nThe percentage in s=1000 case looks larger than in s=50 case. I think larger scaling factor leads to less contentions on the same tables and/or indexes, that is, less lock and unlock operations. 
In such a situation, write-ahead logging appears to be more significant for performance.\n\n\nConditions\n==========\n- Use one physical server having 2 NUMA nodes (node 0 and 1)\n - Pin postgres (server processes) to node 0 and pgbench to node 1\n - 18 cores and 192GiB DRAM per node\n- Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for pg_wal\n - Both are installed on the server-side node, that is, node 0\n - Both are formatted with ext4\n - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access (DAX)\n- Use the attached postgresql.conf\n - Two new items nvwal_path and nvwal_size are used only after patch\n\n\nSteps\n=====\nFor each (c,j) pair, I did the following steps three times then I found the median of the three as a final result shown in the tables above.\n\n(1) Run initdb with proper -D and -X options; and also give --nvwal-path and --nvwal-size options after patch\n(2) Start postgres and create a database for pgbench tables\n(3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\n(4) Stop postgres, remount filesystems, and start postgres again\n(5) Execute pg_prewarm extension for all the four pgbench tables\n(6) Run pgbench during 30 minutes\n\n\npgbench command line\n====================\n$ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j ___ dbname\n\nI gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\n\n\nSoftware\n========\n- Distro: Ubuntu 18.04\n- Kernel: Linux 5.4 (vanilla kernel)\n- C Compiler: gcc 7.4.0\n- PMDK: 1.7\n- PostgreSQL: d677550 (master on Mar 3, 2020)\n\n\nHardware\n========\n- System: HPE ProLiant DL380 Gen10\n- CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\n- DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\n- NVDIMM-N: DDR4 2666MHz {16GiB/ch x 6ch}/socket x 2sockets\n- NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\n\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> 
-----Original Message-----\n> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Sent: Thursday, February 20, 2020 6:30 PM\n> To: 'Amit Langote' <amitlangote09@gmail.com>\n> Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <hlinnaka@iki.fi>; 'PostgreSQL-development'\n> <pgsql-hackers@postgresql.org>\n> Subject: RE: [PoC] Non-volatile WAL buffer\n> \n> Dear Amit,\n> \n> Thank you for your advice. Exactly, it's so to speak \"do as the hackers do when in pgsql\"...\n> \n> I'm rebasing my branch onto master. I'll submit an updated patchset and performance report later.\n> \n> Best regards,\n> Takashi\n> \n> --\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation Center\n> \n> > -----Original Message-----\n> > From: Amit Langote <amitlangote09@gmail.com>\n> > Sent: Monday, February 17, 2020 5:21 PM\n> > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > Cc: Robert Haas <robertmhaas@gmail.com>; Heikki Linnakangas\n> > <hlinnaka@iki.fi>; PostgreSQL-development\n> > <pgsql-hackers@postgresql.org>\n> > Subject: Re: [PoC] Non-volatile WAL buffer\n> >\n> > Hello,\n> >\n> > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > > Hello Amit,\n> > >\n> > > > I apologize for not having any opinion on the patches themselves,\n> > > > but let me point out that it's better to base these patches on\n> > > > HEAD (master branch) than REL_12_0, because all new code is\n> > > > committed to the master branch, whereas stable branches such as\n> > > > REL_12_0 only receive bug fixes. Do you have any\n> > specific reason to be working on REL_12_0?\n> > >\n> > > Yes, because I think it's human-friendly to reproduce and discuss\n> > > performance measurement. Of course I know\n> > all new accepted patches are merged into master's HEAD, not stable\n> > branches and not even release tags, so I'm aware of rebasing my\n> > patchset onto master sooner or later. 
However, if someone, including\n> > me, says that s/he applies my patchset to \"master\" and measures its\n> > performance, we have to pay attention to which commit the \"master\"\n> > really points to. Although we have sha1 hashes to specify which\n> > commit, we should check whether the specific commit on master has patches affecting performance or not\n> because master's HEAD gets new patches day by day. On the other hand, a release tag clearly points the commit\n> all we probably know. Also we can check more easily the features and improvements by using release notes and\n> user manuals.\n> >\n> > Thanks for clarifying. I see where you're coming from.\n> >\n> > While I do sometimes see people reporting numbers with the latest\n> > stable release' branch, that's normally just one of the baselines.\n> > The more important baseline for ongoing development is the master\n> > branch's HEAD, which is also what people volunteering to test your\n> > patches would use. Anyone who reports would have to give at least two\n> > numbers -- performance with a branch's HEAD without patch applied and\n> > that with patch applied -- which can be enough in most cases to see\n> > the difference the patch makes. Sure, the numbers might change on\n> > each report, but that's fine I'd think. If you continue to develop against the stable branch, you might miss to\n> notice impact from any relevant developments in the master branch, even developments which possibly require\n> rethinking the architecture of your own changes, although maybe that rarely occurs.\n> >\n> > Thanks,\n> > Amit",
"msg_date": "Wed, 18 Mar 2020 17:58:45 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear Andres,\n\nThank you for your advice about MAP_POPULATE flag. I rebased my msync patchset onto master and added a commit to append that flag\nwhen mmap. A new v2 patchset is attached to this mail. Note that this patchset is NOT non-volatile WAL buffer's one.\n\nI also measured performance of the following three versions, varying -c/--client and -j/--jobs options of pgbench, for each scaling\nfactor s = 50 or 1000.\n\n- Before patchset (say \"before\")\n- After patchset except patch 0005 not to use MAP_POPULATE (\"after (no populate)\")\n- After full patchset to use MAP_POPULATE (\"after (populate)\")\n\nThe results are presented in the following tables and the attached charts. Conditions, steps, and other details will be shown\nlater. Note that, unlike the measurement of non-volatile WAL buffer I sent recently [1], I used an NVMe SSD for pg_wal to evaluate\nthis patchset with traditional mmap-ed files, that is, direct access (DAX) is not supported and there are page caches.\n\n\nResults (s=50)\n==============\n Throughput [10^3 TPS]\n( c, j) before after after\n (no populate) (populate)\n------- -------------------------------------\n( 8, 8) 30.9 28.1 (- 9.2%) 28.3 (- 8.6%)\n(18,18) 61.5 46.1 (-25.0%) 47.7 (-22.3%)\n(36,18) 67.0 45.9 (-31.5%) 48.4 (-27.8%)\n(54,18) 68.3 47.0 (-31.3%) 49.6 (-27.5%)\n\n Average Latency [ms]\n( c, j) before after after\n (no populate) (populate)\n------- --------------------------------------\n( 8, 8) 0.259 0.285 (+10.0%) 0.283 (+ 9.3%)\n(18,18) 0.293 0.391 (+33.4%) 0.377 (+28.7%)\n(36,18) 0.537 0.784 (+46.0%) 0.744 (+38.5%)\n(54,18) 0.790 1.149 (+45.4%) 1.090 (+38.0%)\n\n\nResults (s=1000)\n================\n Throghput [10^3 TPS]\n( c, j) before after after\n (no populate) (populate)\n------- ------------------------------------\n( 8, 8) 32.0 29.6 (- 7.6%) 29.1 (- 9.0%)\n(18,18) 66.1 49.2 (-25.6%) 50.4 (-23.7%)\n(36,18) 76.4 51.0 (-33.3%) 53.4 (-30.1%)\n(54,18) 80.1 54.3 (-32.2%) 57.2 (-28.6%)\n\n Average 
latency [10^3 TPS]\n( c, j) before after after\n (no populate) (populate)\n------- --------------------------------------\n( 8, 8) 0.250 0.271 (+ 8.4%) 0.275 (+10.0%)\n(18,18) 0.272 0.366 (+34.6%) 0.357 (+31.3%)\n(36,18) 0.471 0.706 (+49.9%) 0.674 (+43.1%)\n(54,18) 0.674 0.995 (+47.6%) 0.944 (+40.1%)\n\n\nI'd say MAP_POPULATE made performance a little better in large #clients cases, comparing \"populate\" with \"no populate\". However,\ncomparing \"after\" with \"before\", I found both throughput and average latency degraded. VTune told me that \"after (populate)\" still\nspent larger CPU time for memcpy-ing WAL records into mmap-ed segments than \"before\".\n\nI also made a microbenchmark to see the behavior of mmap and msync. I found that:\n\n- A major fault occured at mmap with MAP_POPULATE, instead at first access to the mmap-ed space.\n- Some minor faults also occured at mmap with MAP_POPULATE, and no additional fault occured when I loaded from the mmap-ed space.\nBut once I stored to that space, a minor fault occured.\n- When I stored to the page that had been msync-ed, a minor fault occurred.\n\nSo I think one of the remaining causes of performance degrade is minor faults when mmap-ed pages get dirtied. 
And it seems not to\nbe solved by MAP_POPULATE only, as far as I see.\n\n\nConditions\n==========\n- Use one physical server having 2 NUMA nodes (node 0 and 1)\n - Pin postgres (server processes) to node 0 and pgbench to node 1\n - 18 cores and 192GiB DRAM per node\n- Use two NVMe SSDs; one for PGDATA, another for pg_wal\n - Both are installed on the server-side node, that is, node 0\n - Both are formatted with ext4\n- Use the attached postgresql.conf\n\n\nSteps\n=====\nFor each (c,j) pair, I did the following steps three times then I found the median of the three as a final result shown in the\ntables above.\n\n(1) Run initdb with proper -D and -X options\n(2) Start postgres and create a database for pgbench tables\n(3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\n(4) Stop postgres, remount filesystems, and start postgres again\n(5) Execute pg_prewarm extension for all the four pgbench tables\n(6) Run pgbench during 30 minutes\n\n\npgbench command line\n====================\n$ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j ___ dbname\n\nI gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\n\n\nSoftware\n========\n- Distro: Ubuntu 18.04\n- Kernel: Linux 5.4 (vanilla kernel)\n- C Compiler: gcc 7.4.0\n- PMDK: 1.7\n- PostgreSQL: d677550 (master on Mar 3, 2020)\n\n\nHardware\n========\n- System: HPE ProLiant DL380 Gen10\n- CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\n- DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\n- NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA x2\n\n\nBest regards,\nTakashi\n\n\n[1] https://www.postgresql.org/message-id/002701d5fd03$6e1d97a0$4a58c6e0$@hco.ntt.co.jp_1\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> -----Original Message-----\n> From: Andres Freund <andres@anarazel.de>\n> Sent: Thursday, February 20, 2020 2:04 PM\n> To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki 
Linnakangas' <hlinnaka@iki.fi>;\n> pgsql-hackers@postgresql.org\n> Subject: Re: [PoC] Non-volatile WAL buffer\n> \n> Hi,\n> \n> On 2020-02-17 13:12:37 +0900, Takashi Menjo wrote:\n> > I applied my patchset that mmap()-s WAL segments as WAL buffers to\n> > refs/tags/REL_12_0, and measured and analyzed its performance with\n> > pgbench. Roughly speaking, When I used *SSD and ext4* to store WAL,\n> > it was \"obviously worse\" than the original REL_12_0. VTune told me\n> > that the CPU time of memcpy() called by CopyXLogRecordToWAL() got\n> > larger than before.\n> \n> FWIW, this might largely be because of page faults. In contrast to before we wouldn't reuse the same pages\n> (because they've been munmap()/mmap()ed), so the first time they're touched, we'll incur page faults. Did you\n> try mmap()ing with MAP_POPULATE? It's probably also worthwhile to try to use MAP_HUGETLB.\n> \n> Still doubtful it's the right direction, but I'd rather have good numbers to back me up :)\n> \n> Greetings,\n> \n> Andres Freund",
"msg_date": "Thu, 19 Mar 2020 15:11:10 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear hackers,\n\nI have updated my non-volatile WAL buffer patchset to v3. Now we can use it in streaming replication mode.\n\nUpdates from v2:\n\n- walreceiver supports non-volatile WAL buffer\nNow walreceiver stores received records directly to the non-volatile WAL buffer if applicable.\n\n- pg_basebackup supports non-volatile WAL buffer\nNow pg_basebackup copies received WAL segments onto the non-volatile WAL buffer if you run it with \"nvwal\" mode (-Fn).\nYou should specify a new NVWAL path with the --nvwal-path option. The path will be written to postgresql.auto.conf or recovery.conf. The size of the new NVWAL is the same as the master's.\n\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> -----Original Message-----\n> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Sent: Wednesday, March 18, 2020 5:59 PM\n> To: 'PostgreSQL-development' <pgsql-hackers@postgresql.org>\n> Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <hlinnaka@iki.fi>; 'Amit Langote'\n> <amitlangote09@gmail.com>\n> Subject: RE: [PoC] Non-volatile WAL buffer\n> \n> Dear hackers,\n> \n> I rebased my non-volatile WAL buffer's patchset onto master. A new v2 patchset is attached to this mail.\n> \n> I also measured performance before and after patchset, varying -c/--client and -j/--jobs options of pgbench, for\n> each scaling factor s = 50 or 1000. 
The results are presented in the following tables and the attached charts.\n> Conditions, steps, and other details will be shown later.\n> \n> \n> Results (s=50)\n> ==============\n> Throughput [10^3 TPS] Average latency [ms]\n> ( c, j) before after before after\n> ------- --------------------- ---------------------\n> ( 8, 8) 35.7 37.1 (+3.9%) 0.224 0.216 (-3.6%)\n> (18,18) 70.9 74.7 (+5.3%) 0.254 0.241 (-5.1%)\n> (36,18) 76.0 80.8 (+6.3%) 0.473 0.446 (-5.7%)\n> (54,18) 75.5 81.8 (+8.3%) 0.715 0.660 (-7.7%)\n> \n> \n> Results (s=1000)\n> ================\n> Throughput [10^3 TPS] Average latency [ms]\n> ( c, j) before after before after\n> ------- --------------------- ---------------------\n> ( 8, 8) 37.4 40.1 (+7.3%) 0.214 0.199 (-7.0%)\n> (18,18) 79.3 86.7 (+9.3%) 0.227 0.208 (-8.4%)\n> (36,18) 87.2 95.5 (+9.5%) 0.413 0.377 (-8.7%)\n> (54,18) 86.8 94.8 (+9.3%) 0.622 0.569 (-8.5%)\n> \n> \n> Both throughput and average latency are improved for each scaling factor. Throughput seemed to almost reach\n> the upper limit when (c,j)=(36,18).\n> \n> The percentage in s=1000 case looks larger than in s=50 case. I think larger scaling factor leads to less\n> contentions on the same tables and/or indexes, that is, less lock and unlock operations. 
In such a situation,\n> write-ahead logging appears to be more significant for performance.\n> \n> \n> Conditions\n> ==========\n> - Use one physical server having 2 NUMA nodes (node 0 and 1)\n> - Pin postgres (server processes) to node 0 and pgbench to node 1\n> - 18 cores and 192GiB DRAM per node\n> - Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for pg_wal\n> - Both are installed on the server-side node, that is, node 0\n> - Both are formatted with ext4\n> - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access (DAX)\n> - Use the attached postgresql.conf\n> - Two new items nvwal_path and nvwal_size are used only after patch\n> \n> \n> Steps\n> =====\n> For each (c,j) pair, I did the following steps three times then I found the median of the three as a final result shown\n> in the tables above.\n> \n> (1) Run initdb with proper -D and -X options; and also give --nvwal-path and --nvwal-size options after patch\n> (2) Start postgres and create a database for pgbench tables\n> (3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\n> (4) Stop postgres, remount filesystems, and start postgres again\n> (5) Execute pg_prewarm extension for all the four pgbench tables\n> (6) Run pgbench during 30 minutes\n> \n> \n> pgbench command line\n> ====================\n> $ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j ___ dbname\n> \n> I gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\n> \n> \n> Software\n> ========\n> - Distro: Ubuntu 18.04\n> - Kernel: Linux 5.4 (vanilla kernel)\n> - C Compiler: gcc 7.4.0\n> - PMDK: 1.7\n> - PostgreSQL: d677550 (master on Mar 3, 2020)\n> \n> \n> Hardware\n> ========\n> - System: HPE ProLiant DL380 Gen10\n> - CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\n> - DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\n> - NVDIMM-N: DDR4 2666MHz {16GiB/ch x 6ch}/socket x 2sockets\n> - NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\n> \n> \n> Best regards,\n> 
Takashi\n> \n> --\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation Center\n> \n> > -----Original Message-----\n> > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > Sent: Thursday, February 20, 2020 6:30 PM\n> > To: 'Amit Langote' <amitlangote09@gmail.com>\n> > Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <hlinnaka@iki.fi>;\n> 'PostgreSQL-development'\n> > <pgsql-hackers@postgresql.org>\n> > Subject: RE: [PoC] Non-volatile WAL buffer\n> >\n> > Dear Amit,\n> >\n> > Thank you for your advice. Exactly, it's so to speak \"do as the hackers do when in pgsql\"...\n> >\n> > I'm rebasing my branch onto master. I'll submit an updated patchset and performance report later.\n> >\n> > Best regards,\n> > Takashi\n> >\n> > --\n> > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software\n> > Innovation Center\n> >\n> > > -----Original Message-----\n> > > From: Amit Langote <amitlangote09@gmail.com>\n> > > Sent: Monday, February 17, 2020 5:21 PM\n> > > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > > Cc: Robert Haas <robertmhaas@gmail.com>; Heikki Linnakangas\n> > > <hlinnaka@iki.fi>; PostgreSQL-development\n> > > <pgsql-hackers@postgresql.org>\n> > > Subject: Re: [PoC] Non-volatile WAL buffer\n> > >\n> > > Hello,\n> > >\n> > > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > > > Hello Amit,\n> > > >\n> > > > > I apologize for not having any opinion on the patches\n> > > > > themselves, but let me point out that it's better to base these\n> > > > > patches on HEAD (master branch) than REL_12_0, because all new\n> > > > > code is committed to the master branch, whereas stable branches\n> > > > > such as\n> > > > > REL_12_0 only receive bug fixes. Do you have any\n> > > specific reason to be working on REL_12_0?\n> > > >\n> > > > Yes, because I think it's human-friendly to reproduce and discuss\n> > > > performance measurement. 
Of course I know\n> > > all new accepted patches are merged into master's HEAD, not stable\n> > > branches and not even release tags, so I'm aware of rebasing my\n> > > patchset onto master sooner or later. However, if someone,\n> > > including me, says that s/he applies my patchset to \"master\" and\n> > > measures its performance, we have to pay attention to which commit the \"master\"\n> > > really points to. Although we have sha1 hashes to specify which\n> > > commit, we should check whether the specific commit on master has\n> > > patches affecting performance or not\n> > because master's HEAD gets new patches day by day. On the other hand,\n> > a release tag clearly points the commit all we probably know. Also we\n> > can check more easily the features and improvements by using release notes and user manuals.\n> > >\n> > > Thanks for clarifying. I see where you're coming from.\n> > >\n> > > While I do sometimes see people reporting numbers with the latest\n> > > stable release' branch, that's normally just one of the baselines.\n> > > The more important baseline for ongoing development is the master\n> > > branch's HEAD, which is also what people volunteering to test your\n> > > patches would use. Anyone who reports would have to give at least\n> > > two numbers -- performance with a branch's HEAD without patch\n> > > applied and that with patch applied -- which can be enough in most\n> > > cases to see the difference the patch makes. Sure, the numbers\n> > > might change on each report, but that's fine I'd think. If you\n> > > continue to develop against the stable branch, you might miss to\n> > notice impact from any relevant developments in the master branch,\n> > even developments which possibly require rethinking the architecture of your own changes, although maybe that\n> rarely occurs.\n> > >\n> > > Thanks,\n> > > Amit",
"msg_date": "Wed, 24 Jun 2020 16:43:16 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Rebased.\n\n\n2020年6月24日(水) 16:44 Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>:\n\n> Dear hackers,\n>\n> I update my non-volatile WAL buffer's patchset to v3. Now we can use it\n> in streaming replication mode.\n>\n> Updates from v2:\n>\n> - walreceiver supports non-volatile WAL buffer\n> Now walreceiver stores received records directly to non-volatile WAL\n> buffer if applicable.\n>\n> - pg_basebackup supports non-volatile WAL buffer\n> Now pg_basebackup copies received WAL segments onto non-volatile WAL\n> buffer if you run it with \"nvwal\" mode (-Fn).\n> You should specify a new NVWAL path with --nvwal-path option. The path\n> will be written to postgresql.auto.conf or recovery.conf. The size of the\n> new NVWAL is same as the master's one.\n>\n>\n> Best regards,\n> Takashi\n>\n> --\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> NTT Software Innovation Center\n>\n> > -----Original Message-----\n> > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > Sent: Wednesday, March 18, 2020 5:59 PM\n> > To: 'PostgreSQL-development' <pgsql-hackers@postgresql.org>\n> > Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <\n> hlinnaka@iki.fi>; 'Amit Langote'\n> > <amitlangote09@gmail.com>\n> > Subject: RE: [PoC] Non-volatile WAL buffer\n> >\n> > Dear hackers,\n> >\n> > I rebased my non-volatile WAL buffer's patchset onto master. A new v2\n> patchset is attached to this mail.\n> >\n> > I also measured performance before and after patchset, varying\n> -c/--client and -j/--jobs options of pgbench, for\n> > each scaling factor s = 50 or 1000. 
The results are presented in the\n> following tables and the attached charts.\n> > Conditions, steps, and other details will be shown later.\n> >\n> >\n> > Results (s=50)\n> > ==============\n> > Throughput [10^3 TPS] Average latency [ms]\n> > ( c, j) before after before after\n> > ------- --------------------- ---------------------\n> > ( 8, 8) 35.7 37.1 (+3.9%) 0.224 0.216 (-3.6%)\n> > (18,18) 70.9 74.7 (+5.3%) 0.254 0.241 (-5.1%)\n> > (36,18) 76.0 80.8 (+6.3%) 0.473 0.446 (-5.7%)\n> > (54,18) 75.5 81.8 (+8.3%) 0.715 0.660 (-7.7%)\n> >\n> >\n> > Results (s=1000)\n> > ================\n> > Throughput [10^3 TPS] Average latency [ms]\n> > ( c, j) before after before after\n> > ------- --------------------- ---------------------\n> > ( 8, 8) 37.4 40.1 (+7.3%) 0.214 0.199 (-7.0%)\n> > (18,18) 79.3 86.7 (+9.3%) 0.227 0.208 (-8.4%)\n> > (36,18) 87.2 95.5 (+9.5%) 0.413 0.377 (-8.7%)\n> > (54,18) 86.8 94.8 (+9.3%) 0.622 0.569 (-8.5%)\n> >\n> >\n> > Both throughput and average latency are improved for each scaling\n> factor. Throughput seemed to almost reach\n> > the upper limit when (c,j)=(36,18).\n> >\n> > The percentage in s=1000 case looks larger than in s=50 case. I think\n> larger scaling factor leads to less\n> > contentions on the same tables and/or indexes, that is, less lock and\n> unlock operations. 
In such a situation,\n> > write-ahead logging appears to be more significant for performance.\n> >\n> >\n> > Conditions\n> > ==========\n> > - Use one physical server having 2 NUMA nodes (node 0 and 1)\n> > - Pin postgres (server processes) to node 0 and pgbench to node 1\n> > - 18 cores and 192GiB DRAM per node\n> > - Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for\n> pg_wal\n> > - Both are installed on the server-side node, that is, node 0\n> > - Both are formatted with ext4\n> > - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access\n> (DAX)\n> > - Use the attached postgresql.conf\n> > - Two new items nvwal_path and nvwal_size are used only after patch\n> >\n> >\n> > Steps\n> > =====\n> > For each (c,j) pair, I did the following steps three times then I found\n> the median of the three as a final result shown\n> > in the tables above.\n> >\n> > (1) Run initdb with proper -D and -X options; and also give --nvwal-path\n> and --nvwal-size options after patch\n> > (2) Start postgres and create a database for pgbench tables\n> > (3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\n> > (4) Stop postgres, remount filesystems, and start postgres again\n> > (5) Execute pg_prewarm extension for all the four pgbench tables\n> > (6) Run pgbench during 30 minutes\n> >\n> >\n> > pgbench command line\n> > ====================\n> > $ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j\n> ___ dbname\n> >\n> > I gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\n> >\n> >\n> > Software\n> > ========\n> > - Distro: Ubuntu 18.04\n> > - Kernel: Linux 5.4 (vanilla kernel)\n> > - C Compiler: gcc 7.4.0\n> > - PMDK: 1.7\n> > - PostgreSQL: d677550 (master on Mar 3, 2020)\n> >\n> >\n> > Hardware\n> > ========\n> > - System: HPE ProLiant DL380 Gen10\n> > - CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\n> > - DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\n> > - NVDIMM-N: DDR4 2666MHz {16GiB/ch x 
6ch}/socket x 2sockets\n> > - NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\n> >\n> >\n> > Best regards,\n> > Takashi\n> >\n> > --\n> > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation\n> Center\n> >\n> > > -----Original Message-----\n> > > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > > Sent: Thursday, February 20, 2020 6:30 PM\n> > > To: 'Amit Langote' <amitlangote09@gmail.com>\n> > > Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <\n> hlinnaka@iki.fi>;\n> > 'PostgreSQL-development'\n> > > <pgsql-hackers@postgresql.org>\n> > > Subject: RE: [PoC] Non-volatile WAL buffer\n> > >\n> > > Dear Amit,\n> > >\n> > > Thank you for your advice. Exactly, it's so to speak \"do as the\n> hackers do when in pgsql\"...\n> > >\n> > > I'm rebasing my branch onto master. I'll submit an updated patchset\n> and performance report later.\n> > >\n> > > Best regards,\n> > > Takashi\n> > >\n> > > --\n> > > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software\n> > > Innovation Center\n> > >\n> > > > -----Original Message-----\n> > > > From: Amit Langote <amitlangote09@gmail.com>\n> > > > Sent: Monday, February 17, 2020 5:21 PM\n> > > > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > > > Cc: Robert Haas <robertmhaas@gmail.com>; Heikki Linnakangas\n> > > > <hlinnaka@iki.fi>; PostgreSQL-development\n> > > > <pgsql-hackers@postgresql.org>\n> > > > Subject: Re: [PoC] Non-volatile WAL buffer\n> > > >\n> > > > Hello,\n> > > >\n> > > > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <\n> takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > > > > Hello Amit,\n> > > > >\n> > > > > > I apologize for not having any opinion on the patches\n> > > > > > themselves, but let me point out that it's better to base these\n> > > > > > patches on HEAD (master branch) than REL_12_0, because all new\n> > > > > > code is committed to the master branch, whereas stable branches\n> > > > > > such as\n> > > > > > REL_12_0 only receive bug fixes. 
Do you have any\n> > > > specific reason to be working on REL_12_0?\n> > > > >\n> > > > > Yes, because I think it's human-friendly to reproduce and discuss\n> > > > > performance measurement. Of course I know\n> > > > all new accepted patches are merged into master's HEAD, not stable\n> > > > branches and not even release tags, so I'm aware of rebasing my\n> > > > patchset onto master sooner or later. However, if someone,\n> > > > including me, says that s/he applies my patchset to \"master\" and\n> > > > measures its performance, we have to pay attention to which commit\n> the \"master\"\n> > > > really points to. Although we have sha1 hashes to specify which\n> > > > commit, we should check whether the specific commit on master has\n> > > > patches affecting performance or not\n> > > because master's HEAD gets new patches day by day. On the other hand,\n> > > a release tag clearly points the commit all we probably know. Also we\n> > > can check more easily the features and improvements by using release\n> notes and user manuals.\n> > > >\n> > > > Thanks for clarifying. I see where you're coming from.\n> > > >\n> > > > While I do sometimes see people reporting numbers with the latest\n> > > > stable release' branch, that's normally just one of the baselines.\n> > > > The more important baseline for ongoing development is the master\n> > > > branch's HEAD, which is also what people volunteering to test your\n> > > > patches would use. Anyone who reports would have to give at least\n> > > > two numbers -- performance with a branch's HEAD without patch\n> > > > applied and that with patch applied -- which can be enough in most\n> > > > cases to see the difference the patch makes. Sure, the numbers\n> > > > might change on each report, but that's fine I'd think. 
If you\n> > > > continue to develop against the stable branch, you might miss to\n> > > notice impact from any relevant developments in the master branch,\n> > > even developments which possibly require rethinking the architecture\n> of your own changes, although maybe that\n> > rarely occurs.\n> > > >\n> > > > Thanks,\n> > > > Amit\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Thu, 10 Sep 2020 17:01:27 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Takashi,\r\n\r\nThank you for the patch and work on accelerating PG performance with NVM. I applied the patch and ran some performance tests based on patch v4. I stored database data files on NVMe SSD and stored WAL files on Intel PMem (NVM). I used two methods to store WAL file(s):\r\n\r\n1. Leverage your patch to access PMem with libpmem (NVWAL patch).\r\n\r\n2. Access PMem through the legacy filesystem interface, that is, use PMem as an ordinary block device; no PG patch is required to access PMem (Storage over App Direct).\r\n\r\nI tried two insert scenarios:\r\n\r\nA. Insert small records (length of record to be inserted is 24 bytes); I think this is similar to your test\r\n\r\nB. Insert large records (length of record to be inserted is 328 bytes)\r\n\r\nMy original purpose was to see a higher performance gain in scenario B, as it is more write-intensive on WAL. But I observed that the NVWAL patch method had a ~5% performance improvement over the Storage over App Direct method in scenario A, while it had a ~20% performance degradation in scenario B.\r\n\r\nI investigated the test further. I found that the NVWAL patch can improve the performance of the XLogFlush function, but it may hurt the performance of the CopyXLogRecordToWAL function. This may be related to the higher latency of memcpy to Intel PMem compared with DRAM. 
Here are key data in my test:\r\n\r\nScenario A (length of record to be inserted: 24 bytes per record):\r\n==============================\r\n NVWAL SoAD\r\n------------------------------------ ------- -------\r\nThroughput (10^3 TPS) 310.5 296.0\r\nCPU Time % of CopyXLogRecordToWAL 0.4 0.2\r\nCPU Time % of XLogInsertRecord 1.5 0.8\r\nCPU Time % of XLogFlush 2.1 9.6\r\n\r\nScenario B (length of record to be inserted: 328 bytes per record):\r\n==============================\r\n NVWAL SoAD\r\n------------------------------------ ------- -------\r\nThroughput (10^3 TPS) 13.0 16.9\r\nCPU Time % of CopyXLogRecordToWAL 3.0 1.6\r\nCPU Time % of XLogInsertRecord 23.0 16.4\r\nCPU Time % of XLogFlush 2.3 5.9\r\n\r\nBest Regards,\r\nGang\r\n\r\nFrom: Takashi Menjo <takashi.menjo@gmail.com>\r\nSent: Thursday, September 10, 2020 4:01 PM\r\nTo: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: [PoC] Non-volatile WAL buffer\r\n\r\nRebased.\r\n\r\n\r\n2020年6月24日(水) 16:44 Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>>:\r\nDear hackers,\r\n\r\nI update my non-volatile WAL buffer's patchset to v3. Now we can use it in streaming replication mode.\r\n\r\nUpdates from v2:\r\n\r\n- walreceiver supports non-volatile WAL buffer\r\nNow walreceiver stores received records directly to non-volatile WAL buffer if applicable.\r\n\r\n- pg_basebackup supports non-volatile WAL buffer\r\nNow pg_basebackup copies received WAL segments onto non-volatile WAL buffer if you run it with \"nvwal\" mode (-Fn).\r\nYou should specify a new NVWAL path with --nvwal-path option. The path will be written to postgresql.auto.conf or recovery.conf. 
The size of the new NVWAL is same as the master's one.\r\n\r\n\r\nBest regards,\r\nTakashi\r\n\r\n--\r\nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>>\r\nNTT Software Innovation Center\r\n\r\n> -----Original Message-----\r\n> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>>\r\n> Sent: Wednesday, March 18, 2020 5:59 PM\r\n> To: 'PostgreSQL-development' <pgsql-hackers@postgresql.org<mailto:pgsql-hackers@postgresql.org>>\r\n> Cc: 'Robert Haas' <robertmhaas@gmail.com<mailto:robertmhaas@gmail.com>>; 'Heikki Linnakangas' <hlinnaka@iki.fi<mailto:hlinnaka@iki.fi>>; 'Amit Langote'\r\n> <amitlangote09@gmail.com<mailto:amitlangote09@gmail.com>>\r\n> Subject: RE: [PoC] Non-volatile WAL buffer\r\n>\r\n> Dear hackers,\r\n>\r\n> I rebased my non-volatile WAL buffer's patchset onto master. A new v2 patchset is attached to this mail.\r\n>\r\n> I also measured performance before and after patchset, varying -c/--client and -j/--jobs options of pgbench, for\r\n> each scaling factor s = 50 or 1000. 
The results are presented in the following tables and the attached charts.\r\n> Conditions, steps, and other details will be shown later.\r\n>\r\n>\r\n> Results (s=50)\r\n> ==============\r\n> Throughput [10^3 TPS] Average latency [ms]\r\n> ( c, j) before after before after\r\n> ------- --------------------- ---------------------\r\n> ( 8, 8) 35.7 37.1 (+3.9%) 0.224 0.216 (-3.6%)\r\n> (18,18) 70.9 74.7 (+5.3%) 0.254 0.241 (-5.1%)\r\n> (36,18) 76.0 80.8 (+6.3%) 0.473 0.446 (-5.7%)\r\n> (54,18) 75.5 81.8 (+8.3%) 0.715 0.660 (-7.7%)\r\n>\r\n>\r\n> Results (s=1000)\r\n> ================\r\n> Throughput [10^3 TPS] Average latency [ms]\r\n> ( c, j) before after before after\r\n> ------- --------------------- ---------------------\r\n> ( 8, 8) 37.4 40.1 (+7.3%) 0.214 0.199 (-7.0%)\r\n> (18,18) 79.3 86.7 (+9.3%) 0.227 0.208 (-8.4%)\r\n> (36,18) 87.2 95.5 (+9.5%) 0.413 0.377 (-8.7%)\r\n> (54,18) 86.8 94.8 (+9.3%) 0.622 0.569 (-8.5%)\r\n>\r\n>\r\n> Both throughput and average latency are improved for each scaling factor. Throughput seemed to almost reach\r\n> the upper limit when (c,j)=(36,18).\r\n>\r\n> The percentage in s=1000 case looks larger than in s=50 case. I think larger scaling factor leads to less\r\n> contentions on the same tables and/or indexes, that is, less lock and unlock operations. 
In such a situation,\r\n> write-ahead logging appears to be more significant for performance.\r\n>\r\n>\r\n> Conditions\r\n> ==========\r\n> - Use one physical server having 2 NUMA nodes (node 0 and 1)\r\n> - Pin postgres (server processes) to node 0 and pgbench to node 1\r\n> - 18 cores and 192GiB DRAM per node\r\n> - Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for pg_wal\r\n> - Both are installed on the server-side node, that is, node 0\r\n> - Both are formatted with ext4\r\n> - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access (DAX)\r\n> - Use the attached postgresql.conf\r\n> - Two new items nvwal_path and nvwal_size are used only after patch\r\n>\r\n>\r\n> Steps\r\n> =====\r\n> For each (c,j) pair, I did the following steps three times then I found the median of the three as a final result shown\r\n> in the tables above.\r\n>\r\n> (1) Run initdb with proper -D and -X options; and also give --nvwal-path and --nvwal-size options after patch\r\n> (2) Start postgres and create a database for pgbench tables\r\n> (3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\r\n> (4) Stop postgres, remount filesystems, and start postgres again\r\n> (5) Execute pg_prewarm extension for all the four pgbench tables\r\n> (6) Run pgbench during 30 minutes\r\n>\r\n>\r\n> pgbench command line\r\n> ====================\r\n> $ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j ___ dbname\r\n>\r\n> I gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\r\n>\r\n>\r\n> Software\r\n> ========\r\n> - Distro: Ubuntu 18.04\r\n> - Kernel: Linux 5.4 (vanilla kernel)\r\n> - C Compiler: gcc 7.4.0\r\n> - PMDK: 1.7\r\n> - PostgreSQL: d677550 (master on Mar 3, 2020)\r\n>\r\n>\r\n> Hardware\r\n> ========\r\n> - System: HPE ProLiant DL380 Gen10\r\n> - CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\r\n> - DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\r\n> - NVDIMM-N: DDR4 2666MHz {16GiB/ch x 6ch}/socket x 
2sockets\r\n> - NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\r\n>\r\n>\r\n> Best regards,\r\n> Takashi\r\n>\r\n> --\r\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>> NTT Software Innovation Center\r\n>\r\n> > -----Original Message-----\r\n> > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>>\r\n> > Sent: Thursday, February 20, 2020 6:30 PM\r\n> > To: 'Amit Langote' <amitlangote09@gmail.com<mailto:amitlangote09@gmail.com>>\r\n> > Cc: 'Robert Haas' <robertmhaas@gmail.com<mailto:robertmhaas@gmail.com>>; 'Heikki Linnakangas' <hlinnaka@iki.fi<mailto:hlinnaka@iki.fi>>;\r\n> 'PostgreSQL-development'\r\n> > <pgsql-hackers@postgresql.org<mailto:pgsql-hackers@postgresql.org>>\r\n> > Subject: RE: [PoC] Non-volatile WAL buffer\r\n> >\r\n> > Dear Amit,\r\n> >\r\n> > Thank you for your advice. Exactly, it's so to speak \"do as the hackers do when in pgsql\"...\r\n> >\r\n> > I'm rebasing my branch onto master. 
I'll submit an updated patchset and performance report later.\r\n> >\r\n> > Best regards,\r\n> > Takashi\r\n> >\r\n> > --\r\n> > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>> NTT Software\r\n> > Innovation Center\r\n> >\r\n> > > -----Original Message-----\r\n> > > From: Amit Langote <amitlangote09@gmail.com<mailto:amitlangote09@gmail.com>>\r\n> > > Sent: Monday, February 17, 2020 5:21 PM\r\n> > > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>>\r\n> > > Cc: Robert Haas <robertmhaas@gmail.com<mailto:robertmhaas@gmail.com>>; Heikki Linnakangas\r\n> > > <hlinnaka@iki.fi<mailto:hlinnaka@iki.fi>>; PostgreSQL-development\r\n> > > <pgsql-hackers@postgresql.org<mailto:pgsql-hackers@postgresql.org>>\r\n> > > Subject: Re: [PoC] Non-volatile WAL buffer\r\n> > >\r\n> > > Hello,\r\n> > >\r\n> > > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp<mailto:takashi.menjou.vg@hco.ntt.co.jp>> wrote:\r\n> > > > Hello Amit,\r\n> > > >\r\n> > > > > I apologize for not having any opinion on the patches\r\n> > > > > themselves, but let me point out that it's better to base these\r\n> > > > > patches on HEAD (master branch) than REL_12_0, because all new\r\n> > > > > code is committed to the master branch, whereas stable branches\r\n> > > > > such as\r\n> > > > > REL_12_0 only receive bug fixes. Do you have any\r\n> > > specific reason to be working on REL_12_0?\r\n> > > >\r\n> > > > Yes, because I think it's human-friendly to reproduce and discuss\r\n> > > > performance measurement. Of course I know\r\n> > > all new accepted patches are merged into master's HEAD, not stable\r\n> > > branches and not even release tags, so I'm aware of rebasing my\r\n> > > patchset onto master sooner or later. 
However, if someone,\r\n> > > including me, says that s/he applies my patchset to \"master\" and\r\n> > > measures its performance, we have to pay attention to which commit the \"master\"\r\n> > > really points to. Although we have sha1 hashes to specify which\r\n> > > commit, we should check whether the specific commit on master has\r\n> > > patches affecting performance or not\r\n> > because master's HEAD gets new patches day by day. On the other hand,\r\n> > a release tag clearly points the commit all we probably know. Also we\r\n> > can check more easily the features and improvements by using release notes and user manuals.\r\n> > >\r\n> > > Thanks for clarifying. I see where you're coming from.\r\n> > >\r\n> > > While I do sometimes see people reporting numbers with the latest\r\n> > > stable release' branch, that's normally just one of the baselines.\r\n> > > The more important baseline for ongoing development is the master\r\n> > > branch's HEAD, which is also what people volunteering to test your\r\n> > > patches would use. Anyone who reports would have to give at least\r\n> > > two numbers -- performance with a branch's HEAD without patch\r\n> > > applied and that with patch applied -- which can be enough in most\r\n> > > cases to see the difference the patch makes. Sure, the numbers\r\n> > > might change on each report, but that's fine I'd think. If you\r\n> > > continue to develop against the stable branch, you might miss to\r\n> > notice impact from any relevant developments in the master branch,\r\n> > even developments which possibly require rethinking the architecture of your own changes, although maybe that\r\n> rarely occurs.\r\n> > >\r\n> > > Thanks,\r\n> > > Amit\r\n\r\n\r\n--\r\nTakashi Menjo <takashi.menjo@gmail.com<mailto:takashi.menjo@gmail.com>>
Throughput seemed to almost reach\r\n> the upper limit when (c,j)=(36,18).\r\n> \r\n> The percentage in s=1000 case looks larger than in s=50 case. I think larger scaling factor leads to less\r\n> contentions on the same tables and/or indexes, that is, less lock and unlock operations. In such a situation,\r\n> write-ahead logging appears to be more significant for performance.\r\n> \r\n> \r\n> Conditions\r\n> ==========\r\n> - Use one physical server having 2 NUMA nodes (node 0 and 1)\r\n> - Pin postgres (server processes) to node 0 and pgbench to node 1\r\n> - 18 cores and 192GiB DRAM per node\r\n> - Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for pg_wal\r\n> - Both are installed on the server-side node, that is, node 0\r\n> - Both are formatted with ext4\r\n> - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access (DAX)\r\n> - Use the attached postgresql.conf\r\n> - Two new items nvwal_path and nvwal_size are used only after patch\r\n> \r\n> \r\n> Steps\r\n> =====\r\n> For each (c,j) pair, I did the following steps three times then I found the median of the three as a final result shown\r\n> in the tables above.\r\n> \r\n> (1) Run initdb with proper -D and -X options; and also give --nvwal-path and --nvwal-size options after patch\r\n> (2) Start postgres and create a database for pgbench tables\r\n> (3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\r\n> (4) Stop postgres, remount filesystems, and start postgres again\r\n> (5) Execute pg_prewarm extension for all the four pgbench tables\r\n> (6) Run pgbench during 30 minutes\r\n> \r\n> \r\n> pgbench command line\r\n> ====================\r\n> $ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j ___ dbname\r\n> \r\n> I gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\r\n> \r\n> \r\n> Software\r\n> ========\r\n> - Distro: Ubuntu 18.04\r\n> - Kernel: Linux 5.4 (vanilla kernel)\r\n> - C Compiler: gcc 7.4.0\r\n> - PMDK: 1.7\r\n> - 
PostgreSQL: d677550 (master on Mar 3, 2020)\r\n> \r\n> \r\n> Hardware\r\n> ========\r\n> - System: HPE ProLiant DL380 Gen10\r\n> - CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\r\n> - DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\r\n> - NVDIMM-N: DDR4 2666MHz {16GiB/ch x 6ch}/socket x 2sockets\r\n> - NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\r\n> \r\n> \r\n> Best regards,\r\n> Takashi\r\n> \r\n> --\r\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation Center\r\n> \r\n> > -----Original Message-----\r\n> > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\r\n> > Sent: Thursday, February 20, 2020 6:30 PM\r\n> > To: 'Amit Langote' <amitlangote09@gmail.com>\r\n> > Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <hlinnaka@iki.fi>;\r\n> 'PostgreSQL-development'\r\n> > <pgsql-hackers@postgresql.org>\r\n> > Subject: RE: [PoC] Non-volatile WAL buffer\r\n> >\r\n> > Dear Amit,\r\n> >\r\n> > Thank you for your advice. Exactly, it's so to speak \"do as the hackers do when in pgsql\"...\r\n> >\r\n> > I'm rebasing my branch onto master. 
I'll submit an updated patchset and performance report later.\r\n> >\r\n> > Best regards,\r\n> > Takashi\r\n> >\r\n> > --\r\n> > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software\r\n> > Innovation Center\r\n> >\r\n> > > -----Original Message-----\r\n> > > From: Amit Langote <amitlangote09@gmail.com>\r\n> > > Sent: Monday, February 17, 2020 5:21 PM\r\n> > > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\r\n> > > Cc: Robert Haas <robertmhaas@gmail.com>; Heikki Linnakangas\r\n> > > <hlinnaka@iki.fi>; PostgreSQL-development\r\n> > > <pgsql-hackers@postgresql.org>\r\n> > > Subject: Re: [PoC] Non-volatile WAL buffer\r\n> > >\r\n> > > Hello,\r\n> > >\r\n> > > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> wrote:\r\n> > > > Hello Amit,\r\n> > > >\r\n> > > > > I apologize for not having any opinion on the patches\r\n> > > > > themselves, but let me point out that it's better to base these\r\n> > > > > patches on HEAD (master branch) than REL_12_0, because all new\r\n> > > > > code is committed to the master branch, whereas stable branches\r\n> > > > > such as\r\n> > > > > REL_12_0 only receive bug fixes. Do you have any\r\n> > > specific reason to be working on REL_12_0?\r\n> > > >\r\n> > > > Yes, because I think it's human-friendly to reproduce and discuss\r\n> > > > performance measurement. Of course I know\r\n> > > all new accepted patches are merged into master's HEAD, not stable\r\n> > > branches and not even release tags, so I'm aware of rebasing my\r\n> > > patchset onto master sooner or later. However, if someone,\r\n> > > including me, says that s/he applies my patchset to \"master\" and\r\n> > > measures its performance, we have to pay attention to which commit the \"master\"\r\n> > > really points to. 
Although we have sha1 hashes to specify which\r\n> > > commit, we should check whether the specific commit on master has\r\n> > > patches affecting performance or not\r\n> > because master's HEAD gets new patches day by day. On the other hand,\r\n> > a release tag clearly points the commit all we probably know. Also we\r\n> > can check more easily the features and improvements by using release notes and user manuals.\r\n> > >\r\n> > > Thanks for clarifying. I see where you're coming from.\r\n> > >\r\n> > > While I do sometimes see people reporting numbers with the latest\r\n> > > stable release' branch, that's normally just one of the baselines.\r\n> > > The more important baseline for ongoing development is the master\r\n> > > branch's HEAD, which is also what people volunteering to test your\r\n> > > patches would use. Anyone who reports would have to give at least\r\n> > > two numbers -- performance with a branch's HEAD without patch\r\n> > > applied and that with patch applied -- which can be enough in most\r\n> > > cases to see the difference the patch makes. Sure, the numbers\r\n> > > might change on each report, but that's fine I'd think. If you\r\n> > > continue to develop against the stable branch, you might miss to\r\n> > notice impact from any relevant developments in the master branch,\r\n> > even developments which possibly require rethinking the architecture of your own changes, although maybe that\r\n> rarely occurs.\r\n> > >\r\n> > > Thanks,\r\n> > > Amit\n\n\n\n\n\n \n\n-- \n\nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Mon, 21 Sep 2020 05:14:21 +0000",
"msg_from": "\"Deng, Gang\" <gang.deng@intel.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hello Gang,\n\nThank you for your report. I have not taken care of record size deeply yet,\nso your report is very interesting. I will also have a test like yours then\npost results here.\n\nRegards,\nTakashi\n\n\n2020年9月21日(月) 14:14 Deng, Gang <gang.deng@intel.com>:\n\n> Hi Takashi,\n>\n>\n>\n> Thank you for the patch and work on accelerating PG performance with NVM.\n> I applied the patch and made some performance test based on the patch v4. I\n> stored database data files on NVMe SSD and stored WAL file on Intel PMem\n> (NVM). I used two methods to store WAL file(s):\n>\n> 1. Leverage your patch to access PMem with libpmem (NVWAL patch).\n>\n> 2. Access PMem with legacy filesystem interface, that means use PMem\n> as ordinary block device, no PG patch is required to access PMem (Storage\n> over App Direct).\n>\n>\n>\n> I tried two insert scenarios:\n>\n> A. Insert small record (length of record to be inserted is 24 bytes),\n> I think it is similar as your test\n>\n> B. Insert large record (length of record to be inserted is 328 bytes)\n>\n>\n>\n> My original purpose is to see higher performance gain in scenario B as it\n> is more write intensive on WAL. But I observed that NVWAL patch method had\n> ~5% performance improvement compared with Storage over App Direct method in\n> scenario A, while had ~20% performance degradation in scenario B.\n>\n>\n>\n> I made further investigation on the test. I found that NVWAL patch can\n> improve performance of XlogFlush function, but it may impact performance of\n> CopyXlogRecordToWAL function. It may be related to the higher latency of\n> memcpy to Intel PMem comparing with DRAM. 
Here are key data in my test:\n>\n>\n>\n> Scenario A (length of record to be inserted: 24 bytes per record):\n>\n> ==============================\n>\n>\n> NVWAL SoAD\n>\n> ------------------------------------\n> ------- -------\n>\n> Throughput (10^3 TPS)\n> 310.5 296.0\n>\n> CPU Time % of CopyXlogRecordToWAL\n> 0.4 0.2\n>\n> CPU Time % of XLogInsertRecord\n> 1.5 0.8\n>\n> CPU Time % of XLogFlush\n> 2.1 9.6\n>\n>\n>\n> Scenario B (length of record to be inserted: 328 bytes per record):\n>\n> ==============================\n>\n>\n> NVWAL SoAD\n>\n> ------------------------------------\n> ------- -------\n>\n> Throughput (10^3 TPS)\n> 13.0 16.9\n>\n> CPU Time % of CopyXlogRecordToWAL\n> 3.0 1.6\n>\n> CPU Time % of XLogInsertRecord\n> 23.0 16.4\n>\n> CPU Time % of XLogFlush\n> 2.3 5.9\n>\n>\n>\n> Best Regards,\n>\n> Gang\n>\n>\n>\n> *From:* Takashi Menjo <takashi.menjo@gmail.com>\n> *Sent:* Thursday, September 10, 2020 4:01 PM\n> *To:* Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> *Cc:* pgsql-hackers@postgresql.org\n> *Subject:* Re: [PoC] Non-volatile WAL buffer\n>\n>\n>\n> Rebased.\n>\n>\n>\n>\n>\n> 2020年6月24日(水) 16:44 Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>:\n>\n> Dear hackers,\n>\n> I update my non-volatile WAL buffer's patchset to v3. Now we can use it\n> in streaming replication mode.\n>\n> Updates from v2:\n>\n> - walreceiver supports non-volatile WAL buffer\n> Now walreceiver stores received records directly to non-volatile WAL\n> buffer if applicable.\n>\n> - pg_basebackup supports non-volatile WAL buffer\n> Now pg_basebackup copies received WAL segments onto non-volatile WAL\n> buffer if you run it with \"nvwal\" mode (-Fn).\n> You should specify a new NVWAL path with --nvwal-path option. The path\n> will be written to postgresql.auto.conf or recovery.conf. 
The size of the\n> new NVWAL is same as the master's one.\n>\n>\n> Best regards,\n> Takashi\n>\n> --\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> NTT Software Innovation Center\n>\n> > -----Original Message-----\n> > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > Sent: Wednesday, March 18, 2020 5:59 PM\n> > To: 'PostgreSQL-development' <pgsql-hackers@postgresql.org>\n> > Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <\n> hlinnaka@iki.fi>; 'Amit Langote'\n> > <amitlangote09@gmail.com>\n> > Subject: RE: [PoC] Non-volatile WAL buffer\n> >\n> > Dear hackers,\n> >\n> > I rebased my non-volatile WAL buffer's patchset onto master. A new v2\n> patchset is attached to this mail.\n> >\n> > I also measured performance before and after patchset, varying\n> -c/--client and -j/--jobs options of pgbench, for\n> > each scaling factor s = 50 or 1000. The results are presented in the\n> following tables and the attached charts.\n> > Conditions, steps, and other details will be shown later.\n> >\n> >\n> > Results (s=50)\n> > ==============\n> > Throughput [10^3 TPS] Average latency [ms]\n> > ( c, j) before after before after\n> > ------- --------------------- ---------------------\n> > ( 8, 8) 35.7 37.1 (+3.9%) 0.224 0.216 (-3.6%)\n> > (18,18) 70.9 74.7 (+5.3%) 0.254 0.241 (-5.1%)\n> > (36,18) 76.0 80.8 (+6.3%) 0.473 0.446 (-5.7%)\n> > (54,18) 75.5 81.8 (+8.3%) 0.715 0.660 (-7.7%)\n> >\n> >\n> > Results (s=1000)\n> > ================\n> > Throughput [10^3 TPS] Average latency [ms]\n> > ( c, j) before after before after\n> > ------- --------------------- ---------------------\n> > ( 8, 8) 37.4 40.1 (+7.3%) 0.214 0.199 (-7.0%)\n> > (18,18) 79.3 86.7 (+9.3%) 0.227 0.208 (-8.4%)\n> > (36,18) 87.2 95.5 (+9.5%) 0.413 0.377 (-8.7%)\n> > (54,18) 86.8 94.8 (+9.3%) 0.622 0.569 (-8.5%)\n> >\n> >\n> > Both throughput and average latency are improved for each scaling\n> factor. 
Throughput seemed to almost reach\n> > the upper limit when (c,j)=(36,18).\n> >\n> > The percentage in s=1000 case looks larger than in s=50 case. I think\n> larger scaling factor leads to less\n> > contentions on the same tables and/or indexes, that is, less lock and\n> unlock operations. In such a situation,\n> > write-ahead logging appears to be more significant for performance.\n> >\n> >\n> > Conditions\n> > ==========\n> > - Use one physical server having 2 NUMA nodes (node 0 and 1)\n> > - Pin postgres (server processes) to node 0 and pgbench to node 1\n> > - 18 cores and 192GiB DRAM per node\n> > - Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for\n> pg_wal\n> > - Both are installed on the server-side node, that is, node 0\n> > - Both are formatted with ext4\n> > - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access\n> (DAX)\n> > - Use the attached postgresql.conf\n> > - Two new items nvwal_path and nvwal_size are used only after patch\n> >\n> >\n> > Steps\n> > =====\n> > For each (c,j) pair, I did the following steps three times then I found\n> the median of the three as a final result shown\n> > in the tables above.\n> >\n> > (1) Run initdb with proper -D and -X options; and also give --nvwal-path\n> and --nvwal-size options after patch\n> > (2) Start postgres and create a database for pgbench tables\n> > (3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\n> > (4) Stop postgres, remount filesystems, and start postgres again\n> > (5) Execute pg_prewarm extension for all the four pgbench tables\n> > (6) Run pgbench during 30 minutes\n> >\n> >\n> > pgbench command line\n> > ====================\n> > $ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j\n> ___ dbname\n> >\n> > I gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\n> >\n> >\n> > Software\n> > ========\n> > - Distro: Ubuntu 18.04\n> > - Kernel: Linux 5.4 (vanilla kernel)\n> > - C Compiler: gcc 7.4.0\n> > - PMDK: 1.7\n> 
> - PostgreSQL: d677550 (master on Mar 3, 2020)\n> >\n> >\n> > Hardware\n> > ========\n> > - System: HPE ProLiant DL380 Gen10\n> > - CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\n> > - DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\n> > - NVDIMM-N: DDR4 2666MHz {16GiB/ch x 6ch}/socket x 2sockets\n> > - NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\n> >\n> >\n> > Best regards,\n> > Takashi\n> >\n> > --\n> > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation\n> Center\n> >\n> > > -----Original Message-----\n> > > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > > Sent: Thursday, February 20, 2020 6:30 PM\n> > > To: 'Amit Langote' <amitlangote09@gmail.com>\n> > > Cc: 'Robert Haas' <robertmhaas@gmail.com>; 'Heikki Linnakangas' <\n> hlinnaka@iki.fi>;\n> > 'PostgreSQL-development'\n> > > <pgsql-hackers@postgresql.org>\n> > > Subject: RE: [PoC] Non-volatile WAL buffer\n> > >\n> > > Dear Amit,\n> > >\n> > > Thank you for your advice. Exactly, it's so to speak \"do as the\n> hackers do when in pgsql\"...\n> > >\n> > > I'm rebasing my branch onto master. 
I'll submit an updated patchset\n> and performance report later.\n> > >\n> > > Best regards,\n> > > Takashi\n> > >\n> > > --\n> > > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software\n> > > Innovation Center\n> > >\n> > > > -----Original Message-----\n> > > > From: Amit Langote <amitlangote09@gmail.com>\n> > > > Sent: Monday, February 17, 2020 5:21 PM\n> > > > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> > > > Cc: Robert Haas <robertmhaas@gmail.com>; Heikki Linnakangas\n> > > > <hlinnaka@iki.fi>; PostgreSQL-development\n> > > > <pgsql-hackers@postgresql.org>\n> > > > Subject: Re: [PoC] Non-volatile WAL buffer\n> > > >\n> > > > Hello,\n> > > >\n> > > > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <\n> takashi.menjou.vg@hco.ntt.co.jp> wrote:\n> > > > > Hello Amit,\n> > > > >\n> > > > > > I apologize for not having any opinion on the patches\n> > > > > > themselves, but let me point out that it's better to base these\n> > > > > > patches on HEAD (master branch) than REL_12_0, because all new\n> > > > > > code is committed to the master branch, whereas stable branches\n> > > > > > such as\n> > > > > > REL_12_0 only receive bug fixes. Do you have any\n> > > > specific reason to be working on REL_12_0?\n> > > > >\n> > > > > Yes, because I think it's human-friendly to reproduce and discuss\n> > > > > performance measurement. Of course I know\n> > > > all new accepted patches are merged into master's HEAD, not stable\n> > > > branches and not even release tags, so I'm aware of rebasing my\n> > > > patchset onto master sooner or later. However, if someone,\n> > > > including me, says that s/he applies my patchset to \"master\" and\n> > > > measures its performance, we have to pay attention to which commit\n> the \"master\"\n> > > > really points to. 
Although we have sha1 hashes to specify which\n> > > > commit, we should check whether the specific commit on master has\n> > > > patches affecting performance or not\n> > > because master's HEAD gets new patches day by day. On the other hand,\n> > > a release tag clearly points the commit all we probably know. Also we\n> > > can check more easily the features and improvements by using release\n> notes and user manuals.\n> > > >\n> > > > Thanks for clarifying. I see where you're coming from.\n> > > >\n> > > > While I do sometimes see people reporting numbers with the latest\n> > > > stable release' branch, that's normally just one of the baselines.\n> > > > The more important baseline for ongoing development is the master\n> > > > branch's HEAD, which is also what people volunteering to test your\n> > > > patches would use. Anyone who reports would have to give at least\n> > > > two numbers -- performance with a branch's HEAD without patch\n> > > > applied and that with patch applied -- which can be enough in most\n> > > > cases to see the difference the patch makes. Sure, the numbers\n> > > > might change on each report, but that's fine I'd think. If you\n> > > > continue to develop against the stable branch, you might miss to\n> > > notice impact from any relevant developments in the master branch,\n> > > even developments which possibly require rethinking the architecture\n> of your own changes, although maybe that\n> > rarely occurs.\n> > > >\n> > > > Thanks,\n> > > > Amit\n>\n>\n>\n>\n> --\n>\n> Takashi Menjo <takashi.menjo@gmail.com>\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Thu, 24 Sep 2020 02:37:56 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Gang,\n\nI have tried but cannot yet reproduce the performance degradation you reported when inserting 328-byte records. So I think our conditions differ, such as the steps to reproduce, postgresql.conf, installation setup, and so on.\n\nMy results and conditions are as follows. May I have your conditions in more detail? Note that I refer to your \"Storage over App Direct\" as my \"Original (PMEM)\" and to your \"NVWAL patch\" as my \"Non-volatile WAL buffer.\"\n\nBest regards,\nTakashi\n\n\n# Results\nSee the attached figure. In short, Non-volatile WAL buffer got better performance than Original (PMEM).\n\n# Steps\nNote that I ran the postgres server and pgbench on a single machine but on two separate NUMA nodes. The PMEM and PCIe SSD for the server process are on the server-side NUMA node.\n\n01) Create a PMEM namespace (sudo ndctl create-namespace -f -t pmem -m fsdax -M dev -e namespace0.0)\n02) Make an ext4 filesystem for PMEM then mount it with DAX option (sudo mkfs.ext4 -q -F /dev/pmem0 ; sudo mount -o dax /dev/pmem0 /mnt/pmem0)\n03) Make another ext4 filesystem for PCIe SSD then mount it (sudo mkfs.ext4 -q -F /dev/nvme0n1 ; sudo mount /dev/nvme0n1 /mnt/nvme0n1)\n04) Make /mnt/pmem0/pg_wal directory for WAL\n05) Make /mnt/nvme0n1/pgdata directory for PGDATA\n06) Run initdb (initdb --locale=C --encoding=UTF8 -X /mnt/pmem0/pg_wal ...)\n - Also give -P /mnt/pmem0/pg_wal/nvwal -Q 81920 in the case of Non-volatile WAL buffer\n07) Edit postgresql.conf as the attached one\n - Please remove nvwal_* lines in the case of Original (PMEM)\n08) Start postgres server process on NUMA node 0 (numactl -N 0 -m 0 -- pg_ctl -l pg.log start)\n09) Create a database (createdb --locale=C --encoding=UTF8)\n10) Initialize pgbench tables with s=50 (pgbench -i -s 50)\n11) Change the number of characters of the \"filler\" column of the \"pgbench_history\" table to 300 (ALTER TABLE pgbench_history ALTER filler TYPE character(300);)\n - This would make the row size of the table 328 
bytes\n12) Stop the postgres server process (pg_ctl -l pg.log -m smart stop)\n13) Remount the PMEM and the PCIe SSD\n14) Start postgres server process on NUMA node 0 again (numactl -N 0 -m 0 -- pg_ctl -l pg.log start)\n15) Run pg_prewarm for all the four pgbench_* tables\n16) Run pgbench on NUMA node 1 for 30 minutes (numactl -N 1 -m 1 -- pgbench -r -M prepared -T 1800 -c __ -j __)\n - It executes the default tpcb-like transactions\n\nI repeated all the steps three times for each (c,j) then got the median \"tps = __ (including connections establishing)\" of the three as throughput and the \"latency average = __ ms \" of that time as average latency.\n\n# Environment variables\nexport PGHOST=/tmp\nexport PGPORT=5432\nexport PGDATABASE=\"$USER\"\nexport PGUSER=\"$USER\"\nexport PGDATA=/mnt/nvme0n1/pgdata\n\n# Setup\n- System: HPE ProLiant DL380 Gen10\n- CPU: Intel Xeon Gold 6240M x2 sockets (18 cores per socket; HT disabled by BIOS)\n- DRAM: DDR4 2933MHz 192GiB/socket x2 sockets (32 GiB per channel x 6 channels per socket)\n- Optane PMem: Apache Pass, AppDirect Mode, DDR4 2666MHz 1.5TiB/socket x2 sockets (256 GiB per channel x 6 channels per socket; interleaving enabled)\n- PCIe SSD: DC P4800X Series SSDPED1K750GA\n- Distro: Ubuntu 20.04.1\n- C compiler: gcc 9.3.0\n- libc: glibc 2.31\n- Linux kernel: 5.7 (vanilla)\n- Filesystem: ext4 (DAX enabled when using Optane PMem)\n- PMDK: 1.9\n- PostgreSQL (Original): 14devel (200f610: Jul 26, 2020)\n- PostgreSQL (Non-volatile WAL buffer): 14devel (200f610: Jul 26, 2020) + non-volatile WAL buffer patchset v4\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> -----Original Message-----\n> From: Takashi Menjo <takashi.menjo@gmail.com>\n> Sent: Thursday, September 24, 2020 2:38 AM\n> To: Deng, Gang <gang.deng@intel.com>\n> Cc: pgsql-hackers@postgresql.org; Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Subject: Re: [PoC] Non-volatile WAL buffer\n> \n> Hello Gang,\n> \n> Thank you for 
your report. I have not taken care of record size deeply yet, so your report is very interesting. I will\n> also run a test like yours and then post results here.\n> \n> Regards,\n> Takashi\n> \n> \n> On Mon, Sep 21, 2020 at 14:14, Deng, Gang <gang.deng@intel.com <mailto:gang.deng@intel.com> > wrote:\n> \n> \n> \tHi Takashi,\n> \n> \n> \n> \tThank you for the patch and work on accelerating PG performance with NVM. I applied the patch and ran\n> some performance tests based on the patch v4. I stored database data files on NVMe SSD and stored WAL file on\n> Intel PMem (NVM). I used two methods to store WAL file(s):\n> \n> \t1. Leverage your patch to access PMem with libpmem (NVWAL patch).\n> \n> \t2. Access PMem with the legacy filesystem interface, that is, use PMem as an ordinary block device, no\n> PG patch is required to access PMem (Storage over App Direct).\n> \n> \n> \n> \tI tried two insert scenarios:\n> \n> \tA. Insert small record (length of record to be inserted is 24 bytes), I think it is similar to your test\n> \n> \tB. Insert large record (length of record to be inserted is 328 bytes)\n> \n> \n> \n> \tMy original purpose is to see higher performance gain in scenario B as it is more write intensive on WAL.\n> But I observed that the NVWAL patch method had ~5% performance improvement compared with the Storage over App\n> Direct method in scenario A, while it had ~20% performance degradation in scenario B.\n> \n> \n> \n> \tI investigated the test further. I found that the NVWAL patch can improve performance of the XlogFlush\n> function, but it may impact performance of the CopyXlogRecordToWAL function. It may be related to the higher\n> latency of memcpy to Intel PMem compared with DRAM. 
Here are key data in my test:\n> \n> \n> \n> \tScenario A (length of record to be inserted: 24 bytes per record):\n> \n> \t==============================\n> \t                                     NVWAL     SoAD\n> \t------------------------------------ -------  -------\n> \tThroughput (10^3 TPS)                 310.5    296.0\n> \tCPU Time % of CopyXlogRecordToWAL       0.4      0.2\n> \tCPU Time % of XLogInsertRecord          1.5      0.8\n> \tCPU Time % of XLogFlush                 2.1      9.6\n> \n> \n> \n> \tScenario B (length of record to be inserted: 328 bytes per record):\n> \n> \t==============================\n> \t                                     NVWAL     SoAD\n> \t------------------------------------ -------  -------\n> \tThroughput (10^3 TPS)                  13.0     16.9\n> \tCPU Time % of CopyXlogRecordToWAL       3.0      1.6\n> \tCPU Time % of XLogInsertRecord         23.0     16.4\n> \tCPU Time % of XLogFlush                 2.3      5.9\n> \n> \n> \n> \tBest Regards,\n> \n> \tGang\n> \n> \n> \n> \tFrom: Takashi Menjo <takashi.menjo@gmail.com <mailto:takashi.menjo@gmail.com> >\n> \tSent: Thursday, September 10, 2020 4:01 PM\n> \tTo: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> \tCc: pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org>\n> \tSubject: Re: [PoC] Non-volatile WAL buffer\n> \n> \n> \n> \tRebased.\n> \n> \n> \n> \n> \n> \tOn Wed, Jun 24, 2020 at 16:44, Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> > wrote:\n> \n> \t\tDear hackers,\n> \n> \t\tI update my non-volatile WAL buffer's patchset to v3. Now we can use it in streaming replication\n> mode.\n> \n> \t\tUpdates from v2:\n> \n> \t\t- walreceiver supports non-volatile WAL buffer\n> \t\tNow walreceiver stores received records directly to non-volatile WAL buffer if applicable.\n> \n> \t\t- pg_basebackup supports non-volatile WAL buffer\n> \t\tNow pg_basebackup copies received WAL segments onto non-volatile WAL buffer if you run it with\n> \"nvwal\" mode (-Fn).\n> \t\tYou should specify a new NVWAL path with --nvwal-path option. 
The path will be written to\n> postgresql.auto.conf or recovery.conf. The size of the new NVWAL is same as the master's one.\n> \n> \n> \t\tBest regards,\n> \t\tTakashi\n> \n> \t\t--\n> \t\tTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> \t\tNTT Software Innovation Center\n> \n> \t\t> -----Original Message-----\n> \t\t> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> \t\t> Sent: Wednesday, March 18, 2020 5:59 PM\n> \t\t> To: 'PostgreSQL-development' <pgsql-hackers@postgresql.org\n> <mailto:pgsql-hackers@postgresql.org> >\n> \t\t> Cc: 'Robert Haas' <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com> >; 'Heikki\n> Linnakangas' <hlinnaka@iki.fi <mailto:hlinnaka@iki.fi> >; 'Amit Langote'\n> \t\t> <amitlangote09@gmail.com <mailto:amitlangote09@gmail.com> >\n> \t\t> Subject: RE: [PoC] Non-volatile WAL buffer\n> \t\t>\n> \t\t> Dear hackers,\n> \t\t>\n> \t\t> I rebased my non-volatile WAL buffer's patchset onto master. A new v2 patchset is attached\n> to this mail.\n> \t\t>\n> \t\t> I also measured performance before and after patchset, varying -c/--client and -j/--jobs\n> options of pgbench, for\n> \t\t> each scaling factor s = 50 or 1000. 
The results are presented in the following tables and the\n> attached charts.\n> \t\t> Conditions, steps, and other details will be shown later.\n> \t\t>\n> \t\t>\n> \t\t> Results (s=50)\n> \t\t> ==============\n> \t\t> Throughput [10^3 TPS] Average latency [ms]\n> \t\t> ( c, j) before after before after\n> \t\t> ------- --------------------- ---------------------\n> \t\t> ( 8, 8) 35.7 37.1 (+3.9%) 0.224 0.216 (-3.6%)\n> \t\t> (18,18) 70.9 74.7 (+5.3%) 0.254 0.241 (-5.1%)\n> \t\t> (36,18) 76.0 80.8 (+6.3%) 0.473 0.446 (-5.7%)\n> \t\t> (54,18) 75.5 81.8 (+8.3%) 0.715 0.660 (-7.7%)\n> \t\t>\n> \t\t>\n> \t\t> Results (s=1000)\n> \t\t> ================\n> \t\t> Throughput [10^3 TPS] Average latency [ms]\n> \t\t> ( c, j) before after before after\n> \t\t> ------- --------------------- ---------------------\n> \t\t> ( 8, 8) 37.4 40.1 (+7.3%) 0.214 0.199 (-7.0%)\n> \t\t> (18,18) 79.3 86.7 (+9.3%) 0.227 0.208 (-8.4%)\n> \t\t> (36,18) 87.2 95.5 (+9.5%) 0.413 0.377 (-8.7%)\n> \t\t> (54,18) 86.8 94.8 (+9.3%) 0.622 0.569 (-8.5%)\n> \t\t>\n> \t\t>\n> \t\t> Both throughput and average latency are improved for each scaling factor. Throughput seemed\n> to almost reach\n> \t\t> the upper limit when (c,j)=(36,18).\n> \t\t>\n> \t\t> The percentage in s=1000 case looks larger than in s=50 case. I think larger scaling factor\n> leads to less\n> \t\t> contentions on the same tables and/or indexes, that is, less lock and unlock operations. 
In such\n> a situation,\n> \t\t> write-ahead logging appears to be more significant for performance.\n> \t\t>\n> \t\t>\n> \t\t> Conditions\n> \t\t> ==========\n> \t\t> - Use one physical server having 2 NUMA nodes (node 0 and 1)\n> \t\t> - Pin postgres (server processes) to node 0 and pgbench to node 1\n> \t\t> - 18 cores and 192GiB DRAM per node\n> \t\t> - Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for pg_wal\n> \t\t> - Both are installed on the server-side node, that is, node 0\n> \t\t> - Both are formatted with ext4\n> \t\t> - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access (DAX)\n> \t\t> - Use the attached postgresql.conf\n> \t\t> - Two new items nvwal_path and nvwal_size are used only after patch\n> \t\t>\n> \t\t>\n> \t\t> Steps\n> \t\t> =====\n> \t\t> For each (c,j) pair, I did the following steps three times then I found the median of the three as\n> a final result shown\n> \t\t> in the tables above.\n> \t\t>\n> \t\t> (1) Run initdb with proper -D and -X options; and also give --nvwal-path and --nvwal-size\n> options after patch\n> \t\t> (2) Start postgres and create a database for pgbench tables\n> \t\t> (3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\n> \t\t> (4) Stop postgres, remount filesystems, and start postgres again\n> \t\t> (5) Execute pg_prewarm extension for all the four pgbench tables\n> \t\t> (6) Run pgbench during 30 minutes\n> \t\t>\n> \t\t>\n> \t\t> pgbench command line\n> \t\t> ====================\n> \t\t> $ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j ___ dbname\n> \t\t>\n> \t\t> I gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\n> \t\t>\n> \t\t>\n> \t\t> Software\n> \t\t> ========\n> \t\t> - Distro: Ubuntu 18.04\n> \t\t> - Kernel: Linux 5.4 (vanilla kernel)\n> \t\t> - C Compiler: gcc 7.4.0\n> \t\t> - PMDK: 1.7\n> \t\t> - PostgreSQL: d677550 (master on Mar 3, 2020)\n> \t\t>\n> \t\t>\n> \t\t> Hardware\n> \t\t> ========\n> \t\t> - System: HPE 
ProLiant DL380 Gen10\n> \t\t> - CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\n> \t\t> - DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\n> \t\t> - NVDIMM-N: DDR4 2666MHz {16GiB/ch x 6ch}/socket x 2sockets\n> \t\t> - NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\n> \t\t>\n> \t\t>\n> \t\t> Best regards,\n> \t\t> Takashi\n> \t\t>\n> \t\t> --\n> \t\t> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> NTT Software Innovation Center\n> \t\t>\n> \t\t> > -----Original Message-----\n> \t\t> > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> \t\t> > Sent: Thursday, February 20, 2020 6:30 PM\n> \t\t> > To: 'Amit Langote' <amitlangote09@gmail.com <mailto:amitlangote09@gmail.com> >\n> \t\t> > Cc: 'Robert Haas' <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com> >; 'Heikki\n> Linnakangas' <hlinnaka@iki.fi <mailto:hlinnaka@iki.fi> >;\n> \t\t> 'PostgreSQL-development'\n> \t\t> > <pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org> >\n> \t\t> > Subject: RE: [PoC] Non-volatile WAL buffer\n> \t\t> >\n> \t\t> > Dear Amit,\n> \t\t> >\n> \t\t> > Thank you for your advice. Exactly, it's so to speak \"do as the hackers do when in pgsql\"...\n> \t\t> >\n> \t\t> > I'm rebasing my branch onto master. 
I'll submit an updated patchset and performance report\n> later.\n> \t\t> >\n> \t\t> > Best regards,\n> \t\t> > Takashi\n> \t\t> >\n> \t\t> > --\n> \t\t> > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp>\n> > NTT Software\n> \t\t> > Innovation Center\n> \t\t> >\n> \t\t> > > -----Original Message-----\n> \t\t> > > From: Amit Langote <amitlangote09@gmail.com <mailto:amitlangote09@gmail.com> >\n> \t\t> > > Sent: Monday, February 17, 2020 5:21 PM\n> \t\t> > > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> \t\t> > > Cc: Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com> >; Heikki\n> Linnakangas\n> \t\t> > > <hlinnaka@iki.fi <mailto:hlinnaka@iki.fi> >; PostgreSQL-development\n> \t\t> > > <pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org> >\n> \t\t> > > Subject: Re: [PoC] Non-volatile WAL buffer\n> \t\t> > >\n> \t\t> > > Hello,\n> \t\t> > >\n> \t\t> > > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> > wrote:\n> \t\t> > > > Hello Amit,\n> \t\t> > > >\n> \t\t> > > > > I apologize for not having any opinion on the patches\n> \t\t> > > > > themselves, but let me point out that it's better to base these\n> \t\t> > > > > patches on HEAD (master branch) than REL_12_0, because all new\n> \t\t> > > > > code is committed to the master branch, whereas stable branches\n> \t\t> > > > > such as\n> \t\t> > > > > REL_12_0 only receive bug fixes. Do you have any\n> \t\t> > > specific reason to be working on REL_12_0?\n> \t\t> > > >\n> \t\t> > > > Yes, because I think it's human-friendly to reproduce and discuss\n> \t\t> > > > performance measurement. Of course I know\n> \t\t> > > all new accepted patches are merged into master's HEAD, not stable\n> \t\t> > > branches and not even release tags, so I'm aware of rebasing my\n> \t\t> > > patchset onto master sooner or later. 
However, if someone,\n> \t\t> > > including me, says that s/he applies my patchset to \"master\" and\n> \t\t> > > measures its performance, we have to pay attention to which commit the \"master\"\n> \t\t> > > really points to. Although we have sha1 hashes to specify which\n> \t\t> > > commit, we should check whether the specific commit on master has\n> \t\t> > > patches affecting performance or not\n> \t\t> > because master's HEAD gets new patches day by day. On the other hand,\n> \t\t> > a release tag clearly points the commit all we probably know. Also we\n> \t\t> > can check more easily the features and improvements by using release notes and user\n> manuals.\n> \t\t> > >\n> \t\t> > > Thanks for clarifying. I see where you're coming from.\n> \t\t> > >\n> \t\t> > > While I do sometimes see people reporting numbers with the latest\n> \t\t> > > stable release' branch, that's normally just one of the baselines.\n> \t\t> > > The more important baseline for ongoing development is the master\n> \t\t> > > branch's HEAD, which is also what people volunteering to test your\n> \t\t> > > patches would use. Anyone who reports would have to give at least\n> \t\t> > > two numbers -- performance with a branch's HEAD without patch\n> \t\t> > > applied and that with patch applied -- which can be enough in most\n> \t\t> > > cases to see the difference the patch makes. Sure, the numbers\n> \t\t> > > might change on each report, but that's fine I'd think. 
If you\n> \t\t> > > continue to develop against the stable branch, you might miss to\n> \t\t> > notice impact from any relevant developments in the master branch,\n> \t\t> > even developments which possibly require rethinking the architecture of your own changes,\n> although maybe that\n> \t\t> rarely occurs.\n> \t\t> > >\n> \t\t> > > Thanks,\n> \t\t> > > Amit\n> \n> \n> \n> \n> \n> \n> \t--\n> \n> \tTakashi Menjo <takashi.menjo@gmail.com <mailto:takashi.menjo@gmail.com> >\n> \n> \n> \n> --\n> \n> Takashi Menjo <takashi.menjo@gmail.com <mailto:takashi.menjo@gmail.com> >",
"msg_date": "Tue, 06 Oct 2020 17:49:13 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Takashi,\r\n\r\nThere are some differences between our HW/SW configurations and test steps. I attached the postgresql.conf I used, for your reference. I would like to try the postgresql.conf and steps you provided in the coming days to see if I can find the cause.\r\n\r\nI also ran pgbench and the postgres server on the same machine but on different NUMA nodes, and ensured the server process and PMEM were on the same NUMA node. I used steps similar to yours from steps 1 to 9, but with some differences in the later steps; the major ones are:\r\n\r\nIn step 10), I created a database and table for the test by:\r\n#create database:\r\npsql -c \"create database insert_bench;\"\r\n#create table:\r\npsql -d insert_bench -c \"create table test(crt_time timestamp, info text default '75feba6d5ca9ff65d09af35a67fe962a4e3fa5ef279f94df6696bee65f4529a4bbb03ae56c3b5b86c22b447fc48da894740ed1a9d518a9646b3a751a57acaca1142ccfc945b1082b40043e3f83f8b7605b5a55fcd7eb8fc1d0475c7fe465477da47d96957849327731ae76322f440d167725d2e2bbb60313150a4f69d9a8c9e86f9d79a742e7a35bf159f670e54413fb89ff81b8e5e8ab215c3ddfd00bb6aeb4');\"\r\n\r\nIn step 15), I did not use pg_prewarm, but just ran pgbench for 180 seconds to warm up.\r\nIn step 16), I ran pgbench using the command: pgbench -M prepared -n -r -P 10 -f ./test.sql -T 600 -c _ -j _ insert_bench. (test.sql can be found in the attachment)\r\n\r\nFor the HW/SW configuration, the major differences are:\r\nCPU: I used Xeon 8268 (24c@2.9GHz, HT enabled)\r\nOS Distro: CentOS 8.2.2004\r\nKernel: 4.18.0-193.6.3.el8_2.x86_64\r\nGCC: 8.3.1\r\n\r\nBest regards,\r\nGang\r\n\r\n-----Original Message-----\r\nFrom: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> \r\nSent: Tuesday, October 6, 2020 4:49 PM\r\nTo: Deng, Gang <gang.deng@intel.com>\r\nCc: pgsql-hackers@postgresql.org; 'Takashi Menjo' <takashi.menjo@gmail.com>\r\nSubject: RE: [PoC] Non-volatile WAL buffer\r\n\r\nHi Gang,\r\n\r\nI have tried but cannot yet reproduce the performance degradation you reported when inserting 328-byte records. 
So I think the condition of you and me would be different, such as steps to reproduce, postgresql.conf, installation setup, and so on.\r\n\r\nMy results and condition are as follows. May I have your condition in more detail? Note that I refer to your \"Storage over App Direct\" as my \"Original (PMEM)\" and \"NVWAL patch\" to \"Non-volatile WAL buffer.\"\r\n\r\nBest regards,\r\nTakashi\r\n\r\n\r\n# Results\r\nSee the attached figure. In short, Non-volatile WAL buffer got better performance than Original (PMEM).\r\n\r\n# Steps\r\nNote that I ran postgres server and pgbench in a single-machine system but separated two NUMA nodes. PMEM and PCI SSD for the server process are on the server-side NUMA node.\r\n\r\n01) Create a PMEM namespace (sudo ndctl create-namespace -f -t pmem -m fsdax -M dev -e namespace0.0)\r\n02) Make an ext4 filesystem for PMEM then mount it with DAX option (sudo mkfs.ext4 -q -F /dev/pmem0 ; sudo mount -o dax /dev/pmem0 /mnt/pmem0)\r\n03) Make another ext4 filesystem for PCIe SSD then mount it (sudo mkfs.ext4 -q -F /dev/nvme0n1 ; sudo mount /dev/nvme0n1 /mnt/nvme0n1)\r\n04) Make /mnt/pmem0/pg_wal directory for WAL\r\n05) Make /mnt/nvme0n1/pgdata directory for PGDATA\r\n06) Run initdb (initdb --locale=C --encoding=UTF8 -X /mnt/pmem0/pg_wal ...)\r\n - Also give -P /mnt/pmem0/pg_wal/nvwal -Q 81920 in the case of Non-volatile WAL buffer\r\n07) Edit postgresql.conf as the attached one\r\n - Please remove nvwal_* lines in the case of Original (PMEM)\r\n08) Start postgres server process on NUMA node 0 (numactl -N 0 -m 0 -- pg_ctl -l pg.log start)\r\n09) Create a database (createdb --locale=C --encoding=UTF8)\r\n10) Initialize pgbench tables with s=50 (pgbench -i -s 50)\r\n11) Change # characters of \"filler\" column of \"pgbench_history\" table to 300 (ALTER TABLE pgbench_history ALTER filler TYPE character(300);)\r\n - This would make the row size of the table 328 bytes\r\n12) Stop the postgres server process (pg_ctl -l pg.log -m smart stop)\r\n13) 
Remount the PMEM and the PCIe SSD\r\n14) Start postgres server process on NUMA node 0 again (numactl -N 0 -m 0 -- pg_ctl -l pg.log start)\r\n15) Run pg_prewarm for all the four pgbench_* tables\r\n16) Run pgbench on NUMA node 1 for 30 minutes (numactl -N 1 -m 1 -- pgbench -r -M prepared -T 1800 -c __ -j __)\r\n - It executes the default tpcb-like transactions\r\n\r\nI repeated all the steps three times for each (c,j) then got the median \"tps = __ (including connections establishing)\" of the three as throughput and the \"latency average = __ ms \" of that time as average latency.\r\n\r\n# Environment variables\r\nexport PGHOST=/tmp\r\nexport PGPORT=5432\r\nexport PGDATABASE=\"$USER\"\r\nexport PGUSER=\"$USER\"\r\nexport PGDATA=/mnt/nvme0n1/pgdata\r\n\r\n# Setup\r\n- System: HPE ProLiant DL380 Gen10\r\n- CPU: Intel Xeon Gold 6240M x2 sockets (18 cores per socket; HT disabled by BIOS)\r\n- DRAM: DDR4 2933MHz 192GiB/socket x2 sockets (32 GiB per channel x 6 channels per socket)\r\n- Optane PMem: Apache Pass, AppDirect Mode, DDR4 2666MHz 1.5TiB/socket x2 sockets (256 GiB per channel x 6 channels per socket; interleaving enabled)\r\n- PCIe SSD: DC P4800X Series SSDPED1K750GA\r\n- Distro: Ubuntu 20.04.1\r\n- C compiler: gcc 9.3.0\r\n- libc: glibc 2.31\r\n- Linux kernel: 5.7 (vanilla)\r\n- Filesystem: ext4 (DAX enabled when using Optane PMem)\r\n- PMDK: 1.9\r\n- PostgreSQL (Original): 14devel (200f610: Jul 26, 2020)\r\n- PostgreSQL (Non-volatile WAL buffer): 14devel (200f610: Jul 26, 2020) + non-volatile WAL buffer patchset v4\r\n\r\n--\r\nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation Center\r\n\r\n> -----Original Message-----\r\n> From: Takashi Menjo <takashi.menjo@gmail.com>\r\n> Sent: Thursday, September 24, 2020 2:38 AM\r\n> To: Deng, Gang <gang.deng@intel.com>\r\n> Cc: pgsql-hackers@postgresql.org; Takashi Menjo \r\n> <takashi.menjou.vg@hco.ntt.co.jp>\r\n> Subject: Re: [PoC] Non-volatile WAL buffer\r\n> \r\n> Hello Gang,\r\n> \r\n> Thank 
you for your report. I have not taken care of record size deeply \r\n> yet, so your report is very interesting. I will also have a test like yours then post results here.\r\n> \r\n> Regards,\r\n> Takashi\r\n> \r\n> \r\n> 2020年9月21日(月) 14:14 Deng, Gang <gang.deng@intel.com <mailto:gang.deng@intel.com> >:\r\n> \r\n> \r\n> \tHi Takashi,\r\n> \r\n> \r\n> \r\n> \tThank you for the patch and work on accelerating PG performance with \r\n> NVM. I applied the patch and made some performance test based on the \r\n> patch v4. I stored database data files on NVMe SSD and stored WAL file on Intel PMem (NVM). I used two methods to store WAL file(s):\r\n> \r\n> \t1. Leverage your patch to access PMem with libpmem (NVWAL patch).\r\n> \r\n> \t2. Access PMem with legacy filesystem interface, that means use PMem as ordinary block device, no\r\n> PG patch is required to access PMem (Storage over App Direct).\r\n> \r\n> \r\n> \r\n> \tI tried two insert scenarios:\r\n> \r\n> \tA. Insert small record (length of record to be inserted is 24 bytes), I think it is similar as your test\r\n> \r\n> \tB. Insert large record (length of record to be inserted is 328 bytes)\r\n> \r\n> \r\n> \r\n> \tMy original purpose is to see higher performance gain in scenario B as it is more write intensive on WAL.\r\n> But I observed that NVWAL patch method had ~5% performance improvement \r\n> compared with Storage over App Direct method in scenario A, while had ~20% performance degradation in scenario B.\r\n> \r\n> \r\n> \r\n> \tI made further investigation on the test. I found that NVWAL patch \r\n> can improve performance of XlogFlush function, but it may impact \r\n> performance of CopyXlogRecordToWAL function. It may be related to the higher latency of memcpy to Intel PMem comparing with DRAM. 
Here are key data in my test:\r\n> \r\n> \r\n> \r\n> \tScenario A (length of record to be inserted: 24 bytes per record):\r\n> \r\n> \t==============================\r\n> \t                                     NVWAL     SoAD\r\n> \t------------------------------------ -------  -------\r\n> \tThroughput (10^3 TPS)                 310.5    296.0\r\n> \tCPU Time % of CopyXlogRecordToWAL       0.4      0.2\r\n> \tCPU Time % of XLogInsertRecord          1.5      0.8\r\n> \tCPU Time % of XLogFlush                 2.1      9.6\r\n> \r\n> \r\n> \r\n> \tScenario B (length of record to be inserted: 328 bytes per record):\r\n> \r\n> \t==============================\r\n> \t                                     NVWAL     SoAD\r\n> \t------------------------------------ -------  -------\r\n> \tThroughput (10^3 TPS)                  13.0     16.9\r\n> \tCPU Time % of CopyXlogRecordToWAL       3.0      1.6\r\n> \tCPU Time % of XLogInsertRecord         23.0     16.4\r\n> \tCPU Time % of XLogFlush                 2.3      5.9\r\n> \r\n> \r\n> \r\n> \tBest Regards,\r\n> \r\n> \tGang\r\n> \r\n> \r\n> \r\n> \tFrom: Takashi Menjo <takashi.menjo@gmail.com <mailto:takashi.menjo@gmail.com> >\r\n> \tSent: Thursday, September 10, 2020 4:01 PM\r\n> \tTo: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\r\n> \tCc: pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org>\r\n> \tSubject: Re: [PoC] Non-volatile WAL buffer\r\n> \r\n> \r\n> \r\n> \tRebased.\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \tOn Wed, Jun 24, 2020 at 16:44, Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp \r\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> > wrote:\r\n> \r\n> \t\tDear hackers,\r\n> \r\n> \t\tI update my non-volatile WAL buffer's patchset to v3. 
Now we can \r\n> use it in streaming replication mode.\r\n> \r\n> \t\tUpdates from v2:\r\n> \r\n> \t\t- walreceiver supports non-volatile WAL buffer\r\n> \t\tNow walreceiver stores received records directly to non-volatile WAL buffer if applicable.\r\n> \r\n> \t\t- pg_basebackup supports non-volatile WAL buffer\r\n> \t\tNow pg_basebackup copies received WAL segments onto non-volatile WAL \r\n> buffer if you run it with \"nvwal\" mode (-Fn).\r\n> \t\tYou should specify a new NVWAL path with --nvwal-path option. The \r\n> path will be written to postgresql.auto.conf or recovery.conf. The size of the new NVWAL is same as the master's one.\r\n> \r\n> \r\n> \t\tBest regards,\r\n> \t\tTakashi\r\n> \r\n> \t\t--\r\n> \t\tTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\r\n> \t\tNTT Software Innovation Center\r\n> \r\n> \t\t> -----Original Message-----\r\n> \t\t> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp \r\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\r\n> \t\t> Sent: Wednesday, March 18, 2020 5:59 PM\r\n> \t\t> To: 'PostgreSQL-development' <pgsql-hackers@postgresql.org \r\n> <mailto:pgsql-hackers@postgresql.org> >\r\n> \t\t> Cc: 'Robert Haas' <robertmhaas@gmail.com \r\n> <mailto:robertmhaas@gmail.com> >; 'Heikki Linnakangas' <hlinnaka@iki.fi <mailto:hlinnaka@iki.fi> >; 'Amit Langote'\r\n> \t\t> <amitlangote09@gmail.com <mailto:amitlangote09@gmail.com> >\r\n> \t\t> Subject: RE: [PoC] Non-volatile WAL buffer\r\n> \t\t>\r\n> \t\t> Dear hackers,\r\n> \t\t>\r\n> \t\t> I rebased my non-volatile WAL buffer's patchset onto master. A \r\n> new v2 patchset is attached to this mail.\r\n> \t\t>\r\n> \t\t> I also measured performance before and after patchset, varying \r\n> -c/--client and -j/--jobs options of pgbench, for\r\n> \t\t> each scaling factor s = 50 or 1000. 
The results are presented in \r\n> the following tables and the attached charts.\r\n> \t\t> Conditions, steps, and other details will be shown later.\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Results (s=50)\r\n> \t\t> ==============\r\n> \t\t> Throughput [10^3 TPS] Average latency [ms]\r\n> \t\t> ( c, j) before after before after\r\n> \t\t> ------- --------------------- ---------------------\r\n> \t\t> ( 8, 8) 35.7 37.1 (+3.9%) 0.224 0.216 (-3.6%)\r\n> \t\t> (18,18) 70.9 74.7 (+5.3%) 0.254 0.241 (-5.1%)\r\n> \t\t> (36,18) 76.0 80.8 (+6.3%) 0.473 0.446 (-5.7%)\r\n> \t\t> (54,18) 75.5 81.8 (+8.3%) 0.715 0.660 (-7.7%)\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Results (s=1000)\r\n> \t\t> ================\r\n> \t\t> Throughput [10^3 TPS] Average latency [ms]\r\n> \t\t> ( c, j) before after before after\r\n> \t\t> ------- --------------------- ---------------------\r\n> \t\t> ( 8, 8) 37.4 40.1 (+7.3%) 0.214 0.199 (-7.0%)\r\n> \t\t> (18,18) 79.3 86.7 (+9.3%) 0.227 0.208 (-8.4%)\r\n> \t\t> (36,18) 87.2 95.5 (+9.5%) 0.413 0.377 (-8.7%)\r\n> \t\t> (54,18) 86.8 94.8 (+9.3%) 0.622 0.569 (-8.5%)\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Both throughput and average latency are improved for each scaling \r\n> factor. Throughput seemed to almost reach\r\n> \t\t> the upper limit when (c,j)=(36,18).\r\n> \t\t>\r\n> \t\t> The percentage in s=1000 case looks larger than in s=50 case. I \r\n> think larger scaling factor leads to less\r\n> \t\t> contentions on the same tables and/or indexes, that is, less lock \r\n> and unlock operations. 
In such a situation,\r\n> \t\t> write-ahead logging appears to be more significant for performance.\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Conditions\r\n> \t\t> ==========\r\n> \t\t> - Use one physical server having 2 NUMA nodes (node 0 and 1)\r\n> \t\t> - Pin postgres (server processes) to node 0 and pgbench to node 1\r\n> \t\t> - 18 cores and 192GiB DRAM per node\r\n> \t\t> - Use an NVMe SSD for PGDATA and an interleaved 6-in-1 NVDIMM-N set for pg_wal\r\n> \t\t> - Both are installed on the server-side node, that is, node 0\r\n> \t\t> - Both are formatted with ext4\r\n> \t\t> - NVDIMM-N is mounted with \"-o dax\" option to enable Direct Access (DAX)\r\n> \t\t> - Use the attached postgresql.conf\r\n> \t\t> - Two new items nvwal_path and nvwal_size are used only after patch\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Steps\r\n> \t\t> =====\r\n> \t\t> For each (c,j) pair, I did the following steps three times then I \r\n> found the median of the three as a final result shown\r\n> \t\t> in the tables above.\r\n> \t\t>\r\n> \t\t> (1) Run initdb with proper -D and -X options; and also give \r\n> --nvwal-path and --nvwal-size options after patch\r\n> \t\t> (2) Start postgres and create a database for pgbench tables\r\n> \t\t> (3) Run \"pgbench -i -s ___\" to create tables (s = 50 or 1000)\r\n> \t\t> (4) Stop postgres, remount filesystems, and start postgres again\r\n> \t\t> (5) Execute pg_prewarm extension for all the four pgbench tables\r\n> \t\t> (6) Run pgbench during 30 minutes\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> pgbench command line\r\n> \t\t> ====================\r\n> \t\t> $ pgbench -h /tmp -p 5432 -U username -r -M prepared -T 1800 -c ___ -j ___ dbname\r\n> \t\t>\r\n> \t\t> I gave no -b option to use the built-in \"TPC-B (sort-of)\" query.\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Software\r\n> \t\t> ========\r\n> \t\t> - Distro: Ubuntu 18.04\r\n> \t\t> - Kernel: Linux 5.4 (vanilla kernel)\r\n> \t\t> - C Compiler: gcc 7.4.0\r\n> \t\t> - PMDK: 1.7\r\n> \t\t> - PostgreSQL: d677550 (master on 
Mar 3, 2020)\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Hardware\r\n> \t\t> ========\r\n> \t\t> - System: HPE ProLiant DL380 Gen10\r\n> \t\t> - CPU: Intel Xeon Gold 6154 (Skylake) x 2sockets\r\n> \t\t> - DRAM: DDR4 2666MHz {32GiB/ch x 6ch}/socket x 2sockets\r\n> \t\t> - NVDIMM-N: DDR4 2666MHz {16GiB/ch x 6ch}/socket x 2sockets\r\n> \t\t> - NVMe SSD: Intel Optane DC P4800X Series SSDPED1K750GA\r\n> \t\t>\r\n> \t\t>\r\n> \t\t> Best regards,\r\n> \t\t> Takashi\r\n> \t\t>\r\n> \t\t> --\r\n> \t\t> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp \r\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> > NTT Software Innovation Center\r\n> \t\t>\r\n> \t\t> > -----Original Message-----\r\n> \t\t> > From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp \r\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\r\n> \t\t> > Sent: Thursday, February 20, 2020 6:30 PM\r\n> \t\t> > To: 'Amit Langote' <amitlangote09@gmail.com <mailto:amitlangote09@gmail.com> >\r\n> \t\t> > Cc: 'Robert Haas' <robertmhaas@gmail.com \r\n> <mailto:robertmhaas@gmail.com> >; 'Heikki Linnakangas' <hlinnaka@iki.fi <mailto:hlinnaka@iki.fi> >;\r\n> \t\t> 'PostgreSQL-development'\r\n> \t\t> > <pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org> >\r\n> \t\t> > Subject: RE: [PoC] Non-volatile WAL buffer\r\n> \t\t> >\r\n> \t\t> > Dear Amit,\r\n> \t\t> >\r\n> \t\t> > Thank you for your advice. Exactly, it's so to speak \"do as the hackers do when in pgsql\"...\r\n> \t\t> >\r\n> \t\t> > I'm rebasing my branch onto master. 
I'll submit an updated \r\n> patchset and performance report later.\r\n> \t\t> >\r\n> \t\t> > Best regards,\r\n> \t\t> > Takashi\r\n> \t\t> >\r\n> \t\t> > --\r\n> \t\t> > Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp \r\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp>\r\n> > NTT Software\r\n> \t\t> > Innovation Center\r\n> \t\t> >\r\n> \t\t> > > -----Original Message-----\r\n> \t\t> > > From: Amit Langote <amitlangote09@gmail.com <mailto:amitlangote09@gmail.com> >\r\n> \t\t> > > Sent: Monday, February 17, 2020 5:21 PM\r\n> \t\t> > > To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp \r\n> <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\r\n> \t\t> > > Cc: Robert Haas <robertmhaas@gmail.com \r\n> <mailto:robertmhaas@gmail.com> >; Heikki Linnakangas\r\n> \t\t> > > <hlinnaka@iki.fi <mailto:hlinnaka@iki.fi> >; PostgreSQL-development\r\n> \t\t> > > <pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org> >\r\n> \t\t> > > Subject: Re: [PoC] Non-volatile WAL buffer\r\n> \t\t> > >\r\n> \t\t> > > Hello,\r\n> \t\t> > >\r\n> \t\t> > > On Mon, Feb 17, 2020 at 4:16 PM Takashi Menjo \r\n> <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> > wrote:\r\n> \t\t> > > > Hello Amit,\r\n> \t\t> > > >\r\n> \t\t> > > > > I apologize for not having any opinion on the patches\r\n> \t\t> > > > > themselves, but let me point out that it's better to base these\r\n> \t\t> > > > > patches on HEAD (master branch) than REL_12_0, because all new\r\n> \t\t> > > > > code is committed to the master branch, whereas stable branches\r\n> \t\t> > > > > such as\r\n> \t\t> > > > > REL_12_0 only receive bug fixes. Do you have any\r\n> \t\t> > > specific reason to be working on REL_12_0?\r\n> \t\t> > > >\r\n> \t\t> > > > Yes, because I think it's human-friendly to reproduce and discuss\r\n> \t\t> > > > performance measurement. 
Of course I know\r\n> \t\t> > > all new accepted patches are merged into master's HEAD, not stable\r\n> \t\t> > > branches and not even release tags, so I'm aware of rebasing my\r\n> \t\t> > > patchset onto master sooner or later. However, if someone,\r\n> \t\t> > > including me, says that s/he applies my patchset to \"master\" and\r\n> \t\t> > > measures its performance, we have to pay attention to which commit the \"master\"\r\n> \t\t> > > really points to. Although we have sha1 hashes to specify which\r\n> \t\t> > > commit, we should check whether the specific commit on master has\r\n> \t\t> > > patches affecting performance or not\r\n> \t\t> > because master's HEAD gets new patches day by day. On the other hand,\r\n> \t\t> > a release tag clearly points the commit all we probably know. Also we\r\n> \t\t> > can check more easily the features and improvements by using \r\n> release notes and user manuals.\r\n> \t\t> > >\r\n> \t\t> > > Thanks for clarifying. I see where you're coming from.\r\n> \t\t> > >\r\n> \t\t> > > While I do sometimes see people reporting numbers with the latest\r\n> \t\t> > > stable release' branch, that's normally just one of the baselines.\r\n> \t\t> > > The more important baseline for ongoing development is the master\r\n> \t\t> > > branch's HEAD, which is also what people volunteering to test your\r\n> \t\t> > > patches would use. Anyone who reports would have to give at least\r\n> \t\t> > > two numbers -- performance with a branch's HEAD without patch\r\n> \t\t> > > applied and that with patch applied -- which can be enough in most\r\n> \t\t> > > cases to see the difference the patch makes. Sure, the numbers\r\n> \t\t> > > might change on each report, but that's fine I'd think. 
If you\r\n> \t\t> > > continue to develop against the stable branch, you might miss to\r\n> \t\t> > notice impact from any relevant developments in the master branch,\r\n> \t\t> > even developments which possibly require rethinking the \r\n> architecture of your own changes, although maybe that\r\n> \t\t> rarely occurs.\r\n> \t\t> > >\r\n> \t\t> > > Thanks,\r\n> \t\t> > > Amit\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \t--\r\n> \r\n> \tTakashi Menjo <takashi.menjo@gmail.com \r\n> <mailto:takashi.menjo@gmail.com> >\r\n> \r\n> \r\n> \r\n> --\r\n> \r\n> Takashi Menjo <takashi.menjo@gmail.com \r\n> <mailto:takashi.menjo@gmail.com> >",
"msg_date": "Fri, 9 Oct 2020 06:09:48 +0000",
"msg_from": "\"Deng, Gang\" <gang.deng@intel.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
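Gang's profile above splits the WAL cost between CopyXlogRecordToWAL (the memcpy into the WAL buffer) and XLogFlush (the sync). The two paths can be modeled with an ordinary memory-mapped file; this is a toy sketch, not PostgreSQL code, and the record layout (a 4-byte length prefix) and sizes are invented for illustration:

```python
import mmap
import os
import struct
import tempfile

# Toy model (not PostgreSQL code) of the two paths profiled above:
# the "CopyXlogRecordToWAL" analogue is a copy into a memory-mapped WAL
# file, and the "XLogFlush" analogue is msync() over that mapping. On a
# DAX-mapped NVDIMM the copy itself already hits persistent memory,
# which is why larger records shift cost from the flush to the copy.
WAL_SIZE = 1 << 20

fd = os.open(os.path.join(tempfile.mkdtemp(), "nvwal"), os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, WAL_SIZE)
wal = mmap.mmap(fd, WAL_SIZE)

def insert_record(offset, payload):
    rec = struct.pack("<I", len(payload)) + payload  # the memcpy being measured
    wal[offset:offset + len(rec)] = rec
    return offset + len(rec)

def wal_flush():
    wal.flush()                                      # msync(MS_SYNC) underneath

end = insert_record(0, b"x" * 328)                   # Gang's large-record case
wal_flush()
print(end)   # 332
```

With a 24-byte payload (scenario A) the copy is 28 bytes per record; with 328 bytes (scenario B) it is 332, an order of magnitude more traffic through the slower PMem memcpy, consistent with the CPU-time shift Gang reports.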
{
"msg_contents": "Hi Gang,\n\nThanks. I have tried to reproduce performance degrade, using your configuration, query, and steps. And today, I got some results that Original (PMEM) achieved better performance than Non-volatile WAL buffer on my Ubuntu environment. Now I work for further investigation.\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\nNTT Software Innovation Center\n\n> -----Original Message-----\n> From: Deng, Gang <gang.deng@intel.com>\n> Sent: Friday, October 9, 2020 3:10 PM\n> To: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Cc: pgsql-hackers@postgresql.org; 'Takashi Menjo' <takashi.menjo@gmail.com>\n> Subject: RE: [PoC] Non-volatile WAL buffer\n> \n> Hi Takashi,\n> \n> There are some differences between our HW/SW configuration and test steps. I attached postgresql.conf I used\n> for your reference. I would like to try postgresql.conf and steps you provided in the later days to see if I can find\n> cause.\n> \n> I also ran pgbench and postgres server on the same server but on different NUMA node, and ensure server process\n> and PMEM on the same NUMA node. I used similar steps are yours from step 1 to 9. 
But some difference in later\n> steps, major of them are:\n> \n> In step 10), I created a database and table for test by:\n> #create database:\n> psql -c \"create database insert_bench;\"\n> #create table:\n> psql -d insert_bench -c \"create table test(crt_time timestamp, info text default\n> '75feba6d5ca9ff65d09af35a67fe962a4e3fa5ef279f94df6696bee65f4529a4bbb03ae56c3b5b86c22b447fc\n> 48da894740ed1a9d518a9646b3a751a57acaca1142ccfc945b1082b40043e3f83f8b7605b5a55fcd7eb8fc1\n> d0475c7fe465477da47d96957849327731ae76322f440d167725d2e2bbb60313150a4f69d9a8c9e86f9d7\n> 9a742e7a35bf159f670e54413fb89ff81b8e5e8ab215c3ddfd00bb6aeb4');\"\n> \n> in step 15), I did not use pg_prewarm, but just ran pg_bench for 180 seconds to warm up.\n> In step 16), I ran pgbench using command: pgbench -M prepared -n -r -P 10 -f ./test.sql -T 600 -c _ -j _\n> insert_bench. (test.sql can be found in attachment)\n> \n> For HW/SW conf, the major differences are:\n> CPU: I used Xeon 8268 (24c@2.9Ghz, HT enabled) OS Distro: CentOS 8.2.2004\n> Kernel: 4.18.0-193.6.3.el8_2.x86_64\n> GCC: 8.3.1\n> \n> Best regards\n> Gang\n> \n> -----Original Message-----\n> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>\n> Sent: Tuesday, October 6, 2020 4:49 PM\n> To: Deng, Gang <gang.deng@intel.com>\n> Cc: pgsql-hackers@postgresql.org; 'Takashi Menjo' <takashi.menjo@gmail.com>\n> Subject: RE: [PoC] Non-volatile WAL buffer\n> \n> Hi Gang,\n> \n> I have tried to but yet cannot reproduce performance degrade you reported when inserting 328-byte records. So\n> I think the condition of you and me would be different, such as steps to reproduce, postgresql.conf, installation\n> setup, and so on.\n> \n> My results and condition are as follows. May I have your condition in more detail? Note that I refer to your \"Storage\n> over App Direct\" as my \"Original (PMEM)\" and \"NVWAL patch\" to \"Non-volatile WAL buffer.\"\n> \n> Best regards,\n> Takashi\n> \n> \n> # Results\n> See the attached figure. 
In short, Non-volatile WAL buffer got better performance than Original (PMEM).\n> \n> # Steps\n> Note that I ran postgres server and pgbench in a single-machine system but separated two NUMA nodes. PMEM\n> and PCI SSD for the server process are on the server-side NUMA node.\n> \n> 01) Create a PMEM namespace (sudo ndctl create-namespace -f -t pmem -m fsdax -M dev -e namespace0.0)\n> 02) Make an ext4 filesystem for PMEM then mount it with DAX option (sudo mkfs.ext4 -q -F /dev/pmem0 ; sudo\n> mount -o dax /dev/pmem0 /mnt/pmem0)\n> 03) Make another ext4 filesystem for PCIe SSD then mount it (sudo mkfs.ext4 -q -F /dev/nvme0n1 ; sudo mount\n> /dev/nvme0n1 /mnt/nvme0n1)\n> 04) Make /mnt/pmem0/pg_wal directory for WAL\n> 05) Make /mnt/nvme0n1/pgdata directory for PGDATA\n> 06) Run initdb (initdb --locale=C --encoding=UTF8 -X /mnt/pmem0/pg_wal ...)\n> - Also give -P /mnt/pmem0/pg_wal/nvwal -Q 81920 in the case of Non-volatile WAL buffer\n> 07) Edit postgresql.conf as the attached one\n> - Please remove nvwal_* lines in the case of Original (PMEM)\n> 08) Start postgres server process on NUMA node 0 (numactl -N 0 -m 0 -- pg_ctl -l pg.log start)\n> 09) Create a database (createdb --locale=C --encoding=UTF8)\n> 10) Initialize pgbench tables with s=50 (pgbench -i -s 50)\n> 11) Change # characters of \"filler\" column of \"pgbench_history\" table to 300 (ALTER TABLE pgbench_history\n> ALTER filler TYPE character(300);)\n> - This would make the row size of the table 328 bytes\n> 12) Stop the postgres server process (pg_ctl -l pg.log -m smart stop)\n> 13) Remount the PMEM and the PCIe SSD\n> 14) Start postgres server process on NUMA node 0 again (numactl -N 0 -m 0 -- pg_ctl -l pg.log start)\n> 15) Run pg_prewarm for all the four pgbench_* tables\n> 16) Run pgbench on NUMA node 1 for 30 minutes (numactl -N 1 -m 1 -- pgbench -r -M prepared -T 1800 -c __\n> -j __)\n> - It executes the default tpcb-like transactions\n> \n> I repeated all the steps three times for each (c,j) then got 
the median \"tps = __ (including connections\n> establishing)\" of the three as throughput and the \"latency average = __ ms \" of that time as average latency.\n> \n> # Environment variables\n> export PGHOST=/tmp\n> export PGPORT=5432\n> export PGDATABASE=\"$USER\"\n> export PGUSER=\"$USER\"\n> export PGDATA=/mnt/nvme0n1/pgdata\n> \n> # Setup\n> - System: HPE ProLiant DL380 Gen10\n> - CPU: Intel Xeon Gold 6240M x2 sockets (18 cores per socket; HT disabled by BIOS)\n> - DRAM: DDR4 2933MHz 192GiB/socket x2 sockets (32 GiB per channel x 6 channels per socket)\n> - Optane PMem: Apache Pass, AppDirect Mode, DDR4 2666MHz 1.5TiB/socket x2 sockets (256 GiB per channel\n> x 6 channels per socket; interleaving enabled)\n> - PCIe SSD: DC P4800X Series SSDPED1K750GA\n> - Distro: Ubuntu 20.04.1\n> - C compiler: gcc 9.3.0\n> - libc: glibc 2.31\n> - Linux kernel: 5.7 (vanilla)\n> - Filesystem: ext4 (DAX enabled when using Optane PMem)\n> - PMDK: 1.9\n> - PostgreSQL (Original): 14devel (200f610: Jul 26, 2020)\n> - PostgreSQL (Non-volatile WAL buffer): 14devel (200f610: Jul 26, 2020) + non-volatile WAL buffer patchset\n> v4\n> \n> --\n> Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp> NTT Software Innovation Center\n> \n> > -----Original Message-----\n> > From: Takashi Menjo <takashi.menjo@gmail.com>\n> > Sent: Thursday, September 24, 2020 2:38 AM\n> > To: Deng, Gang <gang.deng@intel.com>\n> > Cc: pgsql-hackers@postgresql.org; Takashi Menjo\n> > <takashi.menjou.vg@hco.ntt.co.jp>\n> > Subject: Re: [PoC] Non-volatile WAL buffer\n> >\n> > Hello Gang,\n> >\n> > Thank you for your report. I have not taken care of record size deeply\n> > yet, so your report is very interesting. 
I will also have a test like yours then post results here.\n> >\n> > Regards,\n> > Takashi\n> >\n> >\n> > 2020年9月21日(月) 14:14 Deng, Gang <gang.deng@intel.com <mailto:gang.deng@intel.com> >:\n> >\n> >\n> > \tHi Takashi,\n> >\n> >\n> >\n> > \tThank you for the patch and work on accelerating PG performance with\n> > NVM. I applied the patch and made some performance test based on the\n> > patch v4. I stored database data files on NVMe SSD and stored WAL file on Intel PMem (NVM). I used two\n> methods to store WAL file(s):\n> >\n> > \t1. Leverage your patch to access PMem with libpmem (NVWAL patch).\n> >\n> > \t2. Access PMem with legacy filesystem interface, that means use PMem as ordinary block device, no\n> > PG patch is required to access PMem (Storage over App Direct).\n> >\n> >\n> >\n> > \tI tried two insert scenarios:\n> >\n> > \tA. Insert small record (length of record to be inserted is 24 bytes), I think it is similar as your test\n> >\n> > \tB. Insert large record (length of record to be inserted is 328 bytes)\n> >\n> >\n> >\n> > \tMy original purpose is to see higher performance gain in scenario B as it is more write intensive on WAL.\n> > But I observed that NVWAL patch method had ~5% performance improvement\n> > compared with Storage over App Direct method in scenario A, while had ~20% performance degradation in\n> scenario B.\n> >\n> >\n> >\n> > \tI made further investigation on the test. I found that NVWAL patch\n> > can improve performance of XlogFlush function, but it may impact\n> > performance of CopyXlogRecordToWAL function. It may be related to the higher latency of memcpy to Intel\n> PMem comparing with DRAM. 
Here are key data in my test:\n> >\n> >\n> >\n> > \tScenario A (length of record to be inserted: 24 bytes per record):\n> >\n> > \t==============================\n> >\n> >\n> > NVWAL SoAD\n> >\n> > \t------------------------------------ ------- -------\n> >\n> > \tThrougput (10^3 TPS) 310.5\n> > 296.0\n> >\n> > \tCPU Time % of CopyXlogRecordToWAL 0.4 0.2\n> >\n> > \tCPU Time % of XLogInsertRecord 1.5 0.8\n> >\n> > \tCPU Time % of XLogFlush 2.1 9.6\n> >\n> >\n> >\n> > \tScenario B (length of record to be inserted: 328 bytes per record):\n> >\n> > \t==============================\n> >\n> >\n> > NVWAL SoAD\n> >\n> > \t------------------------------------ ------- -------\n> >\n> > \tThrougput (10^3 TPS) 13.0\n> > 16.9\n> >\n> > \tCPU Time % of CopyXlogRecordToWAL 3.0 1.6\n> >\n> > \tCPU Time % of XLogInsertRecord 23.0 16.4\n> >\n> > \tCPU Time % of XLogFlush 2.3 5.9\n> >\n> >\n> >\n> > \tBest Regards,\n> >\n> > \tGang\n> >\n> >\n> >\n> > \tFrom: Takashi Menjo <takashi.menjo@gmail.com <mailto:takashi.menjo@gmail.com> >\n> > \tSent: Thursday, September 10, 2020 4:01 PM\n> > \tTo: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> > \tCc: pgsql-hackers@postgresql.org <mailto:pgsql-hackers@postgresql.org>\n> > \tSubject: Re: [PoC] Non-volatile WAL buffer\n> >\n> >\n> >\n> > \tRebased.\n> >\n> >\n> >\n> >\n> >\n> > \t2020年6月24日(水) 16:44 Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp\n> > <mailto:takashi.menjou.vg@hco.ntt.co.jp> >:\n> >\n> > \t\tDear hackers,\n> >\n> > \t\tI update my non-volatile WAL buffer's patchset to v3. 
Now we can\n> > use it in streaming replication mode.\n> >\n> > \t\tUpdates from v2:\n> >\n> > \t\t- walreceiver supports non-volatile WAL buffer\n> > \t\tNow walreceiver stores received records directly to non-volatile WAL buffer if applicable.\n> >\n> > \t\t- pg_basebackup supports non-volatile WAL buffer\n> > \t\tNow pg_basebackup copies received WAL segments onto non-volatile WAL\n> > buffer if you run it with \"nvwal\" mode (-Fn).\n> > \t\tYou should specify a new NVWAL path with --nvwal-path option. The\n> > path will be written to postgresql.auto.conf or recovery.conf. The size of the new NVWAL is same as the\n> master's one.\n> >\n> >\n> > \t\tBest regards,\n> > \t\tTakashi\n> >\n> > \t\t--\n> > \t\tTakashi Menjo <takashi.menjou.vg@hco.ntt.co.jp <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> > \t\tNTT Software Innovation Center\n> >\n> > \t\t> -----Original Message-----\n> > \t\t> From: Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp\n> > <mailto:takashi.menjou.vg@hco.ntt.co.jp> >\n> > \t\t> Sent: Wednesday, March 18, 2020 5:59 PM\n> > \t\t> To: 'PostgreSQL-development' <pgsql-hackers@postgresql.org\n> > <mailto:pgsql-hackers@postgresql.org> >\n> > \t\t> Cc: 'Robert Haas' <robertmhaas@gmail.com\n> > <mailto:robertmhaas@gmail.com> >; 'Heikki Linnakangas' <hlinnaka@iki.fi <mailto:hlinnaka@iki.fi> >; 'Amit\n> Langote'\n> > \t\t> <amitlangote09@gmail.com <mailto:amitlangote09@gmail.com> >\n> > \t\t> Subject: RE: [PoC] Non-volatile WAL buffer\n> > \t\t>\n> > \t\t> Dear hackers,\n> > \t\t>\n> > \t\t> I rebased my non-volatile WAL buffer's patchset onto master. A\n> > new v2 patchset is attached to this mail.\n> > \t\t>\n> > \t\t> I also measured performance before and after patchset, varying\n> > -c/--client and -j/--jobs options of pgbench, for\n> > \t\t> each scaling factor s = 50 or 1000. 
",
"msg_date": "Wed, 14 Oct 2020 14:30:57 +0900",
"msg_from": "Takashi Menjo <takashi.menjou.vg@hco.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
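The "median of the three" reporting described in the steps above (scraping "tps = __ (including connections establishing)" and "latency average = __ ms" from three pgbench runs) can be scripted in a few lines. A hedged helper; the sample output lines below are invented, not taken from the actual runs:

```python
import re
import statistics

# Scrape pgbench output and keep the median of three runs, mirroring the
# reporting method described above. Patterns match pgbench's summary
# lines; the sample run outputs here are illustrative only.
TPS_RE = re.compile(r"tps = ([0-9.]+) \(including connections establishing\)")
LAT_RE = re.compile(r"latency average = ([0-9.]+) ms")

def summarize(runs):
    tps = [float(TPS_RE.search(r).group(1)) for r in runs]
    lat = [float(LAT_RE.search(r).group(1)) for r in runs]
    return statistics.median(tps), statistics.median(lat)

runs = [
    "latency average = 0.215 ms\ntps = 37210.1 (including connections establishing)",
    "latency average = 0.224 ms\ntps = 35650.9 (including connections establishing)",
    "latency average = 0.216 ms\ntps = 37100.4 (including connections establishing)",
]
print(summarize(runs))   # (37100.4, 0.216)
```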
{
"msg_contents": "I had a new look at this thread today, trying to figure out where we \nare. I'm a bit confused.\n\nOne thing we have established: mmap()ing WAL files performs worse than \nthe current method, if pg_wal is not on a persistent memory device. This \nis because the kernel faults in existing content of each page, even \nthough we're overwriting everything.\n\nThat's unfortunate. I was hoping that mmap() would be a good option even \nwithout persistent memory hardware. I wish we could tell the kernel to \nzero the pages instead of reading them from the file. Maybe clear the \nfile with ftruncate() before mmapping it?\n\nThat should not be a problem with a real persistent memory device, however \n(or when emulating it with DRAM). With DAX, the storage is memory-mapped \ndirectly and there is no page cache, and no pre-faulting.\n\nBecause of that, I'm baffled by what the \nv4-0002-Non-volatile-WAL-buffer.patch does. If I understand it \ncorrectly, it puts the WAL buffers in a separate file, which is stored \non the NVRAM. Why? I realize that this is just a Proof of Concept, but \nI'm very much not interested in anything that requires the DBA to manage \na second WAL location. Did you test the mmap() patches with persistent \nmemory hardware? Did you compare that with the pmem patchset, on the \nsame hardware? If there's a meaningful performance difference between \nthe two, what's causing it?\n\n- Heikki\n\n\n",
"msg_date": "Mon, 26 Oct 2020 16:07:45 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
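Heikki's suggestion above — clearing the file with ftruncate() before mmap()ing it — can be sketched in a few lines. This is a toy sketch, not PostgreSQL code; the segment name and size are made up:

```python
import mmap
import os
import tempfile

# Toy sketch of "clear the file with ftruncate() before mmapping it":
# truncating to zero and re-extending turns the segment into a hole, so
# the kernel serves zero pages instead of faulting in old file content.
# The path and segment size are illustrative, not from PostgreSQL.
def remap_after_truncate(path, seg_size):
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.write(fd, b"\xff" * seg_size)   # stale WAL content on disk
        os.ftruncate(fd, 0)                # throw it away ...
        os.ftruncate(fd, seg_size)         # ... and re-extend as a hole
        with mmap.mmap(fd, seg_size) as m:
            return bytes(m[:16])           # what the next writer now reads
    finally:
        os.close(fd)

seg = os.path.join(tempfile.mkdtemp(), "000000010000000000000001")
print(remap_after_truncate(seg, 8192))     # sixteen zero bytes
```

POSIX guarantees that reads of a region extended by ftruncate() return zeros, so the old content is never faulted in — though, as the reply below notes, the hole may cost block allocation on first write.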
{
"msg_contents": "Hi Heikki,\n\n> I had a new look at this thread today, trying to figure out where we are.\nI'm a bit confused.\n>\n> One thing we have established: mmap()ing WAL files performs worse than\nthe current method, if pg_wal is not on\n> a persistent memory device. This is because the kernel faults in existing\ncontent of each page, even though we're\n> overwriting everything.\nYes. In addition, after a certain page (in the sense of OS page) is\nmsync()ed, another page fault will occur again when something is stored\ninto that page.\n\n> That's unfortunate. I was hoping that mmap() would be a good option even\nwithout persistent memory hardware.\n> I wish we could tell the kernel to zero the pages instead of reading them\nfrom the file. Maybe clear the file with\n> ftruncate() before mmapping it?\nThe area extended by ftruncate() appears as if it were zero-filled [1].\nPlease note that it merely \"appears as if.\" It might not be actually\nzero-filled as data blocks on devices, so pre-allocating files should\nimprove transaction performance. At least, on Linux 5.7 and ext4, it takes\nmore time to store into the mapped file just open(O_CREAT)ed and\nftruncate()d than into the one filled already and actually.\n\n> That should not be problem with a real persistent memory device, however\n(or when emulating it with DRAM). With\n> DAX, the storage is memory-mapped directly and there is no page cache,\nand no pre-faulting.\nYes, with filesystem DAX, there is no page cache for file data. A page\nfault still occurs but for each 2MiB DAX hugepage, so its overhead\ndecreases compared with 4KiB page fault. Such a DAX hugepage fault is only\napplied to DAX-mapped files and is different from a general transparent\nhugepage fault.\n\n> Because of that, I'm baffled by what the\nv4-0002-Non-volatile-WAL-buffer.patch does. If I understand it\n> correctly, it puts the WAL buffers in a separate file, which is stored on\nthe NVRAM. Why? 
I realize that this is just\n> a Proof of Concept, but I'm very much not interested in anything that\nrequires the DBA to manage a second WAL\n> location. Did you test the mmap() patches with persistent memory\nhardware? Did you compare that with the pmem\n> patchset, on the same hardware? If there's a meaningful performance\ndifference between the two, what's causing\n> it?\nYes, this patchset puts the WAL buffers into the file specified by\n\"nvwal_path\" in postgresql.conf.\n\nThe reason this patchset puts the buffers into a separate file, not the\nexisting segment files in PGDATA/pg_wal, is that it reduces the overhead due to\nsystem calls such as open(), mmap(), munmap(), and close(). It open()s and\nmmap()s the file \"nvwal_path\" once, and keeps that file mapped while\nrunning. On the other hand, with the patchset that mmap()s the segment\nfiles, a backend process must munmap() and close() the currently mapped\nfile and open() and mmap() the next one each time the inserting location\nfor that process crosses a segment boundary. This causes the performance\ndifference between the two.\n\nBest regards,\nTakashi\n\n[1]\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/ftruncate.html\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Fri, 30 Oct 2020 14:57:05 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Gang,\n\nI appreciate your patience. I reproduced the results you reported to me, in\nmy environment.\n\nFirst of all, the condition you gave to me was a little unstable in my\nenvironment, so I made the values of {max_,min_,nv}wal_size larger and the\npre-warm duration longer to get stable performance. I didn't modify your\ntable, query, or benchmark duration.\n\nUnder the stable condition, Original (PMEM) still got better performance\nthan Non-volatile WAL Buffer. To sum up, the reason was that Non-volatile\nWAL Buffer on Optane PMem spent much more time than Original (PMEM) in\nXLogInsert when using your table and query. That offset the improvement in\nXLogFlush, and degraded performance in total. VTune told me that\nNon-volatile WAL Buffer took more CPU time than Original (PMEM) for\n(XLogInsert => XLogInsertRecord => CopyXLogRecordsToWAL =>) memcpy while it\ntook less time for XLogFlush. This profile was very similar to the one you\nreported.\n\nIn general, when WAL buffers are on Optane PMem rather than DRAM, it\nobviously takes more time to memcpy WAL records into the buffers\nbecause Optane PMem is a little slower than DRAM. In return,\nNon-volatile WAL Buffer reduces the time needed for the records to reach the\ndevice, because it doesn't need to write them out of the buffers to\nsomewhere else, but just needs to flush them out of CPU caches to the\nunderlying memory-mapped file.\n\nYour report shows that Non-volatile WAL Buffer on Optane PMem is not good\nfor certain kinds of transactions, and is good for others. I have tried\nchanging how WAL records are inserted and flushed, and the configurations or\nconstants that could affect performance, such as NUM_XLOGINSERT_LOCKS, but\nNon-volatile WAL Buffer has not achieved better performance than Original\n(PMEM) yet when using your table and query. 
I will continue to work on this\nissue and will report if I have any update.\n\nBy the way, did the performance progress reported by pgbench's -P\noption drop to zero when you ran Non-volatile WAL Buffer? If so, your\n{max_,min_,nv}wal_size might be too small or your checkpoint configurations\nmight not be appropriate. Could you check your results again?\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Thu, 5 Nov 2020 17:35:30 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nThese patches no longer apply :-( A rebased version would be nice.\n\nI've been interested in what performance improvements this might bring,\nso I've been running some extensive benchmarks on a machine with PMEM\nhardware. So let me share some interesting results. (I used commit from\nearly September, to make the patch apply cleanly.)\n\nNote: The hardware was provided by Intel, and they are interested in\nsupporting the development and providing access to machines with PMEM to\ndevelopers. So if you're interested in this patch & PMEM, but don't have\naccess to suitable hardware, try contacting Steve Shaw\n<steve.shaw@intel.com> who's the person responsible for open source\ndatabases at Intel (he's also the author of HammerDB).\n\n\nThe benchmarks were done on a machine with 2 x Xeon Platinum (24/48\ncores), 128GB RAM, NVMe and PMEM SSDs. I did some basic pgbench tests\nwith different scales (500, 5000, 15000) with and without these patches.\nI did some usual tuning (shared buffers, max_wal_size etc.), the most\nimportant changes being:\n\n- maintenance_work_mem = 256MB\n- max_connections = 200\n- random_page_cost = 1.2\n- shared_buffers = 16GB\n- work_mem = 64MB\n- checkpoint_completion_target = 0.9\n- checkpoint_timeout = 20min\n- max_wal_size = 96GB\n- autovacuum_analyze_scale_factor = 0.1\n- autovacuum_vacuum_insert_scale_factor = 0.05\n- autovacuum_vacuum_scale_factor = 0.01\n- vacuum_cost_limit = 1000\n\nAnd on the patched version:\n\n- nvwal_size = 128GB\n- nvwal_path = … points to the PMEM DAX device …\n\nThe machine has multiple SSDs (all Optane-based, IIRC):\n\n- NVMe SSD (Optane)\n- PMEM in BTT mode\n- PMEM in DAX mode\n\nSo I've tested all of them - the data was always on the NVMe device, and\nthe WAL was placed on one of those devices. 
That means we have these\nfour cases to compare:\n\n- nvme - master with WAL on the NVMe SSD\n- pmembtt - master with WAL on PMEM in BTT mode\n- pmemdax - master with WAL on PMEM in DAX mode\n- pmemdax-ntt - patched version with WAL on PMEM in DAX mode\n\nThe \"nvme\" is a bit disadvantaged as it places both data and WAL on the\nsame device, so consider that while evaluating the results. But for the\nsmaller data sets this should be fairly negligible, I believe.\n\nI'm not entirely sure whether the \"pmemdax\" (i.e. unpatched instance\nwith WAL on PMEM DAX device) is actually safe, but I included it anyway\nto see what the difference is.\n\nNow let's look at results for the basic data sizes and client counts.\nI've also attached some charts to illustrate this. These numbers are tps\naverages from 3 runs, each about 30 minutes long.\n\n\n1) scale 500 (fits into shared buffers)\n---------------------------------------\n\n    wal            1      16      32      64      96\n    ----------------------------------------------------------\n    nvme          6321   73794  132687  185409  192228\n    pmembtt       6248   60105   85272   82943   84124\n    pmemdax       6686   86188  154850  105219  149224\n    pmemdax-ntt   8062  104887  211722  231085  252593\n\nThe NVMe performs well (the single device is not an issue, as there\nshould be very little non-WAL I/O). The PMEM/BTT has a clear bottleneck\nat ~85k tps. It's interesting that the PMEM/DAX performs much worse without\nthe patch, and that there's a drop at 64 clients. Not sure what that's about.\n\n\n2) scale 5000 (fits into RAM)\n-----------------------------\n\n    wal            1      16      32      64      96\n    -----------------------------------------------------------\n    nvme          4804   43636   61443   79807   86414\n    pmembtt       4203   28354   37562   41562   43684\n    pmemdax       5580   62180   92361  112935  117261\n    pmemdax-ntt   6325   79887  128259  141793  127224\n\nThe differences are more significant, compared to the small scale. The\nBTT seems to have a bottleneck around ~43k tps; the PMEM/DAX dominates.\n\n\n3) scale 15000 (bigger than RAM)\n--------------------------------\n\n    wal            1      16      32      64      96\n    -----------------------------------------------------------\n    pmembtt       3638   20630   28985   32019   31303\n    pmemdax       5164   48230   69822   85740   90452\n    pmemdax-ntt   5382   62359   80038   83779   80191\n\nI have not included the nvme results here, because the impact of placing\nboth data and WAL on the same device was too significant IMHO.\n\nThe remaining results seem nice. It's interesting that the patched case is\na bit slower than master. Not sure why.\n\nOverall, these results seem pretty nice, I guess. Of course, this does\nnot say the current patch is the best way to implement this (or whether\nit's correct), but it does suggest supporting PMEM might bring a sizeable\nperformance boost.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 23 Nov 2020 02:22:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nOn 10/30/20 6:57 AM, Takashi Menjo wrote:\n> Hi Heikki,\n>\n>> I had a new look at this thread today, trying to figure out where \n>> we are.\n> \n> I'm a bit confused.\n>> \n>> One thing we have established: mmap()ing WAL files performs worse \n>> than the current method, if pg_wal is not on a persistent memory \n>> device. This is because the kernel faults in existing content of \n>> each page, even though we're overwriting everything.\n> \n> Yes. In addition, after a certain page (in the sense of OS page) is \n> msync()ed, another page fault will occur again when something is \n> stored into that page.\n> \n>> That's unfortunate. I was hoping that mmap() would be a good option\n>> even without persistent memory hardware. I wish we could tell the\n>> kernel to zero the pages instead of reading them from the file.\n>> Maybe clear the file with ftruncate() before mmapping it?\n> \n> The area extended by ftruncate() appears as if it were zero-filled \n> [1]. Please note that it merely \"appears as if.\" It might not be \n> actually zero-filled as data blocks on devices, so pre-allocating \n> files should improve transaction performance. At least, on Linux 5.7\n> and ext4, it takes more time to store into the mapped file just \n> open(O_CREAT)ed and ftruncate()d than into the one filled already and\n> actually.\n> \n\nDoes it really matter that it only appears zero-filled? I think Heikki's\npoint was that maybe ftruncate() would prevent the kernel from faulting\nthe existing page content when we're overwriting it.\n\nNot sure I understand what the benchmark with ext4 was doing, exactly.\nHow was that measured? Might be interesting to have some simple\nbenchmarking tool to demonstrate this (I believe a small standalone tool\nwritten in C should do the trick).\n\n>> That should not be problem with a real persistent memory device, \n>> however (or when emulating it with DRAM). 
With DAX, the storage is \n>> memory-mapped directly and there is no page cache, and no \n>> pre-faulting.\n> \n> Yes, with filesystem DAX, there is no page cache for file data. A \n> page fault still occurs but for each 2MiB DAX hugepage, so its \n> overhead decreases compared with 4KiB page fault. Such a DAX\n> hugepage fault is only applied to DAX-mapped files and is different\n> from a general transparent hugepage fault.\n> \n\nI don't follow - if there are page faults even when overwriting all the\ndata, I'd say it's still an issue even with 2MB DAX pages. How big is\nthe difference between 4kB and 2MB pages?\n\nNot sure I understand how is this different from general THP fault?\n\n>> Because of that, I'm baffled by what the \n>> v4-0002-Non-volatile-WAL-buffer.patch does. If I understand it \n>> correctly, it puts the WAL buffers in a separate file, which is \n>> stored on the NVRAM. Why? I realize that this is just a Proof of \n>> Concept, but I'm very much not interested in anything that requires\n>> the DBA to manage a second WAL location. Did you test the mmap()\n>> patches with persistent memory hardware? Did you compare that with\n>> the pmem patchset, on the same hardware? If there's a meaningful\n>> performance difference between the two, what's causing it?\n\n> Yes, this patchset puts the WAL buffers into the file specified by \n> \"nvwal_path\" in postgresql.conf.\n> \n> Why this patchset puts the buffers into the separated file, not \n> existing segment files in PGDATA/pg_wal, is because it reduces the \n> overhead due to system calls such as open(), mmap(), munmap(), and \n> close(). It open()s and mmap()s the file \"nvwal_path\" once, and keeps\n> that file mapped while running. On the other hand, as for the \n> patchset mmap()ing the segment files, a backend process should \n> munmap() and close() the current mapped file and open() and mmap() \n> the new one for each time the inserting location for that process \n> goes over segments. 
This causes the performance difference between \n> the two.\n> \n\nI kinda agree with Heikki here - having to manage yet another location\nfor WAL data is rather inconvenient. We should aim not to make the life\nof DBAs unnecessarily difficult, IMO.\n\nI wonder how significant the syscall overhead is - can you share\nsome numbers? I don't see any such results in this thread, so I'm not\nsure if it means losing 1% or 10% throughput.\n\nAlso, maybe there are alternative ways to reduce the overhead? For\nexample, we can increase the size of the WAL segment, and with 1GB\nsegments we'd do 1/64 of the syscalls. Or maybe we could do some of this\nasynchronously - request a segment ahead, and let another process do the\nactual work etc. so that the running process does not wait.\n\n\nDo I understand correctly that the patch removes \"regular\" WAL buffers\nand instead writes the data into the non-volatile PMEM buffer, without\nwriting that to the WAL segments at all (unless in archiving mode)?\n\nFirstly, I guess many (most?) instances will have to write the WAL\nsegments anyway because of PITR/backups, so I'm not sure we can save\nmuch here.\n\nBut more importantly - doesn't that mean the nvwal_size value is\nessentially a hard limit? With max_wal_size, it's a soft limit i.e.\nwe're allowed to temporarily use more WAL when needed. But with a\npre-allocated file, that's clearly not possible. So what would happen in\nthose cases?\n\nAlso, is it possible to change nvwal_size? I haven't tried, but I wonder\nwhat happens with the current contents of the file.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 Nov 2020 03:01:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nOn 11/23/20 3:01 AM, Tomas Vondra wrote:\n> Hi,\n> \n> On 10/30/20 6:57 AM, Takashi Menjo wrote:\n>> Hi Heikki,\n>>\n>>> I had a new look at this thread today, trying to figure out where \n>>> we are.\n>>\n>> I'm a bit confused.\n>>>\n>>> One thing we have established: mmap()ing WAL files performs worse \n>>> than the current method, if pg_wal is not on a persistent memory \n>>> device. This is because the kernel faults in existing content of \n>>> each page, even though we're overwriting everything.\n>>\n>> Yes. In addition, after a certain page (in the sense of OS page) is \n>> msync()ed, another page fault will occur again when something is \n>> stored into that page.\n>>\n>>> That's unfortunate. I was hoping that mmap() would be a good option\n>>> even without persistent memory hardware. I wish we could tell the\n>>> kernel to zero the pages instead of reading them from the file.\n>>> Maybe clear the file with ftruncate() before mmapping it?\n>>\n>> The area extended by ftruncate() appears as if it were zero-filled \n>> [1]. Please note that it merely \"appears as if.\" It might not be \n>> actually zero-filled as data blocks on devices, so pre-allocating \n>> files should improve transaction performance. At least, on Linux 5.7\n>> and ext4, it takes more time to store into the mapped file just \n>> open(O_CREAT)ed and ftruncate()d than into the one filled already and\n>> actually.\n>>\n> \n> Does is really matter that it only appears zero-filled? I think Heikki's\n> point was that maybe ftruncate() would prevent the kernel from faulting\n> the existing page content when we're overwriting it.\n> \n> Not sure I understand what the benchmark with ext4 was doing, exactly.\n> How was that measured? 
Might be interesting to have some simple\n> benchmarking tool to demonstrate this (I believe a small standalone tool\n> written in C should do the trick).\n> \n\nOne more thought about this - if ftruncate() is not enough to convince\nthe mmap() to not load existing data from the file, what about not\nreusing the WAL segments at all? I haven't tried, though.\n\n>>> That should not be problem with a real persistent memory device, \n>>> however (or when emulating it with DRAM). With DAX, the storage is \n>>> memory-mapped directly and there is no page cache, and no \n>>> pre-faulting.\n>>\n>> Yes, with filesystem DAX, there is no page cache for file data. A \n>> page fault still occurs but for each 2MiB DAX hugepage, so its \n>> overhead decreases compared with 4KiB page fault. Such a DAX\n>> hugepage fault is only applied to DAX-mapped files and is different\n>> from a general transparent hugepage fault.\n>>\n> \n> I don't follow - if there are page faults even when overwriting all the\n> data, I'd say it's still an issue even with 2MB DAX pages. How big is\n> the difference between 4kB and 2MB pages?\n> \n> Not sure I understand how is this different from general THP fault?\n> \n>>> Because of that, I'm baffled by what the \n>>> v4-0002-Non-volatile-WAL-buffer.patch does. If I understand it \n>>> correctly, it puts the WAL buffers in a separate file, which is \n>>> stored on the NVRAM. Why? I realize that this is just a Proof of \n>>> Concept, but I'm very much not interested in anything that requires\n>>> the DBA to manage a second WAL location. Did you test the mmap()\n>>> patches with persistent memory hardware? Did you compare that with\n>>> the pmem patchset, on the same hardware? 
If there's a meaningful\n>>> performance difference between the two, what's causing it?\n> \n>> Yes, this patchset puts the WAL buffers into the file specified by \n>> \"nvwal_path\" in postgresql.conf.\n>>\n>> Why this patchset puts the buffers into the separated file, not \n>> existing segment files in PGDATA/pg_wal, is because it reduces the \n>> overhead due to system calls such as open(), mmap(), munmap(), and \n>> close(). It open()s and mmap()s the file \"nvwal_path\" once, and keeps\n>> that file mapped while running. On the other hand, as for the \n>> patchset mmap()ing the segment files, a backend process should \n>> munmap() and close() the current mapped file and open() and mmap() \n>> the new one for each time the inserting location for that process \n>> goes over segments. This causes the performance difference between \n>> the two.\n>>\n> \n> I kinda agree with Heikki here - having to manage yet another location\n> for WAL data is rather inconvenient. We should aim not to make the life\n> of DBAs unnecessarily difficult, IMO.\n> \n> I wonder how significant the syscall overhead is - can you show share\n> some numbers? I don't see any such results in this thread, so I'm not\n> sure if it means losing 1% or 10% throughput.\n> \n> Also, maybe there are alternative ways to reduce the overhead? For\n> example, we can increase the size of the WAL segment, and with 1GB\n> segments we'd do 1/64 of syscalls. Or maybe we could do some of this\n> asynchronously - request a segment ahead, and let another process do the\n> actual work etc. so that the running process does not wait.\n> \n> \n> Do I understand correctly that the patch removes \"regular\" WAL buffers\n> and instead writes the data into the non-volatile PMEM buffer, without\n> writing that to the WAL segments at all (unless in archiving mode)?\n> \n> Firstly, I guess many (most?) 
instances will have to write the WAL\n> segments anyway because of PITR/backups, so I'm not sure we can save\n> much here.\n> \n> But more importantly - doesn't that mean the nvwal_size value is\n> essentially a hard limit? With max_wal_size, it's a soft limit i.e.\n> we're allowed to temporarily use more WAL when needed. But with a\n> pre-allocated file, that's clearly not possible. So what would happen in\n> those cases?\n> \n> Also, is it possible to change nvwal_size? I haven't tried, but I wonder\n> what happens with the current contents of the file.\n> \n\nI've been thinking about the current design (which essentially places\nthe WAL buffers on PMEM) a bit more. I wonder whether that's actually\nthe right design ...\n\nThe way I understand the current design is that we're essentially\nswitching from this architecture:\n\n clients -> wal buffers (DRAM) -> wal segments (storage)\n\nto this\n\n clients -> wal buffers (PMEM)\n\n(Assuming there we don't have to write segments because of archiving.)\n\nThe first thing to consider is that PMEM is actually somewhat slower\nthan DRAM, the difference is roughly 100ns vs. 300ns (see [1] and [2]).\n From this POV it's a bit strange that we're moving the WAL buffer to a\nslower medium.\n\nOf course, PMEM is significantly faster than other storage types (e.g.\norder of magnitude faster than flash) and we're eliminating the need to\nwrite the WAL from PMEM in some cases, and that may help.\n\nThe second thing I notice is that PMEM does not seem to handle many\nclients particularly well - if you look at Figure 2 in [2], you'll see\nthat there's a clear drop-off in write bandwidth after only a few\nclients. For DRAM there's no such issue. (The total PMEM bandwidth seems\nmuch worse than for DRAM too.)\n\nSo I wonder if using PMEM for the WAL buffer is the right way forward.\nAFAIK the WAL buffer is quite concurrent (multiple clients writing\ndata), which seems to contradict the PMEM vs. 
DRAM trade-offs.\n\nThe design I've originally expected would look more like this\n\n clients -> wal buffers (DRAM) -> wal segments (PMEM DAX)\n\ni.e. mostly what we have now, but instead of writing the WAL segments\n\"the usual way\" we'd write them using mmap/memcpy, without fsync.\n\nI suppose that's what Heikki meant too, but I'm not sure.\n\n\nregards\n\n\n[1] https://pmem.io/2019/12/19/performance.html\n[2] https://arxiv.org/pdf/1904.01614.pdf\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 Nov 2020 15:41:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> So I wonder if using PMEM for the WAL buffer is the right way forward.\r\n> AFAIK the WAL buffer is quite concurrent (multiple clients writing\r\n> data), which seems to contradict the PMEM vs. DRAM trade-offs.\r\n> \r\n> The design I've originally expected would look more like this\r\n> \r\n> clients -> wal buffers (DRAM) -> wal segments (PMEM DAX)\r\n> \r\n> i.e. mostly what we have now, but instead of writing the WAL segments\r\n> \"the usual way\" we'd write them using mmap/memcpy, without fsync.\r\n> \r\n> I suppose that's what Heikki meant too, but I'm not sure.\r\n\r\nSQL Server probably does so. Please see the following page and the links in \"Next steps\" section. I'm saying \"probably\" because the document doesn't clearly state whether SQL Server memcpys data from DRAM log cache to non-volatile log cache only for transaction commits or for all log cache writes. I presume the former.\r\n\r\n\r\nAdd persisted log buffer to a database\r\nhttps://docs.microsoft.com/en-us/sql/relational-databases/databases/add-persisted-log-buffer?view=sql-server-ver15\r\n--------------------------------------------------\r\nWith non-volatile, tail of the log storage the pattern is\r\n\r\nmemcpy to LC\r\nmemcpy to NV LC\r\nSet status\r\nReturn control to caller (commit is now valid)\r\n...\r\n\r\nWith this new functionality, we use a region of memory which is mapped to a file on a DAX volume to hold that buffer. Since the memory hosted by the DAX volume is already persistent, we have no need to perform a separate flush, and can immediately continue with processing the next operation. Data is flushed from this buffer to more traditional storage in the background.\r\n--------------------------------------------------\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 24 Nov 2020 06:34:09 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "\n\nOn 11/24/20 7:34 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> So I wonder if using PMEM for the WAL buffer is the right way forward.\n>> AFAIK the WAL buffer is quite concurrent (multiple clients writing\n>> data), which seems to contradict the PMEM vs. DRAM trade-offs.\n>>\n>> The design I've originally expected would look more like this\n>>\n>> clients -> wal buffers (DRAM) -> wal segments (PMEM DAX)\n>>\n>> i.e. mostly what we have now, but instead of writing the WAL segments\n>> \"the usual way\" we'd write them using mmap/memcpy, without fsync.\n>>\n>> I suppose that's what Heikki meant too, but I'm not sure.\n> \n> SQL Server probably does so. Please see the following page and the links in \"Next steps\" section. I'm saying \"probably\" because the document doesn't clearly state whether SQL Server memcpys data from DRAM log cache to non-volatile log cache only for transaction commits or for all log cache writes. I presume the former.\n> \n> \n> Add persisted log buffer to a database\n> https://docs.microsoft.com/en-us/sql/relational-databases/databases/add-persisted-log-buffer?view=sql-server-ver15\n> --------------------------------------------------\n> With non-volatile, tail of the log storage the pattern is\n> \n> memcpy to LC\n> memcpy to NV LC\n> Set status\n> Return control to caller (commit is now valid)\n> ...\n> \n> With this new functionality, we use a region of memory which is mapped to a file on a DAX volume to hold that buffer. Since the memory hosted by the DAX volume is already persistent, we have no need to perform a separate flush, and can immediately continue with processing the next operation. Data is flushed from this buffer to more traditional storage in the background.\n> --------------------------------------------------\n> \n\nInteresting, thanks for the link. 
If I understand [1] correctly, they\nessentially do this:\n\n   clients -> buffers (DRAM) -> buffers (PMEM) -> wal (storage)\n\nthat is, they insert the PMEM buffer between the LC (in DRAM) and\ntraditional (non-PMEM) storage, so that a commit does not need to do any\nfsyncs etc.\n\nIt seems to imply the memcpy between DRAM and PMEM happens right when\nwriting the WAL, but I guess that's not strictly required - we might\njust as well do that in the background, I think.\n\nIt's interesting that they only place the tail of the log on PMEM, i.e.\nthe PMEM buffer has limited size, and the rest of the log is not on\nPMEM. It's a bit as if we inserted a PMEM buffer between our wal buffers\nand the WAL segments, and kept the WAL segments on regular storage. That\ncould work, but I'd bet they did that because at that time the NV\ndevices were much smaller, and placing the whole log on PMEM was not\nquite possible. So it might be unnecessarily complicated, considering\nthe PMEM device capacity is much higher now.\n\nSo I'd suggest we simply try this:\n\n   clients -> buffers (DRAM) -> wal segments (PMEM)\n\nI plan to do some hacking and maybe hack together some simple tools to\nbenchmark various approaches.\n\n\nregards\n\n[1]\nhttps://docs.microsoft.com/en-us/archive/blogs/bobsql/how-it-works-it-just-runs-faster-non-volatile-memory-sql-server-tail-of-log-caching-on-nvdimm\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 24 Nov 2020 19:26:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> It's interesting that they only place the tail of the log on PMEM, i.e.\r\n> the PMEM buffer has limited size, and the rest of the log is not on\r\n> PMEM. It's a bit as if we inserted a PMEM buffer between our wal buffers\r\n> and the WAL segments, and kept the WAL segments on regular storage. That\r\n> could work, but I'd bet they did that because at that time the NV\r\n> devices were much smaller, and placing the whole log on PMEM was not\r\n> quite possible. So it might be unnecessarily complicated, considering\r\n> the PMEM device capacity is much higher now.\r\n> \r\n> So I'd suggest we simply try this:\r\n> \r\n> clients -> buffers (DRAM) -> wal segments (PMEM)\r\n> \r\n> I plan to do some hacking and maybe hack together some simple tools to\r\n> benchmarks various approaches.\r\n\r\nI'm in favor of your approach. Yes, Intel PMEM were available in 128/256/512 GB when I checked last year. That's more than enough to place all WAL segments, so a small PMEM wal buffer is not necessary. I'm excited to see Postgres gain more power.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 25 Nov 2020 00:27:20 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 5:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> I'm not entirely sure whether the \"pmemdax\" (i.e. unpatched instance\n> with WAL on PMEM DAX device) is actually safe, but I included it anyway\n> to see what difference is.\n\n\nI am curious to learn more on this aspect. Kernels have provided support\nfor \"pmemdax\" mode so what part is unsafe in stack.\n\nReading the numbers it seems only at smaller scale modified PostgreSQL is\ngiving enhanced benefit over unmodified PostgreSQL with \"pmemdax\". For most\nof other cases the numbers are pretty close between these two setups, so\ncurious to learn, why even modify PostgreSQL if unmodified PostgreSQL can\nprovide similar benefit with just DAX mode.\n\nOn Sun, Nov 22, 2020 at 5:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:I'm not entirely sure whether the \"pmemdax\" (i.e. unpatched instance\nwith WAL on PMEM DAX device) is actually safe, but I included it anyway\nto see what difference is.I am curious to learn more on this aspect. Kernels have provided support for \"pmemdax\" mode so what part is unsafe in stack.Reading the numbers it seems only at smaller scale modified PostgreSQL is giving enhanced benefit over unmodified PostgreSQL with \"pmemdax\". For most of other cases the numbers are pretty close between these two setups, so curious to learn, why even modify PostgreSQL if unmodified PostgreSQL can provide similar benefit with just DAX mode.",
"msg_date": "Tue, 24 Nov 2020 17:10:16 -0800",
"msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On 11/25/20 1:27 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> It's interesting that they only place the tail of the log on PMEM,\n>> i.e. the PMEM buffer has limited size, and the rest of the log is\n>> not on PMEM. It's a bit as if we inserted a PMEM buffer between our\n>> wal buffers and the WAL segments, and kept the WAL segments on\n>> regular storage. That could work, but I'd bet they did that because\n>> at that time the NV devices were much smaller, and placing the\n>> whole log on PMEM was not quite possible. So it might be\n>> unnecessarily complicated, considering the PMEM device capacity is\n>> much higher now.\n>> \n>> So I'd suggest we simply try this:\n>> \n>> clients -> buffers (DRAM) -> wal segments (PMEM)\n>> \n>> I plan to do some hacking and maybe hack together some simple tools\n>> to benchmarks various approaches.\n> \n> I'm in favor of your approach. Yes, Intel PMEM were available in\n> 128/256/512 GB when I checked last year. That's more than enough to\n> place all WAL segments, so a small PMEM wal buffer is not necessary.\n> I'm excited to see Postgres gain more power.\n>\n\nCool. FWIW I'm not 100% sure it's the right approach, but I think it's\nworth testing. In the worst case we'll discover that this architecture\ndoes not allow fully leveraging PMEM benefits, or maybe it won't work\nfor some other reason and the approach proposed here will work better.\nLet's play a bit and we'll see.\n\nI have hacked a very simple patch doing this (essentially replacing\nopen/write/close calls in xlog.c with pmem calls). It's a bit rough but\nseems good enough for testing/experimenting. I'll polish it a bit, do\nsome benchmarks, and share some numbers in a day or two.\n\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 25 Nov 2020 02:44:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On 11/25/20 2:10 AM, Ashwin Agrawal wrote:\n> On Sun, Nov 22, 2020 at 5:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\n> wrote:\n> \n>> I'm not entirely sure whether the \"pmemdax\" (i.e. unpatched instance\n>> with WAL on PMEM DAX device) is actually safe, but I included it anyway\n>> to see what difference is.\n> > I am curious to learn more on this aspect. Kernels have provided support\n> for \"pmemdax\" mode so what part is unsafe in stack.\n> \n\nI do admit I'm not 100% certain about this, so I err on the side of\ncaution. While discussing this with Steve Shaw, he suggested that\napplications may get broken because DAX devices don't behave like block\ndevices in some respects (atomicity, addressability, ...).\n\n> Reading the numbers it seems only at smaller scale modified PostgreSQL is\n> giving enhanced benefit over unmodified PostgreSQL with \"pmemdax\". For most\n> of other cases the numbers are pretty close between these two setups, so\n> curious to learn, why even modify PostgreSQL if unmodified PostgreSQL can\n> provide similar benefit with just DAX mode.\n> \n\nThat's a valid questions, but I wouldn't say the ~20% difference on the\nmedium scale is negligible. And it's possible that for the larger scales\nthe primary bottleneck is the storage used for data directory, not WAL\n(notice that nvme is missing for the large scale).\n\nOf course, it's faster than flash storage but the PMEM costs more too,\nand when you pay $$$ for hardware you probably want to get as much\nbenefit from it as possible.\n\n\n[1]\nhttps://ark.intel.com/content/www/us/en/ark/products/203879/intel-optane-persistent-memory-200-series-128gb-pmem-module.html\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 25 Nov 2020 03:19:23 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nHere's the \"simple patch\" that I'm currently experimenting with. It\nessentially replaces open/close/write/fsync with pmem calls\n(map/unmap/memcpy/persist variants), and it's by no means committable.\nBut it works well enough for experiments / measurements, etc.\n\nThe numbers (5-minute pgbench runs on scale 500) look like this:\n\n master/btt master/dax ntt simple\n -----------------------------------------------------------\n 1 5469 7402 7977 6746\n 16 48222 80869 107025 82343\n 32 73974 158189 214718 158348\n 64 85921 154540 225715 164248\n 96 150602 221159 237008 217253\n\nA chart illustrating these results is attached. The four columns are\nshowing unpatched master with WAL on a pmem device, in BTT or DAX modes,\n\"ntt\" is the patch submitted to this thread, and \"simple\" is the patch\nI've hacked together.\n\nAs expected, the BTT case performs poorly (compared to the rest).\n\nThe \"master/dax\" and \"simple\" perform about the same. There are some\ndifferences, but those may be attributed to noise. The NTT patch does\noutperform these cases by ~20-40% in some cases.\n\nThe question is why. I recall suggestions this is due to page faults\nwhen writing data into the WAL, but I did experiment with various\nsettings that I think should prevent that (e.g. 
disabling WAL reuse\nand/or disabling zeroing the segments) but that made no measurable\ndifference.\n\nSo I've added some primitive instrumentation to the code, counting the\ncalls and measuring duration for each of the PMEM operations, and\nprinting the stats regularly into log (after ~1M ops).\n\nTypical results from a run with a single client look like this (slightly\nformatted/wrapped for e-mail):\n\n PMEM STATS\n COUNT total 1000000 map 30 unmap 20\n memcpy 510210 persist 489740\n TIME total 0 map 931080 unmap 188750\n memcpy 4938866752 persist 187846686\n LENGTH memcpy 4337647616 persist 329824672\n\nThis shows that a majority of the 1M calls is memcpy/persist, the rest\nis mostly negligible - both in terms of number of calls and duration.\nThe time values are in nanoseconds, BTW.\n\nSo for example we did 30 map_file calls, taking ~0.9ms in total, and the\nunmap calls took even less time. So the direct impact of map/unmap calls\nis rather negligible, I think.\n\nThe dominant part is clearly the memcpy (~5s) and persist (~2s). It's\nnot much per call, but overall it costs much more than the map and\nunmap calls.\n\nFinally, let's look at the LENGTH, which is a sum of the ranges either\ncopied to PMEM (memcpy) or fsynced (persist). Those are in bytes, and\nthe memcpy value is way higher than the persist one. In this particular\ncase, it's something like 4.3MB vs. 300kB, so an order of magnitude.\n\nIt's entirely possible this is a bug/measurement error in the patch. I'm\nnot all that familiar with the XLOG stuff, so maybe I made some silly\nmistake somewhere.\n\nBut I think it might be also explained by the fact that XLogWrite()\nalways writes the WAL in a multiple of 8kB pages. Which is perfectly\nreasonable for regular block-oriented storage, but pmem/dax is exactly\nabout not having to do that - PMEM is byte-addressable. 
And with pgbench,\nthe individual WAL records are tiny, so having to instead write/flush\nthe whole 8kB page (or more of them) repeatedly, as we append the WAL\nrecords, seems a bit wasteful. So I wonder if this is why the trivial\npatch does not show any benefits.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 26 Nov 2020 20:27:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On 26/11/2020 21:27, Tomas Vondra wrote:\n> Hi,\n> \n> Here's the \"simple patch\" that I'm currently experimenting with. It\n> essentially replaces open/close/write/fsync with pmem calls\n> (map/unmap/memcpy/persist variants), and it's by no means committable.\n> But it works well enough for experiments / measurements, etc.\n> \n> The numbers (5-minute pgbench runs on scale 500) look like this:\n> \n> master/btt master/dax ntt simple\n> -----------------------------------------------------------\n> 1 5469 7402 7977 6746\n> 16 48222 80869 107025 82343\n> 32 73974 158189 214718 158348\n> 64 85921 154540 225715 164248\n> 96 150602 221159 237008 217253\n> \n> A chart illustrating these results is attached. The four columns are\n> showing unpatched master with WAL on a pmem device, in BTT or DAX modes,\n> \"ntt\" is the patch submitted to this thread, and \"simple\" is the patch\n> I've hacked together.\n> \n> As expected, the BTT case performs poorly (compared to the rest).\n> \n> The \"master/dax\" and \"simple\" perform about the same. There are some\n> differences, but those may be attributed to noise. The NTT patch does\n> outperform these cases by ~20-40% in some cases.\n> \n> The question is why. I recall suggestions this is due to page faults\n> when writing data into the WAL, but I did experiment with various\n> settings that I think should prevent that (e.g. disabling WAL reuse\n> and/or disabling zeroing the segments) but that made no measurable\n> difference.\n\nThe page faults are only a problem when mmap() is used *without* DAX.\n\nTakashi tried a patch earlier to mmap() WAL segments and insert WAL to \nthem directly. See 0002-Use-WAL-segments-as-WAL-buffers.patch at \nhttps://www.postgresql.org/message-id/000001d5dff4%24995ed180%24cc1c7480%24%40hco.ntt.co.jp_1. \nCould you test that patch too, please? 
Using your nomenclature, that \npatch skips wal_buffers and does:\n\n clients -> wal segments (PMEM DAX)\n\nHe got good results with that with DAX, but otherwise it performed \nworse. And then we discussed why that might be, and the page fault \nhypothesis was brought up.\n\nI think 0002-Use-WAL-segments-as-WAL-buffers.patch is the most promising \napproach here. But because it's slower without DAX, we need to keep the \ncurrent code for non-DAX systems. Unfortunately it means that we need to \nmaintain both implementations, selectable with a GUC or some DAX \ndetection magic. The question then is whether the code complexity is \nworth the performance gain on DAX-enabled systems.\n\nAndres was not excited about mmapping the WAL segments because of \nperformance reasons. I'm not sure how much of his critique applies if we \nkeep supporting both methods and only use mmap() if so configured.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 26 Nov 2020 22:59:20 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "\n\nOn 11/26/20 9:59 PM, Heikki Linnakangas wrote:\n> On 26/11/2020 21:27, Tomas Vondra wrote:\n>> Hi,\n>>\n>> Here's the \"simple patch\" that I'm currently experimenting with. It\n>> essentially replaces open/close/write/fsync with pmem calls\n>> (map/unmap/memcpy/persist variants), and it's by no means committable.\n>> But it works well enough for experiments / measurements, etc.\n>>\n>> The numbers (5-minute pgbench runs on scale 500) look like this:\n>>\n>> master/btt master/dax ntt simple\n>> -----------------------------------------------------------\n>> 1 5469 7402 7977 6746\n>> 16 48222 80869 107025 82343\n>> 32 73974 158189 214718 158348\n>> 64 85921 154540 225715 164248\n>> 96 150602 221159 237008 217253\n>>\n>> A chart illustrating these results is attached. The four columns are\n>> showing unpatched master with WAL on a pmem device, in BTT or DAX modes,\n>> \"ntt\" is the patch submitted to this thread, and \"simple\" is the patch\n>> I've hacked together.\n>>\n>> As expected, the BTT case performs poorly (compared to the rest).\n>>\n>> The \"master/dax\" and \"simple\" perform about the same. There are some\n>> differences, but those may be attributed to noise. The NTT patch does\n>> outperform these cases by ~20-40% in some cases.\n>>\n>> The question is why. I recall suggestions this is due to page faults\n>> when writing data into the WAL, but I did experiment with various\n>> settings that I think should prevent that (e.g. disabling WAL reuse\n>> and/or disabling zeroing the segments) but that made no measurable\n>> difference.\n> \n> The page faults are only a problem when mmap() is used *without* DAX.\n> \n> Takashi tried a patch earlier to mmap() WAL segments and insert WAL to\n> them directly. See 0002-Use-WAL-segments-as-WAL-buffers.patch at\n> https://www.postgresql.org/message-id/000001d5dff4%24995ed180%24cc1c7480%24%40hco.ntt.co.jp_1.\n> Could you test that patch too, please? 
Using your nomenclature, that\n> patch skips wal_buffers and does:\n> \n> clients -> wal segments (PMEM DAX)\n> \n> He got good results with that with DAX, but otherwise it performed\n> worse. And then we discussed why that might be, and the page fault\n> hypothesis was brought up.\n> \n\nD'oh, I hadn't noticed there's a patch doing that. This thread has so\nmany different patches - which is good, but a bit confusing.\n\n> I think 0002-Use-WAL-segments-as-WAL-buffers.patch is the most promising\n> approach here. But because it's slower without DAX, we need to keep the\n> current code for non-DAX systems. Unfortunately it means that we need to\n> maintain both implementations, selectable with a GUC or some DAX\n> detection magic. The question then is whether the code complexity is\n> worth the performance gain on DAX-enabled systems.\n>\n\nSure, I can give it a spin. The question is whether it applies to\ncurrent master, or whether some sort of rebase is needed. I'll try.\n\n> Andres was not excited about mmapping the WAL segments because of\n> performance reasons. I'm not sure how much of his critique applies if we\n> keep supporting both methods and only use mmap() if so configured.\n>\n\nYeah. I don't think we can just discard the current approach, there are\nfar too many OS variants that even if Linux is happy one of the other\ncritters won't be.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 26 Nov 2020 22:19:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "\n\nOn 11/26/20 10:19 PM, Tomas Vondra wrote:\n> \n> \n> On 11/26/20 9:59 PM, Heikki Linnakangas wrote:\n>> On 26/11/2020 21:27, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> Here's the \"simple patch\" that I'm currently experimenting with. It\n>>> essentially replaces open/close/write/fsync with pmem calls\n>>> (map/unmap/memcpy/persist variants), and it's by no means committable.\n>>> But it works well enough for experiments / measurements, etc.\n>>>\n>>> The numbers (5-minute pgbench runs on scale 500) look like this:\n>>>\n>>> master/btt master/dax ntt simple\n>>> -----------------------------------------------------------\n>>> 1 5469 7402 7977 6746\n>>> 16 48222 80869 107025 82343\n>>> 32 73974 158189 214718 158348\n>>> 64 85921 154540 225715 164248\n>>> 96 150602 221159 237008 217253\n>>>\n>>> A chart illustrating these results is attached. The four columns are\n>>> showing unpatched master with WAL on a pmem device, in BTT or DAX modes,\n>>> \"ntt\" is the patch submitted to this thread, and \"simple\" is the patch\n>>> I've hacked together.\n>>>\n>>> As expected, the BTT case performs poorly (compared to the rest).\n>>>\n>>> The \"master/dax\" and \"simple\" perform about the same. There are some\n>>> differences, but those may be attributed to noise. The NTT patch does\n>>> outperform these cases by ~20-40% in some cases.\n>>>\n>>> The question is why. I recall suggestions this is due to page faults\n>>> when writing data into the WAL, but I did experiment with various\n>>> settings that I think should prevent that (e.g. disabling WAL reuse\n>>> and/or disabling zeroing the segments) but that made no measurable\n>>> difference.\n>>\n>> The page faults are only a problem when mmap() is used *without* DAX.\n>>\n>> Takashi tried a patch earlier to mmap() WAL segments and insert WAL to\n>> them directly. 
See 0002-Use-WAL-segments-as-WAL-buffers.patch at\n>> https://www.postgresql.org/message-id/000001d5dff4%24995ed180%24cc1c7480%24%40hco.ntt.co.jp_1.\n>> Could you test that patch too, please? Using your nomenclature, that\n>> patch skips wal_buffers and does:\n>>\n>> clients -> wal segments (PMEM DAX)\n>>\n>> He got good results with that with DAX, but otherwise it performed\n>> worse. And then we discussed why that might be, and the page fault\n>> hypothesis was brought up.\n>>\n> \n> D'oh, I haven't noticed there's a patch doing that. This thread has so\n> many different patches - which is good, but a bit confusing.\n> \n>> I think 0002-Use-WAL-segments-as-WAL-buffers.patch is the most promising\n>> approach here. But because it's slower without DAX, we need to keep the\n>> current code for non-DAX systems. Unfortunately it means that we need to\n>> maintain both implementations, selectable with a GUC or some DAX\n>> detection magic. The question then is whether the code complexity is\n>> worth the performance gin on DAX-enabled systems.\n>>\n> \n> Sure, I can give it a spin. The question is whether it applies to\n> current master, or whether some sort of rebase is needed. 
I'll try.\n> \n\nUnfortunately, that patch seems to fail for me :-(\n\nThe patches seem to be for PG12, so I applied them on REL_12_STABLE (all\nthe parts 0001-0005) and then I did this:\n\nLIBS=\"-lpmem\" ./configure --prefix=/home/tomas/pg-12-pmem --enable-debug\nmake -s install\n\ninitdb -X /opt/pmemdax/benchmarks/wal -D /opt/nvme/benchmarks/data\n\npg_ctl -D /opt/nvme/benchmarks/data/ -l pg.log start\n\ncreatedb test\npgbench -i -s 500 test\n\n\nwhich however fails after just about 70k rows generated (PQputline\nfailed), and the pg.log says this:\n\n PANIC: could not open or mmap file\n\"pg_wal/000000010000000000000006\": No such file or directory\n CONTEXT: COPY pgbench_accounts, line 721000\n STATEMENT: copy pgbench_accounts from stdin\n\nTakashi-san, can you check and provide a fixed version? Ideally, I'll\ntake a look too, but I'm not familiar with this patch so it may take\nmore time.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Nov 2020 01:02:58 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On 11/27/20 1:02 AM, Tomas Vondra wrote:\n> \n> Unfortunately, that patch seems to fail for me :-(\n> \n> The patches seem to be for PG12, so I applied them on REL_12_STABLE (all\n> the parts 0001-0005) and then I did this:\n> \n> LIBS=\"-lpmem\" ./configure --prefix=/home/tomas/pg-12-pmem --enable-debug\n> make -s install\n> \n> initdb -X /opt/pmemdax/benchmarks/wal -D /opt/nvme/benchmarks/data\n> \n> pg_ctl -D /opt/nvme/benchmarks/data/ -l pg.log start\n> \n> createdb test\n> pgbench -i -s 500 test\n> \n> \n> which however fails after just about 70k rows generated (PQputline\n> failed), and the pg.log says this:\n> \n> PANIC: could not open or mmap file\n> \"pg_wal/000000010000000000000006\": No such file or directory\n> CONTEXT: COPY pgbench_accounts, line 721000\n> STATEMENT: copy pgbench_accounts from stdin\n> \n> Takashi-san, can you check and provide a fixed version? Ideally, I'll\n> take a look too, but I'm not familiar with this patch so it may take\n> more time.\n> \n\nI did try to get this working today, unsuccessfully. I did manage to\napply the 0002 part separately on REL_12_0 (there's one trivial rejected\nchunk), but I still get the same failure. In fact, when built with\nassertions, I can't even get initdb to pass :-(\n\nI do get this:\n\nTRAP: FailedAssertion(\"!(page->xlp_pageaddr == ptr - (ptr % 8192))\",\nFile: \"xlog.c\", Line: 1813)\n\nThe values involved here are\n\n xlp_pageaddr = 16777216\n ptr = 20971520\n\nso the page seems to be at the very beginning of the second WAL segment,\nbut the pointer is somewhere later. A full backtrace is attached.\n\nI'll continue investigating this, but the xlog code is not particularly\neasy to understand in general, so it may take time.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 28 Nov 2020 02:37:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nI think I've managed to get the 0002 patch [1] rebased to master and \nworking (with help from Masahiko Sawada). It's not clear to me how it \ncould have worked as submitted - my theory is that an incomplete patch \nwas submitted by mistake, or something like that.\n\nUnfortunately, the benchmark results were kinda disappointing. For a \npgbench on scale 500 (fits into shared buffers), an average of three \n5-minute runs looks like this:\n\n branch 1 16 32 64 96\n ----------------------------------------------------------------\n master 7291 87704 165310 150437 224186\n ntt 7912 106095 213206 212410 237819\n simple-no-buffers 7654 96544 115416 95828 103065\n\nNTT refers to the patch from September 10, pre-allocating a large WAL \nfile on PMEM, and simple-no-buffers is the simpler patch simply removing \nthe WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n\nNote: The patch is just replacing the old implementation with mmap. \nThat's good enough for experiments like this, but we probably want to \nkeep the old one for setups without PMEM. But it's good enough for \ntesting, benchmarking etc.\n\nUnfortunately, the results for this simple approach are pretty bad. Not \nonly compared to the \"ntt\" patch, but even to master. I'm not entirely \nsure what's the root cause, but I have a couple hypotheses:\n\n1) bug in the patch - That's clearly a possibility, although I've tried \ntried to eliminate this possibility.\n\n2) PMEM is slower than DRAM - From what I know, PMEM is much faster than \nNVMe storage, but still much slower than DRAM (both in terms of latency \nand bandwidth, see [2] for some data). It's not terrible, but the \nlatency is maybe 2-3x higher - not a huge difference, but may matter for \nWAL buffers?\n\n3) PMEM does not handle parallel writes well - If you look at [2], \nFigure 4(b), you'll see that the throughput actually *drops\" as the \nnumber of threads increase. 
That's pretty strange / annoying, because \nthat's how we write into WAL buffers - each thread writes its own data, \nso parallelism is not something we can get rid of.\n\nI've added some simple profiling, to measure number of calls / time for \neach operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data \nfor each backend, and logs the counts every 1M ops.\n\nTypical stats from a concurrent run look like this:\n\n xlog stats cnt 43000000\n map cnt 100 time 5448333 unmap cnt 100 time 3730963\n memcpy cnt 985964 time 1550442272 len 15150499\n memset cnt 0 time 0 len 0\n persist cnt 13836 time 10369617 len 16292182\n\nThe times are in nanoseconds, so this says the backend did 100 mmap and \nunmap calls, taking ~10ms in total. There were ~14k pmem_persist calls, \ntaking 10ms in total. And the most time (~1.5s) was used by pmem_memcpy \ncopying about 15MB of data. That's quite a lot :-(\n\nMy conclusion from this is that eliminating WAL buffers and writing WAL \ndirectly to PMEM (by memcpy to mmap-ed WAL segments) is probably not the \nright approach.\n\nI suppose we should keep WAL buffers, and then just write the data to \nmmap-ed WAL segments on PMEM. Which I think is what the NTT patch does, \nexcept that it allocates one huge file on PMEM and writes to that \n(instead of the traditional WAL segments).\n\nSo I decided to try how it'd work with writing to regular WAL segments, \nmmap-ed ad hoc. 
The pmem-with-wal-buffers-master.patch patch does that, \nand the results look a bit nicer:\n\n branch 1 16 32 64 96\n ----------------------------------------------------------------\n master 7291 87704 165310 150437 224186\n ntt 7912 106095 213206 212410 237819\n simple-no-buffers 7654 96544 115416 95828 103065\n with-wal-buffers 7477 95454 181702 140167 214715\n\nSo, much better than the version without WAL buffers, somewhat better \nthan master (except for 64/96 clients), but still not as good as NTT.\n\nAt this point I was wondering how could the NTT patch be faster when \nit's doing roughly the same thing. I'm sure there are some differences, \nbut it seemed strange. The main difference seems to be that it only maps \none large file, and only once. OTOH the alternative \"simple\" patch maps \nsegments one by one, in each backend. Per the debug stats the map/unmap \ncalls are fairly cheap, but maybe it interferes with the memcpy somehow.\n\nSo I did an experiment by increasing the size of the WAL segments. I \nchose to try with 512MB and 1024MB, and the results with 1GB look like this:\n\n branch 1 16 32 64 96\n ----------------------------------------------------------------\n master 6635 88524 171106 163387 245307\n ntt 7909 106826 217364 223338 242042\n simple-no-buffers 7871 101575 199403 188074 224716\n with-wal-buffers 7643 101056 206911 223860 261712\n\nSo yeah, there's a clear difference. It changes the values for \"master\" \na bit, but both the \"simple\" patches (with and without WAL buffers) are \nmuch faster. The with-wal-buffers is almost equal to the NTT patch, \nwhich was using a 96GB file. I presume larger WAL segments would get even \ncloser, if we supported them.\n\nI'll continue investigating this, but my conclusion so far seems to be \nthat we can't really replace WAL buffers with PMEM - that seems to \nperform much worse.\n\nThe question is what to do about the segment size. 
Can we reduce the \noverhead of mmap-ing individual segments, so that this works even for \nsmaller WAL segments, to make this useful for common instances (not \neveryone wants to run with 1GB WAL). Or whether we need to adopt the \ndesign with a large file, mapped just once.\n\nAnother question is whether it's even worth the extra complexity. On \n16MB segments the difference between master and NTT patch seems to be \nnon-trivial, but increasing the WAL segment size kinda reduces that. So \nmaybe just using File I/O on PMEM DAX filesystem seems good enough. \nAlternatively, maybe we could switch to libpmemblk, which should \neliminate the filesystem overhead at least.\n\nI'm also wondering if WAL is the right usage for PMEM. Per [2] there's a \nhuge read-write asymmetry (the writes being way slower), and their \nrecommendation (in \"Observation 3\") is:\n\n The read-write asymmetry of PMem implies the necessity of avoiding\n writes as much as possible for PMem.\n\nSo maybe we should not be trying to use PMEM for WAL, which is pretty \nwrite-heavy (and in most cases even write-only).\n\nI'll continue investigating this, but I'd welcome some feedback and \nthoughts about this.\n\n\nAttached are:\n\n* patches.tgz - all three patches discussed here, rebased to master\n\n* bench.tgz - benchmarking scripts / config files I used\n\n* pmem.pdf - charts illustrating results between the patches, and also \nshowing the impact of the increased WAL segments\n\n\nregards\n\n[1] \nhttps://www.postgresql.org/message-id/000001d5dff4%24995ed180%24cc1c7480%24%40hco.ntt.co.jp_1\n\n[2] https://arxiv.org/pdf/2005.07658.pdf (Lessons learned from the early \nperformance evaluation of Intel Optane DC Persistent Memory in DBMS)\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 6 Jan 2021 18:16:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I think I've managed to get the 0002 patch [1] rebased to master and\n> working (with help from Masahiko Sawada). It's not clear to me how it\n> could have worked as submitted - my theory is that an incomplete patch\n> was submitted by mistake, or something like that.\n>\n> Unfortunately, the benchmark results were kinda disappointing. For a\n> pgbench on scale 500 (fits into shared buffers), an average of three\n> 5-minute runs looks like this:\n>\n> branch 1 16 32 64 96\n> ----------------------------------------------------------------\n> master 7291 87704 165310 150437 224186\n> ntt 7912 106095 213206 212410 237819\n> simple-no-buffers 7654 96544 115416 95828 103065\n>\n> NTT refers to the patch from September 10, pre-allocating a large WAL\n> file on PMEM, and simple-no-buffers is the simpler patch simply removing\n> the WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n>\n> Note: The patch is just replacing the old implementation with mmap.\n> That's good enough for experiments like this, but we probably want to\n> keep the old one for setups without PMEM. But it's good enough for\n> testing, benchmarking etc.\n>\n> Unfortunately, the results for this simple approach are pretty bad. Not\n> only compared to the \"ntt\" patch, but even to master. I'm not entirely\n> sure what's the root cause, but I have a couple hypotheses:\n>\n> 1) bug in the patch - That's clearly a possibility, although I've tried\n> tried to eliminate this possibility.\n>\n> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster than\n> NVMe storage, but still much slower than DRAM (both in terms of latency\n> and bandwidth, see [2] for some data). 
It's not terrible, but the\n> latency is maybe 2-3x higher - not a huge difference, but may matter for\n> WAL buffers?\n>\n> 3) PMEM does not handle parallel writes well - If you look at [2],\n> Figure 4(b), you'll see that the throughput actually *drops\" as the\n> number of threads increase. That's pretty strange / annoying, because\n> that's how we write into WAL buffers - each thread writes it's own data,\n> so parallelism is not something we can get rid of.\n>\n> I've added some simple profiling, to measure number of calls / time for\n> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data\n> for each backend, and logs the counts every 1M ops.\n>\n> Typical stats from a concurrent run looks like this:\n>\n> xlog stats cnt 43000000\n> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n> memcpy cnt 985964 time 1550442272 len 15150499\n> memset cnt 0 time 0 len 0\n> persist cnt 13836 time 10369617 len 16292182\n>\n> The times are in nanoseconds, so this says the backend did 100 mmap and\n> unmap calls, taking ~10ms in total. There were ~14k pmem_persist calls,\n> taking 10ms in total. And the most time (~1.5s) was used by pmem_memcpy\n> copying about 15MB of data. That's quite a lot :-(\n\nIt might also be interesting if we can see how much time spent on each\nlogging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n\n>\n> My conclusion from this is that eliminating WAL buffers and writing WAL\n> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not the\n> right approach.\n>\n> I suppose we should keep WAL buffers, and then just write the data to\n> mmap-ed WAL segments on PMEM. Which I think is what the NTT patch does,\n> except that it allocates one huge file on PMEM and writes to that\n> (instead of the traditional WAL segments).\n>\n> So I decided to try how it'd work with writing to regular WAL segments,\n> mmap-ed ad hoc. 
The pmem-with-wal-buffers-master.patch patch does that,\n> and the results look a bit nicer:\n>\n> branch 1 16 32 64 96\n> ----------------------------------------------------------------\n> master 7291 87704 165310 150437 224186\n> ntt 7912 106095 213206 212410 237819\n> simple-no-buffers 7654 96544 115416 95828 103065\n> with-wal-buffers 7477 95454 181702 140167 214715\n>\n> So, much better than the version without WAL buffers, somewhat better\n> than master (except for 64/96 clients), but still not as good as NTT.\n>\n> At this point I was wondering how the NTT patch could be faster when\n> it's doing roughly the same thing. I'm sure there are some differences,\n> but it seemed strange. The main difference seems to be that it only maps\n> one large file, and only once. OTOH the alternative \"simple\" patch maps\n> segments one by one, in each backend. Per the debug stats the map/unmap\n> calls are fairly cheap, but maybe it interferes with the memcpy somehow.\n>\n\nWhile looking at the two methods, NTT and simple-no-buffer, I realized\nthat in XLogFlush(), the NTT patch flushes (by pmem_flush() and\npmem_drain()) WAL without acquiring WALWriteLock, whereas the\nsimple-no-buffer patch acquires WALWriteLock to do that\n(pmem_persist()). I wonder if this also affected the performance\ndifference between those two methods, since WALWriteLock serializes\nthe operations. With PMEM, can multiple backends concurrently flush\nthe records if the memory regions do not overlap? If so, flushing\nWAL without WALWriteLock would be a big benefit.\n\n> So I did an experiment by increasing the size of the WAL segments. 
I\n> chose to try with 521MB and 1024MB, and the results with 1GB look like this:\n>\n> branch 1 16 32 64 96\n> ----------------------------------------------------------------\n> master 6635 88524 171106 163387 245307\n> ntt 7909 106826 217364 223338 242042\n> simple-no-buffers 7871 101575 199403 188074 224716\n> with-wal-buffers 7643 101056 206911 223860 261712\n>\n> So yeah, there's a clear difference. It changes the values for \"master\"\n> a bit, but both the \"simple\" patches (with and without) WAL buffers are\n> much faster. The with-wal-buffers is almost equal to the NTT patch,\n> which was using 96GB file. I presume larger WAL segments would get even\n> closer, if we supported them.\n>\n> I'll continue investigating this, but my conclusion so far seem to be\n> that we can't really replace WAL buffers with PMEM - that seems to\n> perform much worse.\n>\n> The question is what to do about the segment size. Can we reduce the\n> overhead of mmap-ing individual segments, so that this works even for\n> smaller WAL segments, to make this useful for common instances (not\n> everyone wants to run with 1GB WAL). Or whether we need to adopt the\n> design with a large file, mapped just once.\n>\n> Another question is whether it's even worth the extra complexity. On\n> 16MB segments the difference between master and NTT patch seems to be\n> non-trivial, but increasing the WAL segment size kinda reduces that. So\n> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n> Alternatively, maybe we could switch to libpmemblk, which should\n> eliminate the filesystem overhead at least.\n\nI think the performance improvement by NTT patch with the 16MB WAL\nsegment, the most common WAL segment size, is very good (150437 vs.\n212410 with 64 clients). 
But maybe evaluating writing WAL segment\nfiles on a PMEM DAX filesystem is also worthwhile, as you mentioned, if we\ndon't do that yet.\n\nAlso, I'm interested in why the throughput of the NTT patch saturated at\n32 clients, which is earlier than master's (96 clients). How\nmany CPU cores are there on the machine you used?\n\n> I'm also wondering if WAL is the right usage for PMEM. Per [2] there's a\n> huge read-write asymmetry (the writes being way slower), and their\n> recommendation (in \"Observation 3\") is:\n>\n> The read-write asymmetry of PMem implies the necessity of avoiding\n> writes as much as possible for PMem.\n>\n> So maybe we should not be trying to use PMEM for WAL, which is pretty\n> write-heavy (and in most cases even write-only).\n\nI think using PMEM for WAL is cost-effective, but it leverages only the\nlow-latency (sequential) write, not other abilities such as\nfine-grained access and low-latency random write. If we want to\nexploit its full ability we might need some drastic changes to the logging\nprotocol while considering storing data on PMEM.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 21 Jan 2021 11:17:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "\n\nOn 1/21/21 3:17 AM, Masahiko Sawada wrote:\n> On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> I think I've managed to get the 0002 patch [1] rebased to master and\n>> working (with help from Masahiko Sawada). It's not clear to me how it\n>> could have worked as submitted - my theory is that an incomplete patch\n>> was submitted by mistake, or something like that.\n>>\n>> Unfortunately, the benchmark results were kinda disappointing. For a\n>> pgbench on scale 500 (fits into shared buffers), an average of three\n>> 5-minute runs looks like this:\n>>\n>> branch 1 16 32 64 96\n>> ----------------------------------------------------------------\n>> master 7291 87704 165310 150437 224186\n>> ntt 7912 106095 213206 212410 237819\n>> simple-no-buffers 7654 96544 115416 95828 103065\n>>\n>> NTT refers to the patch from September 10, pre-allocating a large WAL\n>> file on PMEM, and simple-no-buffers is the simpler patch simply removing\n>> the WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n>>\n>> Note: The patch is just replacing the old implementation with mmap.\n>> That's good enough for experiments like this, but we probably want to\n>> keep the old one for setups without PMEM. But it's good enough for\n>> testing, benchmarking etc.\n>>\n>> Unfortunately, the results for this simple approach are pretty bad. Not\n>> only compared to the \"ntt\" patch, but even to master. I'm not entirely\n>> sure what's the root cause, but I have a couple hypotheses:\n>>\n>> 1) bug in the patch - That's clearly a possibility, although I've tried\n>> tried to eliminate this possibility.\n>>\n>> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster than\n>> NVMe storage, but still much slower than DRAM (both in terms of latency\n>> and bandwidth, see [2] for some data). 
It's not terrible, but the\n>> latency is maybe 2-3x higher - not a huge difference, but may matter for\n>> WAL buffers?\n>>\n>> 3) PMEM does not handle parallel writes well - If you look at [2],\n>> Figure 4(b), you'll see that the throughput actually *drops* as the\n>> number of threads increases. That's pretty strange / annoying, because\n>> that's how we write into WAL buffers - each thread writes its own data,\n>> so parallelism is not something we can get rid of.\n>>\n>> I've added some simple profiling, to measure number of calls / time for\n>> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data\n>> for each backend, and logs the counts every 1M ops.\n>>\n>> Typical stats from a concurrent run look like this:\n>>\n>> xlog stats cnt 43000000\n>> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n>> memcpy cnt 985964 time 1550442272 len 15150499\n>> memset cnt 0 time 0 len 0\n>> persist cnt 13836 time 10369617 len 16292182\n>>\n>> The times are in nanoseconds, so this says the backend did 100 mmap and\n>> unmap calls, taking ~10ms in total. There were ~14k pmem_persist calls,\n>> taking 10ms in total. And the most time (~1.5s) was used by pmem_memcpy\n>> copying about 15MB of data. That's quite a lot :-(\n> \n> It might also be interesting to see how much time is spent in each\n> logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n>\n\nYeah, we could extend it to that; it's a fairly mechanical thing. But \nmaybe that could be visible in a regular perf profile. Also, I suppose \nmost of the time will be used by the pmem calls, shown in the stats.\n\n>>\n>> My conclusion from this is that eliminating WAL buffers and writing WAL\n>> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not the\n>> right approach.\n>>\n>> I suppose we should keep WAL buffers, and then just write the data to\n>> mmap-ed WAL segments on PMEM. 
Which I think is what the NTT patch does,\n>> except that it allocates one huge file on PMEM and writes to that\n>> (instead of the traditional WAL segments).\n>>\n>> So I decided to try how it'd work with writing to regular WAL segments,\n>> mmap-ed ad hoc. The pmem-with-wal-buffers-master.patch patch does that,\n>> and the results look a bit nicer:\n>>\n>> branch 1 16 32 64 96\n>> ----------------------------------------------------------------\n>> master 7291 87704 165310 150437 224186\n>> ntt 7912 106095 213206 212410 237819\n>> simple-no-buffers 7654 96544 115416 95828 103065\n>> with-wal-buffers 7477 95454 181702 140167 214715\n>>\n>> So, much better than the version without WAL buffers, somewhat better\n>> than master (except for 64/96 clients), but still not as good as NTT.\n>>\n>> At this point I was wondering how could the NTT patch be faster when\n>> it's doing roughly the same thing. I'm sire there are some differences,\n>> but it seemed strange. The main difference seems to be that it only maps\n>> one large file, and only once. OTOH the alternative \"simple\" patch maps\n>> segments one by one, in each backend. Per the debug stats the map/unmap\n>> calls are fairly cheap, but maybe it interferes with the memcpy somehow.\n>>\n> \n> While looking at the two methods: NTT and simple-no-buffer, I realized\n> that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n> pmem_drain()) WAL without acquiring WALWriteLock whereas\n> simple-no-buffer patch acquires WALWriteLock to do that\n> (pmem_persist()). I wonder if this also affected the performance\n> differences between those two methods since WALWriteLock serializes\n> the operations. With PMEM, multiple backends can concurrently flush\n> the records if the memory region is not overlapped? 
If so, flushing\n> WAL without WALWriteLock would be a big benefit.\n> \n\nThat's a very good question - it's quite possible the WALWriteLock is \nnot really needed, because the processes are actually \"writing\" the WAL \ndirectly to PMEM. So it's a bit confusing, because it's only really \nconcerned about making sure it's flushed.\n\nAnd yes, multiple processes certainly can write to PMEM at the same \ntime, in fact it's a requirement to get good throughput, I believe. My \nunderstanding is we need ~8 processes, at least that's what I heard from \npeople with more PMEM experience.\n\nTBH I'm not convinced the code in the \"simple-no-buffer\" patch (coming \nfrom the 0002 patch) is actually correct. Essentially, consider a \nbackend that needs to do a flush, but does not have a segment mapped. So \nit maps it and calls pmem_drain() on it.\n\nBut does that actually flush anything? Does it properly flush changes \ndone by other processes that may not have called pmem_drain() yet? I \nfind this somewhat suspicious and I'd bet all processes that did write \nsomething have to call pmem_drain().\n\n\n>> So I did an experiment by increasing the size of the WAL segments. I\n>> chose to try with 512MB and 1024MB, and the results with 1GB look like this:\n>>\n>> branch 1 16 32 64 96\n>> ----------------------------------------------------------------\n>> master 6635 88524 171106 163387 245307\n>> ntt 7909 106826 217364 223338 242042\n>> simple-no-buffers 7871 101575 199403 188074 224716\n>> with-wal-buffers 7643 101056 206911 223860 261712\n>>\n>> So yeah, there's a clear difference. It changes the values for \"master\"\n>> a bit, but both the \"simple\" patches (with and without) WAL buffers are\n>> much faster. The with-wal-buffers is almost equal to the NTT patch,\n>> which was using a 96GB file. 
I presume larger WAL segments would get even\n>> closer, if we supported them.\n>>\n>> I'll continue investigating this, but my conclusion so far seems to be\n>> that we can't really replace WAL buffers with PMEM - that seems to\n>> perform much worse.\n>>\n>> The question is what to do about the segment size. Can we reduce the\n>> overhead of mmap-ing individual segments, so that this works even for\n>> smaller WAL segments, to make this useful for common instances (not\n>> everyone wants to run with 1GB WAL). Or whether we need to adopt the\n>> design with a large file, mapped just once.\n>>\n>> Another question is whether it's even worth the extra complexity. On\n>> 16MB segments the difference between master and NTT patch seems to be\n>> non-trivial, but increasing the WAL segment size kinda reduces that. So\n>> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n>> Alternatively, maybe we could switch to libpmemblk, which should\n>> eliminate the filesystem overhead at least.\n> \n> I think the performance improvement by the NTT patch with the 16MB WAL\n> segment, the most common WAL segment size, is very good (150437 vs.\n> 212410 with 64 clients). But maybe evaluating writing WAL segment\n> files on a PMEM DAX filesystem is also worthwhile, as you mentioned, if we\n> don't do that yet.\n> \n\nWell, not sure. I think the question is still open whether it's actually \nsafe to run on DAX, which does not have atomic writes of 512B sectors, \nand I think we rely on that e.g. for pg_control. But maybe for WAL that's \nnot an issue.\n\n> Also, I'm interested in why the throughput of the NTT patch saturated at\n> 32 clients, which is earlier than master's (96 clients). 
How\n> many CPU cores are there on the machine you used?\n> \n\n From what I know, this is somewhat expected for PMEM devices, for a \nbunch of reasons:\n\n1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), so \nit takes fewer processes to saturate it.\n\n2) Internally, the PMEM has a 256B buffer for writes, used for combining \netc. With too many processes sending writes, it starts to look more \nrandom, which is harmful for throughput.\n\nWhen combined, this means the performance starts dropping at a certain \nnumber of threads, and the optimal number of threads is rather low \n(something like 5-10). This is very different behavior compared to DRAM.\n\nThere's a nice overview and measurements in this paper:\n\nBuilding blocks for persistent memory / How to get the most out of your \nnew memory?\nAlexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons \nKemper\n\nhttps://link.springer.com/article/10.1007/s00778-020-00622-9\n\n\n>> I'm also wondering if WAL is the right usage for PMEM. Per [2] there's a\n>> huge read-write asymmetry (the writes being way slower), and their\n>> recommendation (in \"Observation 3\") is:\n>>\n>> The read-write asymmetry of PMem implies the necessity of avoiding\n>> writes as much as possible for PMem.\n>>\n>> So maybe we should not be trying to use PMEM for WAL, which is pretty\n>> write-heavy (and in most cases even write-only).\n> \n> I think using PMEM for WAL is cost-effective, but it leverages only the\n> low-latency (sequential) write, not other abilities such as\n> fine-grained access and low-latency random write. If we want to\n> exploit its full ability we might need some drastic changes to the logging\n> protocol while considering storing data on PMEM.\n> \n\nTrue. I think it's worth investigating whether it's sensible to use PMEM for this \npurpose. 
It may turn out that replacing the DRAM WAL buffers with writes \ndirectly to PMEM is not economical, and aggregating data in a DRAM \nbuffer is better :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Jan 2021 03:32:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nLet me share some numbers from a few more tests. I've been experimenting \nwith two optimization ideas - alignment and non-temporal writes.\n\nThe first idea (alignment) is not entirely unique to PMEM - we have a \nbunch of places where we align stuff to a cacheline, and the same thing \ndoes apply to PMEM. The cache lines are 64B, so I've tweaked the WAL \nformat to align records accordingly - the header sizes are a multiple of \n64B, and the space is reserved in 64B chunks. It's a bit crude, but good \nenough for experiments, I think. This means the WAL format would not be \ncompatible, and there's additional overhead (not sure how much).\n\n\nThe second idea is somewhat specific to PMEM - the pmem_memcpy provided \nby libpmem allows specifying flags, determining whether the data should \ngo to CPU cache or not, whether it should be flushed, etc. So far the \ncode was using\n\n pmem_memcpy(..., PMEM_F_MEM_NOFLUSH);\n\nfollowing the idea that caching data in CPU cache and then flushing it \nin larger chunks is more efficient. I heard some recommendations to use \nnon-temporal writes (which should not use CPU cache), so I tested \nswitching to\n\n pmem_memcpy(..., PMEM_F_NON_TEMPORAL);\n\nThe experimental patches doing these things are attached, as usual.\n\nThe results are a bit better than for the preceding patches, but only by \na couple percent. That's a bit disappointing. Attached is a PDF with \ncharts for the three WAL segment sizes as before.\n\n\nIt's possible the patches are introducing some internal bottleneck, so I \nplan to focus on profiling and optimizing them next. I'd welcome \nfeedback with ideas about what might be wrong, of course ;-)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 22 Jan 2021 04:27:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "\n\nOn 22.01.2021 5:32, Tomas Vondra wrote:\n>\n>\n> On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n>> On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> Hi,\n>>>\n>>> I think I've managed to get the 0002 patch [1] rebased to master and\n>>> working (with help from Masahiko Sawada). It's not clear to me how it\n>>> could have worked as submitted - my theory is that an incomplete patch\n>>> was submitted by mistake, or something like that.\n>>>\n>>> Unfortunately, the benchmark results were kinda disappointing. For a\n>>> pgbench on scale 500 (fits into shared buffers), an average of three\n>>> 5-minute runs looks like this:\n>>>\n>>> branch 1 16 32 64 96\n>>> ----------------------------------------------------------------\n>>> master 7291 87704 165310 150437 224186\n>>> ntt 7912 106095 213206 212410 237819\n>>> simple-no-buffers 7654 96544 115416 95828 103065\n>>>\n>>> NTT refers to the patch from September 10, pre-allocating a large WAL\n>>> file on PMEM, and simple-no-buffers is the simpler patch simply \n>>> removing\n>>> the WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n>>>\n>>> Note: The patch is just replacing the old implementation with mmap.\n>>> That's good enough for experiments like this, but we probably want to\n>>> keep the old one for setups without PMEM. But it's good enough for\n>>> testing, benchmarking etc.\n>>>\n>>> Unfortunately, the results for this simple approach are pretty bad. Not\n>>> only compared to the \"ntt\" patch, but even to master. I'm not entirely\n>>> sure what's the root cause, but I have a couple hypotheses:\n>>>\n>>> 1) bug in the patch - That's clearly a possibility, although I've tried\n>>> tried to eliminate this possibility.\n>>>\n>>> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster \n>>> than\n>>> NVMe storage, but still much slower than DRAM (both in terms of latency\n>>> and bandwidth, see [2] for some data). 
It's not terrible, but the\n>>> latency is maybe 2-3x higher - not a huge difference, but may matter \n>>> for\n>>> WAL buffers?\n>>>\n>>> 3) PMEM does not handle parallel writes well - If you look at [2],\n>>> Figure 4(b), you'll see that the throughput actually *drops\" as the\n>>> number of threads increase. That's pretty strange / annoying, because\n>>> that's how we write into WAL buffers - each thread writes it's own \n>>> data,\n>>> so parallelism is not something we can get rid of.\n>>>\n>>> I've added some simple profiling, to measure number of calls / time for\n>>> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data\n>>> for each backend, and logs the counts every 1M ops.\n>>>\n>>> Typical stats from a concurrent run looks like this:\n>>>\n>>> xlog stats cnt 43000000\n>>> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n>>> memcpy cnt 985964 time 1550442272 len 15150499\n>>> memset cnt 0 time 0 len 0\n>>> persist cnt 13836 time 10369617 len 16292182\n>>>\n>>> The times are in nanoseconds, so this says the backend did 100 mmap \n>>> and\n>>> unmap calls, taking ~10ms in total. There were ~14k pmem_persist calls,\n>>> taking 10ms in total. And the most time (~1.5s) was used by pmem_memcpy\n>>> copying about 15MB of data. That's quite a lot :-(\n>>\n>> It might also be interesting if we can see how much time spent on each\n>> logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n>>\n>\n> Yeah, we could extend it to that, that's fairly mechanical thing. Bbut \n> maybe that could be visible in a regular perf profile. Also, I suppose \n> most of the time will be used by the pmem calls, shown in the stats.\n>\n>>>\n>>> My conclusion from this is that eliminating WAL buffers and writing WAL\n>>> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not \n>>> the\n>>> right approach.\n>>>\n>>> I suppose we should keep WAL buffers, and then just write the data to\n>>> mmap-ed WAL segments on PMEM. 
Which I think is what the NTT patch does,\n>>> except that it allocates one huge file on PMEM and writes to that\n>>> (instead of the traditional WAL segments).\n>>>\n>>> So I decided to try how it'd work with writing to regular WAL segments,\n>>> mmap-ed ad hoc. The pmem-with-wal-buffers-master.patch patch does that,\n>>> and the results look a bit nicer:\n>>>\n>>> branch 1 16 32 64 96\n>>> ----------------------------------------------------------------\n>>> master 7291 87704 165310 150437 224186\n>>> ntt 7912 106095 213206 212410 237819\n>>> simple-no-buffers 7654 96544 115416 95828 103065\n>>> with-wal-buffers 7477 95454 181702 140167 214715\n>>>\n>>> So, much better than the version without WAL buffers, somewhat better\n>>> than master (except for 64/96 clients), but still not as good as NTT.\n>>>\n>>> At this point I was wondering how could the NTT patch be faster when\n>>> it's doing roughly the same thing. I'm sire there are some differences,\n>>> but it seemed strange. The main difference seems to be that it only \n>>> maps\n>>> one large file, and only once. OTOH the alternative \"simple\" patch maps\n>>> segments one by one, in each backend. Per the debug stats the map/unmap\n>>> calls are fairly cheap, but maybe it interferes with the memcpy \n>>> somehow.\n>>>\n>>\n>> While looking at the two methods: NTT and simple-no-buffer, I realized\n>> that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n>> pmem_drain()) WAL without acquiring WALWriteLock whereas\n>> simple-no-buffer patch acquires WALWriteLock to do that\n>> (pmem_persist()). I wonder if this also affected the performance\n>> differences between those two methods since WALWriteLock serializes\n>> the operations. With PMEM, multiple backends can concurrently flush\n>> the records if the memory region is not overlapped? 
If so, flushing\n>> WAL without WALWriteLock would be a big benefit.\n>>\n>\n> That's a very good question - it's quite possible the WALWriteLock is \n> not really needed, because the processes are actually \"writing\" the \n> WAL directly to PMEM. So it's a bit confusing, because it's only \n> really concerned about making sure it's flushed.\n>\n> And yes, multiple processes certainly can write to PMEM at the same \n> time, in fact it's a requirement to get good throughput I believe. My \n> understanding is we need ~8 processes, at least that's what I heard \n> from people with more PMEM experience.\n>\n> TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming \n> from the 0002 patch) is actually correct. Essentially, consider the \n> backend needs to do a flush, but does not have a segment mapped. So it \n> maps it and calls pmem_drain() on it.\n>\n> But does that actually flush anything? Does it properly flush changes \n> done by other processes that may not have called pmem_drain() yet? I \n> find this somewhat suspicious and I'd bet all processes that did write \n> something have to call pmem_drain().\n>\n>\n>>> So I did an experiment by increasing the size of the WAL segments. I\n>>> chose to try with 521MB and 1024MB, and the results with 1GB look \n>>> like this:\n>>>\n>>> branch 1 16 32 64 96\n>>> ----------------------------------------------------------------\n>>> master 6635 88524 171106 163387 245307\n>>> ntt 7909 106826 217364 223338 242042\n>>> simple-no-buffers 7871 101575 199403 188074 224716\n>>> with-wal-buffers 7643 101056 206911 223860 261712\n>>>\n>>> So yeah, there's a clear difference. It changes the values for \"master\"\n>>> a bit, but both the \"simple\" patches (with and without) WAL buffers are\n>>> much faster. The with-wal-buffers is almost equal to the NTT patch,\n>>> which was using 96GB file. 
I presume larger WAL segments would get even\n>>> closer, if we supported them.\n>>>\n>>> I'll continue investigating this, but my conclusion so far seem to be\n>>> that we can't really replace WAL buffers with PMEM - that seems to\n>>> perform much worse.\n>>>\n>>> The question is what to do about the segment size. Can we reduce the\n>>> overhead of mmap-ing individual segments, so that this works even for\n>>> smaller WAL segments, to make this useful for common instances (not\n>>> everyone wants to run with 1GB WAL). Or whether we need to adopt the\n>>> design with a large file, mapped just once.\n>>>\n>>> Another question is whether it's even worth the extra complexity. On\n>>> 16MB segments the difference between master and NTT patch seems to be\n>>> non-trivial, but increasing the WAL segment size kinda reduces that. So\n>>> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n>>> Alternatively, maybe we could switch to libpmemblk, which should\n>>> eliminate the filesystem overhead at least.\n>>\n>> I think the performance improvement by NTT patch with the 16MB WAL\n>> segment, the most common WAL segment size, is very good (150437 vs.\n>> 212410 with 64 clients). But maybe evaluating writing WAL segment\n>> files on PMEM DAX filesystem is also worth, as you mentioned, if we\n>> don't do that yet.\n>>\n>\n> Well, not sure. I think the question is still open whether it's \n> actually safe to run on DAX, which does not have atomic writes of 512B \n> sectors, and I think we rely on that e.g. for pg_config. But maybe for \n> WAL that's not an issue.\n>\n>> Also, I'm interested in why the through-put of NTT patch saturated at\n>> 32 clients, which is earlier than the master's one (96 clients). 
How\n>> many CPU cores are there on the machine you used?\n>>\n>\n> From what I know, this is somewhat expected for PMEM devices, for a \n> bunch of reasons:\n>\n> 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), \n> so it takes fewer processes to saturate it.\n>\n> 2) Internally, the PMEM has a 256B buffer for writes, used for \n> combining etc. With too many processes sending writes, it becomes to \n> look more random, which is harmful for throughput.\n>\n> When combined, this means the performance starts dropping at certain \n> number of threads, and the optimal number of threads is rather low \n> (something like 5-10). This is very different behavior compared to DRAM.\n>\n> There's a nice overview and measurements in this paper:\n>\n> Building blocks for persistent memory / How to get the most out of \n> your new memory?\n> Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons \n> Kemper\n>\n> https://link.springer.com/article/10.1007/s00778-020-00622-9\n>\n>\n>>> I'm also wondering if WAL is the right usage for PMEM. Per [2] \n>>> there's a\n>>> huge read-write assymmetry (the writes being way slower), and their\n>>> recommendation (in \"Observation 3\" is)\n>>>\n>>> The read-write asymmetry of PMem im-plies the necessity of \n>>> avoiding\n>>> writes as much as possible for PMem.\n>>>\n>>> So maybe we should not be trying to use PMEM for WAL, which is pretty\n>>> write-heavy (and in most cases even write-only).\n>>\n>> I think using PMEM for WAL is cost-effective but it leverages the only\n>> low-latency (sequential) write, but not other abilities such as\n>> fine-grained access and low-latency random write. If we want to\n>> exploit its all ability we might need some drastic changes to logging\n>> protocol while considering storing data on PMEM.\n>>\n>\n> True. I think investigating whether it's sensible to use PMEM for this \n> purpose. 
It may turn out that replacing the DRAM WAL buffers with \n> writes directly to PMEM is not economical, and aggregating data in a \n> DRAM buffer is better :-(\n>\n>\n> regards\n>\n\nI have heard from several DBMS experts that the appearance of huge and cheap \nnon-volatile memory can revolutionize database system architecture.\nIf the whole database can fit in non-volatile memory, then we do not need \nbuffers, WAL, ...\nBut although multi-terabyte NVM announcements were made by IBM several \nyears ago, I do not know of any successful DBMS prototypes with the new \narchitecture.\nI tried to understand why...\n\nIt was very interesting to me to read this thread, which actually \nstarted in 2016 with the \"Non-volatile Memory Logging\" presentation at PGCon.\nAs far as I understand from Tomas' results, right now using PMEM for WAL \ndoesn't provide a substantial increase in performance.\n\nBut the main advantage of PMEM from my point of view is that it allows \nus to avoid write-ahead logging at all!\nCertainly we need to change our algorithms to make it possible. Speaking \nabout Postgres, we have to rewrite all indexes + heap\nand throw away the buffer manager + WAL.\n\nWhat can be used instead of the standard B-Tree?\nFor example, there is a description of the multiword-CAS approach:\n\n http://justinlevandoski.org/papers/mwcas.pdf\n\nand a BzTree implementation on top of it:\n\n https://www.cc.gatech.edu/~jarulraj/papers/2018.bztree.vldb.pdf\n\nThere is a free BzTree implementation on GitHub:\n\n git@github.com:sfu-dis/bztree.git\n\nI tried to adapt it for Postgres. It was not so easy because:\n1. It was written in modern C++ (-std=c++14)\n2. It supports multithreading, but not multiprocess access\n\nSo I had to patch the code of this library instead of just using it:\n\n git@github.com:postgrespro/bztree.git\n\nI have not yet tested the most interesting case: access to PMEM through PMDK. 
\nAnd I do not have hardware for such tests.\nBut the first results also seem interesting: PMwCAS is a kind of \nlockless algorithm, and it shows much better scaling on a\nNUMA host compared with standard Postgres.\n\nI have done a simple parallel insertion test: multiple clients are \ninserting data with random keys.\nTo make the comparison with vanilla Postgres more honest I used an unlogged table:\n\ncreate unlogged table t(pk int, payload int);\ncreate index on t using bztree(pk);\n\nrandinsert.sql:\ninsert into t (payload,pk) values \n(generate_series(1,1000),random()*1000000000);\n\npgbench -f randinsert.sql -c N -j N -M prepared -n -t 1000 -P 1 postgres\n\nSo each client is inserting one million records.\nThe target system has 160 virtual and 80 real cores with 256GB of RAM.\nResults (TPS) are the following:\n\nN nbtree bztree\n1 540 455\n10 993 2237\n100 1479 5025\n\nSo bztree is more than 3 times faster for 100 clients.\nJust for comparison: the result for inserting into this table without an index is \n10k TPS.\n\nI am then going to try to play with PMEM.\nIf the results are promising, then it is possible to think about \nreimplementing the heap and a WAL-less Postgres!\n\nI am sorry that my post has no direct relation to the topic of this \nthread (Non-volatile WAL buffer).\nIt seems that it is better to use PMEM to eliminate WAL altogether \ninstead of optimizing it.\nCertainly, I realize that WAL plays a very important role in Postgres:\narchiving and replication are based on WAL. So even if we can live \nwithout WAL, it is still not clear whether we really want to live \nwithout it.\n\nOne more idea: using the multiword CAS approach requires us to express changes \nas editing sequences.\nSuch an editing sequence is actually a ready-made WAL record. So implementors of \naccess methods do not have to do\ndouble work: update the data structure in memory and create corresponding \nWAL records. Moreover, PMwCAS operations are atomic:\nwe can replay or revert them in case of fault. 
So there is no need for \nFPW (full page writes), which have a very noticeable impact on WAL size and\ndatabase performance.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 22 Jan 2021 19:04:40 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 11:32 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n> > On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> I think I've managed to get the 0002 patch [1] rebased to master and\n> >> working (with help from Masahiko Sawada). It's not clear to me how it\n> >> could have worked as submitted - my theory is that an incomplete patch\n> >> was submitted by mistake, or something like that.\n> >>\n> >> Unfortunately, the benchmark results were kinda disappointing. For a\n> >> pgbench on scale 500 (fits into shared buffers), an average of three\n> >> 5-minute runs looks like this:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 7291 87704 165310 150437 224186\n> >> ntt 7912 106095 213206 212410 237819\n> >> simple-no-buffers 7654 96544 115416 95828 103065\n> >>\n> >> NTT refers to the patch from September 10, pre-allocating a large WAL\n> >> file on PMEM, and simple-no-buffers is the simpler patch simply removing\n> >> the WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n> >>\n> >> Note: The patch is just replacing the old implementation with mmap.\n> >> That's good enough for experiments like this, but we probably want to\n> >> keep the old one for setups without PMEM. But it's good enough for\n> >> testing, benchmarking etc.\n> >>\n> >> Unfortunately, the results for this simple approach are pretty bad. Not\n> >> only compared to the \"ntt\" patch, but even to master. 
I'm not entirely\n> >> sure what's the root cause, but I have a couple of hypotheses:\n> >>\n> >> 1) bug in the patch - That's clearly a possibility, although I've tried\n> >> to eliminate this possibility.\n> >>\n> >> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster than\n> >> NVMe storage, but still much slower than DRAM (both in terms of latency\n> >> and bandwidth, see [2] for some data). It's not terrible, but the\n> >> latency is maybe 2-3x higher - not a huge difference, but may matter for\n> >> WAL buffers?\n> >>\n> >> 3) PMEM does not handle parallel writes well - If you look at [2],\n> >> Figure 4(b), you'll see that the throughput actually *drops* as the\n> >> number of threads increases. That's pretty strange / annoying, because\n> >> that's how we write into WAL buffers - each thread writes its own data,\n> >> so parallelism is not something we can get rid of.\n> >>\n> >> I've added some simple profiling, to measure number of calls / time for\n> >> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data\n> >> for each backend, and logs the counts every 1M ops.\n> >>\n> >> Typical stats from a concurrent run look like this:\n> >>\n> >> xlog stats cnt 43000000\n> >> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n> >> memcpy cnt 985964 time 1550442272 len 15150499\n> >> memset cnt 0 time 0 len 0\n> >> persist cnt 13836 time 10369617 len 16292182\n> >>\n> >> The times are in nanoseconds, so this says the backend did 100 mmap and\n> >> unmap calls, taking ~10ms in total. There were ~14k pmem_persist calls,\n> >> taking 10ms in total. And the most time (~1.5s) was used by pmem_memcpy\n> >> copying about 15MB of data. That's quite a lot :-(\n> >\n> > It might also be interesting if we can see how much time is spent on each\n> > logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n> >\n>\n> Yeah, we could extend it to that, that's a fairly mechanical thing. 
But\n> maybe that could be visible in a regular perf profile. Also, I suppose\n> most of the time will be used by the pmem calls, shown in the stats.\n>\n> >>\n> >> My conclusion from this is that eliminating WAL buffers and writing WAL\n> >> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not the\n> >> right approach.\n> >>\n> >> I suppose we should keep WAL buffers, and then just write the data to\n> >> mmap-ed WAL segments on PMEM. Which I think is what the NTT patch does,\n> >> except that it allocates one huge file on PMEM and writes to that\n> >> (instead of the traditional WAL segments).\n> >>\n> >> So I decided to try how it'd work with writing to regular WAL segments,\n> >> mmap-ed ad hoc. The pmem-with-wal-buffers-master.patch patch does that,\n> >> and the results look a bit nicer:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 7291 87704 165310 150437 224186\n> >> ntt 7912 106095 213206 212410 237819\n> >> simple-no-buffers 7654 96544 115416 95828 103065\n> >> with-wal-buffers 7477 95454 181702 140167 214715\n> >>\n> >> So, much better than the version without WAL buffers, somewhat better\n> >> than master (except for 64/96 clients), but still not as good as NTT.\n> >>\n> >> At this point I was wondering how the NTT patch could be faster when\n> >> it's doing roughly the same thing. I'm sure there are some differences,\n> >> but it seemed strange. The main difference seems to be that it only maps\n> >> one large file, and only once. 
Per the debug stats the map/unmap\n> >> calls are fairly cheap, but maybe it interferes with the memcpy somehow.\n> >>\n> >\n> > While looking at the two methods: NTT and simple-no-buffer, I realized\n> > that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n> > pmem_drain()) WAL without acquiring WALWriteLock whereas\n> > simple-no-buffer patch acquires WALWriteLock to do that\n> > (pmem_persist()). I wonder if this also affected the performance\n> > differences between those two methods since WALWriteLock serializes\n> > the operations. With PMEM, multiple backends can concurrently flush\n> > the records if the memory region is not overlapped? If so, flushing\n> > WAL without WALWriteLock would be a big benefit.\n> >\n>\n> That's a very good question - it's quite possible the WALWriteLock is\n> not really needed, because the processes are actually \"writing\" the WAL\n> directly to PMEM. So it's a bit confusing, because it's only really\n> concerned about making sure it's flushed.\n>\n> And yes, multiple processes certainly can write to PMEM at the same\n> time, in fact it's a requirement to get good throughput I believe. My\n> understanding is we need ~8 processes, at least that's what I heard from\n> people with more PMEM experience.\n\nThanks, that's good to know.\n\n>\n> TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n> from the 0002 patch) is actually correct. Essentially, consider the\n> backend needs to do a flush, but does not have a segment mapped. So it\n> maps it and calls pmem_drain() on it.\n>\n> But does that actually flush anything? Does it properly flush changes\n> done by other processes that may not have called pmem_drain() yet? 
I\n> find this somewhat suspicious and I'd bet all processes that did write\n> something have to call pmem_drain().\n\nYeah, in terms of experiments at least it's good to find out that the\napproach of mmapping each WAL segment does not perform well.\n\n>\n>\n> >> So I did an experiment by increasing the size of the WAL segments. I\n> >> chose to try with 512MB and 1024MB, and the results with 1GB look like this:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 6635 88524 171106 163387 245307\n> >> ntt 7909 106826 217364 223338 242042\n> >> simple-no-buffers 7871 101575 199403 188074 224716\n> >> with-wal-buffers 7643 101056 206911 223860 261712\n> >>\n> >> So yeah, there's a clear difference. It changes the values for \"master\"\n> >> a bit, but both the \"simple\" patches (with and without WAL buffers) are\n> >> much faster. The with-wal-buffers is almost equal to the NTT patch,\n> >> which was using a 96GB file. I presume larger WAL segments would get even\n> >> closer, if we supported them.\n> >>\n> >> I'll continue investigating this, but my conclusion so far seems to be\n> >> that we can't really replace WAL buffers with PMEM - that seems to\n> >> perform much worse.\n> >>\n> >> The question is what to do about the segment size. Can we reduce the\n> >> overhead of mmap-ing individual segments, so that this works even for\n> >> smaller WAL segments, to make this useful for common instances (not\n> >> everyone wants to run with 1GB WAL)? Or do we need to adopt the\n> >> design with a large file, mapped just once?\n> >>\n> >> Another question is whether it's even worth the extra complexity. 
So\n> >> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n> >> Alternatively, maybe we could switch to libpmemblk, which should\n> >> eliminate the filesystem overhead at least.\n> >\n> > I think the performance improvement by the NTT patch with the 16MB WAL\n> > segment, the most common WAL segment size, is very good (150437 vs.\n> > 212410 with 64 clients). But maybe evaluating writing WAL segment\n> > files on a PMEM DAX filesystem is also worthwhile, as you mentioned, if we\n> > don't do that yet.\n> >\n>\n> Well, not sure. I think the question is still open whether it's actually\n> safe to run on DAX, which does not have atomic writes of 512B sectors,\n> and I think we rely on that e.g. for pg_control. But maybe for WAL that's\n> not an issue.\n\nI think we can use the Block Translation Table (BTT) driver that\nprovides atomic sector updates.\n\n>\n> > Also, I'm interested in why the throughput of the NTT patch saturated at\n> > 32 clients, which is earlier than the master's one (96 clients). How\n> > many CPU cores are there on the machine you used?\n> >\n>\n> From what I know, this is somewhat expected for PMEM devices, for a\n> bunch of reasons:\n>\n> 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), so\n> it takes fewer processes to saturate it.\n>\n> 2) Internally, the PMEM has a 256B buffer for writes, used for combining\n> etc. With too many processes sending writes, they start to look more\n> random, which is harmful for throughput.\n>\n> When combined, this means the performance starts dropping at a certain\n> number of threads, and the optimal number of threads is rather low\n> (something like 5-10). 
This is very different behavior compared to DRAM.\n\nMakes sense.\n\n>\n> There's a nice overview and measurements in this paper:\n>\n> Building blocks for persistent memory / How to get the most out of your\n> new memory?\n> Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons\n> Kemper\n>\n> https://link.springer.com/article/10.1007/s00778-020-00622-9\n\nThank you. I'll read it.\n\n>\n>\n> >> I'm also wondering if WAL is the right usage for PMEM. Per [2] there's a\n> >> huge read-write asymmetry (the writes being way slower), and their\n> >> recommendation (in \"Observation 3\") is:\n> >>\n> >> The read-write asymmetry of PMem implies the necessity of avoiding\n> >> writes as much as possible for PMem.\n> >>\n> >> So maybe we should not be trying to use PMEM for WAL, which is pretty\n> >> write-heavy (and in most cases even write-only).\n> >\n> > I think using PMEM for WAL is cost-effective, but it leverages only the\n> > low-latency (sequential) write, not other abilities such as\n> > fine-grained access and low-latency random write. If we want to\n> > exploit all its abilities we might need some drastic changes to the logging\n> > protocol while considering storing data on PMEM.\n> >\n>\n> True. I think it's worth investigating whether it's sensible to use PMEM for this\n> purpose. It may turn out that replacing the DRAM WAL buffers with writes\n> directly to PMEM is not economical, and aggregating data in a DRAM\n> buffer is better :-(\n\nYes. I think it might be interesting to do an analysis of the\nbottlenecks of the NTT patch by perf etc. If bottlenecks are moved to\nother places by removing WALWriteLock during flush, it's probably a\ngood sign for further performance improvements. 
"msg_date": "Mon, 25 Jan 2021 11:56:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear everyone,\n\nI'm sorry for the late reply. I have rebased my two patchsets onto the latest\nmaster 411ae64. The one patchset prefixed with v4 is for non-volatile WAL\nbuffer; the other prefixed with v3 is for msync.\n\nI will reply to your valuable feedback one by one within days. Please wait\nfor a moment.\n\nBest regards,\nTakashi\n\n\n01/25/2021(Mon) 11:56 Masahiko Sawada <sawada.mshk@gmail.com>:\n\n> On Fri, Jan 22, 2021 at 11:32 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> >\n> >\n> > On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n> > > On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> wrote:\n> > >>\n> > >> Hi,\n> > >>\n> > >> I think I've managed to get the 0002 patch [1] rebased to master and\n> > >> working (with help from Masahiko Sawada). It's not clear to me how it\n> > >> could have worked as submitted - my theory is that an incomplete patch\n> > >> was submitted by mistake, or something like that.\n> > >>\n> > >> Unfortunately, the benchmark results were kinda disappointing. For a\n> > >> pgbench on scale 500 (fits into shared buffers), an average of three\n> > >> 5-minute runs looks like this:\n> > >>\n> > >> branch 1 16 32 64 96\n> > >> ----------------------------------------------------------------\n> > >> master 7291 87704 165310 150437 224186\n> > >> ntt 7912 106095 213206 212410 237819\n> > >> simple-no-buffers 7654 96544 115416 95828 103065\n> > >>\n> > >> NTT refers to the patch from September 10, pre-allocating a large WAL\n> > >> file on PMEM, and simple-no-buffers is the simpler patch simply\n> removing\n> > >> the WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n> > >>\n> > >> Note: The patch is just replacing the old implementation with mmap.\n> > >> That's good enough for experiments like this, but we probably want to\n> > >> keep the old one for setups without PMEM. 
But it's good enough for\n> > >> testing, benchmarking etc.\n> > >>\n> > >> Unfortunately, the results for this simple approach are pretty bad.\n> Not\n> > >> only compared to the \"ntt\" patch, but even to master. I'm not entirely\n> > >> sure what's the root cause, but I have a couple hypotheses:\n> > >>\n> > >> 1) bug in the patch - That's clearly a possibility, although I've\n> tried\n> > >> tried to eliminate this possibility.\n> > >>\n> > >> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster\n> than\n> > >> NVMe storage, but still much slower than DRAM (both in terms of\n> latency\n> > >> and bandwidth, see [2] for some data). It's not terrible, but the\n> > >> latency is maybe 2-3x higher - not a huge difference, but may matter\n> for\n> > >> WAL buffers?\n> > >>\n> > >> 3) PMEM does not handle parallel writes well - If you look at [2],\n> > >> Figure 4(b), you'll see that the throughput actually *drops\" as the\n> > >> number of threads increase. That's pretty strange / annoying, because\n> > >> that's how we write into WAL buffers - each thread writes it's own\n> data,\n> > >> so parallelism is not something we can get rid of.\n> > >>\n> > >> I've added some simple profiling, to measure number of calls / time\n> for\n> > >> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data\n> > >> for each backend, and logs the counts every 1M ops.\n> > >>\n> > >> Typical stats from a concurrent run looks like this:\n> > >>\n> > >> xlog stats cnt 43000000\n> > >> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n> > >> memcpy cnt 985964 time 1550442272 len 15150499\n> > >> memset cnt 0 time 0 len 0\n> > >> persist cnt 13836 time 10369617 len 16292182\n> > >>\n> > >> The times are in nanoseconds, so this says the backend did 100 mmap\n> and\n> > >> unmap calls, taking ~10ms in total. There were ~14k pmem_persist\n> calls,\n> > >> taking 10ms in total. 
And the most time (~1.5s) was used by\n> pmem_memcpy\n> > >> copying about 15MB of data. That's quite a lot :-(\n> > >\n> > > It might also be interesting if we can see how much time spent on each\n> > > logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n> > >\n> >\n> > Yeah, we could extend it to that, that's fairly mechanical thing. Bbut\n> > maybe that could be visible in a regular perf profile. Also, I suppose\n> > most of the time will be used by the pmem calls, shown in the stats.\n> >\n> > >>\n> > >> My conclusion from this is that eliminating WAL buffers and writing\n> WAL\n> > >> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not\n> the\n> > >> right approach.\n> > >>\n> > >> I suppose we should keep WAL buffers, and then just write the data to\n> > >> mmap-ed WAL segments on PMEM. Which I think is what the NTT patch\n> does,\n> > >> except that it allocates one huge file on PMEM and writes to that\n> > >> (instead of the traditional WAL segments).\n> > >>\n> > >> So I decided to try how it'd work with writing to regular WAL\n> segments,\n> > >> mmap-ed ad hoc. The pmem-with-wal-buffers-master.patch patch does\n> that,\n> > >> and the results look a bit nicer:\n> > >>\n> > >> branch 1 16 32 64 96\n> > >> ----------------------------------------------------------------\n> > >> master 7291 87704 165310 150437 224186\n> > >> ntt 7912 106095 213206 212410 237819\n> > >> simple-no-buffers 7654 96544 115416 95828 103065\n> > >> with-wal-buffers 7477 95454 181702 140167 214715\n> > >>\n> > >> So, much better than the version without WAL buffers, somewhat better\n> > >> than master (except for 64/96 clients), but still not as good as NTT.\n> > >>\n> > >> At this point I was wondering how could the NTT patch be faster when\n> > >> it's doing roughly the same thing. I'm sire there are some\n> differences,\n> > >> but it seemed strange. The main difference seems to be that it only\n> maps\n> > >> one large file, and only once. 
OTOH the alternative \"simple\" patch\n> maps\n> > >> segments one by one, in each backend. Per the debug stats the\n> map/unmap\n> > >> calls are fairly cheap, but maybe it interferes with the memcpy\n> somehow.\n> > >>\n> > >\n> > > While looking at the two methods: NTT and simple-no-buffer, I realized\n> > > that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n> > > pmem_drain()) WAL without acquiring WALWriteLock whereas\n> > > simple-no-buffer patch acquires WALWriteLock to do that\n> > > (pmem_persist()). I wonder if this also affected the performance\n> > > differences between those two methods since WALWriteLock serializes\n> > > the operations. With PMEM, multiple backends can concurrently flush\n> > > the records if the memory region is not overlapped? If so, flushing\n> > > WAL without WALWriteLock would be a big benefit.\n> > >\n> >\n> > That's a very good question - it's quite possible the WALWriteLock is\n> > not really needed, because the processes are actually \"writing\" the WAL\n> > directly to PMEM. So it's a bit confusing, because it's only really\n> > concerned about making sure it's flushed.\n> >\n> > And yes, multiple processes certainly can write to PMEM at the same\n> > time, in fact it's a requirement to get good throughput I believe. My\n> > understanding is we need ~8 processes, at least that's what I heard from\n> > people with more PMEM experience.\n>\n> Thanks, that's good to know.\n>\n> >\n> > TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n> > from the 0002 patch) is actually correct. Essentially, consider the\n> > backend needs to do a flush, but does not have a segment mapped. So it\n> > maps it and calls pmem_drain() on it.\n> >\n> > But does that actually flush anything? Does it properly flush changes\n> > done by other processes that may not have called pmem_drain() yet? 
I\n> > find this somewhat suspicious and I'd bet all processes that did write\n> > something have to call pmem_drain().\n>\n> Yeah, in terms of experiments at least it's good to find out that the\n> approach mmapping each WAL segment is not good at performance.\n>\n> >\n> >\n> > >> So I did an experiment by increasing the size of the WAL segments. I\n> > >> chose to try with 521MB and 1024MB, and the results with 1GB look\n> like this:\n> > >>\n> > >> branch 1 16 32 64 96\n> > >> ----------------------------------------------------------------\n> > >> master 6635 88524 171106 163387 245307\n> > >> ntt 7909 106826 217364 223338 242042\n> > >> simple-no-buffers 7871 101575 199403 188074 224716\n> > >> with-wal-buffers 7643 101056 206911 223860 261712\n> > >>\n> > >> So yeah, there's a clear difference. It changes the values for\n> \"master\"\n> > >> a bit, but both the \"simple\" patches (with and without) WAL buffers\n> are\n> > >> much faster. The with-wal-buffers is almost equal to the NTT patch,\n> > >> which was using 96GB file. I presume larger WAL segments would get\n> even\n> > >> closer, if we supported them.\n> > >>\n> > >> I'll continue investigating this, but my conclusion so far seem to be\n> > >> that we can't really replace WAL buffers with PMEM - that seems to\n> > >> perform much worse.\n> > >>\n> > >> The question is what to do about the segment size. Can we reduce the\n> > >> overhead of mmap-ing individual segments, so that this works even for\n> > >> smaller WAL segments, to make this useful for common instances (not\n> > >> everyone wants to run with 1GB WAL). Or whether we need to adopt the\n> > >> design with a large file, mapped just once.\n> > >>\n> > >> Another question is whether it's even worth the extra complexity. 
On\n> > >> 16MB segments the difference between master and NTT patch seems to be\n> > >> non-trivial, but increasing the WAL segment size kinda reduces that.\n> So\n> > >> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n> > >> Alternatively, maybe we could switch to libpmemblk, which should\n> > >> eliminate the filesystem overhead at least.\n> > >\n> > > I think the performance improvement by NTT patch with the 16MB WAL\n> > > segment, the most common WAL segment size, is very good (150437 vs.\n> > > 212410 with 64 clients). But maybe evaluating writing WAL segment\n> > > files on PMEM DAX filesystem is also worth, as you mentioned, if we\n> > > don't do that yet.\n> > >\n> >\n> > Well, not sure. I think the question is still open whether it's actually\n> > safe to run on DAX, which does not have atomic writes of 512B sectors,\n> > and I think we rely on that e.g. for pg_config. But maybe for WAL that's\n> > not an issue.\n>\n> I think we can use the Block Translation Table (BTT) driver that\n> provides atomic sector updates.\n>\n> >\n> > > Also, I'm interested in why the through-put of NTT patch saturated at\n> > > 32 clients, which is earlier than the master's one (96 clients). How\n> > > many CPU cores are there on the machine you used?\n> > >\n> >\n> > From what I know, this is somewhat expected for PMEM devices, for a\n> > bunch of reasons:\n> >\n> > 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), so\n> > it takes fewer processes to saturate it.\n> >\n> > 2) Internally, the PMEM has a 256B buffer for writes, used for combining\n> > etc. With too many processes sending writes, it becomes to look more\n> > random, which is harmful for throughput.\n> >\n> > When combined, this means the performance starts dropping at certain\n> > number of threads, and the optimal number of threads is rather low\n> > (something like 5-10). 
This is very different behavior compared to DRAM.\n>\n> Makes sense.\n>\n> >\n> > There's a nice overview and measurements in this paper:\n> >\n> > Building blocks for persistent memory / How to get the most out of your\n> > new memory?\n> > Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons\n> > Kemper\n> >\n> > https://link.springer.com/article/10.1007/s00778-020-00622-9\n>\n> Thank you. I'll read it.\n>\n> >\n> >\n> > >> I'm also wondering if WAL is the right usage for PMEM. Per [2]\n> there's a\n> > >> huge read-write assymmetry (the writes being way slower), and their\n> > >> recommendation (in \"Observation 3\" is)\n> > >>\n> > >> The read-write asymmetry of PMem im-plies the necessity of\n> avoiding\n> > >> writes as much as possible for PMem.\n> > >>\n> > >> So maybe we should not be trying to use PMEM for WAL, which is pretty\n> > >> write-heavy (and in most cases even write-only).\n> > >\n> > > I think using PMEM for WAL is cost-effective but it leverages the only\n> > > low-latency (sequential) write, but not other abilities such as\n> > > fine-grained access and low-latency random write. If we want to\n> > > exploit its all ability we might need some drastic changes to logging\n> > > protocol while considering storing data on PMEM.\n> > >\n> >\n> > True. I think investigating whether it's sensible to use PMEM for this\n> > purpose. It may turn out that replacing the DRAM WAL buffers with writes\n> > directly to PMEM is not economical, and aggregating data in a DRAM\n> > buffer is better :-(\n>\n> Yes. I think it might be interesting to do an analysis of the\n> bottlenecks of NTT patch by perf etc. If bottlenecks are moved to\n> other places by removing WALWriteLock during flush, it's probably a\n> good sign for further performance improvements. 
IIRC WALWriteLock is\n> one of the main bottlenecks on OLTP workload, although my memory might\n> already be out of date.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\nDear everyone,I'm sorry for the late reply. I rebase my two patchsets onto the latest master 411ae64.The one patchset prefixed with v4 is for non-volatile WAL buffer; the other prefixed with v3 is for msync.I will reply to your thankful feedbacks one by one within days. Please wait for a moment.Best regards,Takashi01/25/2021(Mon) 11:56 Masahiko Sawada <sawada.mshk@gmail.com>:On Fri, Jan 22, 2021 at 11:32 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n> > On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> I think I've managed to get the 0002 patch [1] rebased to master and\n> >> working (with help from Masahiko Sawada). It's not clear to me how it\n> >> could have worked as submitted - my theory is that an incomplete patch\n> >> was submitted by mistake, or something like that.\n> >>\n> >> Unfortunately, the benchmark results were kinda disappointing. 
For a\n> >> pgbench on scale 500 (fits into shared buffers), an average of three\n> >> 5-minute runs looks like this:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 7291 87704 165310 150437 224186\n> >> ntt 7912 106095 213206 212410 237819\n> >> simple-no-buffers 7654 96544 115416 95828 103065\n> >>\n> >> NTT refers to the patch from September 10, pre-allocating a large WAL\n> >> file on PMEM, and simple-no-buffers is the simpler patch simply removing\n> >> the WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n> >>\n> >> Note: The patch is just replacing the old implementation with mmap.\n> >> That's good enough for experiments like this, but we probably want to\n> >> keep the old one for setups without PMEM. But it's good enough for\n> >> testing, benchmarking etc.\n> >>\n> >> Unfortunately, the results for this simple approach are pretty bad. Not\n> >> only compared to the \"ntt\" patch, but even to master. I'm not entirely\n> >> sure what's the root cause, but I have a couple hypotheses:\n> >>\n> >> 1) bug in the patch - That's clearly a possibility, although I've tried\n> >> tried to eliminate this possibility.\n> >>\n> >> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster than\n> >> NVMe storage, but still much slower than DRAM (both in terms of latency\n> >> and bandwidth, see [2] for some data). It's not terrible, but the\n> >> latency is maybe 2-3x higher - not a huge difference, but may matter for\n> >> WAL buffers?\n> >>\n> >> 3) PMEM does not handle parallel writes well - If you look at [2],\n> >> Figure 4(b), you'll see that the throughput actually *drops\" as the\n> >> number of threads increase. 
That's pretty strange / annoying, because\n> >> that's how we write into WAL buffers - each thread writes it's own data,\n> >> so parallelism is not something we can get rid of.\n> >>\n> >> I've added some simple profiling, to measure number of calls / time for\n> >> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data\n> >> for each backend, and logs the counts every 1M ops.\n> >>\n> >> Typical stats from a concurrent run looks like this:\n> >>\n> >> xlog stats cnt 43000000\n> >> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n> >> memcpy cnt 985964 time 1550442272 len 15150499\n> >> memset cnt 0 time 0 len 0\n> >> persist cnt 13836 time 10369617 len 16292182\n> >>\n> >> The times are in nanoseconds, so this says the backend did 100 mmap and\n> >> unmap calls, taking ~10ms in total. There were ~14k pmem_persist calls,\n> >> taking 10ms in total. And the most time (~1.5s) was used by pmem_memcpy\n> >> copying about 15MB of data. That's quite a lot :-(\n> >\n> > It might also be interesting if we can see how much time spent on each\n> > logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n> >\n>\n> Yeah, we could extend it to that, that's fairly mechanical thing. Bbut\n> maybe that could be visible in a regular perf profile. Also, I suppose\n> most of the time will be used by the pmem calls, shown in the stats.\n>\n> >>\n> >> My conclusion from this is that eliminating WAL buffers and writing WAL\n> >> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not the\n> >> right approach.\n> >>\n> >> I suppose we should keep WAL buffers, and then just write the data to\n> >> mmap-ed WAL segments on PMEM. Which I think is what the NTT patch does,\n> >> except that it allocates one huge file on PMEM and writes to that\n> >> (instead of the traditional WAL segments).\n> >>\n> >> So I decided to try how it'd work with writing to regular WAL segments,\n> >> mmap-ed ad hoc. 
The pmem-with-wal-buffers-master.patch patch does that,\n> >> and the results look a bit nicer:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 7291 87704 165310 150437 224186\n> >> ntt 7912 106095 213206 212410 237819\n> >> simple-no-buffers 7654 96544 115416 95828 103065\n> >> with-wal-buffers 7477 95454 181702 140167 214715\n> >>\n> >> So, much better than the version without WAL buffers, somewhat better\n> >> than master (except for 64/96 clients), but still not as good as NTT.\n> >>\n> >> At this point I was wondering how could the NTT patch be faster when\n> >> it's doing roughly the same thing. I'm sire there are some differences,\n> >> but it seemed strange. The main difference seems to be that it only maps\n> >> one large file, and only once. OTOH the alternative \"simple\" patch maps\n> >> segments one by one, in each backend. Per the debug stats the map/unmap\n> >> calls are fairly cheap, but maybe it interferes with the memcpy somehow.\n> >>\n> >\n> > While looking at the two methods: NTT and simple-no-buffer, I realized\n> > that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n> > pmem_drain()) WAL without acquiring WALWriteLock whereas\n> > simple-no-buffer patch acquires WALWriteLock to do that\n> > (pmem_persist()). I wonder if this also affected the performance\n> > differences between those two methods since WALWriteLock serializes\n> > the operations. With PMEM, multiple backends can concurrently flush\n> > the records if the memory region is not overlapped? If so, flushing\n> > WAL without WALWriteLock would be a big benefit.\n> >\n>\n> That's a very good question - it's quite possible the WALWriteLock is\n> not really needed, because the processes are actually \"writing\" the WAL\n> directly to PMEM. 
So it's a bit confusing, because it's only really\n> concerned about making sure it's flushed.\n>\n> And yes, multiple processes certainly can write to PMEM at the same\n> time, in fact it's a requirement to get good throughput I believe. My\n> understanding is we need ~8 processes, at least that's what I heard from\n> people with more PMEM experience.\n\nThanks, that's good to know.\n\n>\n> TBH I'm not convinced the code in the \"simple-no-buffer\" patch (coming\n> from the 0002 patch) is actually correct. Essentially, consider the\n> backend needs to do a flush, but does not have a segment mapped. So it\n> maps it and calls pmem_drain() on it.\n>\n> But does that actually flush anything? Does it properly flush changes\n> done by other processes that may not have called pmem_drain() yet? I\n> find this somewhat suspicious and I'd bet all processes that did write\n> something have to call pmem_drain().\n\nYeah, in terms of experiments at least it's good to find out that the\napproach of mmapping each WAL segment does not perform well.\n\n>\n>\n> >> So I did an experiment by increasing the size of the WAL segments. I\n> >> chose to try with 512MB and 1024MB, and the results with 1GB look like this:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 6635 88524 171106 163387 245307\n> >> ntt 7909 106826 217364 223338 242042\n> >> simple-no-buffers 7871 101575 199403 188074 224716\n> >> with-wal-buffers 7643 101056 206911 223860 261712\n> >>\n> >> So yeah, there's a clear difference. It changes the values for \"master\"\n> >> a bit, but both the \"simple\" patches (with and without WAL buffers) are\n> >> much faster. The with-wal-buffers one is almost equal to the NTT patch,\n> >> which was using a 96GB file. 
I presume larger WAL segments would get even\n> >> closer, if we supported them.\n> >>\n> >> I'll continue investigating this, but my conclusion so far seems to be\n> >> that we can't really replace WAL buffers with PMEM - that seems to\n> >> perform much worse.\n> >>\n> >> The question is what to do about the segment size. Can we reduce the\n> >> overhead of mmap-ing individual segments, so that this works even for\n> >> smaller WAL segments, to make this useful for common instances (not\n> >> everyone wants to run with 1GB WAL)? Or do we need to adopt the\n> >> design with a large file, mapped just once?\n> >>\n> >> Another question is whether it's even worth the extra complexity. On\n> >> 16MB segments the difference between master and the NTT patch seems to be\n> >> non-trivial, but increasing the WAL segment size kinda reduces that. So\n> >> maybe just using file I/O on a PMEM DAX filesystem seems good enough.\n> >> Alternatively, maybe we could switch to libpmemblk, which should\n> >> eliminate the filesystem overhead at least.\n> >\n> > I think the performance improvement by the NTT patch with the 16MB WAL\n> > segment, the most common WAL segment size, is very good (150437 vs.\n> > 212410 with 64 clients). But maybe evaluating writing WAL segment\n> > files on a PMEM DAX filesystem is also worthwhile, as you mentioned, if we\n> > don't do that yet.\n> >\n>\n> Well, not sure. I think the question is still open whether it's actually\n> safe to run on DAX, which does not have atomic writes of 512B sectors,\n> and I think we rely on that e.g. for pg_control. But maybe for WAL that's\n> not an issue.\n\nI think we can use the Block Translation Table (BTT) driver that\nprovides atomic sector updates.\n\n>\n> > Also, I'm interested in why the throughput of the NTT patch saturated at\n> > 32 clients, which is earlier than master's (96 clients). 
How\n> > many CPU cores are there on the machine you used?\n> >\n>\n> From what I know, this is somewhat expected for PMEM devices, for a\n> bunch of reasons:\n>\n> 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), so\n> it takes fewer processes to saturate it.\n>\n> 2) Internally, the PMEM has a 256B buffer for writes, used for combining\n> etc. With too many processes sending writes, the traffic starts to look more\n> random, which is harmful for throughput.\n>\n> When combined, this means the performance starts dropping at a certain\n> number of threads, and the optimal number of threads is rather low\n> (something like 5-10). This is very different behavior compared to DRAM.\n\nMakes sense.\n\n>\n> There's a nice overview and measurements in this paper:\n>\n> Building blocks for persistent memory / How to get the most out of your\n> new memory?\n> Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons\n> Kemper\n>\n> https://link.springer.com/article/10.1007/s00778-020-00622-9\n\nThank you. I'll read it.\n\n>\n>\n> >> I'm also wondering if WAL is the right use case for PMEM. Per [2] there's a\n> >> huge read-write asymmetry (the writes being way slower), and their\n> >> recommendation (in \"Observation 3\") is:\n> >>\n> >> The read-write asymmetry of PMem implies the necessity of avoiding\n> >> writes as much as possible for PMem.\n> >>\n> >> So maybe we should not be trying to use PMEM for WAL, which is pretty\n> >> write-heavy (and in most cases even write-only).\n> >\n> > I think using PMEM for WAL is cost-effective, but it leverages only the\n> > low-latency (sequential) write, not other abilities such as\n> > fine-grained access and low-latency random write. If we want to\n> > exploit all of its abilities we might need some drastic changes to the logging\n> > protocol while considering storing data on PMEM.\n> >\n>\n> True. I think it's worth investigating whether it's sensible to use PMEM for this\n> purpose. 
It may turn out that replacing the DRAM WAL buffers with writes\n> directly to PMEM is not economical, and aggregating data in a DRAM\n> buffer is better :-(\n\nYes. I think it might be interesting to do an analysis of the\nbottlenecks of the NTT patch with perf etc. If the bottlenecks move to\nother places after removing WALWriteLock during flush, it's probably a\ngood sign for further performance improvements. IIRC WALWriteLock is\none of the main bottlenecks on OLTP workloads, although my memory might\nalready be out of date.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n-- Takashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Tue, 26 Jan 2021 17:46:57 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear everyone,\n\nSorry but I forgot to attach my patchsets... Please see the files attached\nto this mail. Please also note that they contain some fixes.\n\nBest regards,\nTakashi\n\n\n2021年1月26日(火) 17:46 Takashi Menjo <takashi.menjo@gmail.com>:\n\n> Dear everyone,\n>\n> I'm sorry for the late reply. I rebase my two patchsets onto the latest\n> master 411ae64.The one patchset prefixed with v4 is for non-volatile WAL\n> buffer; the other prefixed with v3 is for msync.\n>\n> I will reply to your thankful feedbacks one by one within days. Please\n> wait for a moment.\n>\n> Best regards,\n> Takashi\n>\n>\n> 01/25/2021(Mon) 11:56 Masahiko Sawada <sawada.mshk@gmail.com>:\n>\n>> On Fri, Jan 22, 2021 at 11:32 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>> >\n>> >\n>> >\n>> > On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n>> > > On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n>> > > <tomas.vondra@enterprisedb.com> wrote:\n>> > >>\n>> > >> Hi,\n>> > >>\n>> > >> I think I've managed to get the 0002 patch [1] rebased to master and\n>> > >> working (with help from Masahiko Sawada). It's not clear to me how it\n>> > >> could have worked as submitted - my theory is that an incomplete\n>> patch\n>> > >> was submitted by mistake, or something like that.\n>> > >>\n>> > >> Unfortunately, the benchmark results were kinda disappointing. 
For a\n>> > >> pgbench on scale 500 (fits into shared buffers), an average of three\n>> > >> 5-minute runs looks like this:\n>> > >>\n>> > >> branch 1 16 32 64 96\n>> > >> ----------------------------------------------------------------\n>> > >> master 7291 87704 165310 150437 224186\n>> > >> ntt 7912 106095 213206 212410 237819\n>> > >> simple-no-buffers 7654 96544 115416 95828 103065\n>> > >>\n>> > >> NTT refers to the patch from September 10, pre-allocating a large WAL\n>> > >> file on PMEM, and simple-no-buffers is the simpler patch simply\n>> removing\n>> > >> the WAL buffers and writing directly to a mmap-ed WAL segment on\n>> PMEM.\n>> > >>\n>> > >> Note: The patch is just replacing the old implementation with mmap.\n>> > >> That's good enough for experiments like this, but we probably want to\n>> > >> keep the old one for setups without PMEM. But it's good enough for\n>> > >> testing, benchmarking etc.\n>> > >>\n>> > >> Unfortunately, the results for this simple approach are pretty bad.\n>> Not\n>> > >> only compared to the \"ntt\" patch, but even to master. I'm not\n>> entirely\n>> > >> sure what's the root cause, but I have a couple hypotheses:\n>> > >>\n>> > >> 1) bug in the patch - That's clearly a possibility, although I've\n>> tried\n>> > >> tried to eliminate this possibility.\n>> > >>\n>> > >> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster\n>> than\n>> > >> NVMe storage, but still much slower than DRAM (both in terms of\n>> latency\n>> > >> and bandwidth, see [2] for some data). It's not terrible, but the\n>> > >> latency is maybe 2-3x higher - not a huge difference, but may matter\n>> for\n>> > >> WAL buffers?\n>> > >>\n>> > >> 3) PMEM does not handle parallel writes well - If you look at [2],\n>> > >> Figure 4(b), you'll see that the throughput actually *drops\" as the\n>> > >> number of threads increase. 
That's pretty strange / annoying, because\n>> > >> that's how we write into WAL buffers - each thread writes it's own\n>> data,\n>> > >> so parallelism is not something we can get rid of.\n>> > >>\n>> > >> I've added some simple profiling, to measure number of calls / time\n>> for\n>> > >> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates\n>> data\n>> > >> for each backend, and logs the counts every 1M ops.\n>> > >>\n>> > >> Typical stats from a concurrent run looks like this:\n>> > >>\n>> > >> xlog stats cnt 43000000\n>> > >> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n>> > >> memcpy cnt 985964 time 1550442272 len 15150499\n>> > >> memset cnt 0 time 0 len 0\n>> > >> persist cnt 13836 time 10369617 len 16292182\n>> > >>\n>> > >> The times are in nanoseconds, so this says the backend did 100 mmap\n>> and\n>> > >> unmap calls, taking ~10ms in total. There were ~14k pmem_persist\n>> calls,\n>> > >> taking 10ms in total. And the most time (~1.5s) was used by\n>> pmem_memcpy\n>> > >> copying about 15MB of data. That's quite a lot :-(\n>> > >\n>> > > It might also be interesting if we can see how much time spent on each\n>> > > logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n>> > >\n>> >\n>> > Yeah, we could extend it to that, that's fairly mechanical thing. Bbut\n>> > maybe that could be visible in a regular perf profile. Also, I suppose\n>> > most of the time will be used by the pmem calls, shown in the stats.\n>> >\n>> > >>\n>> > >> My conclusion from this is that eliminating WAL buffers and writing\n>> WAL\n>> > >> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not\n>> the\n>> > >> right approach.\n>> > >>\n>> > >> I suppose we should keep WAL buffers, and then just write the data to\n>> > >> mmap-ed WAL segments on PMEM. 
Which I think is what the NTT patch\n>> does,\n>> > >> except that it allocates one huge file on PMEM and writes to that\n>> > >> (instead of the traditional WAL segments).\n>> > >>\n>> > >> So I decided to try how it'd work with writing to regular WAL\n>> segments,\n>> > >> mmap-ed ad hoc. The pmem-with-wal-buffers-master.patch patch does\n>> that,\n>> > >> and the results look a bit nicer:\n>> > >>\n>> > >> branch 1 16 32 64 96\n>> > >> ----------------------------------------------------------------\n>> > >> master 7291 87704 165310 150437 224186\n>> > >> ntt 7912 106095 213206 212410 237819\n>> > >> simple-no-buffers 7654 96544 115416 95828 103065\n>> > >> with-wal-buffers 7477 95454 181702 140167 214715\n>> > >>\n>> > >> So, much better than the version without WAL buffers, somewhat better\n>> > >> than master (except for 64/96 clients), but still not as good as NTT.\n>> > >>\n>> > >> At this point I was wondering how could the NTT patch be faster when\n>> > >> it's doing roughly the same thing. I'm sire there are some\n>> differences,\n>> > >> but it seemed strange. The main difference seems to be that it only\n>> maps\n>> > >> one large file, and only once. OTOH the alternative \"simple\" patch\n>> maps\n>> > >> segments one by one, in each backend. Per the debug stats the\n>> map/unmap\n>> > >> calls are fairly cheap, but maybe it interferes with the memcpy\n>> somehow.\n>> > >>\n>> > >\n>> > > While looking at the two methods: NTT and simple-no-buffer, I realized\n>> > > that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n>> > > pmem_drain()) WAL without acquiring WALWriteLock whereas\n>> > > simple-no-buffer patch acquires WALWriteLock to do that\n>> > > (pmem_persist()). I wonder if this also affected the performance\n>> > > differences between those two methods since WALWriteLock serializes\n>> > > the operations. With PMEM, multiple backends can concurrently flush\n>> > > the records if the memory region is not overlapped? 
If so, flushing\n>> > > WAL without WALWriteLock would be a big benefit.\n>> > >\n>> >\n>> > That's a very good question - it's quite possible the WALWriteLock is\n>> > not really needed, because the processes are actually \"writing\" the WAL\n>> > directly to PMEM. So it's a bit confusing, because it's only really\n>> > concerned about making sure it's flushed.\n>> >\n>> > And yes, multiple processes certainly can write to PMEM at the same\n>> > time, in fact it's a requirement to get good throughput I believe. My\n>> > understanding is we need ~8 processes, at least that's what I heard from\n>> > people with more PMEM experience.\n>>\n>> Thanks, that's good to know.\n>>\n>> >\n>> > TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n>> > from the 0002 patch) is actually correct. Essentially, consider the\n>> > backend needs to do a flush, but does not have a segment mapped. So it\n>> > maps it and calls pmem_drain() on it.\n>> >\n>> > But does that actually flush anything? Does it properly flush changes\n>> > done by other processes that may not have called pmem_drain() yet? I\n>> > find this somewhat suspicious and I'd bet all processes that did write\n>> > something have to call pmem_drain().\n>>\n>> Yeah, in terms of experiments at least it's good to find out that the\n>> approach mmapping each WAL segment is not good at performance.\n>>\n>> >\n>> >\n>> > >> So I did an experiment by increasing the size of the WAL segments. I\n>> > >> chose to try with 521MB and 1024MB, and the results with 1GB look\n>> like this:\n>> > >>\n>> > >> branch 1 16 32 64 96\n>> > >> ----------------------------------------------------------------\n>> > >> master 6635 88524 171106 163387 245307\n>> > >> ntt 7909 106826 217364 223338 242042\n>> > >> simple-no-buffers 7871 101575 199403 188074 224716\n>> > >> with-wal-buffers 7643 101056 206911 223860 261712\n>> > >>\n>> > >> So yeah, there's a clear difference. 
It changes the values for\n>> \"master\"\n>> > >> a bit, but both the \"simple\" patches (with and without) WAL buffers\n>> are\n>> > >> much faster. The with-wal-buffers is almost equal to the NTT patch,\n>> > >> which was using 96GB file. I presume larger WAL segments would get\n>> even\n>> > >> closer, if we supported them.\n>> > >>\n>> > >> I'll continue investigating this, but my conclusion so far seem to be\n>> > >> that we can't really replace WAL buffers with PMEM - that seems to\n>> > >> perform much worse.\n>> > >>\n>> > >> The question is what to do about the segment size. Can we reduce the\n>> > >> overhead of mmap-ing individual segments, so that this works even for\n>> > >> smaller WAL segments, to make this useful for common instances (not\n>> > >> everyone wants to run with 1GB WAL). Or whether we need to adopt the\n>> > >> design with a large file, mapped just once.\n>> > >>\n>> > >> Another question is whether it's even worth the extra complexity. On\n>> > >> 16MB segments the difference between master and NTT patch seems to be\n>> > >> non-trivial, but increasing the WAL segment size kinda reduces that.\n>> So\n>> > >> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n>> > >> Alternatively, maybe we could switch to libpmemblk, which should\n>> > >> eliminate the filesystem overhead at least.\n>> > >\n>> > > I think the performance improvement by NTT patch with the 16MB WAL\n>> > > segment, the most common WAL segment size, is very good (150437 vs.\n>> > > 212410 with 64 clients). But maybe evaluating writing WAL segment\n>> > > files on PMEM DAX filesystem is also worth, as you mentioned, if we\n>> > > don't do that yet.\n>> > >\n>> >\n>> > Well, not sure. I think the question is still open whether it's actually\n>> > safe to run on DAX, which does not have atomic writes of 512B sectors,\n>> > and I think we rely on that e.g. for pg_config. 
But maybe for WAL that's\n>> > not an issue.\n>>\n>> I think we can use the Block Translation Table (BTT) driver that\n>> provides atomic sector updates.\n>>\n>> >\n>> > > Also, I'm interested in why the through-put of NTT patch saturated at\n>> > > 32 clients, which is earlier than the master's one (96 clients). How\n>> > > many CPU cores are there on the machine you used?\n>> > >\n>> >\n>> > From what I know, this is somewhat expected for PMEM devices, for a\n>> > bunch of reasons:\n>> >\n>> > 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), so\n>> > it takes fewer processes to saturate it.\n>> >\n>> > 2) Internally, the PMEM has a 256B buffer for writes, used for combining\n>> > etc. With too many processes sending writes, it becomes to look more\n>> > random, which is harmful for throughput.\n>> >\n>> > When combined, this means the performance starts dropping at certain\n>> > number of threads, and the optimal number of threads is rather low\n>> > (something like 5-10). This is very different behavior compared to DRAM.\n>>\n>> Makes sense.\n>>\n>> >\n>> > There's a nice overview and measurements in this paper:\n>> >\n>> > Building blocks for persistent memory / How to get the most out of your\n>> > new memory?\n>> > Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons\n>> > Kemper\n>> >\n>> > https://link.springer.com/article/10.1007/s00778-020-00622-9\n>>\n>> Thank you. I'll read it.\n>>\n>> >\n>> >\n>> > >> I'm also wondering if WAL is the right usage for PMEM. 
Per [2]\n>> there's a\n>> > >> huge read-write assymmetry (the writes being way slower), and their\n>> > >> recommendation (in \"Observation 3\" is)\n>> > >>\n>> > >> The read-write asymmetry of PMem im-plies the necessity of\n>> avoiding\n>> > >> writes as much as possible for PMem.\n>> > >>\n>> > >> So maybe we should not be trying to use PMEM for WAL, which is pretty\n>> > >> write-heavy (and in most cases even write-only).\n>> > >\n>> > > I think using PMEM for WAL is cost-effective but it leverages the only\n>> > > low-latency (sequential) write, but not other abilities such as\n>> > > fine-grained access and low-latency random write. If we want to\n>> > > exploit its all ability we might need some drastic changes to logging\n>> > > protocol while considering storing data on PMEM.\n>> > >\n>> >\n>> > True. I think investigating whether it's sensible to use PMEM for this\n>> > purpose. It may turn out that replacing the DRAM WAL buffers with writes\n>> > directly to PMEM is not economical, and aggregating data in a DRAM\n>> > buffer is better :-(\n>>\n>> Yes. I think it might be interesting to do an analysis of the\n>> bottlenecks of NTT patch by perf etc. If bottlenecks are moved to\n>> other places by removing WALWriteLock during flush, it's probably a\n>> good sign for further performance improvements. IIRC WALWriteLock is\n>> one of the main bottlenecks on OLTP workload, although my memory might\n>> already be out of date.\n>>\n>> Regards,\n>>\n>> --\n>> Masahiko Sawada\n>> EDB: https://www.enterprisedb.com/\n>>\n>\n>\n> --\n> Takashi Menjo <takashi.menjo@gmail.com>\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Tue, 26 Jan 2021 17:52:59 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Dear everyone, Tomas,\n\nFirst of all, the \"v4\" patchset for non-volatile WAL buffer attached to the\nprevious mail is actually v5... Please read \"v4\" as \"v5.\"\n\nThen, to Tomas:\nThank you for your crash report you gave on Nov 27, 2020, regarding msync\npatchset. I applied the latest msync patchset v3 attached to the previous\nto master 411ae64 (on Jan18, 2021) then tested it, and I got no error when\npgbench -i -s 500. Please try it if necessary.\n\nBest regards,\nTakashi\n\n\n2021年1月26日(火) 17:52 Takashi Menjo <takashi.menjo@gmail.com>:\n\n> Dear everyone,\n>\n> Sorry but I forgot to attach my patchsets... Please see the files attached\n> to this mail. Please also note that they contain some fixes.\n>\n> Best regards,\n> Takashi\n>\n>\n> 2021年1月26日(火) 17:46 Takashi Menjo <takashi.menjo@gmail.com>:\n>\n>> Dear everyone,\n>>\n>> I'm sorry for the late reply. I rebase my two patchsets onto the latest\n>> master 411ae64.The one patchset prefixed with v4 is for non-volatile WAL\n>> buffer; the other prefixed with v3 is for msync.\n>>\n>> I will reply to your thankful feedbacks one by one within days. Please\n>> wait for a moment.\n>>\n>> Best regards,\n>> Takashi\n>>\n>>\n>> 01/25/2021(Mon) 11:56 Masahiko Sawada <sawada.mshk@gmail.com>:\n>>\n>>> On Fri, Jan 22, 2021 at 11:32 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>> >\n>>> >\n>>> >\n>>> > On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n>>> > > On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n>>> > > <tomas.vondra@enterprisedb.com> wrote:\n>>> > >>\n>>> > >> Hi,\n>>> > >>\n>>> > >> I think I've managed to get the 0002 patch [1] rebased to master and\n>>> > >> working (with help from Masahiko Sawada). It's not clear to me how\n>>> it\n>>> > >> could have worked as submitted - my theory is that an incomplete\n>>> patch\n>>> > >> was submitted by mistake, or something like that.\n>>> > >>\n>>> > >> Unfortunately, the benchmark results were kinda disappointing. 
For a\n>>> > >> pgbench on scale 500 (fits into shared buffers), an average of three\n>>> > >> 5-minute runs looks like this:\n>>> > >>\n>>> > >> branch 1 16 32 64 96\n>>> > >> ----------------------------------------------------------------\n>>> > >> master 7291 87704 165310 150437 224186\n>>> > >> ntt 7912 106095 213206 212410 237819\n>>> > >> simple-no-buffers 7654 96544 115416 95828 103065\n>>> > >>\n>>> > >> NTT refers to the patch from September 10, pre-allocating a large\n>>> WAL\n>>> > >> file on PMEM, and simple-no-buffers is the simpler patch simply\n>>> removing\n>>> > >> the WAL buffers and writing directly to a mmap-ed WAL segment on\n>>> PMEM.\n>>> > >>\n>>> > >> Note: The patch is just replacing the old implementation with mmap.\n>>> > >> That's good enough for experiments like this, but we probably want\n>>> to\n>>> > >> keep the old one for setups without PMEM. But it's good enough for\n>>> > >> testing, benchmarking etc.\n>>> > >>\n>>> > >> Unfortunately, the results for this simple approach are pretty bad.\n>>> Not\n>>> > >> only compared to the \"ntt\" patch, but even to master. I'm not\n>>> entirely\n>>> > >> sure what's the root cause, but I have a couple hypotheses:\n>>> > >>\n>>> > >> 1) bug in the patch - That's clearly a possibility, although I've\n>>> tried\n>>> > >> tried to eliminate this possibility.\n>>> > >>\n>>> > >> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster\n>>> than\n>>> > >> NVMe storage, but still much slower than DRAM (both in terms of\n>>> latency\n>>> > >> and bandwidth, see [2] for some data). It's not terrible, but the\n>>> > >> latency is maybe 2-3x higher - not a huge difference, but may\n>>> matter for\n>>> > >> WAL buffers?\n>>> > >>\n>>> > >> 3) PMEM does not handle parallel writes well - If you look at [2],\n>>> > >> Figure 4(b), you'll see that the throughput actually *drops\" as the\n>>> > >> number of threads increase. 
That's pretty strange / annoying,\n>>> because\n>>> > >> that's how we write into WAL buffers - each thread writes it's own\n>>> data,\n>>> > >> so parallelism is not something we can get rid of.\n>>> > >>\n>>> > >> I've added some simple profiling, to measure number of calls / time\n>>> for\n>>> > >> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates\n>>> data\n>>> > >> for each backend, and logs the counts every 1M ops.\n>>> > >>\n>>> > >> Typical stats from a concurrent run looks like this:\n>>> > >>\n>>> > >> xlog stats cnt 43000000\n>>> > >> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n>>> > >> memcpy cnt 985964 time 1550442272 len 15150499\n>>> > >> memset cnt 0 time 0 len 0\n>>> > >> persist cnt 13836 time 10369617 len 16292182\n>>> > >>\n>>> > >> The times are in nanoseconds, so this says the backend did 100\n>>> mmap and\n>>> > >> unmap calls, taking ~10ms in total. There were ~14k pmem_persist\n>>> calls,\n>>> > >> taking 10ms in total. And the most time (~1.5s) was used by\n>>> pmem_memcpy\n>>> > >> copying about 15MB of data. That's quite a lot :-(\n>>> > >\n>>> > > It might also be interesting if we can see how much time spent on\n>>> each\n>>> > > logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n>>> > >\n>>> >\n>>> > Yeah, we could extend it to that, that's fairly mechanical thing. Bbut\n>>> > maybe that could be visible in a regular perf profile. Also, I suppose\n>>> > most of the time will be used by the pmem calls, shown in the stats.\n>>> >\n>>> > >>\n>>> > >> My conclusion from this is that eliminating WAL buffers and writing\n>>> WAL\n>>> > >> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably\n>>> not the\n>>> > >> right approach.\n>>> > >>\n>>> > >> I suppose we should keep WAL buffers, and then just write the data\n>>> to\n>>> > >> mmap-ed WAL segments on PMEM. 
Which I think is what the NTT patch\n>>> does,\n>>> > >> except that it allocates one huge file on PMEM and writes to that\n>>> > >> (instead of the traditional WAL segments).\n>>> > >>\n>>> > >> So I decided to try how it'd work with writing to regular WAL\n>>> segments,\n>>> > >> mmap-ed ad hoc. The pmem-with-wal-buffers-master.patch patch does\n>>> that,\n>>> > >> and the results look a bit nicer:\n>>> > >>\n>>> > >> branch 1 16 32 64 96\n>>> > >> ----------------------------------------------------------------\n>>> > >> master 7291 87704 165310 150437 224186\n>>> > >> ntt 7912 106095 213206 212410 237819\n>>> > >> simple-no-buffers 7654 96544 115416 95828 103065\n>>> > >> with-wal-buffers 7477 95454 181702 140167 214715\n>>> > >>\n>>> > >> So, much better than the version without WAL buffers, somewhat\n>>> better\n>>> > >> than master (except for 64/96 clients), but still not as good as\n>>> NTT.\n>>> > >>\n>>> > >> At this point I was wondering how could the NTT patch be faster when\n>>> > >> it's doing roughly the same thing. I'm sire there are some\n>>> differences,\n>>> > >> but it seemed strange. The main difference seems to be that it only\n>>> maps\n>>> > >> one large file, and only once. OTOH the alternative \"simple\" patch\n>>> maps\n>>> > >> segments one by one, in each backend. Per the debug stats the\n>>> map/unmap\n>>> > >> calls are fairly cheap, but maybe it interferes with the memcpy\n>>> somehow.\n>>> > >>\n>>> > >\n>>> > > While looking at the two methods: NTT and simple-no-buffer, I\n>>> realized\n>>> > > that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n>>> > > pmem_drain()) WAL without acquiring WALWriteLock whereas\n>>> > > simple-no-buffer patch acquires WALWriteLock to do that\n>>> > > (pmem_persist()). I wonder if this also affected the performance\n>>> > > differences between those two methods since WALWriteLock serializes\n>>> > > the operations. 
With PMEM, multiple backends can concurrently flush\n>>> > > the records if the memory region is not overlapped? If so, flushing\n>>> > > WAL without WALWriteLock would be a big benefit.\n>>> > >\n>>> >\n>>> > That's a very good question - it's quite possible the WALWriteLock is\n>>> > not really needed, because the processes are actually \"writing\" the WAL\n>>> > directly to PMEM. So it's a bit confusing, because it's only really\n>>> > concerned about making sure it's flushed.\n>>> >\n>>> > And yes, multiple processes certainly can write to PMEM at the same\n>>> > time, in fact it's a requirement to get good throughput I believe. My\n>>> > understanding is we need ~8 processes, at least that's what I heard\n>>> from\n>>> > people with more PMEM experience.\n>>>\n>>> Thanks, that's good to know.\n>>>\n>>> >\n>>> > TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n>>> > from the 0002 patch) is actually correct. Essentially, consider the\n>>> > backend needs to do a flush, but does not have a segment mapped. So it\n>>> > maps it and calls pmem_drain() on it.\n>>> >\n>>> > But does that actually flush anything? Does it properly flush changes\n>>> > done by other processes that may not have called pmem_drain() yet? I\n>>> > find this somewhat suspicious and I'd bet all processes that did write\n>>> > something have to call pmem_drain().\n>>>\n>>> Yeah, in terms of experiments at least it's good to find out that the\n>>> approach mmapping each WAL segment is not good at performance.\n>>>\n>>> >\n>>> >\n>>> > >> So I did an experiment by increasing the size of the WAL segments. 
I\n>>> > >> chose to try with 521MB and 1024MB, and the results with 1GB look\n>>> like this:\n>>> > >>\n>>> > >> branch 1 16 32 64 96\n>>> > >> ----------------------------------------------------------------\n>>> > >> master 6635 88524 171106 163387 245307\n>>> > >> ntt 7909 106826 217364 223338 242042\n>>> > >> simple-no-buffers 7871 101575 199403 188074 224716\n>>> > >> with-wal-buffers 7643 101056 206911 223860 261712\n>>> > >>\n>>> > >> So yeah, there's a clear difference. It changes the values for\n>>> \"master\"\n>>> > >> a bit, but both the \"simple\" patches (with and without) WAL buffers\n>>> are\n>>> > >> much faster. The with-wal-buffers is almost equal to the NTT patch,\n>>> > >> which was using 96GB file. I presume larger WAL segments would get\n>>> even\n>>> > >> closer, if we supported them.\n>>> > >>\n>>> > >> I'll continue investigating this, but my conclusion so far seem to\n>>> be\n>>> > >> that we can't really replace WAL buffers with PMEM - that seems to\n>>> > >> perform much worse.\n>>> > >>\n>>> > >> The question is what to do about the segment size. Can we reduce the\n>>> > >> overhead of mmap-ing individual segments, so that this works even\n>>> for\n>>> > >> smaller WAL segments, to make this useful for common instances (not\n>>> > >> everyone wants to run with 1GB WAL). Or whether we need to adopt the\n>>> > >> design with a large file, mapped just once.\n>>> > >>\n>>> > >> Another question is whether it's even worth the extra complexity. On\n>>> > >> 16MB segments the difference between master and NTT patch seems to\n>>> be\n>>> > >> non-trivial, but increasing the WAL segment size kinda reduces\n>>> that. 
So\n>>> > >> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n>>> > >> Alternatively, maybe we could switch to libpmemblk, which should\n>>> > >> eliminate the filesystem overhead at least.\n>>> > >\n>>> > > I think the performance improvement by NTT patch with the 16MB WAL\n>>> > > segment, the most common WAL segment size, is very good (150437 vs.\n>>> > > 212410 with 64 clients). But maybe evaluating writing WAL segment\n>>> > > files on PMEM DAX filesystem is also worth, as you mentioned, if we\n>>> > > don't do that yet.\n>>> > >\n>>> >\n>>> > Well, not sure. I think the question is still open whether it's\n>>> actually\n>>> > safe to run on DAX, which does not have atomic writes of 512B sectors,\n>>> > and I think we rely on that e.g. for pg_config. But maybe for WAL\n>>> that's\n>>> > not an issue.\n>>>\n>>> I think we can use the Block Translation Table (BTT) driver that\n>>> provides atomic sector updates.\n>>>\n>>> >\n>>> > > Also, I'm interested in why the through-put of NTT patch saturated at\n>>> > > 32 clients, which is earlier than the master's one (96 clients). How\n>>> > > many CPU cores are there on the machine you used?\n>>> > >\n>>> >\n>>> > From what I know, this is somewhat expected for PMEM devices, for a\n>>> > bunch of reasons:\n>>> >\n>>> > 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), so\n>>> > it takes fewer processes to saturate it.\n>>> >\n>>> > 2) Internally, the PMEM has a 256B buffer for writes, used for\n>>> combining\n>>> > etc. With too many processes sending writes, it becomes to look more\n>>> > random, which is harmful for throughput.\n>>> >\n>>> > When combined, this means the performance starts dropping at certain\n>>> > number of threads, and the optimal number of threads is rather low\n>>> > (something like 5-10). 
This is very different behavior compared to\n>>> DRAM.\n>>>\n>>> Makes sense.\n>>>\n>>> >\n>>> > There's a nice overview and measurements in this paper:\n>>> >\n>>> > Building blocks for persistent memory / How to get the most out of your\n>>> > new memory?\n>>> > Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons\n>>> > Kemper\n>>> >\n>>> > https://link.springer.com/article/10.1007/s00778-020-00622-9\n>>>\n>>> Thank you. I'll read it.\n>>>\n>>> >\n>>> >\n>>> > >> I'm also wondering if WAL is the right usage for PMEM. Per [2]\n>>> there's a\n>>> > >> huge read-write assymmetry (the writes being way slower), and their\n>>> > >> recommendation (in \"Observation 3\" is)\n>>> > >>\n>>> > >> The read-write asymmetry of PMem im-plies the necessity of\n>>> avoiding\n>>> > >> writes as much as possible for PMem.\n>>> > >>\n>>> > >> So maybe we should not be trying to use PMEM for WAL, which is\n>>> pretty\n>>> > >> write-heavy (and in most cases even write-only).\n>>> > >\n>>> > > I think using PMEM for WAL is cost-effective but it leverages the\n>>> only\n>>> > > low-latency (sequential) write, but not other abilities such as\n>>> > > fine-grained access and low-latency random write. If we want to\n>>> > > exploit its all ability we might need some drastic changes to logging\n>>> > > protocol while considering storing data on PMEM.\n>>> > >\n>>> >\n>>> > True. I think investigating whether it's sensible to use PMEM for this\n>>> > purpose. It may turn out that replacing the DRAM WAL buffers with\n>>> writes\n>>> > directly to PMEM is not economical, and aggregating data in a DRAM\n>>> > buffer is better :-(\n>>>\n>>> Yes. I think it might be interesting to do an analysis of the\n>>> bottlenecks of NTT patch by perf etc. If bottlenecks are moved to\n>>> other places by removing WALWriteLock during flush, it's probably a\n>>> good sign for further performance improvements. 
IIRC WALWriteLock is\n>>> one of the main bottlenecks on OLTP workload, although my memory might\n>>> already be out of date.\n>>>\n>>> Regards,\n>>>\n>>> --\n>>> Masahiko Sawada\n>>> EDB: https://www.enterprisedb.com/\n>>>\n>>\n>>\n>> --\n>> Takashi Menjo <takashi.menjo@gmail.com>\n>>\n>\n>\n> --\n> Takashi Menjo <takashi.menjo@gmail.com>\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Tue, 26 Jan 2021 18:50:50 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nNow I have caught up with this thread. I see that many of you are\ninterested in performance profiling.\n\nI share my slides in SNIA SDC 2020 [1]. In the slides, I had profiles\nfocused on XLogInsert and XLogFlush (mainly the latter) for my non-volatile\nWAL buffer patchset. I found that the time for XLogWrite and\nlocking/unlocking WALWriteLock were eliminated by the patchset. Instead,\nXLogInsert and WaitXLogInsertionsToFinish took more (or a little more) time\nthan ever because memcpy-ing to PMEM (Optane PMem) is slower than to DRAM.\nFor details, please see the slides.\n\nBest regards,\nTakashi\n\n[1]\nhttps://www.snia.org/educational-library/how-can-persistent-memory-make-databases-faster-and-how-could-we-go-ahead-2020\n\n\n2021年1月26日(火) 18:50 Takashi Menjo <takashi.menjo@gmail.com>:\n\n> Dear everyone, Tomas,\n>\n> First of all, the \"v4\" patchset for non-volatile WAL buffer attached to\n> the previous mail is actually v5... Please read \"v4\" as \"v5.\"\n>\n> Then, to Tomas:\n> Thank you for your crash report you gave on Nov 27, 2020, regarding msync\n> patchset. I applied the latest msync patchset v3 attached to the previous\n> to master 411ae64 (on Jan18, 2021) then tested it, and I got no error when\n> pgbench -i -s 500. Please try it if necessary.\n>\n> Best regards,\n> Takashi\n>\n>\n> 2021年1月26日(火) 17:52 Takashi Menjo <takashi.menjo@gmail.com>:\n>\n>> Dear everyone,\n>>\n>> Sorry but I forgot to attach my patchsets... Please see the files\n>> attached to this mail. Please also note that they contain some fixes.\n>>\n>> Best regards,\n>> Takashi\n>>\n>>\n>> 2021年1月26日(火) 17:46 Takashi Menjo <takashi.menjo@gmail.com>:\n>>\n>>> Dear everyone,\n>>>\n>>> I'm sorry for the late reply. I rebase my two patchsets onto the latest\n>>> master 411ae64.The one patchset prefixed with v4 is for non-volatile WAL\n>>> buffer; the other prefixed with v3 is for msync.\n>>>\n>>> I will reply to your thankful feedbacks one by one within days. 
Please\n>>> wait for a moment.\n>>>\n>>> Best regards,\n>>> Takashi\n>>>\n>>>\n>>> 01/25/2021(Mon) 11:56 Masahiko Sawada <sawada.mshk@gmail.com>:\n>>>\n>>>> On Fri, Jan 22, 2021 at 11:32 AM Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> >\n>>>> >\n>>>> >\n>>>> > On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n>>>> > > On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n>>>> > > <tomas.vondra@enterprisedb.com> wrote:\n>>>> > >>\n>>>> > >> Hi,\n>>>> > >>\n>>>> > >> I think I've managed to get the 0002 patch [1] rebased to master\n>>>> and\n>>>> > >> working (with help from Masahiko Sawada). It's not clear to me how\n>>>> it\n>>>> > >> could have worked as submitted - my theory is that an incomplete\n>>>> patch\n>>>> > >> was submitted by mistake, or something like that.\n>>>> > >>\n>>>> > >> Unfortunately, the benchmark results were kinda disappointing. For\n>>>> a\n>>>> > >> pgbench on scale 500 (fits into shared buffers), an average of\n>>>> three\n>>>> > >> 5-minute runs looks like this:\n>>>> > >>\n>>>> > >> branch 1 16 32 64\n>>>> 96\n>>>> > >>\n>>>> ----------------------------------------------------------------\n>>>> > >> master 7291 87704 165310 150437\n>>>> 224186\n>>>> > >> ntt 7912 106095 213206 212410\n>>>> 237819\n>>>> > >> simple-no-buffers 7654 96544 115416 95828\n>>>> 103065\n>>>> > >>\n>>>> > >> NTT refers to the patch from September 10, pre-allocating a large\n>>>> WAL\n>>>> > >> file on PMEM, and simple-no-buffers is the simpler patch simply\n>>>> removing\n>>>> > >> the WAL buffers and writing directly to a mmap-ed WAL segment on\n>>>> PMEM.\n>>>> > >>\n>>>> > >> Note: The patch is just replacing the old implementation with mmap.\n>>>> > >> That's good enough for experiments like this, but we probably want\n>>>> to\n>>>> > >> keep the old one for setups without PMEM. But it's good enough for\n>>>> > >> testing, benchmarking etc.\n>>>> > >>\n>>>> > >> Unfortunately, the results for this simple approach are pretty\n>>>> bad. 
Not\n>>>> > >> only compared to the \"ntt\" patch, but even to master. I'm not\n>>>> entirely\n>>>> > >> sure what's the root cause, but I have a couple hypotheses:\n>>>> > >>\n>>>> > >> 1) bug in the patch - That's clearly a possibility, although I've\n>>>> tried\n>>>> > >> tried to eliminate this possibility.\n>>>> > >>\n>>>> > >> 2) PMEM is slower than DRAM - From what I know, PMEM is much\n>>>> faster than\n>>>> > >> NVMe storage, but still much slower than DRAM (both in terms of\n>>>> latency\n>>>> > >> and bandwidth, see [2] for some data). It's not terrible, but the\n>>>> > >> latency is maybe 2-3x higher - not a huge difference, but may\n>>>> matter for\n>>>> > >> WAL buffers?\n>>>> > >>\n>>>> > >> 3) PMEM does not handle parallel writes well - If you look at [2],\n>>>> > >> Figure 4(b), you'll see that the throughput actually *drops\" as the\n>>>> > >> number of threads increase. That's pretty strange / annoying,\n>>>> because\n>>>> > >> that's how we write into WAL buffers - each thread writes it's own\n>>>> data,\n>>>> > >> so parallelism is not something we can get rid of.\n>>>> > >>\n>>>> > >> I've added some simple profiling, to measure number of calls /\n>>>> time for\n>>>> > >> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates\n>>>> data\n>>>> > >> for each backend, and logs the counts every 1M ops.\n>>>> > >>\n>>>> > >> Typical stats from a concurrent run looks like this:\n>>>> > >>\n>>>> > >> xlog stats cnt 43000000\n>>>> > >> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n>>>> > >> memcpy cnt 985964 time 1550442272 len 15150499\n>>>> > >> memset cnt 0 time 0 len 0\n>>>> > >> persist cnt 13836 time 10369617 len 16292182\n>>>> > >>\n>>>> > >> The times are in nanoseconds, so this says the backend did 100\n>>>> mmap and\n>>>> > >> unmap calls, taking ~10ms in total. There were ~14k pmem_persist\n>>>> calls,\n>>>> > >> taking 10ms in total. 
And the most time (~1.5s) was used by\n>>>> pmem_memcpy\n>>>> > >> copying about 15MB of data. That's quite a lot :-(\n>>>> > >\n>>>> > > It might also be interesting if we can see how much time spent on\n>>>> each\n>>>> > > logging function, such as XLogInsert(), XLogWrite(), and\n>>>> XLogFlush().\n>>>> > >\n>>>> >\n>>>> > Yeah, we could extend it to that, that's fairly mechanical thing. Bbut\n>>>> > maybe that could be visible in a regular perf profile. Also, I suppose\n>>>> > most of the time will be used by the pmem calls, shown in the stats.\n>>>> >\n>>>> > >>\n>>>> > >> My conclusion from this is that eliminating WAL buffers and\n>>>> writing WAL\n>>>> > >> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably\n>>>> not the\n>>>> > >> right approach.\n>>>> > >>\n>>>> > >> I suppose we should keep WAL buffers, and then just write the data\n>>>> to\n>>>> > >> mmap-ed WAL segments on PMEM. Which I think is what the NTT patch\n>>>> does,\n>>>> > >> except that it allocates one huge file on PMEM and writes to that\n>>>> > >> (instead of the traditional WAL segments).\n>>>> > >>\n>>>> > >> So I decided to try how it'd work with writing to regular WAL\n>>>> segments,\n>>>> > >> mmap-ed ad hoc. 
The pmem-with-wal-buffers-master.patch patch does\n>>>> that,\n>>>> > >> and the results look a bit nicer:\n>>>> > >>\n>>>> > >> branch 1 16 32 64\n>>>> 96\n>>>> > >>\n>>>> ----------------------------------------------------------------\n>>>> > >> master 7291 87704 165310 150437\n>>>> 224186\n>>>> > >> ntt 7912 106095 213206 212410\n>>>> 237819\n>>>> > >> simple-no-buffers 7654 96544 115416 95828\n>>>> 103065\n>>>> > >> with-wal-buffers 7477 95454 181702 140167\n>>>> 214715\n>>>> > >>\n>>>> > >> So, much better than the version without WAL buffers, somewhat\n>>>> better\n>>>> > >> than master (except for 64/96 clients), but still not as good as\n>>>> NTT.\n>>>> > >>\n>>>> > >> At this point I was wondering how could the NTT patch be faster\n>>>> when\n>>>> > >> it's doing roughly the same thing. I'm sire there are some\n>>>> differences,\n>>>> > >> but it seemed strange. The main difference seems to be that it\n>>>> only maps\n>>>> > >> one large file, and only once. OTOH the alternative \"simple\" patch\n>>>> maps\n>>>> > >> segments one by one, in each backend. Per the debug stats the\n>>>> map/unmap\n>>>> > >> calls are fairly cheap, but maybe it interferes with the memcpy\n>>>> somehow.\n>>>> > >>\n>>>> > >\n>>>> > > While looking at the two methods: NTT and simple-no-buffer, I\n>>>> realized\n>>>> > > that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n>>>> > > pmem_drain()) WAL without acquiring WALWriteLock whereas\n>>>> > > simple-no-buffer patch acquires WALWriteLock to do that\n>>>> > > (pmem_persist()). I wonder if this also affected the performance\n>>>> > > differences between those two methods since WALWriteLock serializes\n>>>> > > the operations. With PMEM, multiple backends can concurrently flush\n>>>> > > the records if the memory region is not overlapped? 
If so, flushing\n>>>> > > WAL without WALWriteLock would be a big benefit.\n>>>> > >\n>>>> >\n>>>> > That's a very good question - it's quite possible the WALWriteLock is\n>>>> > not really needed, because the processes are actually \"writing\" the\n>>>> WAL\n>>>> > directly to PMEM. So it's a bit confusing, because it's only really\n>>>> > concerned about making sure it's flushed.\n>>>> >\n>>>> > And yes, multiple processes certainly can write to PMEM at the same\n>>>> > time, in fact it's a requirement to get good throughput I believe. My\n>>>> > understanding is we need ~8 processes, at least that's what I heard\n>>>> from\n>>>> > people with more PMEM experience.\n>>>>\n>>>> Thanks, that's good to know.\n>>>>\n>>>> >\n>>>> > TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n>>>> > from the 0002 patch) is actually correct. Essentially, consider the\n>>>> > backend needs to do a flush, but does not have a segment mapped. So it\n>>>> > maps it and calls pmem_drain() on it.\n>>>> >\n>>>> > But does that actually flush anything? Does it properly flush changes\n>>>> > done by other processes that may not have called pmem_drain() yet? 
I\n>>>> > find this somewhat suspicious and I'd bet all processes that did write\n>>>> > something have to call pmem_drain().\n>>>>\n>>>> Yeah, in terms of experiments at least it's good to find out that the\n>>>> approach mmapping each WAL segment is not good at performance.\n>>>>\n>>>> >\n>>>> >\n>>>> > >> So I did an experiment by increasing the size of the WAL segments.\n>>>> I\n>>>> > >> chose to try with 521MB and 1024MB, and the results with 1GB look\n>>>> like this:\n>>>> > >>\n>>>> > >> branch 1 16 32 64\n>>>> 96\n>>>> > >>\n>>>> ----------------------------------------------------------------\n>>>> > >> master 6635 88524 171106 163387\n>>>> 245307\n>>>> > >> ntt 7909 106826 217364 223338\n>>>> 242042\n>>>> > >> simple-no-buffers 7871 101575 199403 188074\n>>>> 224716\n>>>> > >> with-wal-buffers 7643 101056 206911 223860\n>>>> 261712\n>>>> > >>\n>>>> > >> So yeah, there's a clear difference. It changes the values for\n>>>> \"master\"\n>>>> > >> a bit, but both the \"simple\" patches (with and without) WAL\n>>>> buffers are\n>>>> > >> much faster. The with-wal-buffers is almost equal to the NTT\n>>>> patch,\n>>>> > >> which was using 96GB file. I presume larger WAL segments would get\n>>>> even\n>>>> > >> closer, if we supported them.\n>>>> > >>\n>>>> > >> I'll continue investigating this, but my conclusion so far seem to\n>>>> be\n>>>> > >> that we can't really replace WAL buffers with PMEM - that seems to\n>>>> > >> perform much worse.\n>>>> > >>\n>>>> > >> The question is what to do about the segment size. Can we reduce\n>>>> the\n>>>> > >> overhead of mmap-ing individual segments, so that this works even\n>>>> for\n>>>> > >> smaller WAL segments, to make this useful for common instances (not\n>>>> > >> everyone wants to run with 1GB WAL). 
Or whether we need to adopt\n>>>> the\n>>>> > >> design with a large file, mapped just once.\n>>>> > >>\n>>>> > >> Another question is whether it's even worth the extra complexity.\n>>>> On\n>>>> > >> 16MB segments the difference between master and NTT patch seems to\n>>>> be\n>>>> > >> non-trivial, but increasing the WAL segment size kinda reduces\n>>>> that. So\n>>>> > >> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n>>>> > >> Alternatively, maybe we could switch to libpmemblk, which should\n>>>> > >> eliminate the filesystem overhead at least.\n>>>> > >\n>>>> > > I think the performance improvement by NTT patch with the 16MB WAL\n>>>> > > segment, the most common WAL segment size, is very good (150437 vs.\n>>>> > > 212410 with 64 clients). But maybe evaluating writing WAL segment\n>>>> > > files on PMEM DAX filesystem is also worth, as you mentioned, if we\n>>>> > > don't do that yet.\n>>>> > >\n>>>> >\n>>>> > Well, not sure. I think the question is still open whether it's\n>>>> actually\n>>>> > safe to run on DAX, which does not have atomic writes of 512B sectors,\n>>>> > and I think we rely on that e.g. for pg_config. But maybe for WAL\n>>>> that's\n>>>> > not an issue.\n>>>>\n>>>> I think we can use the Block Translation Table (BTT) driver that\n>>>> provides atomic sector updates.\n>>>>\n>>>> >\n>>>> > > Also, I'm interested in why the through-put of NTT patch saturated\n>>>> at\n>>>> > > 32 clients, which is earlier than the master's one (96 clients). How\n>>>> > > many CPU cores are there on the machine you used?\n>>>> > >\n>>>> >\n>>>> > From what I know, this is somewhat expected for PMEM devices, for a\n>>>> > bunch of reasons:\n>>>> >\n>>>> > 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%),\n>>>> so\n>>>> > it takes fewer processes to saturate it.\n>>>> >\n>>>> > 2) Internally, the PMEM has a 256B buffer for writes, used for\n>>>> combining\n>>>> > etc. 
With too many processes sending writes, it becomes to look more\n>>>> > random, which is harmful for throughput.\n>>>> >\n>>>> > When combined, this means the performance starts dropping at certain\n>>>> > number of threads, and the optimal number of threads is rather low\n>>>> > (something like 5-10). This is very different behavior compared to\n>>>> DRAM.\n>>>>\n>>>> Makes sense.\n>>>>\n>>>> >\n>>>> > There's a nice overview and measurements in this paper:\n>>>> >\n>>>> > Building blocks for persistent memory / How to get the most out of\n>>>> your\n>>>> > new memory?\n>>>> > Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons\n>>>> > Kemper\n>>>> >\n>>>> > https://link.springer.com/article/10.1007/s00778-020-00622-9\n>>>>\n>>>> Thank you. I'll read it.\n>>>>\n>>>> >\n>>>> >\n>>>> > >> I'm also wondering if WAL is the right usage for PMEM. Per [2]\n>>>> there's a\n>>>> > >> huge read-write assymmetry (the writes being way slower), and their\n>>>> > >> recommendation (in \"Observation 3\" is)\n>>>> > >>\n>>>> > >> The read-write asymmetry of PMem im-plies the necessity of\n>>>> avoiding\n>>>> > >> writes as much as possible for PMem.\n>>>> > >>\n>>>> > >> So maybe we should not be trying to use PMEM for WAL, which is\n>>>> pretty\n>>>> > >> write-heavy (and in most cases even write-only).\n>>>> > >\n>>>> > > I think using PMEM for WAL is cost-effective but it leverages the\n>>>> only\n>>>> > > low-latency (sequential) write, but not other abilities such as\n>>>> > > fine-grained access and low-latency random write. If we want to\n>>>> > > exploit its all ability we might need some drastic changes to\n>>>> logging\n>>>> > > protocol while considering storing data on PMEM.\n>>>> > >\n>>>> >\n>>>> > True. I think investigating whether it's sensible to use PMEM for this\n>>>> > purpose. 
It may turn out that replacing the DRAM WAL buffers with\n>>>> writes\n>>>> > directly to PMEM is not economical, and aggregating data in a DRAM\n>>>> > buffer is better :-(\n>>>>\n>>>> Yes. I think it might be interesting to do an analysis of the\n>>>> bottlenecks of NTT patch by perf etc. If bottlenecks are moved to\n>>>> other places by removing WALWriteLock during flush, it's probably a\n>>>> good sign for further performance improvements. IIRC WALWriteLock is\n>>>> one of the main bottlenecks on OLTP workload, although my memory might\n>>>> already be out of date.\n>>>>\n>>>> Regards,\n>>>>\n>>>> --\n>>>> Masahiko Sawada\n>>>> EDB: https://www.enterprisedb.com/\n>>>>\n>>>\n>>>\n>>> --\n>>> Takashi Menjo <takashi.menjo@gmail.com>\n>>>\n>>\n>>\n>> --\n>> Takashi Menjo <takashi.menjo@gmail.com>\n>>\n>\n>\n> --\n> Takashi Menjo <takashi.menjo@gmail.com>\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\nHi,Now I have caught up with this thread. I see that many of you are interested in performance profiling.I share my slides in SNIA SDC 2020 [1]. In the slides, I had profiles focused on XLogInsert and XLogFlush (mainly the latter) for my non-volatile WAL buffer patchset. I found that the time for XLogWrite and locking/unlocking WALWriteLock were eliminated by the patchset. Instead, XLogInsert and WaitXLogInsertionsToFinish took more (or a little more) time than ever because memcpy-ing to PMEM (Optane PMem) is slower than to DRAM. For details, please see the slides.Best regards,Takashi[1] https://www.snia.org/educational-library/how-can-persistent-memory-make-databases-faster-and-how-could-we-go-ahead-20202021年1月26日(火) 18:50 Takashi Menjo <takashi.menjo@gmail.com>: Dear everyone, Tomas,First of all, the \"v4\" patchset for non-volatile WAL buffer attached to the previous mail is actually v5... Please read \"v4\" as \"v5.\"Then, to Tomas:Thank you for your crash report you gave on Nov 27, 2020, regarding msync patchset. 
I applied the latest msync patchset v3 attached to the previous mail to master 411ae64 (on Jan 18, 2021), then tested it, and I got no error when running pgbench -i -s 500. Please try it if necessary.Best regards,Takashi2021年1月26日(火) 17:52 Takashi Menjo <takashi.menjo@gmail.com>:Dear everyone,Sorry but I forgot to attach my patchsets... Please see the files attached to this mail. Please also note that they contain some fixes.Best regards,Takashi2021年1月26日(火) 17:46 Takashi Menjo <takashi.menjo@gmail.com>:Dear everyone,I'm sorry for the late reply. I rebased my two patchsets onto the latest master 411ae64.The one patchset prefixed with v4 is for non-volatile WAL buffer; the other prefixed with v3 is for msync.I will reply to your feedback one by one within a few days. Please wait for a moment.Best regards,Takashi01/25/2021(Mon) 11:56 Masahiko Sawada <sawada.mshk@gmail.com>:On Fri, Jan 22, 2021 at 11:32 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n> > On Thu, Jan 7, 2021 at 2:16 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> I think I've managed to get the 0002 patch [1] rebased to master and\n> >> working (with help from Masahiko Sawada). It's not clear to me how it\n> >> could have worked as submitted - my theory is that an incomplete patch\n> >> was submitted by mistake, or something like that.\n> >>\n> >> Unfortunately, the benchmark results were kinda disappointing. 
For a\n> >> pgbench on scale 500 (fits into shared buffers), an average of three\n> >> 5-minute runs looks like this:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 7291 87704 165310 150437 224186\n> >> ntt 7912 106095 213206 212410 237819\n> >> simple-no-buffers 7654 96544 115416 95828 103065\n> >>\n> >> NTT refers to the patch from September 10, pre-allocating a large WAL\n> >> file on PMEM, and simple-no-buffers is the simpler patch simply removing\n> >> the WAL buffers and writing directly to a mmap-ed WAL segment on PMEM.\n> >>\n> >> Note: The patch is just replacing the old implementation with mmap.\n> >> That's good enough for experiments like this, but we probably want to\n> >> keep the old one for setups without PMEM. But it's good enough for\n> >> testing, benchmarking etc.\n> >>\n> >> Unfortunately, the results for this simple approach are pretty bad. Not\n> >> only compared to the \"ntt\" patch, but even to master. I'm not entirely\n> >> sure what's the root cause, but I have a couple hypotheses:\n> >>\n> >> 1) bug in the patch - That's clearly a possibility, although I've tried\n> >> tried to eliminate this possibility.\n> >>\n> >> 2) PMEM is slower than DRAM - From what I know, PMEM is much faster than\n> >> NVMe storage, but still much slower than DRAM (both in terms of latency\n> >> and bandwidth, see [2] for some data). It's not terrible, but the\n> >> latency is maybe 2-3x higher - not a huge difference, but may matter for\n> >> WAL buffers?\n> >>\n> >> 3) PMEM does not handle parallel writes well - If you look at [2],\n> >> Figure 4(b), you'll see that the throughput actually *drops\" as the\n> >> number of threads increase. 
That's pretty strange / annoying, because\n> >> that's how we write into WAL buffers - each thread writes it's own data,\n> >> so parallelism is not something we can get rid of.\n> >>\n> >> I've added some simple profiling, to measure number of calls / time for\n> >> each operation (use -DXLOG_DEBUG_STATS to enable). It accumulates data\n> >> for each backend, and logs the counts every 1M ops.\n> >>\n> >> Typical stats from a concurrent run looks like this:\n> >>\n> >> xlog stats cnt 43000000\n> >> map cnt 100 time 5448333 unmap cnt 100 time 3730963\n> >> memcpy cnt 985964 time 1550442272 len 15150499\n> >> memset cnt 0 time 0 len 0\n> >> persist cnt 13836 time 10369617 len 16292182\n> >>\n> >> The times are in nanoseconds, so this says the backend did 100 mmap and\n> >> unmap calls, taking ~10ms in total. There were ~14k pmem_persist calls,\n> >> taking 10ms in total. And the most time (~1.5s) was used by pmem_memcpy\n> >> copying about 15MB of data. That's quite a lot :-(\n> >\n> > It might also be interesting if we can see how much time spent on each\n> > logging function, such as XLogInsert(), XLogWrite(), and XLogFlush().\n> >\n>\n> Yeah, we could extend it to that, that's fairly mechanical thing. Bbut\n> maybe that could be visible in a regular perf profile. Also, I suppose\n> most of the time will be used by the pmem calls, shown in the stats.\n>\n> >>\n> >> My conclusion from this is that eliminating WAL buffers and writing WAL\n> >> directly to PMEM (by memcpy to mmap-ed WAL segments) is probably not the\n> >> right approach.\n> >>\n> >> I suppose we should keep WAL buffers, and then just write the data to\n> >> mmap-ed WAL segments on PMEM. Which I think is what the NTT patch does,\n> >> except that it allocates one huge file on PMEM and writes to that\n> >> (instead of the traditional WAL segments).\n> >>\n> >> So I decided to try how it'd work with writing to regular WAL segments,\n> >> mmap-ed ad hoc. 
The pmem-with-wal-buffers-master.patch patch does that,\n> >> and the results look a bit nicer:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 7291 87704 165310 150437 224186\n> >> ntt 7912 106095 213206 212410 237819\n> >> simple-no-buffers 7654 96544 115416 95828 103065\n> >> with-wal-buffers 7477 95454 181702 140167 214715\n> >>\n> >> So, much better than the version without WAL buffers, somewhat better\n> >> than master (except for 64/96 clients), but still not as good as NTT.\n> >>\n> >> At this point I was wondering how could the NTT patch be faster when\n> >> it's doing roughly the same thing. I'm sire there are some differences,\n> >> but it seemed strange. The main difference seems to be that it only maps\n> >> one large file, and only once. OTOH the alternative \"simple\" patch maps\n> >> segments one by one, in each backend. Per the debug stats the map/unmap\n> >> calls are fairly cheap, but maybe it interferes with the memcpy somehow.\n> >>\n> >\n> > While looking at the two methods: NTT and simple-no-buffer, I realized\n> > that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n> > pmem_drain()) WAL without acquiring WALWriteLock whereas\n> > simple-no-buffer patch acquires WALWriteLock to do that\n> > (pmem_persist()). I wonder if this also affected the performance\n> > differences between those two methods since WALWriteLock serializes\n> > the operations. With PMEM, multiple backends can concurrently flush\n> > the records if the memory region is not overlapped? If so, flushing\n> > WAL without WALWriteLock would be a big benefit.\n> >\n>\n> That's a very good question - it's quite possible the WALWriteLock is\n> not really needed, because the processes are actually \"writing\" the WAL\n> directly to PMEM. 
So it's a bit confusing, because it's only really\n> concerned about making sure it's flushed.\n>\n> And yes, multiple processes certainly can write to PMEM at the same\n> time, in fact it's a requirement to get good throughput I believe. My\n> understanding is we need ~8 processes, at least that's what I heard from\n> people with more PMEM experience.\n\nThanks, that's good to know.\n\n>\n> TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n> from the 0002 patch) is actually correct. Essentially, consider the\n> backend needs to do a flush, but does not have a segment mapped. So it\n> maps it and calls pmem_drain() on it.\n>\n> But does that actually flush anything? Does it properly flush changes\n> done by other processes that may not have called pmem_drain() yet? I\n> find this somewhat suspicious and I'd bet all processes that did write\n> something have to call pmem_drain().\n\nYeah, in terms of experiments at least it's good to find out that the\napproach mmapping each WAL segment is not good at performance.\n\n>\n>\n> >> So I did an experiment by increasing the size of the WAL segments. I\n> >> chose to try with 521MB and 1024MB, and the results with 1GB look like this:\n> >>\n> >> branch 1 16 32 64 96\n> >> ----------------------------------------------------------------\n> >> master 6635 88524 171106 163387 245307\n> >> ntt 7909 106826 217364 223338 242042\n> >> simple-no-buffers 7871 101575 199403 188074 224716\n> >> with-wal-buffers 7643 101056 206911 223860 261712\n> >>\n> >> So yeah, there's a clear difference. It changes the values for \"master\"\n> >> a bit, but both the \"simple\" patches (with and without) WAL buffers are\n> >> much faster. The with-wal-buffers is almost equal to the NTT patch,\n> >> which was using 96GB file. 
I presume larger WAL segments would get even\n> >> closer, if we supported them.\n> >>\n> >> I'll continue investigating this, but my conclusion so far seem to be\n> >> that we can't really replace WAL buffers with PMEM - that seems to\n> >> perform much worse.\n> >>\n> >> The question is what to do about the segment size. Can we reduce the\n> >> overhead of mmap-ing individual segments, so that this works even for\n> >> smaller WAL segments, to make this useful for common instances (not\n> >> everyone wants to run with 1GB WAL). Or whether we need to adopt the\n> >> design with a large file, mapped just once.\n> >>\n> >> Another question is whether it's even worth the extra complexity. On\n> >> 16MB segments the difference between master and NTT patch seems to be\n> >> non-trivial, but increasing the WAL segment size kinda reduces that. So\n> >> maybe just using File I/O on PMEM DAX filesystem seems good enough.\n> >> Alternatively, maybe we could switch to libpmemblk, which should\n> >> eliminate the filesystem overhead at least.\n> >\n> > I think the performance improvement by NTT patch with the 16MB WAL\n> > segment, the most common WAL segment size, is very good (150437 vs.\n> > 212410 with 64 clients). But maybe evaluating writing WAL segment\n> > files on PMEM DAX filesystem is also worth, as you mentioned, if we\n> > don't do that yet.\n> >\n>\n> Well, not sure. I think the question is still open whether it's actually\n> safe to run on DAX, which does not have atomic writes of 512B sectors,\n> and I think we rely on that e.g. for pg_config. But maybe for WAL that's\n> not an issue.\n\nI think we can use the Block Translation Table (BTT) driver that\nprovides atomic sector updates.\n\n>\n> > Also, I'm interested in why the through-put of NTT patch saturated at\n> > 32 clients, which is earlier than the master's one (96 clients). 
How\n> > many CPU cores are there on the machine you used?\n> >\n>\n> From what I know, this is somewhat expected for PMEM devices, for a\n> bunch of reasons:\n>\n> 1) The memory bandwidth is much lower than for DRAM (maybe ~10-20%), so\n> it takes fewer processes to saturate it.\n>\n> 2) Internally, the PMEM has a 256B buffer for writes, used for combining\n> etc. With too many processes sending writes, it becomes to look more\n> random, which is harmful for throughput.\n>\n> When combined, this means the performance starts dropping at certain\n> number of threads, and the optimal number of threads is rather low\n> (something like 5-10). This is very different behavior compared to DRAM.\n\nMakes sense.\n\n>\n> There's a nice overview and measurements in this paper:\n>\n> Building blocks for persistent memory / How to get the most out of your\n> new memory?\n> Alexander van Renen, Lukas Vogel, Viktor Leis, Thomas Neumann & Alfons\n> Kemper\n>\n> https://link.springer.com/article/10.1007/s00778-020-00622-9\n\nThank you. I'll read it.\n\n>\n>\n> >> I'm also wondering if WAL is the right usage for PMEM. Per [2] there's a\n> >> huge read-write assymmetry (the writes being way slower), and their\n> >> recommendation (in \"Observation 3\" is)\n> >>\n> >> The read-write asymmetry of PMem im-plies the necessity of avoiding\n> >> writes as much as possible for PMem.\n> >>\n> >> So maybe we should not be trying to use PMEM for WAL, which is pretty\n> >> write-heavy (and in most cases even write-only).\n> >\n> > I think using PMEM for WAL is cost-effective but it leverages the only\n> > low-latency (sequential) write, but not other abilities such as\n> > fine-grained access and low-latency random write. If we want to\n> > exploit its all ability we might need some drastic changes to logging\n> > protocol while considering storing data on PMEM.\n> >\n>\n> True. I think investigating whether it's sensible to use PMEM for this\n> purpose. 
It may turn out that replacing the DRAM WAL buffers with writes\n> directly to PMEM is not economical, and aggregating data in a DRAM\n> buffer is better :-(\n\nYes. I think it might be interesting to do an analysis of the\nbottlenecks of NTT patch by perf etc. If bottlenecks are moved to\nother places by removing WALWriteLock during flush, it's probably a\ngood sign for further performance improvements. IIRC WALWriteLock is\none of the main bottlenecks on OLTP workload, although my memory might\nalready be out of date.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n-- Takashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Wed, 27 Jan 2021 17:28:25 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On 1/25/21 3:56 AM, Masahiko Sawada wrote:\n>>\n>> ...\n>>\n>> On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n>>> ...\n>>>\n>>> While looking at the two methods: NTT and simple-no-buffer, I realized\n>>> that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n>>> pmem_drain()) WAL without acquiring WALWriteLock whereas\n>>> simple-no-buffer patch acquires WALWriteLock to do that\n>>> (pmem_persist()). I wonder if this also affected the performance\n>>> differences between those two methods since WALWriteLock serializes\n>>> the operations. With PMEM, multiple backends can concurrently flush\n>>> the records if the memory region is not overlapped? If so, flushing\n>>> WAL without WALWriteLock would be a big benefit.\n>>>\n>>\n>> That's a very good question - it's quite possible the WALWriteLock is\n>> not really needed, because the processes are actually \"writing\" the WAL\n>> directly to PMEM. So it's a bit confusing, because it's only really\n>> concerned about making sure it's flushed.\n>>\n>> And yes, multiple processes certainly can write to PMEM at the same\n>> time, in fact it's a requirement to get good throughput I believe. My\n>> understanding is we need ~8 processes, at least that's what I heard from\n>> people with more PMEM experience.\n> \n> Thanks, that's good to know.\n> \n>>\n>> TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n>> from the 0002 patch) is actually correct. Essentially, consider the\n>> backend needs to do a flush, but does not have a segment mapped. So it\n>> maps it and calls pmem_drain() on it.\n>>\n>> But does that actually flush anything? Does it properly flush changes\n>> done by other processes that may not have called pmem_drain() yet? 
I\n>> find this somewhat suspicious and I'd bet all processes that did write\n>> something have to call pmem_drain().\n>\nFor the record, from what I learned / been told by engineers with PMEM\nexperience, calling pmem_drain() should properly flush changes done by\nother processes. So it should be sufficient to do that in XLogFlush(),\nfrom a single process.\n\nMy understanding is that we have about three challenges here:\n\n(a) we still need to track how far we flushed, so this needs to be\nprotected by some lock anyway (although perhaps a much smaller section\nof the function)\n\n(b) pmem_drain() flushes all the changes, so it flushes even the \"future\"\npart of the WAL after the requested LSN, which may negatively affect\nperformance I guess. So I wonder if pmem_persist would be a better fit,\nas it allows specifying a range that should be persisted.\n\n(c) As mentioned before, PMEM behaves differently with concurrent\naccess, i.e. it reaches peak throughput with a relatively low number of\nthreads writing data, and then the throughput drops quite quickly. I'm\nnot sure if the same thing applies to pmem_drain() too - if it does, we\nmay need something like we have for insertions, i.e. a handful of locks\nallowing a limited number of concurrent inserts.\n\n\n> Yeah, in terms of experiments at least it's good to find out that the\n> approach mmapping each WAL segment is not good at performance.\n> \nRight. The problem with small WAL segments seems to be that each mmap\ncauses the TLB to be thrown away, which means a lot of expensive cache\nmisses. As the mmap needs to be done by each backend writing WAL, this\nis particularly bad with small WAL segments. The NTT patch works around\nthat by doing just a single mmap.\n\nI wonder if we could pre-allocate and mmap small segments, and keep them\nmapped and just rename the underlying files when recycling them. That'd\nkeep the regular segment files, as expected by various tools, etc. 
The\nquestion is what would happen when we temporarily need more WAL, etc.\n\n>>>\n>>> ...\n>>>\n>>> I think the performance improvement by NTT patch with the 16MB WAL\n>>> segment, the most common WAL segment size, is very good (150437 vs.\n>>> 212410 with 64 clients). But maybe evaluating writing WAL segment\n>>> files on PMEM DAX filesystem is also worth, as you mentioned, if we\n>>> don't do that yet.\n>>>\n>>\n>> Well, not sure. I think the question is still open whether it's actually\n>> safe to run on DAX, which does not have atomic writes of 512B sectors,\n>> and I think we rely on that e.g. for pg_config. But maybe for WAL that's\n>> not an issue.\n> \n> I think we can use the Block Translation Table (BTT) driver that\n> provides atomic sector updates.\n> \n\nBut we have benchmarked that, see my message from 2020/11/26, which\nshows this table:\n\n master/btt master/dax ntt simple\n -----------------------------------------------------------\n 1 5469 7402 7977 6746\n 16 48222 80869 107025 82343\n 32 73974 158189 214718 158348\n 64 85921 154540 225715 164248\n 96 150602 221159 237008 217253\n\nClearly, BTT is quite expensive. Maybe there's a way to tune that at\nfilesystem/kernel level, I haven't tried that.\n\n>>\n>>>> I'm also wondering if WAL is the right usage for PMEM. Per [2] there's a\n>>>> huge read-write assymmetry (the writes being way slower), and their\n>>>> recommendation (in \"Observation 3\" is)\n>>>>\n>>>> The read-write asymmetry of PMem im-plies the necessity of avoiding\n>>>> writes as much as possible for PMem.\n>>>>\n>>>> So maybe we should not be trying to use PMEM for WAL, which is pretty\n>>>> write-heavy (and in most cases even write-only).\n>>>\n>>> I think using PMEM for WAL is cost-effective but it leverages the only\n>>> low-latency (sequential) write, but not other abilities such as\n>>> fine-grained access and low-latency random write. 
If we want to\n>>> exploit its all ability we might need some drastic changes to logging\n>>> protocol while considering storing data on PMEM.\n>>>\n>>\n>> True. I think investigating whether it's sensible to use PMEM for this\n>> purpose. It may turn out that replacing the DRAM WAL buffers with writes\n>> directly to PMEM is not economical, and aggregating data in a DRAM\n>> buffer is better :-(\n> \n> Yes. I think it might be interesting to do an analysis of the\n> bottlenecks of NTT patch by perf etc. If bottlenecks are moved to\n> other places by removing WALWriteLock during flush, it's probably a\n> good sign for further performance improvements. IIRC WALWriteLock is\n> one of the main bottlenecks on OLTP workload, although my memory might\n> already be out of date.\n> \n\nI think WALWriteLock itself (i.e. acquiring/releasing it) is not an\nissue - the problem is that writing the WAL to persistent storage itself\nis expensive, and we're waiting on that.\n\nSo it's not clear to me if removing the lock (and allowing multiple\nprocesses to do pmem_drain concurrently) can actually help, considering\npmem_drain() should flush writes from other processes anyway.\n\nBut as I said, that is just my theory - I might be entirely wrong, it'd\nbe good to hack XLogFlush a bit and try it out.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 27 Jan 2021 17:41:36 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> (c) As mentioned before, PMEM behaves differently with concurrent\r\n> access, i.e. it reaches peak throughput with relatively low number of\r\n> threads wroting data, and then the throughput drops quite quickly. I'm\r\n> not sure if the same thing applies to pmem_drain() too - if it does, we\r\n> may need something like we have for insertions, i.e. a handful of locks\r\n> allowing limited number of concurrent inserts.\r\n\r\n> I think WALWriteLock itself (i.e. acquiring/releasing it) is not an\r\n> issue - the problem is that writing the WAL to persistent storage itself\r\n> is expensive, and we're waiting to that.\r\n> \r\n> So it's not clear to me if removing the lock (and allowing multiple\r\n> processes to do pmem_drain concurrently) can actually help, considering\r\n> pmem_drain() should flush writes from other processes anyway.\r\n\r\nI may be out of the track, but HPE's benchmark using Oracle 18c, placing the REDO log file on Intel PMEM in App Direct mode, showed only 27% performance increase compared to even \"SAS\" SSD.\r\n\r\nhttps://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00074230enw\r\n\r\n\r\nThe just-released Oracle 21c has started support for placing data files on PMEM, eliminating the overhead of buffer cache. It's interesting that this new feature is categorized in \"Manageability\", not \"Performance and scalability.\"\r\n\r\nhttps://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/persistent-memory-database-258797846.html\r\n\r\n\r\nThey recommend placing REDO logs on DAX-aware file systems. I ownder what's behind this.\r\n\r\nhttps://docs.oracle.com/en/database/oracle/oracle-database/21/admin/using-PMEM-db-support.html#GUID-D230B9CF-1845-4833-9BF7-43E9F15B7113\r\n\r\n\"You can use PMEM Filestore for database datafiles and control files. 
For performance reasons, Oracle recommends that you store redo log files as independent files in a DAX-aware filesystem such as EXT4/XFS.\"\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 28 Jan 2021 02:45:46 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Tomas,\n\nI'd answer your questions. (Not all for now, sorry.)\n\n\n> Do I understand correctly that the patch removes \"regular\" WAL buffers\nand instead writes the data into the non-volatile PMEM buffer, without\nwriting that to the WAL segments at all (unless in archiving mode)?\n> Firstly, I guess many (most?) instances will have to write the WAL\nsegments anyway because of PITR/backups, so I'm not sure we can save much\nhere.\n\nMostly yes. My \"non-volatile WAL buffer\" patchset removes regular volatile\nWAL buffers and brings non-volatile ones. All the WAL will get into the\nnon-volatile buffers and persist there. No write out of the buffers to WAL\nsegment files is required. However in archiving mode or in a case of buffer\nfull (described later), both of the non-volatile buffers and the segment\nfiles are used.\n\nIn archiving mode with my patchset, for each time one segment (16MB\ndefault) is fixed on the non-volatile buffers, that segment is written to a\nsegment file asynchronously (by XLogBackgroundFlush). Then it will be\narchived by existing archiving functionality.\n\n\n> But more importantly - doesn't that mean the nvwal_size value is\nessentially a hard limit? With max_wal_size, it's a soft limit i.e. we're\nallowed to temporarily use more WAL when needed. But with a pre-allocated\nfile, that's clearly not possible. So what would happen in those cases?\n\nYes, nvwal_size is a hard limit, and I see it's a major weak point of my\npatchset.\n\nWhen all non-volatile WAL buffers are filled, the oldest segment on the\nbuffers is written (by XLogWrite) to a regular WAL segment file, then those\nbuffers are cleared (by AdvanceXLInsertBuffer) for new records. All WAL\nrecord insertions to the buffers block until that write and clear are\ncomplete. 
Due to that, all write transactions also block.\n\nTo make matters worse, if a checkpoint eventually occurs in such a\nbuffer-full case, record insertions would block for a certain time at the\nend of the checkpoint because a large amount of the non-volatile buffers\nwill be cleared (see PreallocNonVolatileXlogBuffer). From a client's view, it\nwould look as if the postgres server freezes for a while.\n\nProper checkpointing would prevent such cases, but it could be hard to\ncontrol. When I reproduced Gang's case reported in this thread, such\nbuffer full and freeze occurred.\n\n\n> Also, is it possible to change nvwal_size? I haven't tried, but I wonder\nwhat happens with the current contents of the file.\n\nThe value of nvwal_size should be equal to the actual size of the nvwal_path\nfile when postgres starts up. If not equal, postgres will panic at\nMapNonVolatileXLogBuffer (see nv_xlog_buffer.c), and the WAL contents on\nthe file will remain as they were. So, if an admin accidentally changes the\nnvwal_size value, they just cannot get postgres up.\n\nThe file size may be extended/shrunk offline by the truncate(1) command, but\nthe WAL contents on the file also should be moved to the proper offset\nbecause the insertion/recovery offset is calculated by modulo, that is,\nrecord's LSN % nvwal_size; otherwise we lose WAL. An offline tool to do\nsuch an operation might be required, but none exists yet.\n\n\n> The way I understand the current design is that we're essentially\nswitching from this architecture:\n>\n> clients -> wal buffers (DRAM) -> wal segments (storage)\n>\n> to this\n>\n> clients -> wal buffers (PMEM)\n>\n> (Assuming there we don't have to write segments because of archiving.)\n\nYes. 
Let me describe how current PostgreSQL design is and how the patchsets\nand works talked in this thread changes it, AFAIU:\n\n - Current PostgreSQL:\n clients -[memcpy]-> buffers (DRAM) -[write]-> segments (disk)\n\n - Patch \"pmem-with-wal-buffers-master.patch\" Tomas posted:\n clients -[memcpy]-> buffers (DRAM) -[pmem_memcpy]-> mmap-ed segments\n(PMEM)\n\n - My \"non-volatile WAL buffer\" patchset:\n clients -[pmem_memcpy(*)]-> buffers (PMEM)\n\n - My another patchset mmap-ing segments as buffers:\n clients -[pmem_memcpy(*)]-> mmap-ed segments as buffers (PMEM)\n\n - \"Non-volatile Memory Logging\" in PGcon 2016 [1][2][3]:\n clients -[memcpy]-> buffers (WC[4] DRAM as pseudo PMEM) -[async\nwrite]-> segments (disk)\n\n (* or memcpy + pmem_flush)\n\nAnd I'd say that our previous work \"Introducing PMDK into PostgreSQL\"\ntalked in PGCon 2018 [5] and its patchset [6 for the latest] are based on\nthe same idea as Tomas's patch above.\n\n\nThat's all for this mail. Please be patient for the next mail.\n\nBest regards,\nTakashi\n\n[1] https://www.pgcon.org/2016/schedule/track/Performance/945.en.html\n[2] https://github.com/meistervonperf/postgresql-NVM-logging\n[3] https://github.com/meistervonperf/pseudo-pram\n[4] https://www.kernel.org/doc/html/latest/x86/pat.html\n[5] https://pgcon.org/2018/schedule/events/1154.en.html\n[6]\nhttps://www.postgresql.org/message-id/CAOwnP3ONd9uXPXKoc5AAfnpCnCyOna1ru6sU=eY_4WfMjaKG9A@mail.gmail.com\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\nHi Tomas,I'd answer your questions. (Not all for now, sorry.)> Do I understand correctly that the patch removes \"regular\" WAL buffers and instead writes the data into the non-volatile PMEM buffer, without writing that to the WAL segments at all (unless in archiving mode)?> Firstly, I guess many (most?) instances will have to write the WAL segments anyway because of PITR/backups, so I'm not sure we can save much here.Mostly yes. 
My \"non-volatile WAL buffer\" patchset removes regular volatile WAL buffers and brings non-volatile ones. All the WAL will get into the non-volatile buffers and persist there. No write out of the buffers to WAL segment files is required. However in archiving mode or in a case of buffer full (described later), both of the non-volatile buffers and the segment files are used.In archiving mode with my patchset, for each time one segment (16MB default) is fixed on the non-volatile buffers, that segment is written to a segment file asynchronously (by XLogBackgroundFlush). Then it will be archived by existing archiving functionality.> But more importantly - doesn't that mean the nvwal_size value is essentially a hard limit? With max_wal_size, it's a soft limit i.e. we're allowed to temporarily use more WAL when needed. But with a pre-allocated file, that's clearly not possible. So what would happen in those cases?Yes, nvwal_size is a hard limit, and I see it's a major weak point of my patchset.When all non-volatile WAL buffers are filled, the oldest segment on the buffers is written (by XLogWrite) to a regular WAL segment file, then those buffers are cleared (by AdvanceXLInsertBuffer) for new records. All WAL record insertions to the buffers block until that write and clear are complete. Due to that, all write transactions also block.To make the matter worse, if a checkpoint eventually occurs in such a buffer full case, record insertions would block for a certain time at the end of the checkpoint because a large amount of the non-volatile buffers will be cleared (see PreallocNonVolatileXlogBuffer). From a client view, it would look as if the postgres server freezes for a while.Proper checkpointing would prevent such cases, but it could be hard to control. When I reproduced the Gang's case reported in this thread, such buffer full and freeze occured.> Also, is it possible to change nvwal_size? 
I haven't tried, but I wonder what happens with the current contents of the file.The value of nvwal_size should be equal to the actual size of nvwal_path file when postgres starts up. If not equal, postgres will panic at MapNonVolatileXLogBuffer (see nv_xlog_buffer.c), and the WAL contents on the file will remain as it was. So, if an admin accidentally changes the nvwal_size value, they just cannot get postgres up.The file size may be extended/shrunk offline by truncate(1) command, but the WAL contents on the file also should be moved to the proper offset because the insertion/recovery offset is calculated by modulo, that is, record's LSN % nvwal_size; otherwise we lose WAL. An offline tool to do such an operation might be required, but is not yet.> The way I understand the current design is that we're essentially switching from this architecture:> > clients -> wal buffers (DRAM) -> wal segments (storage)> > to this> > clients -> wal buffers (PMEM)> > (Assuming there we don't have to write segments because of archiving.)Yes. Let me describe how current PostgreSQL design is and how the patchsets and works talked in this thread changes it, AFAIU: - Current PostgreSQL: clients -[memcpy]-> buffers (DRAM) -[write]-> segments (disk) - Patch \"pmem-with-wal-buffers-master.patch\" Tomas posted: clients -[memcpy]-> buffers (DRAM) -[pmem_memcpy]-> mmap-ed segments (PMEM) - My \"non-volatile WAL buffer\" patchset: clients -[pmem_memcpy(*)]-> buffers (PMEM) - My another patchset mmap-ing segments as buffers: clients -[pmem_memcpy(*)]-> mmap-ed segments as buffers (PMEM) - \"Non-volatile Memory Logging\" in PGcon 2016 [1][2][3]: clients -[memcpy]-> buffers (WC[4] DRAM as pseudo PMEM) -[async write]-> segments (disk) (* or memcpy + pmem_flush)And I'd say that our previous work \"Introducing PMDK into PostgreSQL\" talked in PGCon 2018 [5] and its patchset [6 for the latest] are based on the same idea as Tomas's patch above.That's all for this mail. 
Please be patient for the next mail.\n\nBest regards,\nTakashi\n\n[1] https://www.pgcon.org/2016/schedule/track/Performance/945.en.html\n[2] https://github.com/meistervonperf/postgresql-NVM-logging\n[3] https://github.com/meistervonperf/pseudo-pram\n[4] https://www.kernel.org/doc/html/latest/x86/pat.html\n[5] https://pgcon.org/2018/schedule/events/1154.en.html\n[6] https://www.postgresql.org/message-id/CAOwnP3ONd9uXPXKoc5AAfnpCnCyOna1ru6sU=eY_4WfMjaKG9A@mail.gmail.com\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Fri, 29 Jan 2021 18:02:05 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 1:41 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 1/25/21 3:56 AM, Masahiko Sawada wrote:\n> >>\n> >> ...\n> >>\n> >> On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n> >>> ...\n> >>>\n> >>> While looking at the two methods: NTT and simple-no-buffer, I realized\n> >>> that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n> >>> pmem_drain()) WAL without acquiring WALWriteLock whereas\n> >>> simple-no-buffer patch acquires WALWriteLock to do that\n> >>> (pmem_persist()). I wonder if this also affected the performance\n> >>> differences between those two methods since WALWriteLock serializes\n> >>> the operations. With PMEM, multiple backends can concurrently flush\n> >>> the records if the memory region is not overlapped? If so, flushing\n> >>> WAL without WALWriteLock would be a big benefit.\n> >>>\n> >>\n> >> That's a very good question - it's quite possible the WALWriteLock is\n> >> not really needed, because the processes are actually \"writing\" the WAL\n> >> directly to PMEM. So it's a bit confusing, because it's only really\n> >> concerned about making sure it's flushed.\n> >>\n> >> And yes, multiple processes certainly can write to PMEM at the same\n> >> time, in fact it's a requirement to get good throughput I believe. My\n> >> understanding is we need ~8 processes, at least that's what I heard from\n> >> people with more PMEM experience.\n> >\n> > Thanks, that's good to know.\n> >\n> >>\n> >> TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n> >> from the 0002 patch) is actually correct. Essentially, consider the\n> >> backend needs to do a flush, but does not have a segment mapped. So it\n> >> maps it and calls pmem_drain() on it.\n> >>\n> >> But does that actually flush anything? Does it properly flush changes\n> >> done by other processes that may not have called pmem_drain() yet? 
I\n> >> find this somewhat suspicious and I'd bet all processes that did write\n> >> something have to call pmem_drain().\n> >\n> For the record, from what I learned / been told by engineers with PMEM\n> experience, calling pmem_drain() should properly flush changes done by\n> other processes. So it should be sufficient to do that in XLogFlush(),\n> from a single process.\n>\n> My understanding is that we have about three challenges here:\n>\n> (a) we still need to track how far we flushed, so this needs to be\n> protected by some lock anyway (although perhaps a much smaller section\n> of the function)\n>\n> (b) pmem_drain() flushes all the changes, so it flushes even \"future\"\n> part of the WAL after the requested LSN, which may negatively affect\n> performance I guess. So I wonder if pmem_persist would be a better fit,\n> as it allows specifying a range that should be persisted.\n>\n> (c) As mentioned before, PMEM behaves differently with concurrent\n> access, i.e. it reaches peak throughput with relatively low number of\n> threads writing data, and then the throughput drops quite quickly. I'm\n> not sure if the same thing applies to pmem_drain() too - if it does, we\n> may need something like we have for insertions, i.e. a handful of locks\n> allowing limited number of concurrent inserts.\n\nThanks. That's a good summary.\n\n>\n>\n> > Yeah, in terms of experiments at least it's good to find out that the\n> > approach mmapping each WAL segment is not good at performance.\n> >\n> Right. The problem with small WAL segments seems to be that each mmap\n> causes the TLB to be thrown away, which means a lot of expensive cache\n> misses. As the mmap needs to be done by each backend writing WAL, this\n> is particularly bad with small WAL segments. The NTT patch works around\n> that by doing just a single mmap.\n>\n> I wonder if we could pre-allocate and mmap small segments, and keep them\n> mapped and just rename the underlying files when recycling them. 
That'd\n> keep the regular segment files, as expected by various tools, etc. The\n> question is what would happen when we temporarily need more WAL, etc.\n>\n> >>>\n> >>> ...\n> >>>\n> >>> I think the performance improvement by NTT patch with the 16MB WAL\n> >>> segment, the most common WAL segment size, is very good (150437 vs.\n> >>> 212410 with 64 clients). But maybe evaluating writing WAL segment\n> >>> files on PMEM DAX filesystem is also worthwhile, as you mentioned, if we\n> >>> don't do that yet.\n> >>>\n> >>\n> >> Well, not sure. I think the question is still open whether it's actually\n> >> safe to run on DAX, which does not have atomic writes of 512B sectors,\n> >> and I think we rely on that e.g. for pg_control. But maybe for WAL that's\n> >> not an issue.\n> >\n> > I think we can use the Block Translation Table (BTT) driver that\n> > provides atomic sector updates.\n> >\n>\n> But we have benchmarked that, see my message from 2020/11/26, which\n> shows this table:\n>\n> master/btt master/dax ntt simple\n> -----------------------------------------------------------\n> 1 5469 7402 7977 6746\n> 16 48222 80869 107025 82343\n> 32 73974 158189 214718 158348\n> 64 85921 154540 225715 164248\n> 96 150602 221159 237008 217253\n>\n> Clearly, BTT is quite expensive. Maybe there's a way to tune that at\n> filesystem/kernel level, I haven't tried that.\n\nI missed your mail. Yeah, BTT seems to be quite expensive.\n\n>\n> >>\n> >>>> I'm also wondering if WAL is the right usage for PMEM. 
Per [2] there's a\n> >>>> huge read-write asymmetry (the writes being way slower), and their\n> >>>> recommendation (in \"Observation 3\") is\n> >>>>\n> >>>> The read-write asymmetry of PMem implies the necessity of avoiding\n> >>>> writes as much as possible for PMem.\n> >>>>\n> >>>> So maybe we should not be trying to use PMEM for WAL, which is pretty\n> >>>> write-heavy (and in most cases even write-only).\n> >>>\n> >>> I think using PMEM for WAL is cost-effective but it leverages only the\n> >>> low-latency (sequential) write, not other abilities such as\n> >>> fine-grained access and low-latency random write. If we want to\n> >>> exploit all its abilities we might need some drastic changes to the\n> >>> logging protocol while considering storing data on PMEM.\n> >>>\n> >>\n> >> True. I think it's worth investigating whether it's sensible to use PMEM for this\n> >> purpose. It may turn out that replacing the DRAM WAL buffers with writes\n> >> directly to PMEM is not economical, and aggregating data in a DRAM\n> >> buffer is better :-(\n> >\n> > Yes. I think it might be interesting to do an analysis of the\n> > bottlenecks of the NTT patch by perf etc. If bottlenecks are moved to\n> > other places by removing WALWriteLock during flush, it's probably a\n> > good sign for further performance improvements. IIRC WALWriteLock is\n> > one of the main bottlenecks on OLTP workloads, although my memory might\n> > already be out of date.\n> >\n>\n> I think WALWriteLock itself (i.e. 
acquiring/releasing it) is not an\n> issue - the problem is that writing the WAL to persistent storage itself\n> is expensive, and we're waiting for that.\n>\n> So it's not clear to me if removing the lock (and allowing multiple\n> processes to do pmem_drain concurrently) can actually help, considering\n> pmem_drain() should flush writes from other processes anyway.\n>\n> But as I said, that is just my theory - I might be entirely wrong, it'd\n> be good to hack XLogFlush a bit and try it out.\n>\n>\n\nI've done some performance benchmarks with the master and NTT v4\npatch. Let me share the results.\n\npgbench setup:\n* scale factor = 2000\n* duration = 600 sec\n* clients = 32, 64, 96\n\nNVWAL setup:\n* nvwal_size = 50GB\n* max_wal_size = 50GB\n* min_wal_size = 50GB\n\nThe whole database fits in shared_buffers and the WAL segment file size is 16MB.\n\nThe results are:\n\n master NTT master-unlogged\n32 113209 67107 154298\n64 144880 54289 178883\n96 151405 50562 180018\n\n\"master-unlogged\" is the same setup as \"master\" except for using\nunlogged tables (via the --unlogged-tables pgbench option). The TPS\nincreased by about 20% compared to the \"master\" case (i.e., the logged\ntable case). The reason I experimented with the unlogged table case as\nwell is that we can think of these results as the ideal performance we\nwould get if we were able to write WAL records in 0 sec. IOW, even if\nthe PMEM patch significantly improved WAL logging performance, I think\nit could not exceed this performance. But the hope is that if we\ncurrently have a performance bottleneck in WAL logging (e.g., locking\nand writing WAL), removing or minimizing WAL logging would bring a\nchance to further improve performance by eliminating the newly\nemerging bottleneck.\n\nAs we can see from the above result, apparently, the performance of\nthe “ntt” case was not good in this evaluation. 
I've not reviewed the\npatch in depth yet, but something might be wrong with the v4 patch, or\nthe PMEM configuration I did on my environment is wrong.\n\nBesides, I've checked the main wait events in each experiment using\npg_wait_sampling. Here are the top 5 wait events in the \"master\" case,\nexcluding wait events on the main function of auxiliary processes:\n\n event_type | event | sum\n------------+----------------------+-------\n Client | ClientRead | 46902\n LWLock | WALWrite | 33405\n IPC | ProcArrayGroupUpdate | 8855\n LWLock | WALInsert | 3215\n LWLock | ProcArray | 3022\n\nWe can see the wait event on WALWrite lwlock acquisition happened many\ntimes and it was the primary wait event. On the other hand, in the\n\"master-unlogged\" case, I got:\n\n event_type | event | sum\n------------+----------------------+-------\n Client | ClientRead | 59871\n IPC | ProcArrayGroupUpdate | 17528\n LWLock | ProcArray | 4317\n LWLock | XactSLRU | 3705\n IPC | XactGroupUpdate | 3045\n\nThe lwlock waits for WAL logging disappeared.\n\nThe result of the \"ntt\" case is:\n\n event_type | event | sum\n------------+----------------------+--------\n LWLock | WALInsert | 126487\n Client | ClientRead | 12173\n LWLock | BufferContent | 4480\n Lock | transactionid | 2017\n IPC | ProcArrayGroupUpdate | 924\n\nThe wait event on WALWrite lwlock disappeared. Instead, there were\nmany wait events on WALInsert lwlock. I've not investigated this\nresult yet. This could be because the v4 patch acquires WALInsert locks\nmore than necessary, or writing WAL records to PMEM took more time than\nwriting to DRAM, as Tomas mentioned before.\n\nIf the PMEM patch introduces a new WAL file (called the nvwal file in the\npatch) and writes a normal WAL segment file based on the nvwal file, I\nthink it doesn't necessarily need to follow the current WAL segment\nfile format (i.e., sequential writes of 8kB blocks). 
I think there\nis a better algorithm to write WAL records to PMEM more efficiently,\nlike the one proposed in this paper [1].\n\nFinally, I realized while using the PMEM patch that with a large nvwal\nfile, the PostgreSQL server takes a long time to start since it\ninitializes the nvwal file. In my environment, the nvwal size is 50GB\nand it took 1 min to start up. This could lead to downtime in production.\n\n[1] https://jianh.web.engr.illinois.edu/papers/jian-vldb15.pdf\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 13 Feb 2021 12:18:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> I've done some performance benchmarks with the master and NTT v4\r\n> patch. Let me share the results.\r\n> \r\n...\r\n> master NTT master-unlogged\r\n> 32 113209 67107 154298\r\n> 64 144880 54289 178883\r\n> 96 151405 50562 180018\r\n> \r\n> \"master-unlogged\" is the same setup as \"master\" except for using\r\n> unlogged tables (using --unlogged-tables pgbench option). The TPS\r\n> increased by about 20% compared to \"master\" case (i.g., logged table\r\n> case). The reason why I experimented unlogged table case as well is\r\n> that we can think these results as an ideal performance if we were\r\n> able to write WAL records in 0 sec. IOW, even if the PMEM patch would\r\n> significantly improve WAL logging performance, I think it could not\r\n> exceed this performance. But hope is that if we currently have a\r\n> performance bottle-neck in WAL logging (.e.g, locking and writing\r\n> WAL), removing or minimizing WAL logging would bring a chance to\r\n> further improve performance by eliminating the new-coming bottle-neck.\r\n\r\nCould you tell us the specifics of the storage for WAL, e.g., SSD/HDD, the interface is NVMe/SAS/SATA, read-write throughput and latency (on the product catalog), and the product model?\r\n\r\nWas the WAL stored on a storage device separate from the other files? I want to know if the comparison is as fair as possible. I guess that in the NTT (PMEM) case, the WAL traffic is not affected by the I/Os of the other files.\r\n\r\nWhat would the comparison look like between master and unlogged-master if you place WAL on a DAX-aware filesystem like xfs or ext4 on PMEM, which Oracle recommends as REDO log storage? 
That is, if we place the WAL on the fastest storage configuration possible, what would be the difference between the logged and unlogged?\r\n\r\nI'm asking these to know if we consider it worthwhile to make further efforts in special code for WAL on PMEM.\r\n\r\n\r\n> Besides, I've checked the main wait events on each experiment using\r\n> pg_wait_sampling. Here are the top 5 wait events on \"master\" case\r\n> excluding wait events on the main function of auxiliary processes:\r\n> \r\n> event_type | event | sum\r\n> ------------+----------------------+-------\r\n> Client | ClientRead | 46902\r\n> LWLock | WALWrite | 33405\r\n> IPC | ProcArrayGroupUpdate | 8855\r\n> LWLock | WALInsert | 3215\r\n> LWLock | ProcArray | 3022\r\n> \r\n> We can see the wait event on WALWrite lwlock acquisition happened many\r\n> times and it was the primary wait event.\r\n> \r\n> The result of \"ntt\" case is:\r\n> \r\n> event_type | event | sum\r\n> ------------+----------------------+--------\r\n> LWLock | WALInsert | 126487\r\n> Client | ClientRead | 12173\r\n> LWLock | BufferContent | 4480\r\n> Lock | transactionid | 2017\r\n> IPC | ProcArrayGroupUpdate | 924\r\n> \r\n> The wait event on WALWrite lwlock disappeared. Instead, there were\r\n> many wait events on WALInsert lwlock. I've not investigated this\r\n> result yet. This could be because the v4 patch acquires WALInsert lock\r\n> more than necessary or writing WAL records to PMEM took more time than\r\n> writing to DRAM as Tomas mentioned before.\r\n\r\nIncreasing NUM_XLOGINSERT_LOCKS might improve the result, but I don't have much hope because PMEM appears to have limited concurrency...\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Mon, 15 Feb 2021 01:19:34 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nI made a new page at PostgreSQL Wiki to gather and summarize information\nand discussion about PMEM-backed WAL designs and implementations. Some\nparts of the page are TBD. I will continue to maintain the page. Requests\nare welcome.\n\nPersistent Memory for WAL\nhttps://wiki.postgresql.org/wiki/Persistent_Memory_for_WAL\n\nRegards,\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Tue, 16 Feb 2021 16:20:10 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "From: Takashi Menjo <takashi.menjo@gmail.com> \r\n> I made a new page at PostgreSQL Wiki to gather and summarize information and discussion about PMEM-backed WAL designs and implementations. Some parts of the page are TBD. I will continue to maintain the page. Requests are welcome.\r\n> \r\n> Persistent Memory for WAL\r\n> https://wiki.postgresql.org/wiki/Persistent_Memory_for_WAL\r\n\r\nThank you for putting together the information.\r\n\r\nIn \"Allocates WAL buffers on shared buffers\", \"shared buffers\" should be DRAM, because shared buffers in Postgres means the buffer cache for database data.\r\n\r\nI haven't tracked the whole thread, but could you collect information like the following? I think such (partly basic) information will be helpful to decide whether it's worth putting more effort into complex code, or it's enough to place WAL on DAX-aware filesystems and tune the filesystem.\r\n\r\n* What approaches other DBMSs take, and their performance gains (Oracle, SQL Server, HANA, Cassandra, etc.)\r\nThe same DBMS should take different approaches depending on the file type: Oracle recommends different things for data files and REDO logs.\r\n\r\n* The storage capabilities of PMEM compared to the fast(est) alternatives such as NVMe SSD (read/write IOPS, latency, throughput, concurrency, which may be posted on websites like Tom's Hardware or SNIA)\r\n\r\n* What's the situation like on Windows?\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Tue, 16 Feb 2021 08:21:03 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Takayuki,\n\nThank you for your helpful comments.\n\nIn \"Allocates WAL buffers on shared buffers\", \"shared buffers\" should be\n> DRAM because shared buffers in Postgres means the buffer cache for database\n> data.\n>\n\nThat's true. Fixed.\n\n\n> I haven't tracked the whole thread, but could you collect information like\n> the following? I think such (partly basic) information will be helpful to\n> decide whether it's worth casting more efforts into complex code, or it's\n> enough to place WAL on DAX-aware filesystems and tune the filesystem.\n>\n> * What approaches other DBMSs take, and their performance gains (Oracle,\n> SQL Server, HANA, Cassandra, etc.)\n> The same DBMS should take different approaches depending on the file type:\n> Oracle recommends different things to data files and REDO logs.\n>\n\nI also think it will be helpful. Adding \"Other DBMSes using PMEM\" section.\n\n* The storage capabilities of PMEM compared to the fast(est) alternatives\n> such as NVMe SSD (read/write IOPS, latency, throughput, concurrency, which\n> may be posted on websites like Tom's Hardware or SNIA)\n>\n\nThis will be helpful, too. Adding \"Basic performance\" subsection under\n\"Overview of persistent memory (PMEM).\"\n\n* What's the situnation like on Windows?\n>\n\nSorry but I don't know Windows' PMEM support very much. All I know is that\nWindows Server 2016 and 2019 supports PMEM (2016 partially) [1] and PMDK\nsupports Windows [2].\n\nAll the above contents will be updated gradually. 
Please stay tuned.\n\nRegards,\n\n[1]\nhttps://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/deploy-pmem\n[2]\nhttps://docs.pmem.io/persistent-memory/getting-started-guide/installing-pmdk/installing-pmdk-on-windows\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Tue, 16 Feb 2021 18:10:11 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Sawada,\n\nThank you for your performance report.\n\nFirst, I'd say that the latest v5 non-volatile WAL buffer patchset\nitself does not look bad. I ran a performance test of v5 and got\nbetter performance than the original (non-patched) one and our\nprevious work. See the attached figure for results.\n\nI think the steps and/or setups of Tomas's tests, yours, and mine could be\ndifferent, leading to the different performance results. So I show the\nsteps and setups for my performance test. Please see the tail of this\nmail for them.\n\nAlso, I added performance tips to the PMEM page at the PostgreSQL wiki\n[1]. I hope they are helpful for improving performance.\n\nRegards,\nTakashi\n\n[1] https://wiki.postgresql.org/wiki/Persistent_Memory_for_WAL#Performance_tips\n\n\n\n# Environment variables\nexport PGHOST=/tmp\nexport PGPORT=5432\nexport PGDATABASE=\"$USER\"\nexport PGUSER=\"$USER\"\nexport PGDATA=/mnt/nvme0n1/pgdata\n\n# Steps\nNote that I ran the postgres server and pgbench on a single machine\nbut on two separate NUMA nodes. 
PMEM and PCI SSD for the server process\nare on the server-side NUMA node.\n\n01) Create a PMEM namespace (sudo ndctl create-namespace -f -t pmem -m\nfsdax -M dev -e namespace0.0)\n02) Make an ext4 filesystem for PMEM then mount it with DAX option\n(sudo mkfs.ext4 -q -F /dev/pmem0 ; sudo mount -o dax /dev/pmem0\n/mnt/pmem0)\n03) Make another ext4 filesystem for PCIe SSD then mount it (sudo\nmkfs.ext4 -q -F /dev/nvme0n1 ; sudo mount /dev/nvme0n1 /mnt/nvme0n1)\n04) Make /mnt/pmem0/pg_wal directory for WAL\n05) Make /mnt/nvme0n1/pgdata directory for PGDATA\n06) Run initdb (initdb --locale=C --encoding=UTF8 -X /mnt/pmem0/pg_wal ...)\n - Also give -P /mnt/pmem0/pg_wal/nvwal -Q 81920 in the case of\n\"Non-volatile WAL buffer\"\n07) Edit postgresql.conf as the attached one\n08) Start postgres server process on NUMA node 0 (numactl -N 0 -m 0 --\npg_ctl -l pg.log start)\n09) Create a database (createdb --locale=C --encoding=UTF8)\n10) Initialize pgbench tables with s=50 (pgbench -i -s 50)\n11) Stop the postgres server process (pg_ctl -l pg.log -m smart stop)\n12) Remount the PMEM and the PCIe SSD\n13) Start postgres server process on NUMA node 0 again (numactl -N 0\n-m 0 -- pg_ctl -l pg.log start)\n14) Run pg_prewarm for all the four pgbench_* tables\n15) Run pgbench on NUMA node 1 for 30 minutes (numactl -N 1 -m 1 --\npgbench -r -M prepared -T 1800 -c __ -j __)\n - It executes the default tpcb-like transactions\n\nI repeated all the steps three times for each (c,j) then got the\nmedian \"tps = __ (including connections establishing)\" of the three as\nthroughput and the \"latency average = __ ms \" of that time as average\nlatency.\n\n# Setup\n- System: HPE ProLiant DL380 Gen10\n- CPU: Intel Xeon Gold 6240M x2 sockets (18 cores per socket; HT\ndisabled by BIOS)\n- DRAM: DDR4 2933MHz 192GiB/socket x2 sockets (32 GiB per channel x 6\nchannels per socket)\n- Optane PMem: Apache Pass, AppDirect Mode, DDR4 2666MHz 1.5TiB/socket\nx2 sockets (256 GiB per channel x 6 channels per 
socket; interleaving\nenabled)\n- PCIe SSD: DC P4800X Series SSDPED1K750GA\n- Distro: Ubuntu 20.04.1\n- C compiler: gcc 9.3.0\n- libc: glibc 2.31\n- Linux kernel: 5.7.0 (built by myself)\n- Filesystem: ext4 (DAX enabled when using Optane PMem)\n- PMDK: 1.9 (built by myself)\n- PostgreSQL (Original): 9e7dbe3369cd8f5b0136c53b817471002505f934 (Jan\n18, 2021 @ master)\n- PostgreSQL (Mapped WAL file): Original + v5 of \"Applying PMDK to WAL\noperations for persistent memory\" [2]\n- PostgreSQL (Non-volatile WAL buffer): Original + v5 of \"Non-volatile\nWAL buffer\" [3]; please read the files' prefix \"v4-\" as \"v5-\"\n\n[2] https://www.postgresql.org/message-id/CAOwnP3O3O1GbHpddUAzT%3DCP3aMpX99%3D1WtBAfsRZYe2Ui53MFQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAOwnP3Oz4CnKp0-_KU-x5irr9pBqPNkk7pjwZE5Pgo8i1CbFGg%40mail.gmail.com\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Wed, 17 Feb 2021 18:02:42 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On 1/22/21 5:04 PM, Konstantin Knizhnik wrote:\n> ...\n>\n> I have heard from several DBMS experts that appearance of huge and\n> cheap non-volatile memory can make a revolution in database system\n> architecture. If all database can fit in non-volatile memory, then we\n> do not need buffers, WAL, ...>\n> But although multi-terabyte NVM announces were made by IBM several\n> years ago, I do not know about some successful DBMS prototypes with new\n> architecture.\n>\n> I tried to understand why...\n>\nIMHO those predictions are a bit too optimistic, because they often\nassume PMEM behavior is mostly similar to DRAM, except for the extra\npersistence. But that's not quite true - throughput with PMEM is much\nlower in general, peak throughput is reached with few processes (and\nthen drops quickly) etc. But over the last few years we were focused on\noptimizing for exactly the opposite - systems with many CPU cores and\nprocesses, because that's what maximizes DRAM throughput.\n\nI'm not saying a revolution is not possible, but it'll probably require\nquite significant rethinking of the whole architecture, and it may take\nmultiple PMEM generations until the performance improves enough to make\nthis economical. Some systems are probably more suitable for this (e.g.\nRedis is doing most of the work in a single process, IIRC).\n\nThe other challenge of course is availability of the hardware - most\nusers run on whatever is widely available at cloud providers. And PMEM\nis unlikely to get there very soon, I'd guess. Until that happens, the\npressure from these customers will be (naturally) fairly low. Perhaps\nsomeone will develop hardware appliances for on-premise setups, as was\nquite common in the past. 
Not sure.\n\n> It was very interesting to me to read this thread, which actually\n> started in 2016 with \"Non-volatile Memory Logging\" presentation at PGCon.\n> As far as I understand from Tomas result right now using PMEM for WAL\n> doesn't provide some substantial increase of performance.\n> \n\nAt the moment, I'd probably agree. It's quite possible the PoC patches\nare missing some optimizations and the difference might be better, but\neven then the performance increase seems fairly modest and limited to\ncertain workloads.\n\n> But the main advantage of PMEM from my point of view is that it allows\n> to avoid write-ahead logging at all!\n\nNo, PMEM certainly does not allow avoiding write-ahead logging - we\nstill need to handle e.g. recovery after a crash, when the data files\nare in unknown / corrupted state.\n\nNot to mention that WAL is used for physical and logical replication\n(and thus HA), and so on.\n\n> Certainly we need to change our algorithms to make it possible. Speaking\n> about Postgres, we have to rewrite all indexes + heap\n> and throw away buffer manager + WAL.\n> \n\nThe problem with removing the buffer manager and just writing everything\ndirectly to PMEM is the worse latency/throughput (compared to DRAM).\nIt's probably much more efficient to combine multiple writes into RAM\nand then do one (much slower) write to persistent storage, than pay the\nhigher latency for every write.\n\nIt might make sense for data sets that are larger than DRAM but can fit\ninto PMEM. 
But that seems like fairly rare case, and even then it may be\nmore efficient to redesign the schema to fit into RAM somehow (sharding,\npartitioning, ...).\n\n> What can be used instead of standard B-Tree?\n> For example there is description of multiword-CAS approach:\n> \n> http://justinlevandoski.org/papers/mwcas.pdf\n> \n> and BzTree implementation on top of it:\n> \n> https://www.cc.gatech.edu/~jarulraj/papers/2018.bztree.vldb.pdf\n> \n> There is free BzTree implementation at github:\n> \n> git@github.com:sfu-dis/bztree.git\n> \n> I tried to adopt it for Postgres. It was not so easy because:\n> 1. It was written in modern C++ (-std=c++14)\n> 2. It supports multithreading, but not mutliprocess access\n> \n> So I have to patch code of this library instead of just using it:\n> \n> git@github.com:postgrespro/bztree.git\n> \n> I have not tested yet most iterating case: access to PMEM through PMDK.\n> And I do not have hardware for such tests.\n> But first results are also seem to be interesting: PMwCAS is kind of\n> lockless algorithm and it shows much better scaling at\n> NUMA host comparing with standard Postgres.\n> \n> I have done simple parallel insertion test: multiple clients are\n> inserting data with random keys.\n> To make competition with vanilla Postgres more honest I used unlogged\n> table:\n> \n> create unlogged table t(pk int, payload int);\n> create index on t using bztree(pk);\n> \n> randinsert.sql:\n> insert into t (payload,pk) values\n> (generate_series(1,1000),random()*1000000000);\n> \n> pgbench -f randinsert.sql -c N -j N -M prepared -n -t 1000 -P 1 postgres\n> \n> So each client is inserting one million records.\n> The target system has 160 virtual and 80 real cores with 256GB of RAM.\n> Results (TPS) are the following:\n> \n> N nbtree bztree\n> 1 540 455\n> 10 993 2237\n> 100 1479 5025\n> \n> So bztree is more than 3 times faster for 100 clients.\n> Just for comparison: result for inserting in this table without index is\n> 10k TPS.\n> 
\n\nI'm not familiar with bztree, but I agree novel indexing structures are\nan interesting topic on their own. I only quickly skimmed the bztree\npaper, but it seems it might be useful even on DRAM (assuming it will\nwork with replication etc.).\n\nThe other \"problem\" with placing data files (tables, indexes) on PMEM\nand making this code PMEM-aware is that these writes generally happen\nasynchronously in the background, so the impact on transaction rate is\nfairly low. This is why all the patches in this thread try to apply PMEM\non the WAL logging / flushing, which is on the critical path.\n\n> I am going then try to play with PMEM.\n> If results will be promising, then it is possible to think about\n> reimplementation of heap and WAL-less Postgres!\n> \n> I am sorry, that my post has no direct relation to the topic of this\n> thread (Non-volatile WAL buffer).\n> It seems to be that it is better to use PMEM to eliminate WAL at all\n> instead of optimizing it.\n> Certainly, I realize that WAL plays very important role in Postgres:\n> archiving and replication are based on WAL. So even if we can live\n> without WAL, it is still not clear whether we really want to live\n> without it.\n> \n> One more idea: using multiword CAS approach requires us to make changes\n> as editing sequences.\n> Such editing sequence is actually ready WAL records. So implementors of\n> access methods do not have to do\n> double work: update data structure in memory and create correspondent\n> WAL records. Moreover, PMwCAS operations are atomic:\n> we can replay or revert them in case of fault. So there is no need in\n> FPW (full page writes) which have very noticeable impact on WAL size and\n> database performance.\n> \n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Feb 2021 04:25:33 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Thank you for your feedback.\n\nOn 19.02.2021 6:25, Tomas Vondra wrote:\n> On 1/22/21 5:04 PM, Konstantin Knizhnik wrote:\n>> ...\n>>\n>> I have heard from several DBMS experts that the appearance of huge and\n>> cheap non-volatile memory can make a revolution in database system\n>> architecture. If the whole database can fit in non-volatile memory, then we\n>> do not need buffers, WAL, ...>\n>> But although multi-terabyte NVM announcements were made by IBM several\n>> years ago, I do not know of any successful DBMS prototypes with a new\n>> architecture.\n>>\n>> I tried to understand why...\n>>\n> IMHO those predictions are a bit too optimistic, because they often\n> assume PMEM behavior is mostly similar to DRAM, except for the extra\n> persistence. But that's not quite true - throughput with PMEM is much\n> lower\nActually that is not completely true.\nThere are several types of NVDIMMs.\nThe most popular now is NVDIMM-N, which is just a combination of DRAM and flash.\nIts speed is the same as normal DRAM's, but its capacity is also only \ncomparable with DRAM's.\nSo I do not think that it is a promising approach.\nAnd the speed of Intel Optane memory is definitely much lower than DRAM's.\n>> But the main advantage of PMEM from my point of view is that it allows\n>> us to avoid write-ahead logging entirely!\n> No, PMEM certainly does not allow avoiding write-ahead logging - we\n> still need to handle e.g. 
recovery after a crash, when the data files\n> are in unknown / corrupted state.\n\nIt is possible to avoid write-ahead logging if we use special algorithms \n(like PMwCAS)\nwhich ensure atomicity of updates.\n> The problem with removing buffer manager and just writing everything\n> directly to PMEM is the worse latency/throughput (compared to DRAM).\n> It's probably much more efficient to combine multiple writes into RAM\n> and then do one (much slower) write to persistent storage, than pay the\n> higher latency for every write.\n>\n> It might make sense for data sets that are larger than DRAM but can fit\n> into PMEM. But that seems like a fairly rare case, and even then it may be\n> more efficient to redesign the schema to fit into RAM somehow (sharding,\n> partitioning, ...).\n\nCertainly, avoiding buffering will make sense only if the speed of accessing \nPMEM is comparable with DRAM's.\n> So I had to patch the code of this library instead of just using it:\n>\n> git@github.com:postgrespro/bztree.git\n>\n> I have not tested the most interesting case yet: access to PMEM through PMDK.\n> And I do not have hardware for such tests.\n> But the first results also seem interesting: PMwCAS is a kind of\n> lockless algorithm and it shows much better scaling on a\n> NUMA host compared with standard Postgres.\n>\n> I have done a simple parallel insertion test: multiple clients are\n> inserting data with random keys.\n> To make the competition with vanilla Postgres more honest I used an unlogged\n> table:\n>\n> create unlogged table t(pk int, payload int);\n> create index on t using bztree(pk);\n>\n> randinsert.sql:\n> insert into t (payload,pk) values\n> (generate_series(1,1000),random()*1000000000);\n>\n> pgbench -f randinsert.sql -c N -j N -M prepared -n -t 1000 -P 1 postgres\n>\n> So each client is inserting one million records.\n> The target system has 160 virtual and 80 real cores with 256GB of RAM.\n> Results (TPS) are the following:\n>\n> N nbtree bztree\n> 1 540 455\n> 10 993 2237\n> 
100 1479 5025\n>\n> So bztree is more than 3 times faster for 100 clients.\n> Just for comparison: result for inserting in this table without index is\n> 10k TPS.\n>\n> I'm not familiar with bztree, but I agree novel indexing structures are\n> an interesting topic on their own. I only quickly skimmed the bztree\n> paper, but it seems it might be useful even on DRAM (assuming it will\n> work with replication etc.).\n>\n> The other \"problem\" with placing data files (tables, indexes) on PMEM\n> and making this code PMEM-aware is that these writes generally happen\n> asynchronously in the background, so the impact on transaction rate is\n> fairly low. This is why all the patches in this thread try to apply PMEM\n> on the WAL logging / flushing, which is on the critical path.\n\nI want to make an update on my prototype.\nUnfortunately my attempt to use bztree with PMEM failed\nbecause of two problems:\n\n1. The libpmemobj/bztree libraries used are not compatible with the Postgres \narchitecture.\nThey support concurrent access, but by multiple threads within one \nprocess (they make wide use of thread-local variables).\nThe traditional Postgres approach (initialize shared data structures in \nthe postmaster\n(shared_preload_libraries) and let forked child processes inherit them) \ndoesn't work for libpmemobj.\nIf a child doesn't open the pmem itself, then any access to it causes a crash.\nAnd if the pmem is opened by the child, it is assigned a different virtual \nmemory address.\nBut the bztree and pmwcas implementations expect that addresses are the same \nin all clients.\n\n2. There is some bug in the bztree/pmwcas implementation which causes its own \ntest to hang in case of multithreaded\naccess in persistence mode. 
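The address-mismatch part of problem 1 is commonly worked around by storing offsets ("relative pointers") instead of raw virtual addresses, so each process resolves a pointer against its own mapping base. A minimal sketch of the idea, using plain Python mmap as a stand-in for a PMEM pool (no libpmemobj involved; the names are illustrative):

```python
import mmap
import os
import tempfile

# Simulate a pool file shared by two processes. Each "process" maps the
# file independently, so absolute virtual addresses would differ; only
# offsets from the start of the mapping are stable across processes.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

def store_record(view: mmap.mmap, offset: int, payload: bytes) -> int:
    """Write payload at offset and return the offset -- a 'relative pointer'."""
    view[offset:offset + len(payload)] = payload
    return offset

def resolve(view: mmap.mmap, rel_ptr: int, length: int) -> bytes:
    """Resolve a relative pointer against this process's own mapping base."""
    return bytes(view[rel_ptr:rel_ptr + length])

writer = mmap.mmap(fd, 4096)   # mapping in "process A"
reader = mmap.mmap(fd, 4096)   # independent mapping in "process B"

rel = store_record(writer, 128, b"wal-record")
writer.flush()

# Process B never saw process A's virtual addresses, but the offset works.
assert resolve(reader, rel, 10) == b"wal-record"

writer.close(); reader.close(); os.close(fd); os.unlink(path)
```

For what it's worth, libpmemobj's own PMEMoid handles are built on this pool-id-plus-offset scheme; the difficulty described above appears to come from the bztree/PMwCAS layer assuming stable absolute addresses on top of it.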
I tried to find the reason for the problem \nbut didn't succeed yet (the PMwCAS implementation is very non-trivial).\n\nSo I just compared single-threaded performance of the bztree test: with \nIntel Optane it was about two times worse\nthan with volatile memory.\n\nI still wonder if using bztree just as an in-memory index would be \ninteresting, because it scales much better than the Postgres B-Tree and \neven our own PgPro\nin_memory extension. But certainly a volatile index has very limited \nuses. Also, full support of all Postgres types in bztree requires a lot \nof effort\n(right now I support only equality comparison).\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 19 Feb 2021 11:51:35 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nI ran a performance test in another environment. The steps, setup,\nand postgresql.conf of the test are the same as the ones I sent on\nFeb 17 [1], except for the following items:\n\n# Setup\n- Distro: Red Hat Enterprise Linux release 8.2 (Ootpa)\n- C compiler: gcc-8.3.1-5.el8.x86_64\n- libc: glibc-2.28-101.el8.x86_64\n- Linux kernel: kernel-4.18.0-193.el8.x86_64\n- PMDK: libpmem-1.6.1-1.el8.x86_64, libpmem-devel-1.6.1-1.el8.x86_64\n\nSee the attached figure for the results. In short, the v5 non-volatile\nWAL buffer got better performance than the original (non-patched) one.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAOwnP3OFofOsFtmeikQcbMp0YWdJn0kVB4Ka_0tj+Urq7dtAzQ@mail.gmail.com\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Wed, 24 Feb 2021 11:03:29 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "On Sat, Feb 13, 2021 at 12:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jan 28, 2021 at 1:41 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 1/25/21 3:56 AM, Masahiko Sawada wrote:\n> > >>\n> > >> ...\n> > >>\n> > >> On 1/21/21 3:17 AM, Masahiko Sawada wrote:\n> > >>> ...\n> > >>>\n> > >>> While looking at the two methods: NTT and simple-no-buffer, I realized\n> > >>> that in XLogFlush(), NTT patch flushes (by pmem_flush() and\n> > >>> pmem_drain()) WAL without acquiring WALWriteLock whereas\n> > >>> simple-no-buffer patch acquires WALWriteLock to do that\n> > >>> (pmem_persist()). I wonder if this also affected the performance\n> > >>> differences between those two methods since WALWriteLock serializes\n> > >>> the operations. With PMEM, multiple backends can concurrently flush\n> > >>> the records if the memory region is not overlapped? If so, flushing\n> > >>> WAL without WALWriteLock would be a big benefit.\n> > >>>\n> > >>\n> > >> That's a very good question - it's quite possible the WALWriteLock is\n> > >> not really needed, because the processes are actually \"writing\" the WAL\n> > >> directly to PMEM. So it's a bit confusing, because it's only really\n> > >> concerned about making sure it's flushed.\n> > >>\n> > >> And yes, multiple processes certainly can write to PMEM at the same\n> > >> time, in fact it's a requirement to get good throughput I believe. My\n> > >> understanding is we need ~8 processes, at least that's what I heard from\n> > >> people with more PMEM experience.\n> > >\n> > > Thanks, that's good to know.\n> > >\n> > >>\n> > >> TBH I'm not convinced the code in the \"simple-no-buffer\" code (coming\n> > >> from the 0002 patch) is actually correct. Essentially, consider the\n> > >> backend needs to do a flush, but does not have a segment mapped. So it\n> > >> maps it and calls pmem_drain() on it.\n> > >>\n> > >> But does that actually flush anything? 
Does it properly flush changes\n> > >> done by other processes that may not have called pmem_drain() yet? I\n> > >> find this somewhat suspicious and I'd bet all processes that did write\n> > >> something have to call pmem_drain().\n> > >\n> > For the record, from what I learned / been told by engineers with PMEM\n> > experience, calling pmem_drain() should properly flush changes done by\n> > other processes. So it should be sufficient to do that in XLogFlush(),\n> > from a single process.\n> >\n> > My understanding is that we have about three challenges here:\n> >\n> > (a) we still need to track how far we flushed, so this needs to be\n> > protected by some lock anyway (although perhaps a much smaller section\n> > of the function)\n> >\n> > (b) pmem_drain() flushes all the changes, so it flushes even \"future\"\n> > part of the WAL after the requested LSN, which may negatively affects\n> > performance I guess. So I wonder if pmem_persist would be a better fit,\n> > as it allows specifying a range that should be persisted.\n> >\n> > (c) As mentioned before, PMEM behaves differently with concurrent\n> > access, i.e. it reaches peak throughput with relatively low number of\n> > threads wroting data, and then the throughput drops quite quickly. I'm\n> > not sure if the same thing applies to pmem_drain() too - if it does, we\n> > may need something like we have for insertions, i.e. a handful of locks\n> > allowing limited number of concurrent inserts.\n>\n> Thanks. That's a good summary.\n>\n> >\n> >\n> > > Yeah, in terms of experiments at least it's good to find out that the\n> > > approach mmapping each WAL segment is not good at performance.\n> > >\n> > Right. The problem with small WAL segments seems to be that each mmap\n> > causes the TLB to be thrown away, which means a lot of expensive cache\n> > misses. As the mmap needs to be done by each backend writing WAL, this\n> > is particularly bad with small WAL segments. 
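One mechanical detail worth noting at this point: a POSIX file mapping is tied to the file's inode, not its name, so a mapped file can be renamed without invalidating the mapping. That is what makes keeping segments mapped across recycling feasible at all. A small illustration, using plain Python mmap on an ordinary file (no PMEM or PMDK involved; the segment names are made up):

```python
import mmap
import os
import tempfile

# A mapping refers to the file's inode, not its path: renaming the file
# does not invalidate the mapping, so a recycled WAL segment could keep
# its existing mapping while only the filename changes.
d = tempfile.mkdtemp()
old = os.path.join(d, "000000010000000000000001")
new = os.path.join(d, "000000010000000000000009")

fd = os.open(old, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, 8192)
view = mmap.mmap(fd, 8192)

os.rename(old, new)            # "recycle" the segment under a new name

view[0:11] = b"wal-payload"    # write through the still-valid mapping
view.flush()

with open(new, "rb") as f:     # the data landed in the renamed file
    assert f.read(11) == b"wal-payload"

view.close(); os.close(fd); os.remove(new); os.rmdir(d)
```

Whether this helps in practice still depends on keeping the number of live mappings bounded, since the TLB cost discussed above comes from the mappings themselves, not from the filenames.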
The NTT patch works around\n> > that by doing just a single mmap.\n> >\n> > I wonder if we could pre-allocate and mmap small segments, and keep them\n> > mapped and just rename the underlying files when recycling them. That'd\n> > keep the regular segment files, as expected by various tools, etc. The\n> > question is what would happen when we temporarily need more WAL, etc.\n> >\n> > >>>\n> > >>> ...\n> > >>>\n> > >>> I think the performance improvement by NTT patch with the 16MB WAL\n> > >>> segment, the most common WAL segment size, is very good (150437 vs.\n> > >>> 212410 with 64 clients). But maybe evaluating writing WAL segment\n> > >>> files on PMEM DAX filesystem is also worthwhile, as you mentioned, if we\n> > >>> don't do that yet.\n> > >>>\n> > >>\n> > >> Well, not sure. I think the question is still open whether it's actually\n> > >> safe to run on DAX, which does not have atomic writes of 512B sectors,\n> > >> and I think we rely on that e.g. for pg_control. But maybe for WAL that's\n> > >> not an issue.\n> > >\n> > > I think we can use the Block Translation Table (BTT) driver that\n> > > provides atomic sector updates.\n> > >\n> >\n> > But we have benchmarked that, see my message from 2020/11/26, which\n> > shows this table:\n> >\n> > master/btt master/dax ntt simple\n> > -----------------------------------------------------------\n> > 1 5469 7402 7977 6746\n> > 16 48222 80869 107025 82343\n> > 32 73974 158189 214718 158348\n> > 64 85921 154540 225715 164248\n> > 96 150602 221159 237008 217253\n> >\n> > Clearly, BTT is quite expensive. Maybe there's a way to tune that at\n> > filesystem/kernel level, I haven't tried that.\n>\n> I missed your mail. Yeah, BTT seems to be quite expensive.\n>\n> >\n> > >>\n> > >>>> I'm also wondering if WAL is the right usage for PMEM. 
Per [2] there's a\n> > >>>> huge read-write assymmetry (the writes being way slower), and their\n> > >>>> recommendation (in \"Observation 3\" is)\n> > >>>>\n> > >>>> The read-write asymmetry of PMem im-plies the necessity of avoiding\n> > >>>> writes as much as possible for PMem.\n> > >>>>\n> > >>>> So maybe we should not be trying to use PMEM for WAL, which is pretty\n> > >>>> write-heavy (and in most cases even write-only).\n> > >>>\n> > >>> I think using PMEM for WAL is cost-effective but it leverages the only\n> > >>> low-latency (sequential) write, but not other abilities such as\n> > >>> fine-grained access and low-latency random write. If we want to\n> > >>> exploit its all ability we might need some drastic changes to logging\n> > >>> protocol while considering storing data on PMEM.\n> > >>>\n> > >>\n> > >> True. I think investigating whether it's sensible to use PMEM for this\n> > >> purpose. It may turn out that replacing the DRAM WAL buffers with writes\n> > >> directly to PMEM is not economical, and aggregating data in a DRAM\n> > >> buffer is better :-(\n> > >\n> > > Yes. I think it might be interesting to do an analysis of the\n> > > bottlenecks of NTT patch by perf etc. If bottlenecks are moved to\n> > > other places by removing WALWriteLock during flush, it's probably a\n> > > good sign for further performance improvements. IIRC WALWriteLock is\n> > > one of the main bottlenecks on OLTP workload, although my memory might\n> > > already be out of date.\n> > >\n> >\n> > I think WALWriteLock itself (i.e. 
acquiring/releasing it) is not an\n> > issue - the problem is that writing the WAL to persistent storage itself\n> > is expensive, and we're waiting for that.\n> >\n> > So it's not clear to me if removing the lock (and allowing multiple\n> > processes to do pmem_drain concurrently) can actually help, considering\n> > pmem_drain() should flush writes from other processes anyway.\n> >\n> > But as I said, that is just my theory - I might be entirely wrong, it'd\n> > be good to hack XLogFlush a bit and try it out.\n> >\n> >\n>\n> I've done some performance benchmarks with the master and NTT v4\n> patch. Let me share the results.\n>\n> pgbench setup:\n> * scale factor = 2000\n> * duration = 600 sec\n> * clients = 32, 64, 96\n>\n> NVWAL setup:\n> * nvwal_size = 50GB\n> * max_wal_size = 50GB\n> * min_wal_size = 50GB\n>\n> The whole database fits in shared_buffers and the WAL segment file size is 16MB.\n>\n> The results are:\n>\n> master NTT master-unlogged\n> 32 113209 67107 154298\n> 64 144880 54289 178883\n> 96 151405 50562 180018\n>\n> \"master-unlogged\" is the same setup as \"master\" except for using\n> unlogged tables (using the --unlogged-tables pgbench option). The TPS\n> increased by about 20% compared to the \"master\" case (i.e., the logged table\n> case). The reason why I experimented with the unlogged table case as well is\n> that we can think of these results as the ideal performance if we were\n> able to write WAL records in 0 sec. IOW, even if the PMEM patch\n> significantly improved WAL logging performance, I think it could not\n> exceed this performance. But the hope is that if we currently have a\n> performance bottleneck in WAL logging (e.g., locking and writing\n> WAL), removing or minimizing WAL logging would bring a chance to\n> further improve performance by eliminating the newly emerging bottleneck.\n>\n> As we can see from the above result, apparently, the performance of\n> the “ntt” case was not good in this evaluation. 
I've not reviewed the\npatch in depth yet, but something might be wrong with the v4 patch, or\nthe PMEM configuration I did on my environment is wrong.\n\nI've reconfigured PMEM and done the same benchmark. I got the\nfollowing results (only the \"ntt\" case changed):\n\n master NTT master-unlogged\n32 113209 144829 154298\n64 144880 164899 178883\n96 151405 166096 180018\n\nI got much better performance with the \"ntt\" patch. I think it\nwas wrong that I created a partition on PMEM (i.e., created a filesystem\non /dev/pmem1p1) in the last evaluation. Sorry for confusing you,\nMenjo-san.\n\nFWIW here are the top 5 wait events in the new \"ntt\" case:\n\n event_type | event | sum\n------------+----------------------+------\n Client | ClientRead | 8462\n LWLock | WALInsert | 1049\n LWLock | ProcArray | 627\n IPC | ProcArrayGroupUpdate | 481\n LWLock | XactSLRU | 247\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 25 Feb 2021 12:28:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Sawada,\n\nI am relieved to hear that the performance problem was solved.\n\nAnd I added a tip about PMEM namespaces and partitioning to the PG wiki [1].\n\nRegards,\n\n[1] https://wiki.postgresql.org/wiki/Persistent_Memory_for_WAL#Configure_and_verify_DAX_hugepage_faults\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\n\n",
"msg_date": "Mon, 1 Mar 2021 10:30:00 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi,\n\nI've performed some additional benchmarking and testing on the patches \nsent on 26/1 [1], and I'd like to share some interesting results.\n\nI did the tests on two different machines, with slightly different \nconfigurations. Both machines use the same CPU generation with slightly \ndifferent frequency, a different OS (Ubuntu vs. RH), kernel (5.3 vs. \n4.18) and so on. A more detailed description is in the attached PDF, \nalong with the PostgreSQL configuration.\n\nThe benchmark is fairly simple - pgbench with scale 500 (fits into \nshared buffers) and 5000 (fits into RAM). The runs were just 1 minute \neach, which is fairly short - it's however intentional, because I've \ndone this with both full_page_writes=on/off to test how this behaves \nwith many and no FPIs. This models extreme behaviors at the beginning \nand at the end of a checkpoint.\n\nThis thread is rather confusing because there are far too many patches \nwith over-lapping version numbers - even [1] contains two very different \npatches. I'll refer to them as \"NTT / buffer\" (for the patch using one \nlarge PMEM buffer) and \"NTT / segments\" for the patch using regular WAL \nsegments.\n\nThe attached PDF shows all these results along with charts. The two \nsystems have a bit different performance (throughput), the conclusions \nseem to be mostly the same, so I'll just talk about results from one of \nthe systems here (aka \"System A\").\n\nNote: Those systems are hosted / provided by Intel SDP, and Intel is \ninterested in providing access to other devs interested in PMEM.\n\nFurthermore, these patches seem to be very insensitive to WAL segment \nsize (unlike the experimental patches I shared some time ago), so I'll \nonly show results for one WAL segment size. 
(Obviously, the NTT / buffer \npatch can't be sensitive to this by definition, as it's not using WAL \nsegments at all.)\n\n\nResults\n-------\n\nFor scale 500, the results (with full_page_writes=on) look like this:\n\n 1 8 16 32 48 64\n ------------------------------------------------------------------\n master 9411 58833 111453 181681 215552 234099\n NTT / buffer 10837 77260 145251 222586 255651 264207\n NTT / segments 11011 76892 145049 223078 255022 269737\n\nSo there is a fairly nice speedup - about 30%, which is consistent with \nthe results shared before. Moreover, the \"NTT / segments\" patch performs \nabout the same as the \"NTT / buffer\" which is encouraging.\n\nFor scale 5000, the results look like this:\n\n 1 8 16 32 48 64\n ------------------------------------------------------------------\n master 7388 42020 64523 91877 102805 111389\n NTT / buffer 8650 58018 96314 132440 139512 134228\n NTT / segments 8614 57286 97173 138435 157595 157138\n\nThat's intriguing - the speedup is even higher, almost 40-60% with \nenough clients (16-64). For me this is a bit surprising, because in this \ncase the data don't fit into shared_buffers, so extra time needs to be \nspent copying data between RAM and shared_buffers and perhaps even doing \nsome writes. So my expectation was that this increases the amount of \ntime spent outside XLOG code, thus diminishing the speedup.\n\nNow, let's look at results with full_page_writes=off. 
For scale 500 the \nresults are:\n\n 1 8 16 32 48 64\n ------------------------------------------------------------------\n master 10476 67191 122191 198620 234381 251452\n NTT / buffer 11119 79530 148580 229523 262142 275281\n NTT / segments 11528 79004 148978 229714 259798 274753\n\nand on scale 5000:\n\n 1 8 16 32 48 64\n ------------------------------------------------------------------\n master 8192 55870 98451 145097 172377 172907\n NTT / buffer 9063 62659 110868 161352 173977 164359\n NTT / segments 9277 63226 112307 166070 171997 158085\n\nThat is, the speedup with scale 500 drops to ~10%, and for scale 5000 \nit disappears almost entirely.\n\nI'd have expected that without FPIs the patches would actually be more \neffective - so this seems interesting. The conclusion however seems to \nbe that the lower the amount of FPIs in the WAL stream, the smaller the \nspeedup. Or put differently - it's most effective right after a \ncheckpoint, and it decreases during the checkpoint. So in a well-tuned \nsystem with significant distance between checkpoints, the speedup seems \nto be fairly limited.\n\nThis is also consistent with the fact that for scale 5000 (with FPW=on) \nthe speedups are much more significant, simply because there are far \nmore pages (and thus FPIs). Also, after disabling FPWs the speedup \nalmost entirely disappears.\n\nOn the second system, the differences are even more significant (see the \nPDF). I suspect this is due to slightly different hardware config with \nslower CPU / different PMEM capacity, etc. The overall behavior and \nconclusions are however the same, I think.\n\nOf course, another question is how this will be affected by newer PMEM \nversions with higher performance (e.g. 
the new generation of Intel PMEM \nshould be ~20% faster, from what I hear).\n\n\nIssues & Questions\n------------------\n\nWhile testing the \"NTT / segments\" patch, I repeatedly managed to crash \nthe cluster with errors like this:\n\n2021-02-28 00:07:21.221 PST client backend [3737139] WARNING: creating \nlogfile segment just before mapping; path \"pg_wal/00000001000000070000002F\"\n2021-02-28 00:07:21.670 PST client backend [3737142] WARNING: creating \nlogfile segment just before mapping; path \"pg_wal/000000010000000700000030\"\n...\n2021-02-28 00:07:21.698 PST client backend [3737145] WARNING: creating \nlogfile segment just before mapping; path \"pg_wal/000000010000000700000030\"\n2021-02-28 00:07:21.698 PST client backend [3737130] PANIC: could not \nopen file \"pg_wal/000000010000000700000030\": No such file or directory\n\nI do believe this is a thinko in the 0008 patch, which does XLogFileInit \nin XLogFileMap. Notice there are multiple \"creating logfile\" messages \nwith the ..0030 segment, followed by the failure. AFAICS the XLogFileMap \nmay be called from multiple backends, so they may call XLogFileInit \nconcurrently, likely triggering some sort of race condition. It's fairly \nrare issue, though - I've only seen it twice from ~20 runs.\n\n\nThe other question I have is about WALInsertLockUpdateInsertingAt. 0003 \nremoves this function, but leaves behind some of the other bits working \nwith insert locks and insertingAt. But it does not explain how it works \nwithout WaitXLogInsertionsToFinish() - how does it ensure that when we \ncommit something, all the preceding WAL is \"complete\" (i.e. written by \nother backends etc.)?\n\n\nConclusion\n----------\n\nI do think the \"NTT / segments\" patch is the most promising way forward. 
\nIt does perform about as well as the \"NTT / buffer\" patch (and both \nperform much better than the experimental patches I shared in January).\n\nThe \"NTT / buffer\" patch seems much more disruptive - it introduces one \nlarge buffer for WAL, which makes various other tasks more complicated \n(i.e. it needs additional complexity to handle WAL archival, etc.). Are \nthere any advantages of this patch (compared to the other patch)?\n\nAs for the \"NTT / segments\" patch, I wonder if we can just rework the \ncode like this (to use mmap etc.) or whether we need to support \nboth ways (file I/O and mmap). I don't have much experience with many \nother platforms, but it seems quite possible that mmap won't work all \nthat well on some of them. So my assumption is we'll need to support \nboth file I/O and mmap to make any of this committable, but I may be wrong.\n\n\n[1] \nhttps://www.postgresql.org/message-id/CAOwnP3Oz4CnKp0-_KU-x5irr9pBqPNkk7pjwZE5Pgo8i1CbFGg%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 1 Mar 2021 21:40:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Tomas,\n\nThank you so much for your report. I have read it with great interest.\n\nYour conclusion sounds reasonable to me. My patchset you call \"NTT /\nsegments\" got as good performance as \"NTT / buffer\" patchset. I have\nbeen worried that calling mmap/munmap for each WAL segment file could\nhave a lot of overhead. Based on your performance tests, however, the\noverhead looks less than what I thought. In addition, \"NTT / segments\"\npatchset is more compatible to the current PG and more friendly to\nDBAs because that patchset uses WAL segment files and does not\nintroduce any other new WAL-related file.\n\nI also think that supporting both file I/O and mmap is better than\nsupporting only mmap. I will continue my work on \"NTT / segments\"\npatchset to support both ways.\n\nIn the following, I will answer \"Issues & Questions\" you reported.\n\n\n> While testing the \"NTT / segments\" patch, I repeatedly managed to crash the cluster with errors like this:\n>\n> 2021-02-28 00:07:21.221 PST client backend [3737139] WARNING: creating logfile segment just before\n> mapping; path \"pg_wal/00000001000000070000002F\"\n> 2021-02-28 00:07:21.670 PST client backend [3737142] WARNING: creating logfile segment just before\n> mapping; path \"pg_wal/000000010000000700000030\"\n> ...\n> 2021-02-28 00:07:21.698 PST client backend [3737145] WARNING: creating logfile segment just before\n> mapping; path \"pg_wal/000000010000000700000030\"\n> 2021-02-28 00:07:21.698 PST client backend [3737130] PANIC: could not open file\n> \"pg_wal/000000010000000700000030\": No such file or directory\n>\n> I do believe this is a thinko in the 0008 patch, which does XLogFileInit in XLogFileMap. Notice there are multiple\n> \"creating logfile\" messages with the ..0030 segment, followed by the failure. AFAICS the XLogFileMap may be\n> called from multiple backends, so they may call XLogFileInit concurrently, likely triggering some sort of race\n> condition. 
It's a fairly rare issue, though - I've only seen it twice from ~20 runs.\n\nThank you for your report. I found that it is rather the 0009 patch that has an\nissue, and that it will also cause WAL loss. I should have set\nuse_existent to true; otherwise InstallXlogFileSegment and BasicOpenFile in\nXLogFileInit can be racy. I had misunderstood that use_existent could\nbe true because I am creating a brand-new file with XLogFileInit.\n\nI will fix the issue.\n\n\n> The other question I have is about WALInsertLockUpdateInsertingAt. 0003 removes this function, but leaves\n> behind some of the other bits working with insert locks and insertingAt. But it does not explain how it works without\n> WaitXLogInsertionsToFinish() - how does it ensure that when we commit something, all the preceding WAL is\n> \"complete\" (i.e. written by other backends etc.)?\n\nIt waits for *all* the WALInsertLocks to be released, no matter whether each of\nthem precedes or follows the current insertion.\n\nIt would have worked functionally, but I now think it is not good for\nperformance, because XLogFileMap in GetXLogBuffer (where\nWaitXLogInsertionsToFinish is removed) can block, because it can\neventually call write() in XLogFileInit.\n\nI will restore the WALInsertLockUpdateInsertingAt function and related\ncode for mmap.\n\n\nBest regards,\nTakashi\n\n\nOn Tue, Mar 2, 2021 at 5:40 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I've performed some additional benchmarking and testing on the patches\n> sent on 26/1 [1], and I'd like to share some interesting results.\n>\n> I did the tests on two different machines, with slightly different\n> configurations. Both machines use the same CPU generation with slightly\n> different frequency, a different OS (Ubuntu vs. RH), kernel (5.3 vs.\n> 4.18) and so on. 
A more detailed description is in the attached PDF,\n> along with the PostgreSQL configuration.\n>\n> The benchmark is fairly simple - pgbench with scale 500 (fits into\n> shared buffers) and 5000 (fits into RAM). The runs were just 1 minute\n> each, which is fairly short - it's however intentional, because I've\n> done this with both full_page_writes=on/off to test how this behaves\n> with many and no FPIs. This models extreme behaviors at the beginning\n> and at the end of a checkpoint.\n>\n> This thread is rather confusing because there are far too many patches\n> with over-lapping version numbers - even [1] contains two very different\n> patches. I'll refer to them as \"NTT / buffer\" (for the patch using one\n> large PMEM buffer) and \"NTT / segments\" for the patch using regular WAL\n> segments.\n>\n> The attached PDF shows all these results along with charts. The two\n> systems have a bit different performance (throughput), the conclusions\n> seem to be mostly the same, so I'll just talk about results from one of\n> the systems here (aka \"System A\").\n>\n> Note: Those systems are hosted / provided by Intel SDP, and Intel is\n> interested in providing access to other devs interested in PMEM.\n>\n> Furthermore, these patches seem to be very insensitive to WAL segment\n> size (unlike the experimental patches I shared some time ago), so I'll\n> only show results for one WAL segment size. 
(Obviously, the NTT / buffer\n> patch can't be sensitive to this by definition, as it's not using WAL\n> segments at all.)\n>\n>\n> Results\n> -------\n>\n> For scale 500, the results (with full_page_writes=on) look like this:\n>\n> 1 8 16 32 48 64\n> ------------------------------------------------------------------\n> master 9411 58833 111453 181681 215552 234099\n> NTT / buffer 10837 77260 145251 222586 255651 264207\n> NTT / segments 11011 76892 145049 223078 255022 269737\n>\n> So there is a fairly nice speedup - about 30%, which is consistent with\n> the results shared before. Moreover, the \"NTT / segments\" patch performs\n> about the same as the \"NTT / buffer\" which is encouraging.\n>\n> For scale 5000, the results look like this:\n>\n> 1 8 16 32 48 64\n> ------------------------------------------------------------------\n> master 7388 42020 64523 91877 102805 111389\n> NTT / buffer 8650 58018 96314 132440 139512 134228\n> NTT / segments 8614 57286 97173 138435 157595 157138\n>\n> That's intriguing - the speedup is even higher, almost 40-60% with\n> enough clients (16-64). For me this is a bit surprising, because in this\n> case the data don't fit into shared_buffers, so extra time needs to be\n> spent copying data between RAM and shared_buffers and perhaps even doing\n> some writes. So my expectation was that this increases the amount of\n> time spent outside XLOG code, thus diminishing the speedup.\n>\n> Now, let's look at results with full_page_writes=off. 
For scale 500 the\n> results are:\n>\n> 1 8 16 32 48 64\n> ------------------------------------------------------------------\n> master 10476 67191 122191 198620 234381 251452\n> NTT / buffer 11119 79530 148580 229523 262142 275281\n> NTT / segments 11528 79004 148978 229714 259798 274753\n>\n> and on scale 5000:\n>\n> 1 8 16 32 48 64\n> ------------------------------------------------------------------\n> master 8192 55870 98451 145097 172377 172907\n> NTT / buffer 9063 62659 110868 161352 173977 164359\n> NTT / segments 9277 63226 112307 166070 171997 158085\n>\n> That is, the speedups with scale 500 drop to ~10%, and for scale 5000\n> they disappear almost entirely.\n>\n> I'd have expected that without FPIs the patches will actually be more\n> effective - so this seems interesting. The conclusion however seems to\n> be that the lower the amount of FPIs in the WAL stream, the smaller the\n> speedup. Or in a different way - it's most effective right after a\n> checkpoint, and it decreases during the checkpoint. So in a well tuned\n> system with significant distance between checkpoints, the speedup seems\n> to be fairly limited.\n>\n> This is also consistent with the fact that for scale 5000 (with FPW=on)\n> the speedups are much more significant, simply because there are far\n> more pages (and thus FPIs). Also, after disabling FPWs the speedup\n> almost entirely disappears.\n>\n> On the second system, the differences are even more significant (see the\n> PDF). I suspect this is due to slightly different hardware config with\n> slower CPU / different PMEM capacity, etc. The overall behavior and\n> conclusions are however the same, I think.\n>\n> Of course, another question is how this will be affected by newer PMEM\n> versions with higher performance (e.g. 
the new generation of Intel PMEM\n> should be ~20% faster, from what I hear).\n>\n>\n> Issues & Questions\n> ------------------\n>\n> While testing the \"NTT / segments\" patch, I repeatedly managed to crash\n> the cluster with errors like this:\n>\n> 2021-02-28 00:07:21.221 PST client backend [3737139] WARNING: creating\n> logfile segment just before mapping; path \"pg_wal/00000001000000070000002F\"\n> 2021-02-28 00:07:21.670 PST client backend [3737142] WARNING: creating\n> logfile segment just before mapping; path \"pg_wal/000000010000000700000030\"\n> ...\n> 2021-02-28 00:07:21.698 PST client backend [3737145] WARNING: creating\n> logfile segment just before mapping; path \"pg_wal/000000010000000700000030\"\n> 2021-02-28 00:07:21.698 PST client backend [3737130] PANIC: could not\n> open file \"pg_wal/000000010000000700000030\": No such file or directory\n>\n> I do believe this is a thinko in the 0008 patch, which does XLogFileInit\n> in XLogFileMap. Notice there are multiple \"creating logfile\" messages\n> with the ..0030 segment, followed by the failure. AFAICS the XLogFileMap\n> may be called from multiple backends, so they may call XLogFileInit\n> concurrently, likely triggering some sort of race condition. It's fairly\n> rare issue, though - I've only seen it twice from ~20 runs.\n>\n>\n> The other question I have is about WALInsertLockUpdateInsertingAt. 0003\n> removes this function, but leaves behind some of the other bits working\n> with insert locks and insertingAt. But it does not explain how it works\n> without WaitXLogInsertionsToFinish() - how does it ensure that when we\n> commit something, all the preceding WAL is \"complete\" (i.e. 
written by\n> other backends etc.)?\n>\n>\n> Conclusion\n> ----------\n>\n> I do think the \"NTT / segments\" patch is the most promising way forward.\n> It does perform about as well as the \"NTT / buffer\" patch (and both\n> perform much better than the experimental patches I shared in January).\n>\n> The \"NTT / buffer\" patch seems much more disruptive - it introduces one\n> large buffer for WAL, which makes various other tasks more complicated\n> (i.e. it needs additional complexity to handle WAL archival, etc.). Are\n> there some advantages of this patch (compared to the other patch)?\n>\n> As for the \"NTT / segments\" patch, I wonder if we can just rework the\n> code like this (to use mmap etc.) or whether we need to support\n> both ways (file I/O and mmap). I don't have much experience with many\n> other platforms, but it seems quite possible that mmap won't work all\n> that well on some of them. So my assumption is we'll need to support\n> both file I/O and mmap to make any of this committable, but I may be wrong.\n>\n>\n> [1]\n> https://www.postgresql.org/message-id/CAOwnP3Oz4CnKp0-_KU-x5irr9pBqPNkk7pjwZE5Pgo8i1CbFGg%40mail.gmail.com\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\n\n",
"msg_date": "Fri, 5 Mar 2021 17:08:46 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hello Takashi-san,\n\nOn 3/5/21 9:08 AM, Takashi Menjo wrote:\n> Hi Tomas,\n> \n> Thank you so much for your report. I have read it with great interest.\n> \n> Your conclusion sounds reasonable to me. My patchset you call \"NTT /\n> segments\" got as good performance as \"NTT / buffer\" patchset. I have\n> been worried that calling mmap/munmap for each WAL segment file could\n> have a lot of overhead. Based on your performance tests, however, the\n> overhead looks less than what I thought. In addition, \"NTT / segments\"\n> patchset is more compatible to the current PG and more friendly to\n> DBAs because that patchset uses WAL segment files and does not\n> introduce any other new WAL-related file.\n> \n\nI agree. I was actually a bit surprised it performs this well, mostly in\nline with the \"NTT / buffer\" patchset. I've seen significant issues with\nour simple experimental patches, which however went away with larger WAL\nsegments. But the \"NTT / segments\" patch does not have that issue, so\neither our patches were doing something wrong, or perhaps there was some\nother issue (not sure why larger WAL segments would improve that).\n\nDo these results match your benchmarks? Or are you seeing significantly\ndifferent behavior?\n\nDo you have any thoughts regarding the impact of full-page writes? So\nfar all the benchmarks we did focused on small OLTP transactions on data\nsets that fit into RAM. The assumption was that that's the workload that\nwould benefit from this, but maybe that's missing something important\nabout workloads producing much larger WAL records? Say, workloads\nworking with large BLOBs, bulk loads etc.\n\nThe other question is whether simply placing WAL on DAX (without any\ncode changes) is safe. If it's not, then all the \"speedups\" are computed\nwith respect to unsafe configuration and so are useless. 
And BTT should\nbe used instead, which would of course produce very different results.\n\n> I also think that supporting both file I/O and mmap is better than\n> supporting only mmap. I will continue my work on \"NTT / segments\"\n> patchset to support both ways.\n> \n\n+1\n\n> In the following, I will answer \"Issues & Questions\" you reported.\n> \n> \n>> While testing the \"NTT / segments\" patch, I repeatedly managed to crash the cluster with errors like this:\n>>\n>> 2021-02-28 00:07:21.221 PST client backend [3737139] WARNING: creating logfile segment just before\n>> mapping; path \"pg_wal/00000001000000070000002F\"\n>> 2021-02-28 00:07:21.670 PST client backend [3737142] WARNING: creating logfile segment just before\n>> mapping; path \"pg_wal/000000010000000700000030\"\n>> ...\n>> 2021-02-28 00:07:21.698 PST client backend [3737145] WARNING: creating logfile segment just before\n>> mapping; path \"pg_wal/000000010000000700000030\"\n>> 2021-02-28 00:07:21.698 PST client backend [3737130] PANIC: could not open file\n>> \"pg_wal/000000010000000700000030\": No such file or directory\n>>\n>> I do believe this is a thinko in the 0008 patch, which does XLogFileInit in XLogFileMap. Notice there are multiple\n>> \"creating logfile\" messages with the ..0030 segment, followed by the failure. AFAICS the XLogFileMap may be\n>> called from multiple backends, so they may call XLogFileInit concurrently, likely triggering some sort of race\n>> condition. It's fairly rare issue, though - I've only seen it twice from ~20 runs.\n> \n> Thank you for your report. I found that rather the patch 0009 has an\n> issue, and that will also cause WAL loss. I should have set\n> use_existent to true, or InstallXlogFileSegment and BasicOpenFile in\n> XLogFileInit can be racy. 
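[A toy model of that race - hypothetical names and a drastic simplification, not the actual xlog.c logic: if a segment is installed with an atomic link(), the loser of a concurrent create must tolerate the already-installed file, which is what use_existent = true provides; with use_existent = false the loser errors out instead:]

```python
import os
import tempfile

def install_segment(final_path, data, use_existent):
    """Toy model (hypothetical names) of XLogFileInit + InstallXLogFileSegment:
    prefill a temp file, then atomically link() it into place. link() fails if
    the target already exists, so when two 'backends' race, the loser gets
    FileExistsError. With use_existent=True it keeps the winner's file; with
    use_existent=False it errors out -- the racy variant."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(final_path))
    try:
        os.write(fd, data)  # stands in for zero-filling the new segment
    finally:
        os.close(fd)
    try:
        os.link(tmp_path, final_path)  # atomic install, never overwrites
    except FileExistsError:
        if not use_existent:
            raise  # second creator fails instead of reusing the file
    finally:
        os.unlink(tmp_path)  # the temp copy is not needed either way
    return final_path

with tempfile.TemporaryDirectory() as d:
    seg = os.path.join(d, "000000010000000700000030")
    install_segment(seg, b"\0" * 16, use_existent=True)  # "backend A" wins
    install_segment(seg, b"\0" * 16, use_existent=True)  # "backend B" reuses A's file
```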
I have misunderstood that use_existent can\n> be true because I am creating a brand-new file with XLogFileInit.\n> \n> I will fix the issue.\n> \n\nOK, thanks for looking into this.\n\n> \n>> The other question I have is about WALInsertLockUpdateInsertingAt. 0003 removes this function, but leaves\n>> behind some of the other bits working with insert locks and insertingAt. But it does not explain how it works without\n>> WaitXLogInsertionsToFinish() - how does it ensure that when we commit something, all the preceding WAL is\n>> \"complete\" (i.e. written by other backends etc.)?\n> \n> To wait for *all* the WALInsertLocks to be released, no matter each of\n> them precedes or follows the current insertion.\n> \n> It would have worked functionally, but I rethink it is not good for\n> performance because XLogFileMap in GetXLogBuffer (where\n> WaitXLogInsertionsToFinish is removed) can block because it can\n> eventually call write() in XLogFileInit.\n> \n> I will restore the WALInsertLockUpdateInsertingAt function and related\n> code for mmap.\n> \n\nOK. I'm still not entirely sure I understand if the current version is\ncorrect, but I'll wait for the reworked version.\n\n\nkind regards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 5 Mar 2021 18:16:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "Hi Tomas,\n\n> Hello Takashi-san,\n>\n> On 3/5/21 9:08 AM, Takashi Menjo wrote:\n> > Hi Tomas,\n> >\n> > Thank you so much for your report. I have read it with great interest.\n> >\n> > Your conclusion sounds reasonable to me. My patchset you call \"NTT /\n> > segments\" got as good performance as \"NTT / buffer\" patchset. I have\n> > been worried that calling mmap/munmap for each WAL segment file could\n> > have a lot of overhead. Based on your performance tests, however, the\n> > overhead looks less than what I thought. In addition, \"NTT / segments\"\n> > patchset is more compatible to the current PG and more friendly to\n> > DBAs because that patchset uses WAL segment files and does not\n> > introduce any other new WAL-related file.\n> >\n>\n> I agree. I was actually a bit surprised it performs this well, mostly in\n> line with the \"NTT / buffer\" patchset. I've seen significant issues with\n> our simple experimental patches, which however went away with larger WAL\n> segments. But the \"NTT / segments\" patch does not have that issue, so\n> either our patches were doing something wrong, or perhaps there was some\n> other issue (not sure why larger WAL segments would improve that).\n>\n> Do these results match your benchmarks? Or are you seeing significantly\n> different behavior?\n\nI made a performance test for \"NTT / segments\" and added its results\nto my previous report [1], on the same conditions. The updated graph\nis attached to this mail. Note that some legends are renamed: \"Mapped\nWAL file\" to \"NTT / simple\", and \"Non-volatile WAL buffer\" to \"NTT /\nbuffer.\"\n\nThe graph tells me that \"NTT / segments\" performs as well as \"NTT /\nbuffer.\" This matches with the results you reported.\n\n> Do you have any thoughts regarding the impact of full-page writes? So\n> far all the benchmarks we did focused on small OLTP transactions on data\n> sets that fit into RAM. 
The assumption was that that's the workload that\n> would benefit from this, but maybe that's missing something important\n> about workloads producing much larger WAL records? Say, workloads\n> working with large BLOBs, bulk loads etc.\n\nI'd say that more work is needed for workloads producing a large\namount of WAL (in the number of records or the size per record, or\nboth of them). Based on the case Gang reported and I have tried to\nreproduce in this thread [2][3], the current inserting and flushing\nmethod can be unsuitable for such workloads. The case was for \"NTT /\nbuffer,\" but I think it can also be applied to \"NTT / segments.\"\n\n> The other question is whether simply placing WAL on DAX (without any\n> code changes) is safe. If it's not, then all the \"speedups\" are computed\n> with respect to unsafe configuration and so are useless. And BTT should\n> be used instead, which would of course produce very different results.\n\nI think it's safe, thanks to the checksum in the header of each WAL record\n(xl_crc in struct XLogRecord). In DAX mode, user data (WAL record\nhere) is written to the PMEM device by a smaller unit (probably a byte\nor a cache line) than the traditional 512-byte disk sector. So a\ntorn-write such that \"some bytes in a sector persist, other bytes not\"\ncan occur on a crash. AFAICS, however, the checksum for WAL records\ncan also detect such a torn-write case.\n\n> > I also think that supporting both file I/O and mmap is better than\n> > supporting only mmap. 
I will continue my work on \"NTT / segments\"\n> > patchset to support both ways.\n> >\n>\n> +1\n>\n> > In the following, I will answer \"Issues & Questions\" you reported.\n> >\n> >\n> >> While testing the \"NTT / segments\" patch, I repeatedly managed to crash the cluster with errors like this:\n> >>\n> >> 2021-02-28 00:07:21.221 PST client backend [3737139] WARNING: creating logfile segment just before\n> >> mapping; path \"pg_wal/00000001000000070000002F\"\n> >> 2021-02-28 00:07:21.670 PST client backend [3737142] WARNING: creating logfile segment just before\n> >> mapping; path \"pg_wal/000000010000000700000030\"\n> >> ...\n> >> 2021-02-28 00:07:21.698 PST client backend [3737145] WARNING: creating logfile segment just before\n> >> mapping; path \"pg_wal/000000010000000700000030\"\n> >> 2021-02-28 00:07:21.698 PST client backend [3737130] PANIC: could not open file\n> >> \"pg_wal/000000010000000700000030\": No such file or directory\n> >>\n> >> I do believe this is a thinko in the 0008 patch, which does XLogFileInit in XLogFileMap. Notice there are multiple\n> >> \"creating logfile\" messages with the ..0030 segment, followed by the failure. AFAICS the XLogFileMap may be\n> >> called from multiple backends, so they may call XLogFileInit concurrently, likely triggering some sort of race\n> >> condition. It's fairly rare issue, though - I've only seen it twice from ~20 runs.\n> >\n> > Thank you for your report. I found that rather the patch 0009 has an\n> > issue, and that will also cause WAL loss. I should have set\n> > use_existent to true, or InstallXlogFileSegment and BasicOpenFile in\n> > XLogFileInit can be racy. I have misunderstood that use_existent can\n> > be true because I am creating a brand-new file with XLogFileInit.\n> >\n> > I will fix the issue.\n> >\n>\n> OK, thanks for looking into this.\n>\n> >\n> >> The other question I have is about WALInsertLockUpdateInsertingAt. 
0003 removes this function, but leaves\n> >> behind some of the other bits working with insert locks and insertingAt. But it does not explain how it works without\n> >> WaitXLogInsertionsToFinish() - how does it ensure that when we commit something, all the preceding WAL is\n> >> \"complete\" (i.e. written by other backends etc.)?\n> >\n> > To wait for *all* the WALInsertLocks to be released, no matter each of\n> > them precedes or follows the current insertion.\n> >\n> > It would have worked functionally, but I rethink it is not good for\n> > performance because XLogFileMap in GetXLogBuffer (where\n> > WaitXLogInsertionsToFinish is removed) can block because it can\n> > eventually call write() in XLogFileInit.\n> >\n> > I will restore the WALInsertLockUpdateInsertingAt function and related\n> > code for mmap.\n> >\n>\n> OK. I'm still not entirely sure I understand if the current version is\n> correct, but I'll wait for the reworked version.\n>\n>\n> kind regards\n\nBest regards,\nTakashi\n\n[1] https://www.postgresql.org/message-id/CAOwnP3OFofOsFtmeikQcbMp0YWdJn0kVB4Ka_0tj+Urq7dtAzQ@mail.gmail.com\n[2] https://www.postgresql.org/message-id/BYAPR11MB344801FF81E9C92A081D3E10E6080@BYAPR11MB3448.namprd11.prod.outlook.com\n[3] https://www.postgresql.org/message-id/CAOwnP3NHAbVFOfAawZPs5ezn57_7fcX%3DKaaQ5YMgirc9rNrijQ%40mail.gmail.com\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Tue, 9 Mar 2021 15:53:45 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Non-volatile WAL buffer"
},
{
"msg_contents": "From: Takashi Menjo <takashi.menjo@gmail.com>\r\n> > The other question is whether simply placing WAL on DAX (without any\r\n> > code changes) is safe. If it's not, then all the \"speedups\" are\r\n> > computed with respect to unsafe configuration and so are useless. And\r\n> > BTT should be used instead, which would of course produce very different\r\n> results.\r\n> \r\n> I think it's safe, thanks to the checksum in the header of WAL record (xl_crc in\r\n> struct XLogRecord). In DAX mode, user data (WAL record\r\n> here) is written to the PMEM device by a smaller unit (probably a byte or a\r\n> cache line) than the traditional 512-byte disk sector. So a torn-write such that\r\n> \"some bytes in a sector persist, other bytes not\"\r\n> can occur when crash. AFAICS, however, the checksum for WAL records can\r\n> also support such a torn-write case.\r\n\r\nI'm afraid I may be misunderstanding, so let me ask a naive question.\r\n\r\nI understood \"simply placing WAL on DAX (without any code changes)\" means placing WAL files on DAX-aware filesystems such as ext4 and xfs, without modifying Postgres source code. That is, using PMEM as a high-performance storage device. Is this correct?\r\n\r\nSecond, is it what you represented as \"master\" in your test results?\r\n\r\nI'd simply like to know what percentage of performance improvement we can expect by utilizing PMDK and modifying Postgres source code, and how much improvement we consider worthwhile.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n",
"msg_date": "Tue, 9 Mar 2021 07:23:41 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] Non-volatile WAL buffer"
}
] |
[
{
"msg_contents": "hi there\n\nI am a database developer with 10+ years industry experience. I want to\nmake contribution to Postgres and look for some work items to start with.\nLooking at the TODO list, I am not sure what to start with? Any suggestions?\n\nThanks,\nXiang\n\n",
"msg_date": "Fri, 24 Jan 2020 09:38:45 -0800",
"msg_from": "Xiang Xiao <xxiao23@gmail.com>",
"msg_from_op": true,
"msg_subject": "any work item suggestion for newbie?"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently pg_stat_bgwriter.buffers_backend is pretty useless to gauge\nwhether backends are doing writes they shouldn't do. That's because it\ncounts things that are either unavoidably or unlikely doable by other\nparts of the system (checkpointer, bgwriter).\n\nIn particular extending the file can not currently be done by any\nother type of process, yet is counted. When using a buffer access\nstrategy it is also very likely that writes have to be done by the\n'dirtying' backend itself, as the buffer will be reused soon after (when\nnot previously in s_b that is).\n\nAdditionally pg_stat_bgwriter.buffers_backend also counts writes done by\nautovacuum et al.\n\n\nI think it'd make sense to at least split buffers_backend into\nbuffers_backend_extend,\nbuffers_backend_write,\nbuffers_backend_write_strat\n\nbut it could also be worthwhile to expand it into\nbuffers_backend_extend,\nbuffers_{backend,checkpoint,bgwriter,autovacuum}_write\nbuffers_{backend,autovacuum}_write_strat\n\nPossibly, internally (in contrast to the SQL level), by having just counter\narrays indexed by backend type.\n\n\nIt's also noteworthy that buffers_backend is accounted in an absurd\nmanner. One might think that writes are accounted from backend -> shared\nmemory or such. 
But instead it works like this:\n\n1) backend flushes buffer in bufmgr.c, accounts for backend *write time*\n2) mdwrite writes and registers a sync request, which forwards the sync request to checkpointer\n3) ForwardSyncRequest(), when not called by bgwriter, increments CheckpointerShmem->num_backend_writes\n4) checkpointer, whenever doing AbsorbSyncRequests(), moves\n CheckpointerShmem->num_backend_writes to\n BgWriterStats.m_buf_written_backend (local memory!)\n5) Occasionally it calls pgstat_send_bgwriter(), which sends the data to\n pgstat (which bgwriter also does)\n6) Which then updates the shared memory used by the display functions\n\nWorthwhile to note that backend buffer read/write *time* is accounted\ndifferently. That's done via pgstat_send_tabstat().\n\n\nI think there's very little excuse for the indirection via checkpointer,\nbesides architecturally being weird, it actually requires that we\ncontinue to wake up checkpointer over and over instead of optimizing how\nand when we submit fsync requests.\n\nAs far as I can tell we're also simply not accounting at all for writes\ndone outside of shared buffers. All writes done directly through\nsmgrwrite()/extend() aren't accounted anywhere as far as I can tell.\n\n\nI think we also count things as writes that aren't writes: mdtruncate()\nis AFAICT counted as one backend write for each segment. Which seems\nweird to me.\n\n\nLastly, I don't understand what the point of sending fixed size stats,\nlike the stuff underlying pg_stat_bgwriter, through pgstats IPC is. While\nI don't like its architecture, we obviously need something like pgstat\nto handle variable amounts of stats (database, table level etc\nstats). But that doesn't at all apply to these types of global stats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Jan 2020 11:52:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pg_stat_bgwriter.buffers_backend is pretty meaningless (and more?)"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> Currently pg_stat_bgwriter.buffers_backend is pretty useless to gauge\n> whether backends are doing writes they shouldn't do. That's because it\n> counts things that are either unavoidably or unlikely doable by other\n> parts of the system (checkpointer, bgwriter).\n> In particular extending the file can not currently be done by any\n> another type of process, yet is counted. When using a buffer access\n> strategy it is also very likely that writes have to be done by the\n> 'dirtying' backend itself, as the buffer will be reused soon after (when\n> not previously in s_b that is).\n\nYeah. That's quite annoying.\n\n\n> Additionally pg_stat_bgwriter.buffers_backend also counts writes done by\n> autovacuum et al.\n>\n>\n> I think it'd make sense to at least split buffers_backend into\n> buffers_backend_extend,\n> buffers_backend_write,\n> buffers_backend_write_strat\n>\n> but it could also be worthwhile to expand it into\n> buffers_backend_extend,\n> buffers_{backend,checkpoint,bgwriter,autovacuum}_write\n> buffers_{backend,autovacuum}_write_stat\n\nGiven that these are individual global counters, I don't really see\nany reason not to expand it to the bigger set of counters. It's easy\nenough to add them up together later if needed.\n\n\n> Possibly by internally, in contrast to SQL level, having just counter\n> arrays indexed by backend types.\n>\n>\n> It's also noteworthy that buffers_backend is accounted in an absurd\n> manner. One might think that writes are accounted from backend -> shared\n> memory or such. 
But instead it works like this:\n>\n> 1) backend flushes buffer in bufmgr.c, accounts for backend *write time*\n> 2) mdwrite writes and registers a sync request, which forwards the sync request to checkpointer\n> 3) ForwardSyncRequest(), when not called by bgwriter, increments CheckpointerShmem->num_backend_writes\n> 4) checkpointer, whenever doing AbsorbSyncRequests(), moves\n> CheckpointerShmem->num_backend_writes to\n> BgWriterStats.m_buf_written_backend (local memory!)\n> 5) Occasionally it calls pgstat_send_bgwriter(), which sends the data to\n> pgstat (which bgwriter also does)\n> 6) Which then updates the shared memory used by the display functions\n>\n> Worthwhile to note that backend buffer read/write *time* is accounted\n> differently. That's done via pgstat_send_tabstat().\n>\n>\n> I think there's very little excuse for the indirection via checkpointer,\n> besides architectually being weird, it actually requires that we\n> continue to wake up checkpointer over and over instead of optimizing how\n> and when we submit fsync requests.\n>\n> As far as I can tell we're also simply not accounting at all for writes\n> done outside of shared buffers. All writes done directly through\n> smgrwrite()/extend() aren't accounted anywhere as far as I can tell.\n>\n>\n> I think we also count things as writes that aren't writes: mdtruncate()\n> is AFAICT counted as one backend write for each segment. Which seems\n> weird to me.\n\nIt's at least slightly weird :) Might it be worth counting truncate\nevents separately?\n\n\n> Lastly, I don't understand what the point of sending fixed size stats,\n> like the stuff underlying pg_stat_bgwriter, through pgstats IPC. While\n> I don't like it's architecture, we obviously need something like pgstat\n> to handle variable amounts of stats (database, table level etc\n> stats). But that doesn't at all apply to these types of global stats.\n\nThat part has annoyed me as well a few times. 
+1 for just moving that\ninto a global shared memory. Given that we don't really care about\nthings being in sync between those different counters *or* if we lose\na bit of data (which the stats collector is designed to do), we could\neven do that without a lock?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 25 Jan 2020 15:43:41 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-25 15:43:41 +0100, Magnus Hagander wrote:\n> On Fri, Jan 24, 2020 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > Additionally pg_stat_bgwriter.buffers_backend also counts writes done by\n> > autovacuum et al.\n\n> > I think it'd make sense to at least split buffers_backend into\n> > buffers_backend_extend,\n> > buffers_backend_write,\n> > buffers_backend_write_strat\n> >\n> > but it could also be worthwhile to expand it into\n> > buffers_backend_extend,\n> > buffers_{backend,checkpoint,bgwriter,autovacuum}_write\n> > buffers_{backend,autovacuum}_write_stat\n> \n> Given that these are individual global counters, I don't really see\n> any reason not to expand it to the bigger set of counters. It's easy\n> enough to add them up together later if needed.\n\nAre you agreeing to\nbuffers_{backend,checkpoint,bgwriter,autovacuum}_write\nor are you suggesting further ones?\n\n\n> > I think we also count things as writes that aren't writes: mdtruncate()\n> > is AFAICT counted as one backend write for each segment. Which seems\n> > weird to me.\n> \n> It's at least slightly weird :) Might it be worth counting truncate\n> events separately?\n\nIs that really something interesting? Feels like it'd have to be done at\na higher level to be useful. E.g. the truncate done by TRUNCATE (when in\nsame xact as creation) and VACUUM are quite different. I think it'd be\nbetter to just not include it.\n\n\n> > Lastly, I don't understand what the point of sending fixed size stats,\n> > like the stuff underlying pg_stat_bgwriter, through pgstats IPC. While\n> > I don't like it's architecture, we obviously need something like pgstat\n> > to handle variable amounts of stats (database, table level etc\n> > stats). But that doesn't at all apply to these types of global stats.\n> \n> That part has annoyed me as well a few times. +1 for just moving that\n> into a global shared memory. 
Given that we don't really care about\n> things being in sync between those different counters *or* if we loose\n> a bit of data (which the stats collector is designed to do), we could\n> even do that without a lock?\n\nI don't think we'd quite want to do it without any (single counter)\nsynchronization - high concurrency setups would be pretty likely to\nlose values that way. I suspect the best would be to have a struct in\nshared memory that contains the potential counters for each potential\nprocess. And then sum them up when actually wanting the concrete\nvalue. That way we avoid unnecessary contention, in contrast to having a\nsingle shared memory value for each (which would just pingpong between\ndifferent sockets and store buffers). There's a few details like how\nexactly to implement resetting the counters, but ...\n\nThanks,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Jan 2020 16:44:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 1:44 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-01-25 15:43:41 +0100, Magnus Hagander wrote:\n> > On Fri, Jan 24, 2020 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Additionally pg_stat_bgwriter.buffers_backend also counts writes done by\n> > > autovacuum et al.\n>\n> > > I think it'd make sense to at least split buffers_backend into\n> > > buffers_backend_extend,\n> > > buffers_backend_write,\n> > > buffers_backend_write_strat\n> > >\n> > > but it could also be worthwhile to expand it into\n> > > buffers_backend_extend,\n> > > buffers_{backend,checkpoint,bgwriter,autovacuum}_write\n> > > buffers_{backend,autovacuum}_write_stat\n> >\n> > Given that these are individual global counters, I don't really see\n> > any reason not to expand it to the bigger set of counters. It's easy\n> > enough to add them up together later if needed.\n>\n> Are you agreeing to\n> buffers_{backend,checkpoint,bgwriter,autovacuum}_write\n> or are you suggesting further ones?\n\nThe former.\n\n\n> > > I think we also count things as writes that aren't writes: mdtruncate()\n> > > is AFAICT counted as one backend write for each segment. Which seems\n> > > weird to me.\n> >\n> > It's at least slightly weird :) Might it be worth counting truncate\n> > events separately?\n>\n> Is that really something interesting? Feels like it'd have to be done at\n> a higher level to be useful. E.g. the truncate done by TRUNCATE (when in\n> same xact as creation) and VACUUM are quite different. I think it'd be\n> better to just not include it.\n\nYeah, you're probably right. It certainly makes very little sense\nwhere it is now.\n\n\n> > > Lastly, I don't understand what the point of sending fixed size stats,\n> > > like the stuff underlying pg_stat_bgwriter, through pgstats IPC. 
While\n> > > I don't like it's architecture, we obviously need something like pgstat\n> > > to handle variable amounts of stats (database, table level etc\n> > > stats). But that doesn't at all apply to these types of global stats.\n> >\n> > That part has annoyed me as well a few times. +1 for just moving that\n> > into a global shared memory. Given that we don't really care about\n> > things being in sync between those different counters *or* if we loose\n> > a bit of data (which the stats collector is designed to do), we could\n> > even do that without a lock?\n>\n> I don't think we'd quite want to do it without any (single counter)\n> synchronization - high concurrency setups would be pretty likely to\n> loose values that way. I suspect the best would be to have a struct in\n> shared memory that contains the potential counters for each potential\n> process. And then sum them up when actually wanting the concrete\n> value. That way we avoid unnecessary contention, in contrast to having a\n> single shared memory value for each(which would just pingpong between\n> different sockets and store buffers). There's a few details like how\n> exactly to implement resetting the counters, but ...\n\nRight. Each process gets to do their own write, but still in shared\nmemory. But do you need to lock them when reading them (for the\nsummary)? That's the part where I figured you could just read and\nsummarize them, and accept the possible loss.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 26 Jan 2020 16:20:03 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-26 16:20:03 +0100, Magnus Hagander wrote:\n> On Sun, Jan 26, 2020 at 1:44 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-01-25 15:43:41 +0100, Magnus Hagander wrote:\n> > > On Fri, Jan 24, 2020 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > Lastly, I don't understand what the point of sending fixed size stats,\n> > > > like the stuff underlying pg_stat_bgwriter, through pgstats IPC. While\n> > > > I don't like it's architecture, we obviously need something like pgstat\n> > > > to handle variable amounts of stats (database, table level etc\n> > > > stats). But that doesn't at all apply to these types of global stats.\n> > >\n> > > That part has annoyed me as well a few times. +1 for just moving that\n> > > into a global shared memory. Given that we don't really care about\n> > > things being in sync between those different counters *or* if we loose\n> > > a bit of data (which the stats collector is designed to do), we could\n> > > even do that without a lock?\n> >\n> > I don't think we'd quite want to do it without any (single counter)\n> > synchronization - high concurrency setups would be pretty likely to\n> > loose values that way. I suspect the best would be to have a struct in\n> > shared memory that contains the potential counters for each potential\n> > process. And then sum them up when actually wanting the concrete\n> > value. That way we avoid unnecessary contention, in contrast to having a\n> > single shared memory value for each(which would just pingpong between\n> > different sockets and store buffers). There's a few details like how\n> > exactly to implement resetting the counters, but ...\n> \n> Right. Each process gets to do their own write, but still in shared\n> memory. But do you need to lock them when reading them (for the\n> summary)? That's the part where I figured you could just read and\n> summarize them, and accept the possible loss.\n\nOh, yea, I'd not lock for that. 
On nearly all machines aligned 64bit\nintegers can be read / written without a danger of torn values, and I\ndon't think we need perfect cross counter accuracy. To deal with the few\nplatforms without 64bit \"single copy atomicity\", we can just use\npg_atomic_read/write_u64. These days (e8fdbd58fe) they automatically\nfall back to using locked operations for those platforms. So I don't\nthink there's actually a danger of loss.\n\nObviously we could also use atomic ops to increment the value, but I'd\nrather not add all those atomic operations, even if it's on uncontended\ncachelines. It'd allow us to reset the backend values more easily by\njust swapping in a 0, which we can't do if the backend increments\nnon-atomically. But I think we could instead just have one global \"bias\"\nvalue to implement resets (by subtracting that from the summarized\nvalue, and storing the current sum when resetting). Or use the new\nglobal barrier to trigger a reset. Or something similar.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Jan 2020 12:22:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hello.\n\nAt Sun, 26 Jan 2020 12:22:03 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n\nI feel the same on the specific issues brought in upthread.\n\n> On 2020-01-26 16:20:03 +0100, Magnus Hagander wrote:\n> > On Sun, Jan 26, 2020 at 1:44 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2020-01-25 15:43:41 +0100, Magnus Hagander wrote:\n> > > > On Fri, Jan 24, 2020 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > Lastly, I don't understand what the point of sending fixed size stats,\n> > > > > like the stuff underlying pg_stat_bgwriter, through pgstats IPC. While\n> > > > > I don't like it's architecture, we obviously need something like pgstat\n> > > > > to handle variable amounts of stats (database, table level etc\n> > > > > stats). But that doesn't at all apply to these types of global stats.\n> > > >\n> > > > That part has annoyed me as well a few times. +1 for just moving that\n> > > > into a global shared memory. Given that we don't really care about\n> > > > things being in sync between those different counters *or* if we loose\n> > > > a bit of data (which the stats collector is designed to do), we could\n> > > > even do that without a lock?\n> > >\n> > > I don't think we'd quite want to do it without any (single counter)\n> > > synchronization - high concurrency setups would be pretty likely to\n> > > loose values that way. I suspect the best would be to have a struct in\n> > > shared memory that contains the potential counters for each potential\n> > > process. And then sum them up when actually wanting the concrete\n> > > value. That way we avoid unnecessary contention, in contrast to having a\n> > > single shared memory value for each(which would just pingpong between\n> > > different sockets and store buffers). There's a few details like how\n> > > exactly to implement resetting the counters, but ...\n> > \n> > Right. Each process gets to do their own write, but still in shared\n> > memory. 
But do you need to lock them when reading them (for the\n> > summary)? That's the part where I figured you could just read and\n> > summarize them, and accept the possible loss.\n> \n> Oh, yea, I'd not lock for that. On nearly all machines aligned 64bit\n> integers can be read / written without a danger of torn values, and I\n> don't think we need perfect cross counter accuracy. To deal with the few\n> platforms without 64bit \"single copy atomicity\", we can just use\n> pg_atomic_read/write_u64. These days (e8fdbd58fe) they automatically\n> fall back to using locked operations for those platforms. So I don't\n> think there's actually a danger of loss.\n> \n> Obviously we could also use atomic ops to increment the value, but I'd\n> rather not add all those atomic operations, even if it's on uncontended\n> cachelines. It'd allow us to reset the backend values more easily by\n> just swapping in a 0, which we can't do if the backend increments\n> non-atomically. But I think we could instead just have one global \"bias\"\n> value to implement resets (by subtracting that from the summarized\n> value, and storing the current sum when resetting). Or use the new\n> global barrier to trigger a reset. Or something similar.\n\nFixed or global stats are a suitable starting point for the shared-memory\nstats collector. In the case of buffers_*_write, the global stats\nentry for each process needs just 8 bytes plus maybe an extra 8 bytes for\nthe bias value. I'm not sure how many counters like this there are,\nbut is that size of footprint acceptable? (Each backend already uses\nthe same amount of local memory for pgstat use, though.)\n\nAnyway I will do something like that as a trial, maybe by adding a\nmember in PgBackendStatus and one globally-shared member for the bias value.\n\n int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n+ PgBackendStatsCounters counters;\n } PgBackendStatus;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 27 Jan 2020 13:20:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 11:21 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sun, 26 Jan 2020 12:22:03 -0800, Andres Freund <andres@anarazel.de> wrote in\n> > On 2020-01-26 16:20:03 +0100, Magnus Hagander wrote:\n> > > On Sun, Jan 26, 2020 at 1:44 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2020-01-25 15:43:41 +0100, Magnus Hagander wrote:\n> > > > > On Fri, Jan 24, 2020 at 8:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > Lastly, I don't understand what the point of sending fixed size stats,\n> > > > > > like the stuff underlying pg_stat_bgwriter, through pgstats IPC. While\n> > > > > > I don't like it's architecture, we obviously need something like pgstat\n> > > > > > to handle variable amounts of stats (database, table level etc\n> > > > > > stats). But that doesn't at all apply to these types of global stats.\n> > > > >\n> > > > > That part has annoyed me as well a few times. +1 for just moving that\n> > > > > into a global shared memory. Given that we don't really care about\n> > > > > things being in sync between those different counters *or* if we loose\n> > > > > a bit of data (which the stats collector is designed to do), we could\n> > > > > even do that without a lock?\n> > > >\n> > > > I don't think we'd quite want to do it without any (single counter)\n> > > > synchronization - high concurrency setups would be pretty likely to\n> > > > loose values that way. I suspect the best would be to have a struct in\n> > > > shared memory that contains the potential counters for each potential\n> > > > process. And then sum them up when actually wanting the concrete\n> > > > value. That way we avoid unnecessary contention, in contrast to having a\n> > > > single shared memory value for each(which would just pingpong between\n> > > > different sockets and store buffers). There's a few details like how\n> > > > exactly to implement resetting the counters, but ...\n> > >\n> > > Right. 
Each process gets to do their own write, but still in shared\n> > > memory. But do you need to lock them when reading them (for the\n> > > summary)? That's the part where I figured you could just read and\n> > > summarize them, and accept the possible loss.\n> >\n> > Oh, yea, I'd not lock for that. On nearly all machines aligned 64bit\n> > integers can be read / written without a danger of torn values, and I\n> > don't think we need perfect cross counter accuracy. To deal with the few\n> > platforms without 64bit \"single copy atomicity\", we can just use\n> > pg_atomic_read/write_u64. These days (e8fdbd58fe) they automatically\n> > fall back to using locked operations for those platforms. So I don't\n> > think there's actually a danger of loss.\n> >\n> > Obviously we could also use atomic ops to increment the value, but I'd\n> > rather not add all those atomic operations, even if it's on uncontended\n> > cachelines. It'd allow us to reset the backend values more easily by\n> > just swapping in a 0, which we can't do if the backend increments\n> > non-atomically. But I think we could instead just have one global \"bias\"\n> > value to implement resets (by subtracting that from the summarized\n> > value, and storing the current sum when resetting). Or use the new\n> > global barrier to trigger a reset. Or something similar.\n>\n> Fixed or global stats are suitable for the startar of shared-memory\n> stats collector. In the case of buffers_*_write, the global stats\n> entry for each process needs just 8 bytes plus matbe extra 8 bytes for\n> the bias value. I'm not sure how many counters like this there are,\n> but is such size of footprint acceptatble? 
(Each backend already uses\n> the same amount of local memory for pgstat use, though.)\n>\n> Anyway I will do something like that as a trial, maybe by adding a\n> member in PgBackendStatus and one global-shared for the bial value.\n>\n> int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n> + PgBackendStatsCounters counters;\n> } PgBackendStatus;\n>\n\nSo, I took a stab at implementing this in PgBackendStatus. The attached\npatch is not quite on top of current master, so, alas, don't try and\napply it. I went to rebase today and realized I needed to make some\nchanges in light of e1025044cd4; however, I wanted to share this WIP so\nthat I could pose a few questions that I imagine will still be relevant\nafter I rewrite the patch.\n\nI removed buffers_backend and buffers_backend_fsync from\npg_stat_bgwriter and have created a new view which tracks\n - number of shared buffers the checkpointer and bgwriter write out\n - number of shared buffers a regular backend is forced to flush\n - number of extends done by a regular backend through shared buffers\n - number of buffers flushed by a backend or autovacuum using a\n BufferAccessStrategy which, were they not to use this strategy,\n could perhaps have been avoided if a clean shared buffer was\n available\n - number of fsyncs done by a backend which could have been done by\n checkpointer if sync queue had not been full\n\nThis view currently only tracks writes and extends that go through\nshared buffers and fsyncs of shared buffers (which, AFAIK, are the only\nthings fsync'd through the SyncRequest machinery currently).\n\nBufferAlloc() and SyncOneBuffer() are the main points at which the\ntracking is done. 
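(As an aside, the per-backend counting scheme discussed upthread boils
down to something like this standalone sketch -- every name below is
invented for illustration, none of it is the actual patch code:)

```c
/*
 * Toy model of unlocked per-backend counters: each backend increments
 * only its own slot, and a reader sums all slots without taking a lock.
 */
#include <assert.h>
#include <stdint.h>

#define MAX_BACKENDS 8

typedef struct BackendWriteCounters
{
	uint64_t	writes;
	uint64_t	extends;
	uint64_t	fsyncs;
} BackendWriteCounters;

static BackendWriteCounters counters[MAX_BACKENDS];

static void
count_backend_write(int backend_id)
{
	/* plain store: only this backend ever writes its own slot */
	counters[backend_id].writes++;
}

static uint64_t
total_backend_writes(void)
{
	uint64_t	total = 0;

	/*
	 * Unlocked reads: aligned 64-bit loads are single-copy atomic on
	 * nearly all platforms, per the discussion upthread.
	 */
	for (int i = 0; i < MAX_BACKENDS; i++)
		total += counters[i].writes;
	return total;
}
```

A real version would of course live in shared memory, and would use
pg_atomic_read_u64/pg_atomic_write_u64 on the few platforms without
64-bit single-copy atomicity.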
I can definitely expand this, but I want to make sure\nthat we are tracking the right kind of information.\n\nnum_backend_writes and num_backend_fsync were intended (though they were\nnot accurate) to count buffers that backends had to end up writing\nthemselves and fsyncs that backends had to end up doing themselves which\ncould have been avoided with a different configuration (or, I suppose, a\ndifferent workload/different data, etc.). That is, they were meant to\ntell you if checkpointer and bgwriter were keeping up and/or if the\nsize of shared buffers was adequate.\n\nIn implementing this counting per backend, it is easy for all types of\nbackends to keep track of the number of writes, extends, fsyncs, and\nstrategy writes they are doing. So, as recommended upthread, I have\nadded columns in the view for the number of writes for checkpointer and\nbgwriter and others. Thus, this view becomes more than just stats on\n\"avoidable I/O done by backends\".\n\nSo, my question is, does it make sense to track all extends -- those to\nextend the fsm and visimap and when making a new relation or index? Is\nthat information useful? If so, is it different than the extends done\nthrough shared buffers? 
Should it be tracked separately?\n\nAlso, if we care about all of the extends, then it seems a bit annoying\nto pepper the counting all over the place when it really just needs to\nbe done in smgrextend() -- even though maybe a stats function doesn't\nbelong in that API.\n\nAnother question I have is, should the number of extends be for every\nsingle block extended or should we try to track the initiation of a set\nof extends (all of those added in RelationAddExtraBlocks(), in this\ncase)?\n\nWhen it comes to fsync counting, I only count the fsyncs counted by the\nprevious code -- that is, fsyncs done by backends themselves when the\ncheckpointer sync request queue was full.\n\nI did the counting in the same place in the checkpointer code -- in\nForwardSyncRequest() -- partially because there did not seem to be\nanother good place to do it, since register_dirty_segment() returns void\n(I thought about having it return a bool to indicate whether it fsync'd\nthe segment itself or registered the fsync, which seemed alright, but\nmdextend(), mdwrite(), etc. also return void). So there is no way to\npropagate the information back up to the bufmgr that the process had to\ndo its own fsync, which means that I would have to muck with the md.c\nAPI. And, since the checkpointer is the one processing these sync\nrequests anyway, it actually seems okay to do it in the checkpointer\ncode.\n\nI'm not counting fsyncs that are \"unavoidable\" in the sense that they\ncouldn't be avoided by changing settings/workload, etc. -- like those\ndone when building an index, creating a table/rewriting a table/copying\na table -- is it useful to count these? It seems like it makes the\nnumber of \"avoidable fsyncs by backends\" less useful if we count the\nothers.\nAlso, should we count how many fsyncs checkpointer has done (have to\ncheck if there is already a stat for that)? 
Is that useful in this\ncontext?\n\nOf course, this view, when grown, will begin to overlap with pg_statio,\nwhich is another consideration. What is its identity? I would find\n\"avoidable I/O\", whether avoidable entirely or avoidable for that\nparticular type of process, to be useful.\n\nOr maybe it should have a more expansive mandate. Maybe it would be\nuseful to aggregate some of the info from pg_stat_statements at a higher\nlevel -- for example, shared_blks_read counted across many statements\nfor a period of time, or in a context in which we expected the relation\nto be in shared buffers, becomes potentially interesting.\n\nAs for the way I have recorded strategy writes -- it is quite inelegant,\nbut I wanted to make sure that I only counted a strategy write as one\nin which the backend wrote out the dirty buffer from its strategy ring\nbut did not check if there was any clean buffer in shared buffers more\ngenerally (so, it is *potentially* an avoidable write). I'm not sure if\nthis distinction is useful to anyone. I haven't done enough with\nBufferAccessStrategies to know what I'd want to know about them when\ndeveloping or using Postgres. However, if I don't need to be so careful,\nit will make the code much simpler (though, I'm sure I can improve the\ncode regardless).\n\nAs for the implementation of the counters themselves, I appreciate that\nit isn't very nice to have a bunch of random members in PgBackendStatus\nto count all of these writes, extends, and fsyncs. I considered whether\nI could add params that were used for all command types to\nst_progress_param, but I haven't looked into it yet. Alternatively, I\ncould create an array just for these kinds of stats in PgBackendStatus.\nThough, I imagine that I should take a look at the changes that have\nbeen made recently to this area and at the shared memory stats patch.\n\nOh, also, there should be a way to reset the stats, especially if we add\nmore extends and fsyncs that happen at the time of relation/index\ncreation. 
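(The bias idea Andres mentioned upthread would make such a reset cheap
without ever writing to another backend's slot; roughly, as a standalone
sketch with invented names:)

```c
/*
 * Toy model of resetting summed counters via a bias: the per-backend
 * slots only ever grow, and reset just folds the current sum into a
 * global bias that is subtracted at read time.
 */
#include <assert.h>
#include <stdint.h>

#define MAX_BACKENDS 4

static uint64_t slot_writes[MAX_BACKENDS];	/* each backend owns one slot */
static uint64_t reset_bias;					/* sum captured at the last reset */

static uint64_t
visible_writes(void)
{
	uint64_t	raw = 0;

	for (int i = 0; i < MAX_BACKENDS; i++)
		raw += slot_writes[i];
	return raw - reset_bias;
}

static void
reset_write_stats(void)
{
	/* remember the current sum; no cross-process stores needed */
	reset_bias += visible_writes();
}
```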
I, at least, would find it useful to see these numbers once\nthe database is at some kind of steady state.\n\nOh and src/test/regress/sql/stats.sql will fail and, of course, I don't\nintend to add that SELECT from the view to regress, it was just for\ntesting purposes to make sure the view was working.\n\n-- Melanie",
"msg_date": "Mon, 12 Apr 2021 19:49:36 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 19:49:36 -0700, Melanie Plageman wrote:\n> So, I took a stab at implementing this in PgBackendStatus.\n\nCool!\n\n\n> The attached patch is not quite on top of current master, so, alas,\n> don't try and apply it. I went to rebase today and realized I needed\n> to make some changes in light of e1025044cd4, however, I wanted to\n> share this WIP so that I could pose a few questions that I imagine\n> will still be relevant after I rewrite the patch.\n>\n> I removed buffers_backend and buffers_backend_fsync from\n> pg_stat_bgwriter and have created a new view which tracks\n> - number of shared buffers the checkpointer and bgwriter write out\n> - number of shared buffers a regular backend is forced to flush\n> - number of extends done by a regular backend through shared buffers\n> - number of buffers flushed by a backend or autovacuum using a\n> BufferAccessStrategy which, were they not to use this strategy,\n> could perhaps have been avoided if a clean shared buffer was\n> available\n> - number of fsyncs done by a backend which could have been done by\n> checkpointer if sync queue had not been full\n\nI wonder if leaving buffers_alloc in pg_stat_bgwriter makes sense after\nthis? I'm tempted to move that to pg_stat_buffers or such...\n\nI'm not quite convinced by having separate columns for checkpointer,\nbgwriter, etc. That doesn't seem to scale all that well. What if we\ninstead made it a view that has one row for each BackendType?\n\n\n> In implementing this counting per backend, it is easy for all types of\n> backends to keep track of the number of writes, extends, fsyncs, and\n> strategy writes they are doing. So, as recommended upthread, I have\n> added columns in the view for the number of writes for checkpointer and\n> bgwriter and others. 
Thus, this view becomes more than just stats on\n> \"avoidable I/O done by backends\".\n>\n> So, my question is, does it makes sense to track all extends -- those to\n> extend the fsm and visimap and when making a new relation or index? Is\n> that information useful? If so, is it different than the extends done\n> through shared buffers? Should it be tracked separately?\n\nI don't fully understand what you mean with \"extends done through shared\nbuffers\"?\n\n\n> Another question I have is, should the number of extends be for every\n> single block extended or should we try to track the initiation of a set\n> of extends (all of those added in RelationAddExtraBlocks(), in this\n> case)?\n\nI think it should be 8k blocks, i.e. RelationAddExtraBlocks() should be\ntracked as many individual extends. It's implemented that way, but more\nimportantly, it should be in BLCKSZ units. If we later add some actually\nbatched operations, we can have separate stats for that.\n\n\n> Of course, this view, when grown, will begin to overlap with pg_statio,\n> which is another consideration. What is its identity? I would find\n> \"avoidable I/O\" either avoidable entirely or avoidable for that\n> particular type of process, to be useful.\n\nI think it's fine to overlap with pg_statio_* - those are for individual\nobjects, so it seems to be expected to overlap with coarser stats.\n\n\n> Or maybe, it should have a more expansive mandate. Maybe it would be\n> useful to aggregate some of the info from pg_stat_statements at a higher\n> level -- like maybe shared_blks_read counted across many statements for\n> a period of time/context in which we expected the relation in shared\n> buffers becomes potentially interesting.\n\nLet's do something more basic first...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 15 Apr 2021 16:59:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 7:59 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-04-12 19:49:36 -0700, Melanie Plageman wrote:\n> > So, I took a stab at implementing this in PgBackendStatus.\n>\n> Cool!\n>\n\nJust a note on v2 of the patch -- the diff for the changes I made to\npgstatfuncs.c is pretty atrocious and hard to read. I tried using a\ndifferent diff algorithm, to no avail.\n\n\n>\n>\n> > The attached patch is not quite on top of current master, so, alas,\n> > don't try and apply it. I went to rebase today and realized I needed\n> > to make some changes in light of e1025044cd4, however, I wanted to\n> > share this WIP so that I could pose a few questions that I imagine\n> > will still be relevant after I rewrite the patch.\n>\n\nRegarding the refactor done in e1025044cd4:\nMost of the functions I've added access variables in PgBackendStatus, so\nI put most of them in backend_status.h/c. However, technically, these\nare stats which are aggregated over time, which e1025044cd4 says should\ngo in pgstat.c/h. I could move some of it, but I hadn't tried to do so,\nas it made a few things inconvenient, and, I wasn't sure if it was the\nright thing to do anyway.\n\n\n> >\n> > I removed buffers_backend and buffers_backend_fsync from\n> > pg_stat_bgwriter and have created a new view which tracks\n> > - number of shared buffers the checkpointer and bgwriter write out\n> > - number of shared buffers a regular backend is forced to flush\n> > - number of extends done by a regular backend through shared buffers\n> > - number of buffers flushed by a backend or autovacuum using a\n> > BufferAccessStrategy which, were they not to use this strategy,\n> > could perhaps have been avoided if a clean shared buffer was\n> > available\n> > - number of fsyncs done by a backend which could have been done by\n> > checkpointer if sync queue had not been full\n>\n> I wonder if leaving buffers_alloc in pg_stat_bgwriter makes sense after\n> this? 
I'm tempted to move that to pg_stat_buffers or such...\n>\n>\nI've gone ahead and moved buffers_alloc out of pg_stat_bgwriter and into\npg_stat_buffer_actions (I've renamed it from pg_stat_buffers_written).\n\n\n> I'm not quite convinced by having separate columns for checkpointer,\n> bgwriter, etc. That doesn't seem to scale all that well. What if we\n> instead made it a view that has one row for each BackendType?\n>\n>\nI've changed the view to have one row for each backend type for which we\nwould like to report stats and one column for each buffer action type.\n\nTo make the code easier to write, I record buffer actions for all\nbackend types -- even if we don't have any buffer actions we care about\nfor that backend type. I thought it was okay because when I actually\naggregate the counters across backends, I only do so for the backend\ntypes we care about -- thus there shouldn't be much accessing of shared\nmemory by multiple different processes.\n\nAlso, I copy-pasted most of the code in pg_stat_get_buffer_actions() to\nset up the result tuplestore from pg_stat_get_activity() without totally\nunderstanding all the parts of it, so I'm not sure if all of it is\nrequired here.\n\n\n>\n> > In implementing this counting per backend, it is easy for all types of\n> > backends to keep track of the number of writes, extends, fsyncs, and\n> > strategy writes they are doing. So, as recommended upthread, I have\n> > added columns in the view for the number of writes for checkpointer and\n> > bgwriter and others. Thus, this view becomes more than just stats on\n> > \"avoidable I/O done by backends\".\n> >\n> > So, my question is, does it makes sense to track all extends -- those to\n> > extend the fsm and visimap and when making a new relation or index? Is\n> > that information useful? If so, is it different than the extends done\n> > through shared buffers? 
Should it be tracked separately?\n>\n> I don't fully understand what you mean with \"extends done through shared\n> buffers\"?\n>\n>\nBy \"extends done through shared buffers\", I just mean when an extend of\na relation is done and the data that will be written to the new block is\nwritten into a shared buffer (as opposed to a local one or local memory\nor a strategy buffer).\n\nRandom note:\nI added a length member to the BackendType enum (BACKEND_NUM_TYPES),\nwhich led to this compiler warning:\n\n miscinit.c: In function ‘GetBackendTypeDesc’:\n miscinit.c:236:2: warning: enumeration value ‘BACKEND_NUM_TYPES’ not\nhandled in switch [-Wswitch]\n 236 | switch (backendType)\n | ^~~~~~\n\nI tried using pg_attribute_unused() for BACKEND_NUM_TYPES, but it\ndidn't seem to have the desired effect. As such, I just threw a case\ninto GetBackendTypeDesc() which does nothing (as opposed to erroring\nout); since backendDesc is already initialized to \"unknown process\ntype\", erroring out didn't seem necessary.\n\n- Melanie",
"msg_date": "Fri, 4 Jun 2021 17:12:43 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On 2021-Apr-12, Melanie Plageman wrote:\n\n> As for the way I have recorded strategy writes -- it is quite inelegant,\n> but, I wanted to make sure that I only counted a strategy write as one\n> in which the backend wrote out the dirty buffer from its strategy ring\n> but did not check if there was any clean buffer in shared buffers more\n> generally (so, it is *potentially* an avoidable write). I'm not sure if\n> this distinction is useful to anyone. I haven't done enough with\n> BufferAccessStrategies to know what I'd want to know about them when\n> developing or using Postgres. However, if I don't need to be so careful,\n> it will make the code much simpler (though, I'm sure I can improve the\n> code regardless).\n\nI was bitten last year by REFRESH MATERIALIZED VIEW counting its writes\nvia buffers_backend, and I was very surprised/confused about it. So it\nseems definitely worthwhile to count writes via strategy separately.\nFor a DBA tuning the server configuration it is very useful.\n\nThe main thing is to *not* let these writes end up in regular\nbuffers_backend (or whatever you call these now). I didn't read your\npatch, but the way you have described it seems okay to me.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Fri, 4 Jun 2021 17:52:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 5:52 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Apr-12, Melanie Plageman wrote:\n>\n> > As for the way I have recorded strategy writes -- it is quite inelegant,\n> > but, I wanted to make sure that I only counted a strategy write as one\n> > in which the backend wrote out the dirty buffer from its strategy ring\n> > but did not check if there was any clean buffer in shared buffers more\n> > generally (so, it is *potentially* an avoidable write). I'm not sure if\n> > this distinction is useful to anyone. I haven't done enough with\n> > BufferAccessStrategies to know what I'd want to know about them when\n> > developing or using Postgres. However, if I don't need to be so careful,\n> > it will make the code much simpler (though, I'm sure I can improve the\n> > code regardless).\n>\n> I was bitten last year by REFRESH MATERIALIZED VIEW counting its writes\n> via buffers_backend, and I was very surprised/confused about it. So it\n> seems definitely worthwhile to count writes via strategy separately.\n> For a DBA tuning the server configuration it is very useful.\n>\n> The main thing is to *not* let these writes end up regular\n> buffers_backend (or whatever you call these now). I didn't read your\n> patch, but the way you have described it seems okay to me.\n>\n\nThanks for the feedback!\n\nI agree it makes sense to count strategy writes separately.\n\nI thought about this some more, and I don't know if it makes sense to\nonly count \"avoidable\" strategy writes.\n\nThis would mean that a backend writing out a buffer from the strategy\nring when no clean shared buffers (as well as no clean strategy buffers)\nare available would not count that write as a strategy write (even\nthough it is writing out a buffer from its strategy ring). But, it\nobviously doesn't make sense to count it as a regular buffer being\nwritten out. 
So, I plan to change this code.\n\nOn another note, I've updated the patch with more correct concurrency\ncontrol mechanisms (had some data races and other problems\nbefore). Now, I am using atomics for the buffer action counters, though\nthe code includes several #TODO questions around the correctness of what\nI have now too.\n\nI also wrapped the buffer action types in a struct to make them easier\nto work with.\n\nThe most substantial missing piece of the patch right now is persisting\nthe data across reboots.\n\nThe two places in the code I can see to persist the buffer action stats\ndata are:\n1) using the stats collector code (like in\npgstat_read/write_statsfiles())\n2) using a before_shmem_exit() hook which writes the data structure to a\nfile and then reads from it when making the shared memory array initially\n\nIt feels a bit weird to me to wedge the buffer action stats into the\nstats collector code -- since the stats collector isn't receiving and\naggregating the buffer action stats.\n\nAlso, I'm unsure how writing the buffer action stats out in\npgstat_write_statsfiles() will work, since I think that backends can\nupdate their buffer action stats after we would have already persisted\nthe data from the BufferActionStatsArray -- causing us to lose those\nupdates.\n\nAnd, I don't think I can use pgstat_read_statsfiles() since the\nBufferActionStatsArray should have the data from the file as soon as the\nview containing the buffer action stats can be queried. Thus, it seems\nlike I would need to read the file while initializing the array in\nCreateBufferActionStatsCounters().\n\nI am registering the patch for the September commitfest but plan to\nupdate the stats persistence before then (and docs, etc).\n\n-- Melanie",
"msg_date": "Mon, 2 Aug 2021 18:25:56 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-02 18:25:56 -0400, Melanie Plageman wrote:\n> Thanks for the feedback!\n> \n> I agree it makes sense to count strategy writes separately.\n> \n> I thought about this some more, and I don't know if it makes sense to\n> only count \"avoidable\" strategy writes.\n> \n> This would mean that a backend writing out a buffer from the strategy\n> ring when no clean shared buffers (as well as no clean strategy buffers)\n> are available would not count that write as a strategy write (even\n> though it is writing out a buffer from its strategy ring). But, it\n> obviously doesn't make sense to count it as a regular buffer being\n> written out. So, I plan to change this code.\n\nWhat do you mean with \"no clean shared buffers ... are available\"?\n\n\n\n> The most substantial missing piece of the patch right now is persisting\n> the data across reboots.\n> \n> The two places in the code I can see to persist the buffer action stats\n> data are:\n> 1) using the stats collector code (like in\n> pgstat_read/write_statsfiles()\n> 2) using a before_shmem_exit() hook which writes the data structure to a\n> file and then read from it when making the shared memory array initially\n\nI think it's pretty clear that we should go for 1. Having two mechanisms for\npersisting stats data is a bad idea.\n\n\n> Also, I'm unsure how writing the buffer action stats out in\n> pgstat_write_statsfiles() will work, since I think that backends can\n> update their buffer action stats after we would have already persisted\n> the data from the BufferActionStatsArray -- causing us to lose those\n> updates.\n\nI was thinking it'd work differently. Whenever a connection ends, it reports\nits data up to pgstats.c (otherwise we'd lose those stats). 
By the time\nshutdown happens, they all need to have already reported their stats - so\nwe don't need to do anything to get the data to pgstats.c during shutdown\ntime.\n\n\n> And, I don't think I can use pgstat_read_statsfiles() since the\n> BufferActionStatsArray should have the data from the file as soon as the\n> view containing the buffer action stats can be queried. Thus, it seems\n> like I would need to read the file while initializing the array in\n> CreateBufferActionStatsCounters().\n\nWhy would backends need to read that data back?\n\n\n> diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> index 55f6e3711d..96cac0a74e 100644\n> --- a/src/backend/catalog/system_views.sql\n> +++ b/src/backend/catalog/system_views.sql\n> @@ -1067,9 +1067,6 @@ CREATE VIEW pg_stat_bgwriter AS\n> pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> - pg_stat_get_buf_written_backend() AS buffers_backend,\n> - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> - pg_stat_get_buf_alloc() AS buffers_alloc,\n> pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n\nMaterial for a separate patch, not this. But if we're going to break\nmonitoring queries anyway, I think we should consider also renaming\nmaxwritten_clean (and perhaps a few others), because nobody understands what\nthat is supposed to mean.\n\n\n\n> @@ -1089,10 +1077,6 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)\n> \n> \tLWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);\n> \n> -\t/* Count all backend writes regardless of if they fit in the queue */\n> -\tif (!AmBackgroundWriterProcess())\n> -\t\tCheckpointerShmem->num_backend_writes++;\n> -\n> \t/*\n> \t * If the checkpointer isn't running or the request queue is full, the\n> \t * backend will have to perform its own fsync request. 
But before forcing\n> @@ -1106,8 +1090,10 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)\n> \t\t * Count the subset of writes where backends have to do their own\n> \t\t * fsync\n> \t\t */\n> +\t\t/* TODO: should we count fsyncs for all types of procs? */\n> \t\tif (!AmBackgroundWriterProcess())\n> -\t\t\tCheckpointerShmem->num_backend_fsync++;\n> +\t\t\tpgstat_increment_buffer_action(BA_Fsync);\n> +\n\nYes, I think that'd make sense. Now that we can disambiguate the different\ntypes of syncs between procs, I don't see a point of having a process-type\nfilter here. We just lose data...\n\n\n\n> \t\t/* don't set checksum for all-zero page */\n> @@ -1229,11 +1234,60 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t\t\t\t\tif (XLogNeedsFlush(lsn) &&\n> \t\t\t\t\t\tStrategyRejectBuffer(strategy, buf))\n> \t\t\t\t\t{\n> +\t\t\t\t\t\t/*\n> +\t\t\t\t\t\t * Unset the strat write flag, as we will not be writing\n> +\t\t\t\t\t\t * this particular buffer from our ring out and may end\n> +\t\t\t\t\t\t * up having to find a buffer from main shared buffers,\n> +\t\t\t\t\t\t * which, if it is dirty, we may have to write out, which\n> +\t\t\t\t\t\t * could have been prevented by checkpointing and background\n> +\t\t\t\t\t\t * writing\n> +\t\t\t\t\t\t */\n> +\t\t\t\t\t\tStrategyUnChooseBufferFromRing(strategy);\n> +\n> \t\t\t\t\t\t/* Drop lock/pin and loop around for another buffer */\n> \t\t\t\t\t\tLWLockRelease(BufferDescriptorGetContentLock(buf));\n> \t\t\t\t\t\tUnpinBuffer(buf, true);\n> \t\t\t\t\t\tcontinue;\n> \t\t\t\t\t}\n\nCould we combine this with StrategyRejectBuffer()? 
It seems a bit wasteful to\nhave two function calls into freelist.c when the second happens exactly when\nthe first returns true?\n\n\n> +\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * TODO: there is certainly a better way to write this\n> +\t\t\t\t\t * logic\n> +\t\t\t\t\t */\n> +\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * The dirty buffer that will be written out was selected\n> +\t\t\t\t\t * from the ring and we did not bother checking the\n> +\t\t\t\t\t * freelist or doing a clock sweep to look for a clean\n> +\t\t\t\t\t * buffer to use, thus, this write will be counted as a\n> +\t\t\t\t\t * strategy write -- one that may be unnecessary without a\n> +\t\t\t\t\t * strategy\n> +\t\t\t\t\t */\n> +\t\t\t\t\tif (StrategyIsBufferFromRing(strategy))\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tpgstat_increment_buffer_action(BA_Write_Strat);\n> +\t\t\t\t\t}\n> +\n> +\t\t\t\t\t\t/*\n> +\t\t\t\t\t\t * If the dirty buffer was one we grabbed from the\n> +\t\t\t\t\t\t * freelist or through a clock sweep, it could have been\n> +\t\t\t\t\t\t * written out by bgwriter or checkpointer, thus, we will\n> +\t\t\t\t\t\t * count it as a regular write\n> +\t\t\t\t\t\t */\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\tpgstat_increment_buffer_action(BA_Write);\n\nIt seems this would be better solved by having an \"bool *from_ring\" or\nGetBufferSource* parameter to StrategyGetBuffer().\n\n\n> @@ -2895,6 +2948,20 @@ FlushBuffer(BufferDesc *buf, SMgrRelation reln)\n> \t/*\n> \t * bufToWrite is either the shared buffer or a copy, as appropriate.\n> \t */\n> +\n> +\t/*\n> +\t * TODO: consider that if we did not need to distinguish between a buffer\n> +\t * flushed that was grabbed from the ring buffer and written out as part\n> +\t * of a strategy which was not from main Shared Buffers (and thus\n> +\t * preventable by bgwriter or checkpointer), then we could move all calls\n> +\t * to pgstat_increment_buffer_action() here except for the one for\n> +\t * extends, which would remain in ReadBuffer_common() before smgrextend()\n> +\t * 
(unless we decide to start counting other extends). That includes the\n> +\t * call to count buffers written by bgwriter and checkpointer which go\n> +\t * through FlushBuffer() but not BufferAlloc(). That would make it\n> +\t * simpler. Perhaps instead we can find somewhere else to indicate that\n> +\t * the buffer is from the ring of buffers to reuse.\n> +\t */\n> \tsmgrwrite(reln,\n> \t\t\t buf->tag.forkNum,\n> \t\t\t buf->tag.blockNum,\n\nCan we just add a parameter to FlushBuffer indicating what the source of the\nwrite is?\n\n\n> @@ -247,7 +257,7 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state)\n> \t * the rate of buffer consumption. Note that buffers recycled by a\n> \t * strategy object are intentionally not counted here.\n> \t */\n> -\tpg_atomic_fetch_add_u32(&StrategyControl->numBufferAllocs, 1);\n> +\tpgstat_increment_buffer_action(BA_Alloc);\n> \n> \t/*\n> \t * First check, without acquiring the lock, whether there's buffers in the\n\n> @@ -411,11 +421,6 @@ StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc)\n> \t\t */\n> \t\t*complete_passes += nextVictimBuffer / NBuffers;\n> \t}\n> -\n> -\tif (num_buf_alloc)\n> -\t{\n> -\t\t*num_buf_alloc = pg_atomic_exchange_u32(&StrategyControl->numBufferAllocs, 0);\n> -\t}\n> \tSpinLockRelease(&StrategyControl->buffer_strategy_lock);\n> \treturn result;\n> }\n\nHm. Isn't bgwriter using the *num_buf_alloc value to pace its activity? I\nsuspect this patch shouldn't get rid of numBufferAllocs at the same time as\noverhauling the stats stuff. 
Perhaps we don't need both - but it's not obvious\nthat that's the case / how we can make that work.\n\n\n\n\n> +void\n> +pgstat_increment_buffer_action(BufferActionType ba_type)\n> +{\n> +\tvolatile PgBackendStatus *beentry = MyBEEntry;\n> +\n> +\tif (!beentry || !pgstat_track_activities)\n> +\t\treturn;\n> +\n> +\tif (ba_type == BA_Alloc)\n> +\t\tpg_atomic_add_fetch_u64(&beentry->buffer_action_stats.allocs, 1);\n> +\telse if (ba_type == BA_Extend)\n> +\t\tpg_atomic_add_fetch_u64(&beentry->buffer_action_stats.extends, 1);\n> +\telse if (ba_type == BA_Fsync)\n> +\t\tpg_atomic_add_fetch_u64(&beentry->buffer_action_stats.fsyncs, 1);\n> +\telse if (ba_type == BA_Write)\n> +\t\tpg_atomic_add_fetch_u64(&beentry->buffer_action_stats.writes, 1);\n> +\telse if (ba_type == BA_Write_Strat)\n> +\t\tpg_atomic_add_fetch_u64(&beentry->buffer_action_stats.writes_strat, 1);\n> +}\n\nI don't think we want to use atomic increments here - they're *slow*. And\nthere only ever can be a single writer to a backend's stats. So just doing\nsomething like\n pg_atomic_write_u64(&var, pg_atomic_read_u64(&var) + 1)\nshould do the trick.\n\n\n> +/*\n> + * Called for a single backend at the time of death to persist its I/O stats\n> + */\n> +void\n> +pgstat_record_dead_backend_buffer_actions(void)\n> +{\n> +\tvolatile PgBackendBufferActionStats *ba_stats;\n> +\tvolatile\tPgBackendStatus *beentry = MyBEEntry;\n> +\n> +\tif (beentry->st_procpid != 0)\n> +\t\treturn;\n> +\n> +\t// TODO: is this correct? could there be a data race? 
do I need a lock?\n> +\tba_stats = &BufferActionStatsArray[beentry->st_backendType];\n> +\tpg_atomic_add_fetch_u64(&ba_stats->allocs, pg_atomic_read_u64(&beentry->buffer_action_stats.allocs));\n> +\tpg_atomic_add_fetch_u64(&ba_stats->extends, pg_atomic_read_u64(&beentry->buffer_action_stats.extends));\n> +\tpg_atomic_add_fetch_u64(&ba_stats->fsyncs, pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs));\n> +\tpg_atomic_add_fetch_u64(&ba_stats->writes, pg_atomic_read_u64(&beentry->buffer_action_stats.writes));\n> +\tpg_atomic_add_fetch_u64(&ba_stats->writes_strat, pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat));\n> +}\n\nI don't see a race, FWIW.\n\nThis is where I propose that we instead report the values up to the stats\ncollector, instead of having a separate array that we need to persist\n\n\n> +/*\n> + * Fill the provided values array with the accumulated counts of buffer actions\n> + * taken by all backends of type backend_type (input parameter), both alive and\n> + * dead. 
This is currently only used by pg_stat_get_buffer_actions() to create\n> + * the rows in the pg_stat_buffer_actions system view.\n> + */\n> +void\n> +pgstat_recount_all_buffer_actions(BackendType backend_type, Datum *values)\n> +{\n> +\tint\t\t\ti;\n> +\tvolatile PgBackendStatus *beentry;\n> +\n> +\t/*\n> +\t * Add stats from all exited backends\n> +\t */\n> +\tvalues[BA_Alloc] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].allocs);\n> +\tvalues[BA_Extend] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].extends);\n> +\tvalues[BA_Fsync] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].fsyncs);\n> +\tvalues[BA_Write] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].writes);\n> +\tvalues[BA_Write_Strat] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].writes_strat);\n> +\n> +\t/*\n> +\t * Loop through all live backends and count their buffer actions\n> +\t */\n> +\t// TODO: see note in pg_stat_get_buffer_actions() about inefficiency of this method\n> +\n> +\tbeentry = BackendStatusArray;\n> +\tfor (i = 1; i <= MaxBackends; i++)\n> +\t{\n> +\t\t/* Don't count dead backends. 
They should already be counted */\n> +\t\tif (beentry->st_procpid == 0)\n> +\t\t\tcontinue;\n> +\t\tif (beentry->st_backendType != backend_type)\n> +\t\t\tcontinue;\n> +\n> +\t\tvalues[BA_Alloc] += pg_atomic_read_u64(&beentry->buffer_action_stats.allocs);\n> +\t\tvalues[BA_Extend] += pg_atomic_read_u64(&beentry->buffer_action_stats.extends);\n> +\t\tvalues[BA_Fsync] += pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs);\n> +\t\tvalues[BA_Write] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes);\n> +\t\tvalues[BA_Write_Strat] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat);\n> +\n> +\t\tbeentry++;\n> +\t}\n> +}\n\nIt seems to make a bit more sense to have this sum up the stats for all\nbackend types at once.\n\n> +\t\t/*\n> +\t\t * Currently, the only supported backend types for stats are the following.\n> +\t\t * If this were to change, pg_proc.dat would need to be changed as well\n> +\t\t * to reflect the new expected number of rows.\n> +\t\t */\n> +\t\tDatum values[BUFFER_ACTION_NUM_TYPES];\n> +\t\tbool nulls[BUFFER_ACTION_NUM_TYPES];\n\nAh ;)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Aug 2021 11:12:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 2:13 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-08-02 18:25:56 -0400, Melanie Plageman wrote:\n> > Thanks for the feedback!\n> >\n> > I agree it makes sense to count strategy writes separately.\n> >\n> > I thought about this some more, and I don't know if it makes sense to\n> > only count \"avoidable\" strategy writes.\n> >\n> > This would mean that a backend writing out a buffer from the strategy\n> > ring when no clean shared buffers (as well as no clean strategy buffers)\n> > are available would not count that write as a strategy write (even\n> > though it is writing out a buffer from its strategy ring). But, it\n> > obviously doesn't make sense to count it as a regular buffer being\n> > written out. So, I plan to change this code.\n>\n> What do you mean with \"no clean shared buffers ... are available\"?\n>\n\nI think I was talking about the scenario in which a backend using a\nstrategy does not find a clean buffer in the strategy ring and goes to\nlook in the freelist for a clean shared buffer and doesn't find one.\n\nI was probably talking in circles up there. I think the current\npatch counts the right writes in the right way, though.\n\n>\n>\n> > The most substantial missing piece of the patch right now is persisting\n> > the data across reboots.\n> >\n> > The two places in the code I can see to persist the buffer action stats\n> > data are:\n> > 1) using the stats collector code (like in\n> > pgstat_read/write_statsfiles()\n> > 2) using a before_shmem_exit() hook which writes the data structure to a\n> > file and then read from it when making the shared memory array initially\n>\n> I think it's pretty clear that we should go for 1. 
Having two mechanisms for\n> persisting stats data is a bad idea.\n\nNew version uses the stats collector.\n\n>\n>\n> > Also, I'm unsure how writing the buffer action stats out in\n> > pgstat_write_statsfiles() will work, since I think that backends can\n> > update their buffer action stats after we would have already persisted\n> > the data from the BufferActionStatsArray -- causing us to lose those\n> > updates.\n>\n> I was thinking it'd work differently. Whenever a connection ends, it reports\n> its data up to pgstats.c (otherwise we'd loose those stats). By the time\n> shutdown happens, they all need to have already have reported their stats - so\n> we don't need to do anything to get the data to pgstats.c during shutdown\n> time.\n>\n\nWhen you say \"whenever a connection ends\", what part of the code are you\nreferring to specifically?\n\nAlso, when you say \"shutdown\", do you mean a backend shutting down or\nall backends shutting down (including postmaster) -- like pg_ctl stop?\n\n>\n> > And, I don't think I can use pgstat_read_statsfiles() since the\n> > BufferActionStatsArray should have the data from the file as soon as the\n> > view containing the buffer action stats can be queried. 
Thus, it seems\n> > like I would need to read the file while initializing the array in\n> > CreateBufferActionStatsCounters().\n>\n> Why would backends need to read that data back?\n>\n\nTo get totals across restarts, but, doesn't matter now that I am using\nstats collector.\n\n>\n> > diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> > index 55f6e3711d..96cac0a74e 100644\n> > --- a/src/backend/catalog/system_views.sql\n> > +++ b/src/backend/catalog/system_views.sql\n> > @@ -1067,9 +1067,6 @@ CREATE VIEW pg_stat_bgwriter AS\n> > pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> > pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> > pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> > - pg_stat_get_buf_written_backend() AS buffers_backend,\n> > - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> > - pg_stat_get_buf_alloc() AS buffers_alloc,\n> > pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n>\n> Material for a separate patch, not this. But if we're going to break\n> monitoring queries anyway, I think we should consider also renaming\n> maxwritten_clean (and perhaps a few others), because nobody understands what\n> that is supposed to mean.\n>\n>\n\nDo you mean I shouldn't remove anything from the pg_stat_bgwriter view?\n\n>\n> > @@ -1089,10 +1077,6 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)\n> >\n> > LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);\n> >\n> > - /* Count all backend writes regardless of if they fit in the queue */\n> > - if (!AmBackgroundWriterProcess())\n> > - CheckpointerShmem->num_backend_writes++;\n> > -\n> > /*\n> > * If the checkpointer isn't running or the request queue is full, the\n> > * backend will have to perform its own fsync request. 
But before forcing\n> > @@ -1106,8 +1090,10 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)\n> > * Count the subset of writes where backends have to do their own\n> > * fsync\n> > */\n> > + /* TODO: should we count fsyncs for all types of procs? */\n> > if (!AmBackgroundWriterProcess())\n> > - CheckpointerShmem->num_backend_fsync++;\n> > + pgstat_increment_buffer_action(BA_Fsync);\n> > +\n>\n> Yes, I think that'd make sense. Now that we can disambiguate the different\n> types of syncs between procs, I don't see a point of having a process-type\n> filter here. We just loose data...\n>\n>\n\nDone\n\n>\n> > /* don't set checksum for all-zero page */\n> > @@ -1229,11 +1234,60 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> > if (XLogNeedsFlush(lsn) &&\n> > StrategyRejectBuffer(strategy, buf))\n> > {\n> > + /*\n> > + * Unset the strat write flag, as we will not be writing\n> > + * this particular buffer from our ring out and may end\n> > + * up having to find a buffer from main shared buffers,\n> > + * which, if it is dirty, we may have to write out, which\n> > + * could have been prevented by checkpointing and background\n> > + * writing\n> > + */\n> > + StrategyUnChooseBufferFromRing(strategy);\n> > +\n> > /* Drop lock/pin and loop around for another buffer */\n> > LWLockRelease(BufferDescriptorGetContentLock(buf));\n> > UnpinBuffer(buf, true);\n> > continue;\n> > }\n>\n> Could we combine this with StrategyRejectBuffer()? 
It seems a bit wasteful to\n> have two function calls into freelist.c when the second happens exactly when\n> the first returns true?\n>\n>\n> > +\n> > + /*\n> > + * TODO: there is certainly a better way to write this\n> > + * logic\n> > + */\n> > +\n> > + /*\n> > + * The dirty buffer that will be written out was selected\n> > + * from the ring and we did not bother checking the\n> > + * freelist or doing a clock sweep to look for a clean\n> > + * buffer to use, thus, this write will be counted as a\n> > + * strategy write -- one that may be unnecessary without a\n> > + * strategy\n> > + */\n> > + if (StrategyIsBufferFromRing(strategy))\n> > + {\n> > + pgstat_increment_buffer_action(BA_Write_Strat);\n> > + }\n> > +\n> > + /*\n> > + * If the dirty buffer was one we grabbed from the\n> > + * freelist or through a clock sweep, it could have been\n> > + * written out by bgwriter or checkpointer, thus, we will\n> > + * count it as a regular write\n> > + */\n> > + else\n> > + pgstat_increment_buffer_action(BA_Write);\n>\n> It seems this would be better solved by having an \"bool *from_ring\" or\n> GetBufferSource* parameter to StrategyGetBuffer().\n>\n\nI've addressed both of these in the new version.\n\n>\n> > @@ -2895,6 +2948,20 @@ FlushBuffer(BufferDesc *buf, SMgrRelation reln)\n> > /*\n> > * bufToWrite is either the shared buffer or a copy, as appropriate.\n> > */\n> > +\n> > + /*\n> > + * TODO: consider that if we did not need to distinguish between a buffer\n> > + * flushed that was grabbed from the ring buffer and written out as part\n> > + * of a strategy which was not from main Shared Buffers (and thus\n> > + * preventable by bgwriter or checkpointer), then we could move all calls\n> > + * to pgstat_increment_buffer_action() here except for the one for\n> > + * extends, which would remain in ReadBuffer_common() before smgrextend()\n> > + * (unless we decide to start counting other extends). 
That includes the\n> > + * call to count buffers written by bgwriter and checkpointer which go\n> > + * through FlushBuffer() but not BufferAlloc(). That would make it\n> > + * simpler. Perhaps instead we can find somewhere else to indicate that\n> > + * the buffer is from the ring of buffers to reuse.\n> > + */\n> > smgrwrite(reln,\n> > buf->tag.forkNum,\n> > buf->tag.blockNum,\n>\n> Can we just add a parameter to FlushBuffer indicating what the source of the\n> write is?\n>\n\nI just noticed this comment now, so I'll address that in the next\nversion. I rebased today and noticed merge conflicts, so, it looks like\nv5 will be on its way soon anyway.\n\n>\n> > @@ -247,7 +257,7 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state)\n> > * the rate of buffer consumption. Note that buffers recycled by a\n> > * strategy object are intentionally not counted here.\n> > */\n> > - pg_atomic_fetch_add_u32(&StrategyControl->numBufferAllocs, 1);\n> > + pgstat_increment_buffer_action(BA_Alloc);\n> >\n> > /*\n> > * First check, without acquiring the lock, whether there's buffers in the\n>\n> > @@ -411,11 +421,6 @@ StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc)\n> > */\n> > *complete_passes += nextVictimBuffer / NBuffers;\n> > }\n> > -\n> > - if (num_buf_alloc)\n> > - {\n> > - *num_buf_alloc = pg_atomic_exchange_u32(&StrategyControl->numBufferAllocs, 0);\n> > - }\n> > SpinLockRelease(&StrategyControl->buffer_strategy_lock);\n> > return result;\n> > }\n>\n> Hm. Isn't bgwriter using the *num_buf_alloc value to pace its activity? I\n> suspect this patch shouldn't get rid of numBufferAllocs at the same time as\n> overhauling the stats stuff. 
Perhaps we don't need both - but it's not obvious\n> that that's the case / how we can make that work.\n>\n>\n\nI initially meant to add a function to the patch like\npg_stat_get_buffer_actions() but which took a BufferActionType and\nBackendType as parameters and returned a single value which is the\nnumber of buffer action types of that type for that type of backend.\n\nlet's say I defined it like this:\nuint64\n pg_stat_get_backend_buffer_actions_stats(BackendType backend_type,\n BufferActionType ba_type)\n\nThen, I intended to use that in StrategySyncStart() to set num_buf_alloc\nby subtracting the value of StrategyControl->numBufferAllocs from the\nvalue returned by pg_stat_get_backend_buffer_actions_stats(B_BG_WRITER,\nBA_Alloc), val, then adding that value, val, to\nStrategyControl->numBufferAllocs.\n\nI think that would have the same behavior as current, though I'm not\nsure if the performance would end up being better or worse. It wouldn't\nbe atomically incrementing StrategyControl->numBufferAllocs, but it\nwould do a few additional atomic operations in StrategySyncStart() than\nbefore. 
Also, we would do all the work done by\npg_stat_get_buffer_actions() in StrategySyncStart().\n\nBut that is called comparatively infrequently, right?\n\n>\n>\n> > +void\n> > +pgstat_increment_buffer_action(BufferActionType ba_type)\n> > +{\n> > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + if (!beentry || !pgstat_track_activities)\n> > + return;\n> > +\n> > + if (ba_type == BA_Alloc)\n> > + pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.allocs, 1);\n> > + else if (ba_type == BA_Extend)\n> > + pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.extends, 1);\n> > + else if (ba_type == BA_Fsync)\n> > + pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.fsyncs, 1);\n> > + else if (ba_type == BA_Write)\n> > + pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.writes, 1);\n> > + else if (ba_type == BA_Write_Strat)\n> > + pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.writes_strat, 1);\n> > +}\n>\n> I don't think we want to use atomic increments here - they're *slow*. And\n> there only ever can be a single writer to a backend's stats. So just doing\n> something like\n> pg_atomic_write_u64(&var, pg_atomic_read_u64(&var) + 1)\n> should do the trick.\n>\n\nDone\n\n>\n> > +/*\n> > + * Called for a single backend at the time of death to persist its I/O stats\n> > + */\n> > +void\n> > +pgstat_record_dead_backend_buffer_actions(void)\n> > +{\n> > + volatile PgBackendBufferActionStats *ba_stats;\n> > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + if (beentry->st_procpid != 0)\n> > + return;\n> > +\n> > + // TODO: is this correct? could there be a data race? 
do I need a lock?\n> > + ba_stats = &BufferActionStatsArray[beentry->st_backendType];\n> > + pg_atomic_add_fetch_u64(&ba_stats->allocs, pg_atomic_read_u64(&beentry->buffer_action_stats.allocs));\n> > + pg_atomic_add_fetch_u64(&ba_stats->extends, pg_atomic_read_u64(&beentry->buffer_action_stats.extends));\n> > + pg_atomic_add_fetch_u64(&ba_stats->fsyncs, pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs));\n> > + pg_atomic_add_fetch_u64(&ba_stats->writes, pg_atomic_read_u64(&beentry->buffer_action_stats.writes));\n> > + pg_atomic_add_fetch_u64(&ba_stats->writes_strat, pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat));\n> > +}\n>\n> I don't see a race, FWIW.\n>\n> This is where I propose that we instead report the values up to the stats\n> collector, instead of having a separate array that we need to persist\n>\n\nChanged\n\n>\n> > +/*\n> > + * Fill the provided values array with the accumulated counts of buffer actions\n> > + * taken by all backends of type backend_type (input parameter), both alive and\n> > + * dead. 
This is currently only used by pg_stat_get_buffer_actions() to create\n> > + * the rows in the pg_stat_buffer_actions system view.\n> > + */\n> > +void\n> > +pgstat_recount_all_buffer_actions(BackendType backend_type, Datum *values)\n> > +{\n> > + int i;\n> > + volatile PgBackendStatus *beentry;\n> > +\n> > + /*\n> > + * Add stats from all exited backends\n> > + */\n> > + values[BA_Alloc] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].allocs);\n> > + values[BA_Extend] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].extends);\n> > + values[BA_Fsync] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].fsyncs);\n> > + values[BA_Write] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].writes);\n> > + values[BA_Write_Strat] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].writes_strat);\n> > +\n> > + /*\n> > + * Loop through all live backends and count their buffer actions\n> > + */\n> > + // TODO: see note in pg_stat_get_buffer_actions() about inefficiency of this method\n> > +\n> > + beentry = BackendStatusArray;\n> > + for (i = 1; i <= MaxBackends; i++)\n> > + {\n> > + /* Don't count dead backends. 
They should already be counted */\n> > + if (beentry->st_procpid == 0)\n> > + continue;\n> > + if (beentry->st_backendType != backend_type)\n> > + continue;\n> > +\n> > + values[BA_Alloc] += pg_atomic_read_u64(&beentry->buffer_action_stats.allocs);\n> > + values[BA_Extend] += pg_atomic_read_u64(&beentry->buffer_action_stats.extends);\n> > + values[BA_Fsync] += pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs);\n> > + values[BA_Write] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes);\n> > + values[BA_Write_Strat] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat);\n> > +\n> > + beentry++;\n> > + }\n> > +}\n>\n> It seems to make a bit more sense to have this sum up the stats for all\n> backend types at once.\n\nChanged.\n\n>\n> > + /*\n> > + * Currently, the only supported backend types for stats are the following.\n> > + * If this were to change, pg_proc.dat would need to be changed as well\n> > + * to reflect the new expected number of rows.\n> > + */\n> > + Datum values[BUFFER_ACTION_NUM_TYPES];\n> > + bool nulls[BUFFER_ACTION_NUM_TYPES];\n>\n> Ah ;)\n>\n\nI just went ahead and made a row for each backend type.\n\n- Melanie",
"msg_date": "Wed, 11 Aug 2021 16:11:34 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 4:11 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 2:13 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > @@ -2895,6 +2948,20 @@ FlushBuffer(BufferDesc *buf, SMgrRelation reln)\n> > > /*\n> > > * bufToWrite is either the shared buffer or a copy, as appropriate.\n> > > */\n> > > +\n> > > + /*\n> > > + * TODO: consider that if we did not need to distinguish between a buffer\n> > > + * flushed that was grabbed from the ring buffer and written out as part\n> > > + * of a strategy which was not from main Shared Buffers (and thus\n> > > + * preventable by bgwriter or checkpointer), then we could move all calls\n> > > + * to pgstat_increment_buffer_action() here except for the one for\n> > > + * extends, which would remain in ReadBuffer_common() before smgrextend()\n> > > + * (unless we decide to start counting other extends). That includes the\n> > > + * call to count buffers written by bgwriter and checkpointer which go\n> > > + * through FlushBuffer() but not BufferAlloc(). That would make it\n> > > + * simpler. Perhaps instead we can find somewhere else to indicate that\n> > > + * the buffer is from the ring of buffers to reuse.\n> > > + */\n> > > smgrwrite(reln,\n> > > buf->tag.forkNum,\n> > > buf->tag.blockNum,\n> >\n> > Can we just add a parameter to FlushBuffer indicating what the source of the\n> > write is?\n> >\n>\n> I just noticed this comment now, so I'll address that in the next\n> version. I rebased today and noticed merge conflicts, so, it looks like\n> v5 will be on its way soon anyway.\n>\n\nActually, after moving the code around like you suggested, calling\npgstat_increment_buffer_action() before smgrwrite() in FlushBuffer() and\nusing a parameter to indicate if it is a strategy write or not would\nonly save us one other call to pgstat_increment_buffer_action() -- the\none in SyncOneBuffer(). 
We would end up moving the one in BufferAlloc()\nto FlushBuffer() and removing the one in SyncOneBuffer().\nDo you think it is still worth it?\n\nRebased v5 attached.",
"msg_date": "Wed, 11 Aug 2021 18:00:40 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-11 16:11:34 -0400, Melanie Plageman wrote:\n> On Tue, Aug 3, 2021 at 2:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Also, I'm unsure how writing the buffer action stats out in\n> > > pgstat_write_statsfiles() will work, since I think that backends can\n> > > update their buffer action stats after we would have already persisted\n> > > the data from the BufferActionStatsArray -- causing us to lose those\n> > > updates.\n> >\n> > I was thinking it'd work differently. Whenever a connection ends, it reports\n> > its data up to pgstats.c (otherwise we'd loose those stats). By the time\n> > shutdown happens, they all need to have already have reported their stats - so\n> > we don't need to do anything to get the data to pgstats.c during shutdown\n> > time.\n> >\n> \n> When you say \"whenever a connection ends\", what part of the code are you\n> referring to specifically?\n\npgstat_beshutdown_hook()\n\n\n> Also, when you say \"shutdown\", do you mean a backend shutting down or\n> all backends shutting down (including postmaster) -- like pg_ctl stop?\n\nAdmittedly our language is very imprecise around this :(. What I meant\nis that backends would report their own stats up to the stats collector\nwhen the connection ends (in pgstat_beshutdown_hook()). 
That means that\nwhen the whole server (pgstat and then postmaster, potentially via\npg_ctl stop) shuts down, all the per-connection stats have already been\nreported up to pgstat.\n\n\n> > > diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> > > index 55f6e3711d..96cac0a74e 100644\n> > > --- a/src/backend/catalog/system_views.sql\n> > > +++ b/src/backend/catalog/system_views.sql\n> > > @@ -1067,9 +1067,6 @@ CREATE VIEW pg_stat_bgwriter AS\n> > > pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> > > pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> > > pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> > > - pg_stat_get_buf_written_backend() AS buffers_backend,\n> > > - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> > > - pg_stat_get_buf_alloc() AS buffers_alloc,\n> > > pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n> >\n> > Material for a separate patch, not this. But if we're going to break\n> > monitoring queries anyway, I think we should consider also renaming\n> > maxwritten_clean (and perhaps a few others), because nobody understands what\n> > that is supposed to mean.\n\n> Do you mean I shouldn't remove anything from the pg_stat_bgwriter view?\n\nNo - I just meant that now that we're breaking pg_stat_bgwriter queries,\nwe should also rename the columns to be easier to understand. But that\nit should be a separate patch / commit...\n\n\n> > > @@ -411,11 +421,6 @@ StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc)\n> > > */\n> > > *complete_passes += nextVictimBuffer / NBuffers;\n> > > }\n> > > -\n> > > - if (num_buf_alloc)\n> > > - {\n> > > - *num_buf_alloc = pg_atomic_exchange_u32(&StrategyControl->numBufferAllocs, 0);\n> > > - }\n> > > SpinLockRelease(&StrategyControl->buffer_strategy_lock);\n> > > return result;\n> > > }\n> >\n> > Hm. Isn't bgwriter using the *num_buf_alloc value to pace its activity? 
I\n> > suspect this patch shouldn't get rid of numBufferAllocs at the same time as\n> > overhauling the stats stuff. Perhaps we don't need both - but it's not obvious\n> > that that's the case / how we can make that work.\n> >\n> >\n> \n> I initially meant to add a function to the patch like\n> pg_stat_get_buffer_actions() but which took a BufferActionType and\n> BackendType as parameters and returned a single value which is the\n> number of buffer action types of that type for that type of backend.\n> \n> let's say I defined it like this:\n> uint64\n> pg_stat_get_backend_buffer_actions_stats(BackendType backend_type,\n> BufferActionType ba_type)\n> \n> Then, I intended to use that in StrategySyncStart() to set num_buf_alloc\n> by subtracting the value of StrategyControl->numBufferAllocs from the\n> value returned by pg_stat_get_backend_buffer_actions_stats(B_BG_WRITER,\n> BA_Alloc), val, then adding that value, val, to\n> StrategyControl->numBufferAllocs.\n\nI don't think you could restrict this to B_BG_WRITER? The whole point of\nthis logic is that bgwriter uses the stats for *all* backends to get the\n\"usage rate\" for buffers, which it then uses to control how many buffers\nto clean.\n\n\n> I think that would have the same behavior as current, though I'm not\n> sure if the performance would end up being better or worse. It wouldn't\n> be atomically incrementing StrategyControl->numBufferAllocs, but it\n> would do a few additional atomic operations in StrategySyncStart() than\n> before. Also, we would do all the work done by\n> pg_stat_get_buffer_actions() in StrategySyncStart().\n\nI think it'd be better to separate changing the bgwriter pacing logic\n(and thus numBufferAllocs) from changing the stats reporting.\n\n\n> But that is called comparatively infrequently, right?\n\nDepending on the workload not that rarely. I'm afraid this might be a\nbit too expensive. It's possible we can work around that however.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Aug 2021 03:08:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 3:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-08-11 16:11:34 -0400, Melanie Plageman wrote:\n> > On Tue, Aug 3, 2021 at 2:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> > > > index 55f6e3711d..96cac0a74e 100644\n> > > > --- a/src/backend/catalog/system_views.sql\n> > > > +++ b/src/backend/catalog/system_views.sql\n> > > > @@ -1067,9 +1067,6 @@ CREATE VIEW pg_stat_bgwriter AS\n> > > > pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> > > > pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> > > > pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> > > > - pg_stat_get_buf_written_backend() AS buffers_backend,\n> > > > - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> > > > - pg_stat_get_buf_alloc() AS buffers_alloc,\n> > > > pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n> > >\n> > > Material for a separate patch, not this. But if we're going to break\n> > > monitoring queries anyway, I think we should consider also renaming\n> > > maxwritten_clean (and perhaps a few others), because nobody understands what\n> > > that is supposed to mean.\n>\n> > Do you mean I shouldn't remove anything from the pg_stat_bgwriter view?\n>\n> No - I just meant that now that we're breaking pg_stat_bgwriter queries,\n> we should also rename the columns to be easier to understand. 
But that\n> it should be a separate patch / commit...\n>\n\nI separated the removal of some redundant stats from pg_stat_bgwriter\ninto a different commit but haven't removed or clarified any additional\ncolumns in pg_stat_bgwriter.\n\n>\n>\n> > > > @@ -411,11 +421,6 @@ StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc)\n> > > > */\n> > > > *complete_passes += nextVictimBuffer / NBuffers;\n> > > > }\n> > > > -\n> > > > - if (num_buf_alloc)\n> > > > - {\n> > > > - *num_buf_alloc = pg_atomic_exchange_u32(&StrategyControl->numBufferAllocs, 0);\n> > > > - }\n> > > > SpinLockRelease(&StrategyControl->buffer_strategy_lock);\n> > > > return result;\n> > > > }\n> > >\n> > > Hm. Isn't bgwriter using the *num_buf_alloc value to pace its activity? I\n> > > suspect this patch shouldn't get rid of numBufferAllocs at the same time as\n> > > overhauling the stats stuff. Perhaps we don't need both - but it's not obvious\n> > > that that's the case / how we can make that work.\n> > >\n> > >\n> >\n> > I initially meant to add a function to the patch like\n> > pg_stat_get_buffer_actions() but which took a BufferActionType and\n> > BackendType as parameters and returned a single value which is the\n> > number of buffer action types of that type for that type of backend.\n> >\n> > let's say I defined it like this:\n> > uint64\n> > pg_stat_get_backend_buffer_actions_stats(BackendType backend_type,\n> > BufferActionType ba_type)\n> >\n> > Then, I intended to use that in StrategySyncStart() to set num_buf_alloc\n> > by subtracting the value of StrategyControl->numBufferAllocs from the\n> > value returned by pg_stat_get_backend_buffer_actions_stats(B_BG_WRITER,\n> > BA_Alloc), val, then adding that value, val, to\n> > StrategyControl->numBufferAllocs.\n>\n> I don't think you could restrict this to B_BG_WRITER? 
The whole point of\n> this logic is that bgwriter uses the stats for *all* backends to get the\n> \"usage rate\" for buffers, which it then uses to control how many buffers\n> to clean.\n>\n>\n> > I think that would have the same behavior as current, though I'm not\n> > sure if the performance would end up being better or worse. It wouldn't\n> > be atomically incrementing StrategyControl->numBufferAllocs, but it\n> > would do a few additional atomic operations in StrategySyncStart() than\n> > before. Also, we would do all the work done by\n> > pg_stat_get_buffer_actions() in StrategySyncStart().\n>\n> I think it'd be better to separate changing the bgwriter pacing logic\n> (and thus numBufferAllocs) from changing the stats reporting.\n>\n>\n> > But that is called comparatively infrequently, right?\n>\n> Depending on the workload not that rarely. I'm afraid this might be a\n> bit too expensive. It's possible we can work around that however.\n>\n\nI've restored StrategyControl->numBuffersAlloc.\n\nAttached is v6 of the patchset.\n\nI have made several small updates to the patch, including user docs\nupdates, comment clarifications, various changes related to how\nstructures are initialized, code simplications, small details like\nalphabetizing of #includes, etc.\n\nBelow are details on the remaining TODOs and open questions for this\npatch and why I haven't done them yet:\n\n1) performance testing (initial tests done, but need to do some further\ninvestigation before sharing)\n\n2) stats_reset\nBecause pg_stat_buffer_actions fields were added to the globalStats\nstructure, they get reset when the target RESET_BGWRITER is reset.\nDepending on whether or not these commits remove columns from the\npg_stat_bgwriter view, I would approach adding stats_reset to\npg_stat_buffer_actions differently. If removing all of pg_stat_bgwriter,\nI would just rename the target to apply to pg_stat_buffer_actions. 
If\nnot removing all of pg_stat_bgwriter, I would add a new target for\npg_stat_buffer_actions to reset those stats and then either remove them\nfrom globalStats or MemSet() only the relevant parts of the struct in\npgstat_recv_resetsharedcounter().\nI haven't done this yet because I want to get input on what should\nhappen to pg_stat_bgwriter first (all of it goes, all of it stays, some\ngoes, etc).\n\n3) what to count\nCurrently, the patch counts allocs, extends, fsyncs and writes of shared\nbuffers and writes done when using a buffer access strategy. So, it is a\nmix of mostly shared buffers and a few non-shared buffers. I am\nwondering if it makes sense to also count extends with smgrextend()\nother than those using shared buffers--for example when building an\nindex or when extending the free space map or visibility map. For\nfsyncs, the patch does not count checkpointer fsyncs or fsyncs done from\nXLogWrite().\nOn a related note, depending on what the view counts, the name\nbuffer_actions may or may not be too general.\n\nI also feel like the BackendType B_BACKEND is a bit confusing when we\nare tracking buffer actions for different backend types -- this name\nmakes it seem like other types of backends are not backends.\n\nI'm not sure what the view should track and can see arguments for\nexcluding certain extends or separating them into another stat. 
I\nhaven't made the changes because I am looking for other people's\nopinions.\n\n4) Adding some sort of protection against regressions when code is added\nthat adds additional buffer actions but doesn't count them -- more\nlikely if we are counting all users of smgrextend() but not doing the\ncounter incrementing there.\n\nI'm not sure how I would even do this, so, that's why I haven't done it.\n\n5) It seems like the code to create a tuplestore used by various stats\nfunctions like pg_stat_get_progress_info(), pg_stat_get_activity, and\npg_stat_get_slru could be refactored into a helper function since it is\nquite redundant (maybe returning a ReturnSetInfo).\n\nI haven't done this because I wasn't sure if it was a good idea, and, if\nit is, if I should do it in a separate commit.\n\n6) Cleaning up of commit message, running pgindent, and, eventually,\ncatalog bump (waiting until the patch is done to do this).\n\n7) Additional testing to ensure all codepaths added are hit (one-off\ntesting, not added to regression test suite). I am waiting to do this\nuntil all of the types of buffer actions that will be done are\nfinalized.\n\n- Melanie",
"msg_date": "Tue, 7 Sep 2021 13:16:28 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 3:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-08-11 16:11:34 -0400, Melanie Plageman wrote:\n> > On Tue, Aug 3, 2021 at 2:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > Also, I'm unsure how writing the buffer action stats out in\n> > > > pgstat_write_statsfiles() will work, since I think that backends can\n> > > > update their buffer action stats after we would have already persisted\n> > > > the data from the BufferActionStatsArray -- causing us to lose those\n> > > > updates.\n> > >\n> > > I was thinking it'd work differently. Whenever a connection ends, it reports\n> > > its data up to pgstats.c (otherwise we'd loose those stats). By the time\n> > > shutdown happens, they all need to have already have reported their stats - so\n> > > we don't need to do anything to get the data to pgstats.c during shutdown\n> > > time.\n> > >\n> >\n> > When you say \"whenever a connection ends\", what part of the code are you\n> > referring to specifically?\n>\n> pgstat_beshutdown_hook()\n>\n>\n> > Also, when you say \"shutdown\", do you mean a backend shutting down or\n> > all backends shutting down (including postmaster) -- like pg_ctl stop?\n>\n> Admittedly our language is very imprecise around this :(. What I meant\n> is that backends would report their own stats up to the stats collector\n> when the connection ends (in pgstat_beshutdown_hook()). That means that\n> when the whole server (pgstat and then postmaster, potentially via\n> pg_ctl stop) shuts down, all the per-connection stats have already been\n> reported up to pgstat.\n>\n\nSo, I realized that the patch has a problem. 
I added the code to send\nbuffer actions stats to the stats collector\n(pgstat_send_buffer_actions()) to pgstat_report_stat() and this isn't\ngetting called when all types of backends exit.\n\nI originally thought to add pgstat_send_buffer_actions() to\npgstat_beshutdown_hook() (as suggested), but, this is called after\npgstat_shutdown_hook(), so, we aren't able to send stats to the stats\ncollector at that time. (pgstat_shutdown_hook() sets pgstat_is_shutdown\nto true and then in pgstat_beshutdown_hook() (called after), if we call\npgstat_send_buffer_actions(), it calls pgstat_send() which calls\npgstat_assert_is_up() which trips when pgstat_is_shutdown is true.)\n\nAfter calling pgstat_send_buffer_actions() from pgstat_report_stat(), it\nseems to miss checkpointer stats entirely. I did find that if I\nsprinkled pgstat_send_buffer_actions() around in the various places that\npgstat_send_checkpointer() is called, I could get checkpointer stats\n(see attached patch, capture_checkpointer_buffer_actions.patch), but,\nthat seems a little bit haphazard since pgstat_send_buffer_actions() is\nsupposed to capture stats for all backend types. Is there somewhere else\nI can call it that is exercised by all backend types before\npgstat_shutdown_hook() is called but after they would have finished any\nrelevant buffer actions?\n\n- Melanie",
"msg_date": "Wed, 8 Sep 2021 18:28:38 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Sep 8, 2021 at 9:28 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Fri, Aug 13, 2021 at 3:08 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2021-08-11 16:11:34 -0400, Melanie Plageman wrote:\n> > > On Tue, Aug 3, 2021 at 2:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > Also, I'm unsure how writing the buffer action stats out in\n> > > > > pgstat_write_statsfiles() will work, since I think that backends can\n> > > > > update their buffer action stats after we would have already persisted\n> > > > > the data from the BufferActionStatsArray -- causing us to lose those\n> > > > > updates.\n> > > >\n> > > > I was thinking it'd work differently. Whenever a connection ends, it reports\n> > > > its data up to pgstats.c (otherwise we'd loose those stats). By the time\n> > > > shutdown happens, they all need to have already have reported their stats - so\n> > > > we don't need to do anything to get the data to pgstats.c during shutdown\n> > > > time.\n> > > >\n> > >\n> > > When you say \"whenever a connection ends\", what part of the code are you\n> > > referring to specifically?\n> >\n> > pgstat_beshutdown_hook()\n> >\n> >\n> > > Also, when you say \"shutdown\", do you mean a backend shutting down or\n> > > all backends shutting down (including postmaster) -- like pg_ctl stop?\n> >\n> > Admittedly our language is very imprecise around this :(. What I meant\n> > is that backends would report their own stats up to the stats collector\n> > when the connection ends (in pgstat_beshutdown_hook()). That means that\n> > when the whole server (pgstat and then postmaster, potentially via\n> > pg_ctl stop) shuts down, all the per-connection stats have already been\n> > reported up to pgstat.\n> >\n>\n> So, I realized that the patch has a problem. 
I added the code to send\n> buffer actions stats to the stats collector\n> (pgstat_send_buffer_actions()) to pgstat_report_stat() and this isn't\n> getting called when all types of backends exit.\n>\n> I originally thought to add pgstat_send_buffer_actions() to\n> pgstat_beshutdown_hook() (as suggested), but, this is called after\n> pgstat_shutdown_hook(), so, we aren't able to send stats to the stats\n> collector at that time. (pgstat_shutdown_hook() sets pgstat_is_shutdown\n> to true and then in pgstat_beshutdown_hook() (called after), if we call\n> pgstat_send_buffer_actions(), it calls pgstat_send() which calls\n> pgstat_assert_is_up() which trips when pgstat_is_shutdown is true.)\n>\n> After calling pgstat_send_buffer_actions() from pgstat_report_stat(), it\n> seems to miss checkpointer stats entirely. I did find that if I\n> sprinkled pgstat_send_buffer_actions() around in the various places that\n> pgstat_send_checkpointer() is called, I could get checkpointer stats\n> (see attached patch, capture_checkpointer_buffer_actions.patch), but,\n> that seems a little bit haphazard since pgstat_send_buffer_actions() is\n> supposed to capture stats for all backend types. Is there somewhere else\n> I can call it that is exercised by all backend types before\n> pgstat_shutdown_hook() is called but after they would have finished any\n> relevant buffer actions?\n>\n\nI realized that putting these additional calls in checkpointer code and\nnot clearing out PgBackendStatus counters for buffer actions results in\na lot of duplicate stats. I was wondering, however, whether\npgstat_send_buffer_actions() is needed in\nHandleCheckpointerInterrupts() before the proc_exit().\n\nIt does seem like additional calls to pgstat_send_buffer_actions()\nshouldn't be needed since most processes register\npgstat_shutdown_hook(). 
However, since MyDatabaseId isn't valid for the\nauxiliary processes, even though the pgstat_shutdown_hook() is\nregistered from BaseInit(), pgstat_report_stat() never gets called for\nthem, so their stats aren't persisted using the current method.\n\nIt seems like the best solution to persisting all processes' stats would\nbe to have all processes register pgstat_shutdown_hook() and to still\ncall pgstat_report_stat() even if MyDatabaseId is not valid if the\nprocess is not a regular backend (I assume that it is only a problem\nthat MyDatabaseId is InvalidOid for backends that have had it set to a\nvalid oid at some point). For the stats that rely on database OID,\nperhaps those can be reported based on whether or not MyDatabaseId is\nvalid from within pgstat_report_stat().\n\nI also realized that I am not collecting stats from live auxiliary\nprocesses in pg_stat_get_buffer_actions(). I need to change the loop to\nfor (i = 0; i <= MaxBackends + NUM_AUXPROCTYPES; i++) to actually get\nstats from live auxiliary processes when querying the view.\n\nOn an unrelated note, I am planning to remove buffers_clean and\nbuffers_checkpoint from the pg_stat_bgwriter view since those are also\nredundant. When I was removing them, I noticed that buffers_checkpoint\nand buffers_clean count buffers as having been written even when\nFlushBuffer() \"does nothing\" because someone else wrote out the dirty\nbuffer before the bgwriter or checkpointer had a chance to do it. This\nseems like it would result in an incorrect count. Am I missing\nsomething?\n\n- Melanie\n\n\n",
"msg_date": "Fri, 10 Sep 2021 18:16:28 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nI've attached the v7 patch set.\n\nChanges from v6:\n- removed unnecessary global variable BufferActionsStats\n- fixed the loop condition in pg_stat_get_buffer_actions()\n- updated some comments\n- removed buffers_checkpoint and buffers_clean from pg_stat_bgwriter\n view (now pg_stat_bgwriter view is mainly checkpointer statistics,\n which isn't great)\n- instead of calling pgstat_send_buffer_actions() in\n pgstat_report_stat(), I renamed pgstat_send_buffer_actions() to\n pgstat_report_buffers() and call it directly from\n pgstat_shutdown_hook() for all types of processes (including processes\n with invalid MyDatabaseId [like auxiliary processes])\n\nI began changing the code to add the stats reset timestamp to the\npg_stat_buffer_actions view, but, I realized that it will be kind of\ndistracting to have every row for every backend type have a stats reset\ntimestamp (since it will be the same timestamp over and over). If,\nhowever, you could reset buffer stats for each backend type\nindividually, then, I could see having it. Otherwise, we could add a\nfunction like pg_stat_get_stats_reset_time(viewname) where viewname\nwould be pg_stat_buffer_actions in our case. Though, maybe that is\nannoying and not very usable--I'm not sure.\n\nI also think it makes sense to rename the pg_stat_buffer_actions view to\npg_stat_buffers and to name the columns using both the buffer action\ntype and buffer type -- e.g. shared, strategy, local. This leaves open\nthe possibility of counting buffer actions done on other non-shared\nbuffers -- like those done while building indexes or those using local\nbuffers. 
The third patch in the set does this (I wanted to see if it\nmade sense before fixing it up into the first patch in the set).\n\nThis naming convention (BufferType_BufferActionType) made me think that\nit might make sense to have two enumerations: one being the current\nBufferActionType (which could also be called BufferAccessType though\nthat might get confusing with BufferAccessStrategyType and buffer access\nstrategies in general) and the other being BufferType (which would be\none of shared, local, index, etc).\n\nI attached a patch with the outline of this idea\n(buffer_type_enum_addition.patch). It doesn't work because\npg_stat_get_buffer_actions() uses the BufferActionType as an index into\nthe values array returned. If I wanted to use a combination of the two\nenums as an indexing mechanism (BufferActionType and BufferType), we\nwould end up with a tuple having every combination of the two\nenums--some of which aren't valid. It might not make sense to implement\nthis. I do think it is useful to think of these stats as a combination\nof a buffer action and a type of buffer.\n\n- Melanie",
"msg_date": "Mon, 13 Sep 2021 17:46:02 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hello Melanie\n\nOn 2021-Sep-13, Melanie Plageman wrote:\n\n> I also think it makes sense to rename the pg_stat_buffer_actions view to\n> pg_stat_buffers and to name the columns using both the buffer action\n> type and buffer type -- e.g. shared, strategy, local. This leaves open\n> the possibility of counting buffer actions done on other non-shared\n> buffers -- like those done while building indexes or those using local\n> buffers. The third patch in the set does this (I wanted to see if it\n> made sense before fixing it up into the first patch in the set).\n\nWhat do you think of the idea of having the \"shared/strategy/local\"\nattribute be a column? So you'd have up to three rows per buffer action\ntype. Users wishing to see an aggregate can just aggregate them, just\nlike they'd do with pg_buffercache. I think that leads to an easy\ndecision with regards to this point:\n\n> I attached a patch with the outline of this idea\n> (buffer_type_enum_addition.patch). It doesn't work because\n> pg_stat_get_buffer_actions() uses the BufferActionType as an index into\n> the values array returned. If I wanted to use a combination of the two\n> enums as an indexing mechanism (BufferActionType and BufferType), we\n> would end up with a tuple having every combination of the two\n> enums--some of which aren't valid. It might not make sense to implement\n> this. I do think it is useful to think of these stats as a combination\n> of a buffer action and a type of buffer.\n\nDoes that seem sensible?\n\n\n(It's weird to have enum values that are there just to indicate what's\nthe maximum value. I think that sort of thing is better done by having\na \"#define LAST_THING\" that takes the last valid value from the enum.\nThat would free you from having to handle the last value in switch\nblocks, for example. 
LAST_OCLASS in dependency.h is a precedent on this.)\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\" (Paul Thomas)\n\n\n",
"msg_date": "Tue, 14 Sep 2021 22:30:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Sep 14, 2021 at 9:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Sep-13, Melanie Plageman wrote:\n>\n> > I also think it makes sense to rename the pg_stat_buffer_actions view to\n> > pg_stat_buffers and to name the columns using both the buffer action\n> > type and buffer type -- e.g. shared, strategy, local. This leaves open\n> > the possibility of counting buffer actions done on other non-shared\n> > buffers -- like those done while building indexes or those using local\n> > buffers. The third patch in the set does this (I wanted to see if it\n> > made sense before fixing it up into the first patch in the set).\n>\n> What do you think of the idea of having the \"shared/strategy/local\"\n> attribute be a column? So you'd have up to three rows per buffer action\n> type. Users wishing to see an aggregate can just aggregate them, just\n> like they'd do with pg_buffercache. I think that leads to an easy\n> decision with regards to this point:\n\nI have rewritten the code to implement this.\n\n>\n>\n> (It's weird to have enum values that are there just to indicate what's\n> the maximum value. I think that sort of thing is better done by having\n> a \"#define LAST_THING\" that takes the last valid value from the enum.\n> That would free you from having to handle the last value in switch\n> blocks, for example. LAST_OCLASS in dependency.h is a precedent on this.)\n>\n\nI have made this change.\n\nThe attached v8 patchset is rewritten to add in an additional dimension\n-- buffer type. Now, a backend keeps track of how many buffers of a\nparticular type (e.g. shared, local) it has accessed in a particular way\n(e.g. alloc, write). It also changes the naming of various structures\nand the view members.\n\nPreviously, stats reset did not work since it did not consider live\nbackends' counters. 
Now, the reset message includes the current live\nbackends' counters to be tracked by the stats collector and used when\nthe view is queried.\n\nThe reset message is one of the areas in which I still need to do some\nwork -- I shoved the array of PgBufferAccesses into the existing reset\nmessage used for checkpointer, bgwriter, etc. Before making a new type\nof message, I would like feedback from a reviewer about the approach.\n\nThere are various TODOs in the code which are actually questions for the\nreviewer. Once I have some feedback, it will be easier to address these\nitems.\n\nThere a few other items which may be material for other commits that\nI would also like to do:\n1) write wrapper functions for smgr* functions which count buffer\naccesses of the appropriate type. I wasn't sure if these should\nliterally just take all the parameters that the smgr* functions take +\nbuffer type. Once these exist, there will be less possibility for\nregressions in which new code is added using smgr* functions without\ncounting this buffer activity. Once I add these, I was going to go\nthrough and replace existing calls to smgr* functions and thereby start\ncounting currently uncounted buffer type accesses (direct, local, etc).\n\n2) Separate checkpointer and bgwriter into two views and add additional\nstats to the bgwriter view.\n\n3) Consider adding a helper function to pgstatfuncs.c to help create the\ntuplestore. These functions all have quite a few lines which are exactly\nthe same, and I thought it might be nice to do something about that:\n pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n pg_stat_get_activity(PG_FUNCTION_ARGS)\n pg_stat_get_buffers_accesses(PG_FUNCTION_ARGS)\n pg_stat_get_slru(PG_FUNCTION_ARGS)\n pg_stat_get_progress_info(PG_FUNCTION_ARGS)\nI can imagine a function that takes a Datums array, a nulls array, and a\nResultSetInfo and then makes the tuplestore -- though I think that will\nuse more memory. 
Perhaps we could make a macro which does the initial\nerror checking (checking if caller supports returning a tuplestore)? I'm\nnot sure if there is something meaningful here, but I thought I would\nask.\n\nFinally, I haven't removed the test in pg_stats and haven't done a final\npass for comment clarity, alphabetization, etc on this version.\n\n- Melanie",
"msg_date": "Thu, 23 Sep 2021 17:05:07 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
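The smgr* wrapper idea in item 1 of the message above can be sketched roughly as follows. This is a hypothetical, simplified analogue only: the real smgrwrite() takes a relation, fork number, block number, and buffer, all elided here, and the names `counted_smgrwrite`, `smgr_write_raw`, and `write_counts` are invented for illustration.

```c
#include <stdint.h>

/* Funnel writes through a wrapper that bumps the per-path counter, so
 * new call sites cannot forget the bookkeeping. Sketch only; names and
 * signatures are illustrative, not the actual PostgreSQL API. */
typedef enum IOPath { IOPATH_SHARED, IOPATH_LOCAL, IOPATH_NUM_TYPES } IOPath;

static uint64_t write_counts[IOPATH_NUM_TYPES];

static void smgr_write_raw(void)
{
    /* stand-in for the underlying smgrwrite() call */
}

static void counted_smgrwrite(IOPath path)
{
    write_counts[path]++;       /* account for the write first */
    smgr_write_raw();           /* then perform it */
}
```

The point of the design is that once all callers go through the wrapper, the accounting cannot silently drift out of sync with the actual I/O.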
{
"msg_contents": "On Thu, Sep 23, 2021 at 5:05 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> The attached v8 patchset is rewritten to add in an additional dimension\n> -- buffer type. Now, a backend keeps track of how many buffers of a\n> particular type (e.g. shared, local) it has accessed in a particular way\n> (e.g. alloc, write). It also changes the naming of various structures\n> and the view members.\n>\n> Previously, stats reset did not work since it did not consider live\n> backends' counters. Now, the reset message includes the current live\n> backends' counters to be tracked by the stats collector and used when\n> the view is queried.\n>\n> The reset message is one of the areas in which I still need to do some\n> work -- I shoved the array of PgBufferAccesses into the existing reset\n> message used for checkpointer, bgwriter, etc. Before making a new type\n> of message, I would like feedback from a reviewer about the approach.\n>\n> There are various TODOs in the code which are actually questions for the\n> reviewer. Once I have some feedback, it will be easier to address these\n> items.\n>\n> There a few other items which may be material for other commits that\n> I would also like to do:\n> 1) write wrapper functions for smgr* functions which count buffer\n> accesses of the appropriate type. I wasn't sure if these should\n> literally just take all the parameters that the smgr* functions take +\n> buffer type. Once these exist, there will be less possibility for\n> regressions in which new code is added using smgr* functions without\n> counting this buffer activity. 
Once I add these, I was going to go\n> through and replace existing calls to smgr* functions and thereby start\n> counting currently uncounted buffer type accesses (direct, local, etc).\n>\n> 2) Separate checkpointer and bgwriter into two views and add additional\n> stats to the bgwriter view.\n>\n> 3) Consider adding a helper function to pgstatfuncs.c to help create the\n> tuplestore. These functions all have quite a few lines which are exactly\n> the same, and I thought it might be nice to do something about that:\n> pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n> pg_stat_get_activity(PG_FUNCTION_ARGS)\n> pg_stat_get_buffers_accesses(PG_FUNCTION_ARGS)\n> pg_stat_get_slru(PG_FUNCTION_ARGS)\n> pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n> I can imagine a function that takes a Datums array, a nulls array, and a\n> ResultSetInfo and then makes the tuplestore -- though I think that will\n> use more memory. Perhaps we could make a macro which does the initial\n> error checking (checking if caller supports returning a tuplestore)? I'm\n> not sure if there is something meaningful here, but I thought I would\n> ask.\n>\n> Finally, I haven't removed the test in pg_stats and haven't done a final\n> pass for comment clarity, alphabetization, etc on this version.\n>\n\nI have addressed almost all of the issues mentioned above in v9.\nThe only remaining TODOs are described in the commit message.\nThe most critical one is that the reset message doesn't work.",
"msg_date": "Fri, 24 Sep 2021 17:58:48 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 5:58 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Sep 23, 2021 at 5:05 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> The only remaining TODOs are described in the commit message.\n> most critical one is that the reset message doesn't work.\n\nv10 is attached with updated comments and some limited code refactoring.",
"msg_date": "Mon, 27 Sep 2021 14:58:53 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 2:58 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Fri, Sep 24, 2021 at 5:58 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Thu, Sep 23, 2021 at 5:05 PM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > The only remaining TODOs are described in the commit message.\n> > most critical one is that the reset message doesn't work.\n>\n> v10 is attached with updated comments and some limited code refactoring\n\nv11 has fixed the oversize message issue by sending a reset message for\neach backend type. Now, we will call GetCurrentTimestamp\nBACKEND_NUM_TYPES times, so maybe I should add some kind of flag to the\nreset message that indicates the first message so that all the \"do once\"\nthings can be done at that point.\n\nI've also fixed a few style/cosmetic issues and updated the commit\nmessage with a link to the thread [1] where I proposed smgrwrite() and\nsmgrextend() wrappers (which is where I propose to call\npgstat_increment_buffer_access_type() for unbuffered writes and\nextends).\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_aw72w70X1P%3Dba20K8iGUvSkyz7Yk03wPPh3f9WgmcJ3g%40mail.gmail.com",
"msg_date": "Wed, 29 Sep 2021 16:46:07 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
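The v11 approach described above -- splitting one oversized reset into a fixed-size message per backend type -- can be sketched like this. The structure names and constants are invented for illustration; the real message also carries a PgStat message header and counters for every IO op, not just writes.

```c
#include <string.h>
#include <stdint.h>

/* Hypothetical, simplified analogue of the per-backend-type reset:
 * instead of one oversized message carrying counters for every backend
 * type, send one fixed-size message per type (skipping type 0, which
 * stands in for B_INVALID). */
#define BACKEND_NUM_TYPES 4
#define IOPATH_NUM_TYPES 3

typedef struct IOPathOps { uint64_t writes[IOPATH_NUM_TYPES]; } IOPathOps;
typedef struct ResetMsg { int backend_type; IOPathOps iop; } ResetMsg;

/* Collect one message per valid backend type into out[]; returns the
 * number of messages "sent". */
static int send_reset_per_type(const IOPathOps all[BACKEND_NUM_TYPES],
                               ResetMsg out[BACKEND_NUM_TYPES])
{
    int sent = 0;
    for (int bt = 1; bt < BACKEND_NUM_TYPES; bt++)
    {
        out[sent].backend_type = bt;
        memcpy(&out[sent].iop, &all[bt], sizeof(IOPathOps));
        sent++;
    }
    return sent;
}
```

Each message stays a fixed, small size regardless of how many backend types exist, which is what keeps the payload under the stats-message size limit.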
{
"msg_contents": "On Wed, Sep 29, 2021 at 4:46 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Mon, Sep 27, 2021 at 2:58 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Fri, Sep 24, 2021 at 5:58 PM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 23, 2021 at 5:05 PM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > The only remaining TODOs are described in the commit message.\n> > > most critical one is that the reset message doesn't work.\n> >\n> > v10 is attached with updated comments and some limited code refactoring\n>\n> v11 has fixed the oversize message issue by sending a reset message for\n> each backend type. Now, we will call GetCurrentTimestamp\n> BACKEND_NUM_TYPES times, so maybe I should add some kind of flag to the\n> reset message that indicates the first message so that all the \"do once\"\n> things can be done at that point.\n>\n> I've also fixed a few style/cosmetic issues and updated the commit\n> message with a link to the thread [1] where I proposed smgrwrite() and\n> smgrextend() wrappers (which is where I propose to call\n> pgstat_incremement_buffer_access_type() for unbuffered writes and\n> extends).\n>\n> - Melanie\n>\n> [1] https://www.postgresql.org/message-id/CAAKRu_aw72w70X1P%3Dba20K8iGUvSkyz7Yk03wPPh3f9WgmcJ3g%40mail.gmail.com\n\n\nv12 (attached) has various style and code clarity updates (it is\npgindented as well). I also added a new commit which creates a utility\nfunction to make a tuplestore for views that need one in pgstatfuncs.c.\n\nHaving received some offlist feedback about the names BufferAccessType\nand BufferType being confusing, I am planning to rename these variables\nand all of the associated functions. 
I agree that BufferType and\nBufferAccessType are confusing for the following reasons:\n - They sound similar.\n - They aren't very precise.\n - One of the \"buffer types\" does not actually involve a Postgres buffer.\n\nSo far, the proposed alternative is IO_Op or IOOp for BufferAccessType\nand IOPath for BufferType.\n\n- Melanie",
"msg_date": "Thu, 30 Sep 2021 17:16:34 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
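The two orthogonal dimensions behind the rename proposed above -- what kind of operation was performed (IOOp) and which kind of buffer it touched (IOPath) -- can be illustrated with a small sketch. The enum members here are illustrative guesses, not the patch's actual definitions.

```c
#include <stdint.h>

/* Sketch of the two dimensions from the rename proposal: IOOp says what
 * was done, IOPath says which kind of buffer it was done to. A counter
 * matrix indexed by both covers every combination. Member names are
 * illustrative only. */
typedef enum IOOp
{
    IOOP_ALLOC, IOOP_EXTEND, IOOP_FSYNC, IOOP_WRITE, IOOP_NUM_TYPES
} IOOp;

typedef enum IOPath
{
    IOPATH_DIRECT, IOPATH_LOCAL, IOPATH_SHARED, IOPATH_STRATEGY,
    IOPATH_NUM_TYPES
} IOPath;

static uint64_t io_counts[IOPATH_NUM_TYPES][IOOP_NUM_TYPES];

static void count_io(IOPath path, IOOp op)
{
    io_counts[path][op]++;
}
```

Keeping the two enums separate is what makes the view able to report, say, writes to strategy buffers distinctly from writes to shared buffers.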
{
"msg_contents": "Can you say more about 0001?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"\n\n\n",
"msg_date": "Thu, 30 Sep 2021 20:15:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v13 (attached) contains several cosmetic updates and the full rename\n(comments included) of BufferAccessType and BufferType.\n\nOn Thu, Sep 30, 2021 at 7:15 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Can you say more about 0001?\n>\n\nThe rationale for this patch was that skipping initialization of backend\nactivity state in the bootstrap process doesn't save much, and by\ninitializing it there too, I don't have to do the check if (beentry) in\npgstat_inc_ioop() -- which happens on most buffer accesses.",
"msg_date": "Fri, 1 Oct 2021 16:05:31 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-01 16:05:31 -0400, Melanie Plageman wrote:\n> From 40c809ad1127322f3462e85be080c10534485f0d Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Fri, 24 Sep 2021 17:39:12 -0400\n> Subject: [PATCH v13 1/4] Allow bootstrap process to beinit\n>\n> ---\n> src/backend/utils/init/postinit.c | 3 +--\n> 1 file changed, 1 insertion(+), 2 deletions(-)\n>\n> diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> index 78bc64671e..fba5864172 100644\n> --- a/src/backend/utils/init/postinit.c\n> +++ b/src/backend/utils/init/postinit.c\n> @@ -670,8 +670,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> \tEnablePortalManager();\n>\n> \t/* Initialize status reporting */\n> -\tif (!bootstrap)\n> -\t\tpgstat_beinit();\n> +\tpgstat_beinit();\n>\n> \t/*\n> \t * Load relcache entries for the shared system catalogs. This must create\n> --\n> 2.27.0\n>\n\nI think it's good to remove more and more of these !bootstrap cases - they\nreally make it harder to understand the state of the system at various\npoints. Optimizing for the rarely executed bootstrap mode at the cost of\nchecks in very common codepaths...\n\n\n> From a709ddb30b2b747beb214f0b13cd1e1816094e6b Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Thu, 30 Sep 2021 16:16:22 -0400\n> Subject: [PATCH v13 2/4] Add utility to make tuplestores for pg stat views\n>\n> Most of the steps to make a tuplestore for those pg_stat views requiring\n> one are the same. 
Consolidate them into a single helper function for\n> clarity and to avoid bugs.\n> ---\n> src/backend/utils/adt/pgstatfuncs.c | 129 ++++++++++------------------\n> 1 file changed, 44 insertions(+), 85 deletions(-)\n>\n> diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n> index ff5aedc99c..513f5aecf6 100644\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -36,6 +36,42 @@\n>\n> #define HAS_PGSTAT_PERMISSIONS(role)\t (is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n>\n> +/*\n> + * Helper function for views with multiple rows constructed from a tuplestore\n> + */\n> +static Tuplestorestate *\n> +pg_stat_make_tuplestore(FunctionCallInfo fcinfo, TupleDesc *tupdesc)\n> +{\n> +\tTuplestorestate *tupstore;\n> +\tReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> +\tMemoryContext per_query_ctx;\n> +\tMemoryContext oldcontext;\n> +\n> +\t/* check to see if caller supports us returning a tuplestore */\n> +\tif (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t errmsg(\"set-valued function called in context that cannot accept a set\")));\n> +\tif (!(rsinfo->allowedModes & SFRM_Materialize))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t errmsg(\"materialize mode required, but it is not allowed in this context\")));\n> +\n> +\t/* Build a tuple descriptor for our result type */\n> +\tif (get_call_result_type(fcinfo, NULL, tupdesc) != TYPEFUNC_COMPOSITE)\n> +\t\telog(ERROR, \"return type must be a row type\");\n> +\n> +\tper_query_ctx = rsinfo->econtext->ecxt_per_query_memory;\n> +\toldcontext = MemoryContextSwitchTo(per_query_ctx);\n> +\n> +\ttupstore = tuplestore_begin_heap(true, false, work_mem);\n> +\trsinfo->returnMode = SFRM_Materialize;\n> +\trsinfo->setResult = tupstore;\n> +\trsinfo->setDesc = 
*tupdesc;\n> +\tMemoryContextSwitchTo(oldcontext);\n> +\treturn tupstore;\n> +}\n\nIs pgstattuple the best place for this helper? It's not really pgstatfuncs\nspecific...\n\nIt also looks vaguely familiar - I wonder if we have a helper roughly like\nthis somewhere else already...\n\n\n\n\n> From e9a5d2a021d429fdbb2daa58ab9d75a069f334d4 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Wed, 29 Sep 2021 15:39:45 -0400\n> Subject: [PATCH v13 3/4] Add system view tracking IO ops per backend type\n>\n\n> diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> index be7366379d..0d18e7f71a 100644\n> --- a/src/backend/postmaster/checkpointer.c\n> +++ b/src/backend/postmaster/checkpointer.c\n> @@ -1104,6 +1104,7 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)\n> \t\t */\n> \t\tif (!AmBackgroundWriterProcess())\n> \t\t\tCheckpointerShmem->num_backend_fsync++;\n> +\t\tpgstat_inc_ioop(IOOP_FSYNC, IOPATH_SHARED);\n> \t\tLWLockRelease(CheckpointerCommLock);\n> \t\treturn false;\n> \t}\n\nISTM this doens't need to happen while holding CheckpointerCommLock?\n\n\n\n\n> @@ -1461,7 +1467,25 @@ pgstat_reset_shared_counters(const char *target)\n> \t\t\t\t errhint(\"Target must be \\\"archiver\\\", \\\"bgwriter\\\", or \\\"wal\\\".\")));\n>\n> \tpgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETSHAREDCOUNTER);\n> -\tpgstat_send(&msg, sizeof(msg));\n> +\n> +\tif (msg.m_resettarget == RESET_BUFFERS)\n> +\t{\n> +\t\tint\t\t\tbackend_type;\n> +\t\tPgStatIOPathOps ops[BACKEND_NUM_TYPES];\n> +\n> +\t\tmemset(ops, 0, sizeof(ops));\n> +\t\tpgstat_report_live_backend_io_path_ops(ops);\n> +\n> +\t\tfor (backend_type = 1; backend_type < BACKEND_NUM_TYPES; backend_type++)\n> +\t\t{\n> +\t\t\tmsg.m_backend_resets.backend_type = backend_type;\n> +\t\t\tmemcpy(&msg.m_backend_resets.iop, &ops[backend_type], sizeof(msg.m_backend_resets.iop));\n> +\t\t\tpgstat_send(&msg, sizeof(msg));\n> +\t\t}\n> +\t}\n> +\telse\n> 
+\t\tpgstat_send(&msg, sizeof(msg));\n> +\n> }\n\nI'd perhaps put this in a small helper function.\n\n\n> /* ----------\n> * pgstat_fetch_stat_dbentry() -\n> @@ -2999,6 +3036,14 @@ pgstat_shutdown_hook(int code, Datum arg)\n> {\n> \tAssert(!pgstat_is_shutdown);\n>\n> +\t/*\n> +\t * Only need to send stats on IO Ops for IO Paths when a process exits, as\n> +\t * pg_stat_get_buffers() will read from live backends' PgBackendStatus and\n> +\t * then sum this with totals from exited backends persisted by the stats\n> +\t * collector.\n> +\t */\n> +\tpgstat_send_buffers();\n> +\n> \t/*\n> \t * If we got as far as discovering our own database ID, we can report what\n> \t * we did to the collector. Otherwise, we'd be sending an invalid\n> @@ -3092,6 +3137,30 @@ pgstat_send(void *msg, int len)\n> #endif\n> }\n\nI think it might be nicer to move pgstat_beshutdown_hook() to be a\nbefore_shmem_exit(), and do this in there.\n\n\n> +/*\n> + * Add live IO Op stats for all IO Paths (e.g. shared, local) to those in the\n> + * equivalent stats structure for exited backends. Note that this adds and\n> + * doesn't set, so the destination stats structure should be zeroed out by the\n> + * caller initially. This would commonly be used to transfer all IO Op stats\n> + * for all IO Paths for a particular backend type to the pgstats structure.\n> + */\n\nThis seems a bit odd. Why not zero it in here? 
Perhaps it also should be\ncalled something like _sum_ instead of _add_?\n\n\n> +void\n> +pgstat_add_io_path_ops(PgStatIOOps *dest, IOOps *src, int io_path_num_types)\n> +{\n\nWhy is io_path_num_types a parameter?\n\n\n> +static void\n> +pgstat_recv_io_path_ops(PgStat_MsgIOPathOps *msg, int len)\n> +{\n> +\tint\t\t\tio_path;\n> +\tPgStatIOOps *src_io_path_ops = msg->iop.io_path_ops;\n> +\tPgStatIOOps *dest_io_path_ops =\n> +\tglobalStats.buffers.ops[msg->backend_type].io_path_ops;\n> +\n> +\tfor (io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n> +\t{\n> +\t\tPgStatIOOps *src = &src_io_path_ops[io_path];\n> +\t\tPgStatIOOps *dest = &dest_io_path_ops[io_path];\n> +\n> +\t\tdest->allocs += src->allocs;\n> +\t\tdest->extends += src->extends;\n> +\t\tdest->fsyncs += src->fsyncs;\n> +\t\tdest->writes += src->writes;\n> +\t}\n> +}\n\nCould this, with a bit of finessing, use pgstat_add_io_path_ops()?\n\n\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n\nWhat about writes originating in like FlushRelationBuffers()?\n\n\n> bool\n> -StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf)\n> +StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf, bool *from_ring)\n> {\n> +\t/*\n> +\t * If we decide to use the dirty buffer selected by StrategyGetBuffer(),\n> +\t * then ensure that we count it as such in pg_stat_buffers view.\n> +\t */\n> +\t*from_ring = true;\n> +\n\nAbsolutely minor nitpick: Somehow it feelsoff to talk about the view here.\n\n\n> +PgBackendStatus *\n> +pgstat_fetch_backend_statuses(void)\n> +{\n> +\treturn BackendStatusArray;\n> +}\n\nHm, not sure this adds much?\n\n\n> +\t\t\t/*\n> +\t\t\t * Subtract 1 from backend_type to avoid having rows for B_INVALID\n> +\t\t\t * BackendType\n> +\t\t\t */\n> +\t\t\tint\t\t\trownum = (beentry->st_backendType - 1) * IOPATH_NUM_TYPES + io_path;\n\n\nPerhaps worth wrapping this in a macro or inline function? 
It's repeated and nontrivial.\n\n\n> +\t/* Add stats from all exited backends */\n> +\tbackend_io_path_ops = pgstat_fetch_exited_backend_buffers();\n\nIt's probably *not* worth it, but I do wonder if we should do the addition on the SQL\nlevel, and actually have two functions, one returning data for exited\nbackends, and one for currently connected ones.\n\n\n> +static inline void\n> +pgstat_inc_ioop(IOOp io_op, IOPath io_path)\n> +{\n> +\tIOOps\t *io_ops;\n> +\tPgBackendStatus *beentry = MyBEEntry;\n> +\n> +\tAssert(beentry);\n> +\n> +\tio_ops = &beentry->io_path_stats[io_path];\n> +\tswitch (io_op)\n> +\t{\n> +\t\tcase IOOP_ALLOC:\n> +\t\t\tpg_atomic_write_u64(&io_ops->allocs,\n> +\t\t\t\t\t\t\t\tpg_atomic_read_u64(&io_ops->allocs) + 1);\n> +\t\t\tbreak;\n> +\t\tcase IOOP_EXTEND:\n> +\t\t\tpg_atomic_write_u64(&io_ops->extends,\n> +\t\t\t\t\t\t\t\tpg_atomic_read_u64(&io_ops->extends) + 1);\n> +\t\t\tbreak;\n> +\t\tcase IOOP_FSYNC:\n> +\t\t\tpg_atomic_write_u64(&io_ops->fsyncs,\n> +\t\t\t\t\t\t\t\tpg_atomic_read_u64(&io_ops->fsyncs) + 1);\n> +\t\t\tbreak;\n> +\t\tcase IOOP_WRITE:\n> +\t\t\tpg_atomic_write_u64(&io_ops->writes,\n> +\t\t\t\t\t\t\t\tpg_atomic_read_u64(&io_ops->writes) + 1);\n> +\t\t\tbreak;\n> +\t}\n> +}\n\nIIRC Thomas Munro had a patch adding a nonatomic_add or such\nsomewhere. Perhaps in the recovery readahead thread? Might be worth using\ninstead?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 8 Oct 2021 10:56:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
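The pg_atomic_read_u64()/pg_atomic_write_u64() pairing quoted in the review above is effectively a non-atomic add on an atomic variable: it is safe because each counter has a single writer (the owning backend), while concurrent readers merely risk seeing a slightly stale value. A C11 sketch of the same idea follows; note this is an analogy only, since PostgreSQL uses its own pg_atomic_* wrappers rather than <stdatomic.h>, and the struct here is invented for illustration.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Single-writer counter: only the owning backend increments it, so a
 * relaxed load + store avoids a locked read-modify-write on the hot
 * path. Readers elsewhere may observe a slightly stale count, which is
 * acceptable for statistics. Sketch only. */
typedef struct IOOps
{
    atomic_uint_fast64_t writes;
    atomic_uint_fast64_t fsyncs;
} IOOps;

static inline void nonatomic_inc(atomic_uint_fast64_t *counter)
{
    atomic_store_explicit(counter,
                          atomic_load_explicit(counter,
                                               memory_order_relaxed) + 1,
                          memory_order_relaxed);
}
```

This is the trade-off the "nonatomic_add" suggestion at the end of the review is about: giving up read-modify-write atomicity (which the single-writer pattern doesn't need) in exchange for a cheaper increment on every buffer access.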
{
"msg_contents": "On Fri, Oct 8, 2021 at 1:56 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-10-01 16:05:31 -0400, Melanie Plageman wrote:\n> > From 40c809ad1127322f3462e85be080c10534485f0d Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Fri, 24 Sep 2021 17:39:12 -0400\n> > Subject: [PATCH v13 1/4] Allow bootstrap process to beinit\n> >\n> > ---\n> > src/backend/utils/init/postinit.c | 3 +--\n> > 1 file changed, 1 insertion(+), 2 deletions(-)\n> >\n> > diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> > index 78bc64671e..fba5864172 100644\n> > --- a/src/backend/utils/init/postinit.c\n> > +++ b/src/backend/utils/init/postinit.c\n> > @@ -670,8 +670,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> > EnablePortalManager();\n> >\n> > /* Initialize status reporting */\n> > - if (!bootstrap)\n> > - pgstat_beinit();\n> > + pgstat_beinit();\n> >\n> > /*\n> > * Load relcache entries for the shared system catalogs. This must create\n> > --\n> > 2.27.0\n> >\n>\n> I think it's good to remove more and more of these !bootstrap cases - they\n> really make it harder to understand the state of the system at various\n> points. Optimizing for the rarely executed bootstrap mode at the cost of\n> checks in very common codepaths...\n\nWhat scope do you suggest for this patch set? A single patch which does\nthis in more locations (remove !bootstrap) or should I remove this patch\nfrom the patchset?\n\n>\n>\n>\n> > From a709ddb30b2b747beb214f0b13cd1e1816094e6b Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Thu, 30 Sep 2021 16:16:22 -0400\n> > Subject: [PATCH v13 2/4] Add utility to make tuplestores for pg stat views\n> >\n> > Most of the steps to make a tuplestore for those pg_stat views requiring\n> > one are the same. 
Consolidate them into a single helper function for\n> > clarity and to avoid bugs.\n> > ---\n> > src/backend/utils/adt/pgstatfuncs.c | 129 ++++++++++------------------\n> > 1 file changed, 44 insertions(+), 85 deletions(-)\n> >\n> > diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n> > index ff5aedc99c..513f5aecf6 100644\n> > --- a/src/backend/utils/adt/pgstatfuncs.c\n> > +++ b/src/backend/utils/adt/pgstatfuncs.c\n> > @@ -36,6 +36,42 @@\n> >\n> > #define HAS_PGSTAT_PERMISSIONS(role) (is_member_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n> >\n> > +/*\n> > + * Helper function for views with multiple rows constructed from a tuplestore\n> > + */\n> > +static Tuplestorestate *\n> > +pg_stat_make_tuplestore(FunctionCallInfo fcinfo, TupleDesc *tupdesc)\n> > +{\n> > + Tuplestorestate *tupstore;\n> > + ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > + MemoryContext per_query_ctx;\n> > + MemoryContext oldcontext;\n> > +\n> > + /* check to see if caller supports us returning a tuplestore */\n> > + if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > + errmsg(\"set-valued function called in context that cannot accept a set\")));\n> > + if (!(rsinfo->allowedModes & SFRM_Materialize))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > + errmsg(\"materialize mode required, but it is not allowed in this context\")));\n> > +\n> > + /* Build a tuple descriptor for our result type */\n> > + if (get_call_result_type(fcinfo, NULL, tupdesc) != TYPEFUNC_COMPOSITE)\n> > + elog(ERROR, \"return type must be a row type\");\n> > +\n> > + per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;\n> > + oldcontext = MemoryContextSwitchTo(per_query_ctx);\n> > +\n> > + tupstore = tuplestore_begin_heap(true, false, work_mem);\n> > + rsinfo->returnMode = SFRM_Materialize;\n> > + rsinfo->setResult = 
tupstore;\n> > + rsinfo->setDesc = *tupdesc;\n> > + MemoryContextSwitchTo(oldcontext);\n> > + return tupstore;\n> > +}\n>\n> Is pgstattuple the best place for this helper? It's not really pgstatfuncs\n> specific...\n>\n> It also looks vaguely familiar - I wonder if we have a helper roughly like\n> this somewhere else already...\n>\n\nI don't see a function which is specifically a utility to make a\ntuplestore. Looking at the callers of tuplestore_begin_heap(), I notice\nvery similar code to the function I added in pg_tablespace_databases()\nin utils/adt/misc.c, pg_stop_backup_v2() in xlogfuncs.c,\npg_event_trigger_dropped_objects() and pg_event_trigger_ddl_commands in\nevent_tigger.c, pg_available_extensions in extension.c, etc.\n\nDo you think it makes sense to refactor this code out of all of these\nplaces? If so, where would such a utility function belong?\n\n>\n>\n> > From e9a5d2a021d429fdbb2daa58ab9d75a069f334d4 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Wed, 29 Sep 2021 15:39:45 -0400\n> > Subject: [PATCH v13 3/4] Add system view tracking IO ops per backend type\n> >\n>\n> > diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c\n> > index be7366379d..0d18e7f71a 100644\n> > --- a/src/backend/postmaster/checkpointer.c\n> > +++ b/src/backend/postmaster/checkpointer.c\n> > @@ -1104,6 +1104,7 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)\n> > */\n> > if (!AmBackgroundWriterProcess())\n> > CheckpointerShmem->num_backend_fsync++;\n> > + pgstat_inc_ioop(IOOP_FSYNC, IOPATH_SHARED);\n> > LWLockRelease(CheckpointerCommLock);\n> > return false;\n> > }\n>\n> ISTM this doens't need to happen while holding CheckpointerCommLock?\n>\n\nFixed in attached updates. 
I only attached the diff from my previous version.\n\n>\n>\n> > @@ -1461,7 +1467,25 @@ pgstat_reset_shared_counters(const char *target)\n> > errhint(\"Target must be \\\"archiver\\\", \\\"bgwriter\\\", or \\\"wal\\\".\")));\n> >\n> > pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETSHAREDCOUNTER);\n> > - pgstat_send(&msg, sizeof(msg));\n> > +\n> > + if (msg.m_resettarget == RESET_BUFFERS)\n> > + {\n> > + int backend_type;\n> > + PgStatIOPathOps ops[BACKEND_NUM_TYPES];\n> > +\n> > + memset(ops, 0, sizeof(ops));\n> > + pgstat_report_live_backend_io_path_ops(ops);\n> > +\n> > + for (backend_type = 1; backend_type < BACKEND_NUM_TYPES; backend_type++)\n> > + {\n> > + msg.m_backend_resets.backend_type = backend_type;\n> > + memcpy(&msg.m_backend_resets.iop, &ops[backend_type], sizeof(msg.m_backend_resets.iop));\n> > + pgstat_send(&msg, sizeof(msg));\n> > + }\n> > + }\n> > + else\n> > + pgstat_send(&msg, sizeof(msg));\n> > +\n> > }\n>\n> I'd perhaps put this in a small helper function.\n>\n\nDone.\n\n>\n> > /* ----------\n> > * pgstat_fetch_stat_dbentry() -\n> > @@ -2999,6 +3036,14 @@ pgstat_shutdown_hook(int code, Datum arg)\n> > {\n> > Assert(!pgstat_is_shutdown);\n> >\n> > + /*\n> > + * Only need to send stats on IO Ops for IO Paths when a process exits, as\n> > + * pg_stat_get_buffers() will read from live backends' PgBackendStatus and\n> > + * then sum this with totals from exited backends persisted by the stats\n> > + * collector.\n> > + */\n> > + pgstat_send_buffers();\n> > +\n> > /*\n> > * If we got as far as discovering our own database ID, we can report what\n> > * we did to the collector. Otherwise, we'd be sending an invalid\n> > @@ -3092,6 +3137,30 @@ pgstat_send(void *msg, int len)\n> > #endif\n> > }\n>\n> I think it might be nicer to move pgstat_beshutdown_hook() to be a\n> before_shmem_exit(), and do this in there.\n>\n\nNot really sure the correct way to do this. 
A cursory attempt to do so\nfailed because ShutdownXLOG() is also registered as a\nbefore_shmem_exit() and ends up being called after\npgstat_beshutdown_hook(). pgstat_beshutdown_hook() zeroes out\nPgBackendStatus, ShutdownXLOG() initiates a checkpoint, and during a\ncheckpoint, the checkpointer increments IO op counter for writes in its\nPgBackendStatus.\n\n>\n> > +/*\n> > + * Add live IO Op stats for all IO Paths (e.g. shared, local) to those in the\n> > + * equivalent stats structure for exited backends. Note that this adds and\n> > + * doesn't set, so the destination stats structure should be zeroed out by the\n> > + * caller initially. This would commonly be used to transfer all IO Op stats\n> > + * for all IO Paths for a particular backend type to the pgstats structure.\n> > + */\n>\n> This seems a bit odd. Why not zero it in here? Perhaps it also should be\n> called something like _sum_ instead of _add_?\n>\n\nI wanted to be able to use the function both when it was setting the\nvalues and when it needed to add to the values (which are the two\ncurrent callers). 
I have changed the name from add -> sum.\n\n>\n> > +void\n> > +pgstat_add_io_path_ops(PgStatIOOps *dest, IOOps *src, int io_path_num_types)\n> > +{\n>\n> Why is io_path_num_types a parameter?\n>\n\nI imagined that maybe another caller would want to only add some IO path\ntypes and still use the function, but I think it is more confusing than\nanything else so I've changed it.\n\n>\n> > +static void\n> > +pgstat_recv_io_path_ops(PgStat_MsgIOPathOps *msg, int len)\n> > +{\n> > + int io_path;\n> > + PgStatIOOps *src_io_path_ops = msg->iop.io_path_ops;\n> > + PgStatIOOps *dest_io_path_ops =\n> > + globalStats.buffers.ops[msg->backend_type].io_path_ops;\n> > +\n> > + for (io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n> > + {\n> > + PgStatIOOps *src = &src_io_path_ops[io_path];\n> > + PgStatIOOps *dest = &dest_io_path_ops[io_path];\n> > +\n> > + dest->allocs += src->allocs;\n> > + dest->extends += src->extends;\n> > + dest->fsyncs += src->fsyncs;\n> > + dest->writes += src->writes;\n> > + }\n> > +}\n>\n> Could this, with a bit of finessing, use pgstat_add_io_path_ops()?\n>\n\nI didn't really see a good way to do this -- given that\npgstat_add_io_path_ops() adds IOOps members to PgStatIOOps members --\nwhich requires a pg_atomic_read_u64() and pgstat_recv_io_path_ops adds\nPgStatIOOps to PgStatIOOps which doesn't require pg_atomic_read_u64().\nMaybe I could pass a flag which, based on the type, either does or\ndoesn't use pg_atomic_read_u64 to access the value? 
But that seems worse\nto me.\n\n>\n> > --- a/src/backend/storage/buffer/bufmgr.c\n> > +++ b/src/backend/storage/buffer/bufmgr.c\n>\n> What about writes originating in like FlushRelationBuffers()?\n>\n\nYes, I have made IOPath a parameter to FlushBuffer() so that it can\ndistinguish between strategy buffer writes and shared buffer writes and\nthen pushed pgstat_inc_ioop() into FlushBuffer().\n\n>\n> > bool\n> > -StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf)\n> > +StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf, bool *from_ring)\n> > {\n> > + /*\n> > + * If we decide to use the dirty buffer selected by StrategyGetBuffer(),\n> > + * then ensure that we count it as such in pg_stat_buffers view.\n> > + */\n> > + *from_ring = true;\n> > +\n>\n> Absolutely minor nitpick: Somehow it feelsoff to talk about the view here.\n\nFixed.\n\n>\n>\n> > +PgBackendStatus *\n> > +pgstat_fetch_backend_statuses(void)\n> > +{\n> > + return BackendStatusArray;\n> > +}\n>\n> Hm, not sure this adds much?\n\nIs there a better way to access the whole BackendStatusArray from within\npgstatfuncs.c?\n\n>\n>\n> > + /*\n> > + * Subtract 1 from backend_type to avoid having rows for B_INVALID\n> > + * BackendType\n> > + */\n> > + int rownum = (beentry->st_backendType - 1) * IOPATH_NUM_TYPES + io_path;\n>\n>\n> Perhaps worth wrapping this in a macro or inline function? It's repeated and nontrivial.\n>\n\nDone.\n\n>\n> > + /* Add stats from all exited backends */\n> > + backend_io_path_ops = pgstat_fetch_exited_backend_buffers();\n>\n> It's probably *not* worth it, but I do wonder if we should do the addition on the SQL\n> level, and actually have two functions, one returning data for exited\n> backends, and one for currently connected ones.\n>\n\nIt would be easy enough to implement. I would defer to others on whether\nor not this would be useful. My use case for pg_stat_buffers() is to see\nwhat backends' IO during a benchmark or test workload. 
For that, I reset\nthe stats before and then query pg_stat_buffers after running the\nbenchmark. I don't know if I would use exited and live stats\nindividually. In a real workload, I could see using\npg_stat_buffers live and exited to see if the workload causing lots of\nbackends to do their own writes is ongoing. Though a given workload may\nbe composed of lots of different queries, with backends exiting\nthroughout.\n\n>\n> > +static inline void\n> > +pgstat_inc_ioop(IOOp io_op, IOPath io_path)\n> > +{\n> > + IOOps *io_ops;\n> > + PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + Assert(beentry);\n> > +\n> > + io_ops = &beentry->io_path_stats[io_path];\n> > + switch (io_op)\n> > + {\n> > + case IOOP_ALLOC:\n> > + pg_atomic_write_u64(&io_ops->allocs,\n> > + pg_atomic_read_u64(&io_ops->allocs) + 1);\n> > + break;\n> > + case IOOP_EXTEND:\n> > + pg_atomic_write_u64(&io_ops->extends,\n> > + pg_atomic_read_u64(&io_ops->extends) + 1);\n> > + break;\n> > + case IOOP_FSYNC:\n> > + pg_atomic_write_u64(&io_ops->fsyncs,\n> > + pg_atomic_read_u64(&io_ops->fsyncs) + 1);\n> > + break;\n> > + case IOOP_WRITE:\n> > + pg_atomic_write_u64(&io_ops->writes,\n> > + pg_atomic_read_u64(&io_ops->writes) + 1);\n> > + break;\n> > + }\n> > +}\n>\n> IIRC Thomas Munro had a patch adding a nonatomic_add or such\n> somewhere. Perhaps in the recovery readahead thread? Might be worth using\n> instead?\n>\n\nI've added Thomas' function in a separate commit. I looked for a better\nplace to add it (I was thinking somewhere in src/backend/utils/misc) but\ncouldn't find anywhere that made sense.\n\nI also added a call to pgstat_inc_ioop() in ProcessSyncRequests() to capture\nwhen the checkpointer does fsyncs.\n\nI also added pgstat_inc_ioop() calls to callers of smgrwrite() flushing local\nbuffers. I don't know if that is desirable or not in this patch. They could be\nremoved if wrappers for smgrwrite() go in and pgstat_inc_ioop() can be called\nfrom within those wrappers.\n\n- Melanie",
"msg_date": "Mon, 11 Oct 2021 16:48:01 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-11 16:48:01 -0400, Melanie Plageman wrote:\n> On Fri, Oct 8, 2021 at 1:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-10-01 16:05:31 -0400, Melanie Plageman wrote:\n> > > diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> > > index 78bc64671e..fba5864172 100644\n> > > --- a/src/backend/utils/init/postinit.c\n> > > +++ b/src/backend/utils/init/postinit.c\n> > > @@ -670,8 +670,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> > > EnablePortalManager();\n> > >\n> > > /* Initialize status reporting */\n> > > - if (!bootstrap)\n> > > - pgstat_beinit();\n> > > + pgstat_beinit();\n> > >\n> > > /*\n> > > * Load relcache entries for the shared system catalogs. This must create\n> > > --\n> > > 2.27.0\n> > >\n> >\n> > I think it's good to remove more and more of these !bootstrap cases - they\n> > really make it harder to understand the state of the system at various\n> > points. Optimizing for the rarely executed bootstrap mode at the cost of\n> > checks in very common codepaths...\n>\n> What scope do you suggest for this patch set? A single patch which does\n> this in more locations (remove !bootstrap) or should I remove this patch\n> from the patchset?\n\nI think the scope is fine as-is.\n\n\n> > Is pgstattuple the best place for this helper? It's not really pgstatfuncs\n> > specific...\n> >\n> > It also looks vaguely familiar - I wonder if we have a helper roughly like\n> > this somewhere else already...\n> >\n>\n> I don't see a function which is specifically a utility to make a\n> tuplestore. 
Looking at the callers of tuplestore_begin_heap(), I notice\n> very similar code to the function I added in pg_tablespace_databases()\n> in utils/adt/misc.c, pg_stop_backup_v2() in xlogfuncs.c,\n> pg_event_trigger_dropped_objects() and pg_event_trigger_ddl_commands in\n> event_trigger.c, pg_available_extensions in extension.c, etc.\n>\n> Do you think it makes sense to refactor this code out of all of these\n> places?\n\nYes, I think it'd make sense. We have about 40 copies of this stuff, which is\nfairly ridiculous.\n\n\n> If so, where would such a utility function belong?\n\nNot quite sure. src/backend/utils/fmgr/funcapi.c maybe? I suggest starting a\nseparate thread about that...\n\n\n> > > @@ -2999,6 +3036,14 @@ pgstat_shutdown_hook(int code, Datum arg)\n> > > {\n> > > Assert(!pgstat_is_shutdown);\n> > >\n> > > + /*\n> > > + * Only need to send stats on IO Ops for IO Paths when a process exits, as\n> > > + * pg_stat_get_buffers() will read from live backends' PgBackendStatus and\n> > > + * then sum this with totals from exited backends persisted by the stats\n> > > + * collector.\n> > > + */\n> > > + pgstat_send_buffers();\n> > > +\n> > > /*\n> > > * If we got as far as discovering our own database ID, we can report what\n> > > * we did to the collector. Otherwise, we'd be sending an invalid\n> > > @@ -3092,6 +3137,30 @@ pgstat_send(void *msg, int len)\n> > > #endif\n> > > }\n> >\n> > I think it might be nicer to move pgstat_beshutdown_hook() to be a\n> > before_shmem_exit(), and do this in there.\n> >\n>\n> Not really sure the correct way to do this. A cursory attempt to do so\n> failed because ShutdownXLOG() is also registered as a\n> before_shmem_exit() and ends up being called after\n> pgstat_beshutdown_hook(). 
pgstat_beshutdown_hook() zeroes out\n> PgBackendStatus, ShutdownXLOG() initiates a checkpoint, and during a\n> checkpoint, the checkpointer increments IO op counter for writes in its\n> PgBackendStatus.\n\nI think we'll really need to do a proper redesign of the shutdown callback\nmechanism :(.\n\n\n\n> > > +static void\n> > > +pgstat_recv_io_path_ops(PgStat_MsgIOPathOps *msg, int len)\n> > > +{\n> > > + int io_path;\n> > > + PgStatIOOps *src_io_path_ops = msg->iop.io_path_ops;\n> > > + PgStatIOOps *dest_io_path_ops =\n> > > + globalStats.buffers.ops[msg->backend_type].io_path_ops;\n> > > +\n> > > + for (io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n> > > + {\n> > > + PgStatIOOps *src = &src_io_path_ops[io_path];\n> > > + PgStatIOOps *dest = &dest_io_path_ops[io_path];\n> > > +\n> > > + dest->allocs += src->allocs;\n> > > + dest->extends += src->extends;\n> > > + dest->fsyncs += src->fsyncs;\n> > > + dest->writes += src->writes;\n> > > + }\n> > > +}\n> >\n> > Could this, with a bit of finessing, use pgstat_add_io_path_ops()?\n> >\n>\n> I didn't really see a good way to do this -- given that\n> pgstat_add_io_path_ops() adds IOOps members to PgStatIOOps members --\n> which requires a pg_atomic_read_u64() and pgstat_recv_io_path_ops adds\n> PgStatIOOps to PgStatIOOps which doesn't require pg_atomic_read_u64().\n> Maybe I could pass a flag which, based on the type, either does or\n> doesn't use pg_atomic_read_u64 to access the value? But that seems worse\n> to me.\n\nYea, you're probably right, that's worse.\n\n\n> > > +PgBackendStatus *\n> > > +pgstat_fetch_backend_statuses(void)\n> > > +{\n> > > + return BackendStatusArray;\n> > > +}\n> >\n> > Hm, not sure this adds much?\n>\n> Is there a better way to access the whole BackendStatusArray from within\n> pgstatfuncs.c?\n\nExport the variable itself?\n\n\n> > IIRC Thomas Munro had a patch adding a nonatomic_add or such\n> > somewhere. Perhaps in the recovery readahead thread? 
Might be worth using\n> > instead?\n> >\n>\n> I've added Thomas' function in a separate commit. I looked for a better\n> place to add it (I was thinking somewhere in src/backend/utils/misc) but\n> couldn't find anywhere that made sense.\n\nI think it should just live in atomics.h?\n\n\n> I also added pgstat_inc_ioop() calls to callers of smgrwrite() flushing local\n> buffers. I don't know if that is desirable or not in this patch. They could be\n> removed if wrappers for smgrwrite() go in and pgstat_inc_ioop() can be called\n> from within those wrappers.\n\nMakes sense to me to have it here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Oct 2021 12:29:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v14 attached.\n\nOn Tue, Oct 19, 2021 at 3:29 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> > > Is pgstattuple the best place for this helper? It's not really pgstatfuncs\n> > > specific...\n> > >\n> > > It also looks vaguely familiar - I wonder if we have a helper roughly like\n> > > this somewhere else already...\n> > >\n> >\n> > I don't see a function which is specifically a utility to make a\n> > tuplestore. Looking at the callers of tuplestore_begin_heap(), I notice\n> > very similar code to the function I added in pg_tablespace_databases()\n> > in utils/adt/misc.c, pg_stop_backup_v2() in xlogfuncs.c,\n> > pg_event_trigger_dropped_objects() and pg_event_trigger_ddl_commands in\n> > event_trigger.c, pg_available_extensions in extension.c, etc.\n> >\n> > Do you think it makes sense to refactor this code out of all of these\n> > places?\n>\n> Yes, I think it'd make sense. We have about 40 copies of this stuff, which is\n> fairly ridiculous.\n>\n>\n> > If so, where would such a utility function belong?\n>\n> Not quite sure. src/backend/utils/fmgr/funcapi.c maybe? I suggest starting a\n> separate thread about that...\n>\n\ndone [1]. also, I dropped that commit from this patchset.\n\n>\n> > > > @@ -2999,6 +3036,14 @@ pgstat_shutdown_hook(int code, Datum arg)\n> > > > {\n> > > > Assert(!pgstat_is_shutdown);\n> > > >\n> > > > + /*\n> > > > + * Only need to send stats on IO Ops for IO Paths when a process exits, as\n> > > > + * pg_stat_get_buffers() will read from live backends' PgBackendStatus and\n> > > > + * then sum this with totals from exited backends persisted by the stats\n> > > > + * collector.\n> > > > + */\n> > > > + pgstat_send_buffers();\n> > > > +\n> > > > /*\n> > > > * If we got as far as discovering our own database ID, we can report what\n> > > > * we did to the collector. 
Otherwise, we'd be sending an invalid\n> > > > @@ -3092,6 +3137,30 @@ pgstat_send(void *msg, int len)\n> > > > #endif\n> > > > }\n> > >\n> > > I think it might be nicer to move pgstat_beshutdown_hook() to be a\n> > > before_shmem_exit(), and do this in there.\n> > >\n> >\n> > Not really sure the correct way to do this. A cursory attempt to do so\n> > failed because ShutdownXLOG() is also registered as a\n> > before_shmem_exit() and ends up being called after\n> > pgstat_beshutdown_hook(). pgstat_beshutdown_hook() zeroes out\n> > PgBackendStatus, ShutdownXLOG() initiates a checkpoint, and during a\n> > checkpoint, the checkpointer increments IO op counter for writes in its\n> > PgBackendStatus.\n>\n> I think we'll really need to do a proper redesign of the shutdown callback\n> mechanism :(.\n>\n\nI've left what I originally had, then.\n\n>\n>\n> > > > +PgBackendStatus *\n> > > > +pgstat_fetch_backend_statuses(void)\n> > > > +{\n> > > > + return BackendStatusArray;\n> > > > +}\n> > >\n> > > Hm, not sure this adds much?\n> >\n> > Is there a better way to access the whole BackendStatusArray from within\n> > pgstatfuncs.c?\n>\n> Export the variable itself?\n>\n\ndone but wasn't sure about PGDLLIMPORT\n\n>\n> > > IIRC Thomas Munro had a patch adding a nonatomic_add or such\n> > > somewhere. Perhaps in the recovery readahead thread? Might be worth using\n> > > instead?\n> > >\n> >\n> > I've added Thomas' function in a separate commit. I looked for a better\n> > place to add it (I was thinking somewhere in src/backend/utils/misc) but\n> > couldn't find anywhere that made sense.\n>\n> I think it should just live in atomics.h?\n>\n\ndone\n\n-- melanie\n\n[1] https://www.postgresql.org/message-id/flat/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs%2BPxeHw1CWJeXKofkE6TuZg%40mail.gmail.com",
"msg_date": "Tue, 2 Nov 2021 15:26:52 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-11-02 15:26:52 -0400, Melanie Plageman wrote:\n> Subject: [PATCH v14 1/4] Allow bootstrap process to beinit\n\nPushed.\n\n\n> +/*\n> + * On modern systems this is really just *counter++. On some older systems\n> + * there might be more to it, due to inability to read and write 64 bit values\n> + * atomically.\n> + */\n> +static inline void inc_counter(pg_atomic_uint64 *counter)\n> +{\n> +\tpg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n> +}\n> +\n> #undef INSIDE_ATOMICS_H\n\nWhy is this using a completely different naming scheme from the rest of the\nfile?\n\n\n\n> doc/src/sgml/monitoring.sgml | 116 +++++++++++++-\n> src/backend/catalog/system_views.sql | 11 ++\n> src/backend/postmaster/checkpointer.c | 3 +-\n> src/backend/postmaster/pgstat.c | 161 +++++++++++++++++++-\n> src/backend/storage/buffer/bufmgr.c | 46 ++++--\n> src/backend/storage/buffer/freelist.c | 23 ++-\n> src/backend/storage/buffer/localbuf.c | 3 +\n> src/backend/storage/sync/sync.c | 1 +\n> src/backend/utils/activity/backend_status.c | 60 +++++++-\n> src/backend/utils/adt/pgstatfuncs.c | 152 ++++++++++++++++++\n> src/include/catalog/pg_proc.dat | 9 ++\n> src/include/miscadmin.h | 2 +\n> src/include/pgstat.h | 53 +++++++\n> src/include/storage/buf_internals.h | 4 +-\n> src/include/utils/backend_status.h | 80 ++++++++++\n> src/test/regress/expected/rules.out | 8 +\n> 16 files changed, 701 insertions(+), 31 deletions(-)\n\nThis is a pretty large change, I wonder if there's a way to make it a bit more\ngranular.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 19 Nov 2021 08:49:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 11:49 AM Andres Freund <andres@anarazel.de> wrote:\n> > +/*\n> > + * On modern systems this is really just *counter++. On some older systems\n> > + * there might be more to it, due to inability to read and write 64 bit values\n> > + * atomically.\n> > + */\n> > +static inline void inc_counter(pg_atomic_uint64 *counter)\n> > +{\n> > + pg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n> > +}\n> > +\n> > #undef INSIDE_ATOMICS_H\n>\n> Why is this using a completely different naming scheme from the rest of the\n> file?\n\nIt was what Thomas originally named it. Also, I noticed all the other\npg_atomic* in this file were wrappers around the same impl function, so\nI thought maybe naming it this way would be confusing. I renamed it to\npg_atomic_inc_counter(), though maybe pg_atomic_readonly_write() would\nbe better?\n\n>\n> > doc/src/sgml/monitoring.sgml | 116 +++++++++++++-\n> > src/backend/catalog/system_views.sql | 11 ++\n> > src/backend/postmaster/checkpointer.c | 3 +-\n> > src/backend/postmaster/pgstat.c | 161 +++++++++++++++++++-\n> > src/backend/storage/buffer/bufmgr.c | 46 ++++--\n> > src/backend/storage/buffer/freelist.c | 23 ++-\n> > src/backend/storage/buffer/localbuf.c | 3 +\n> > src/backend/storage/sync/sync.c | 1 +\n> > src/backend/utils/activity/backend_status.c | 60 +++++++-\n> > src/backend/utils/adt/pgstatfuncs.c | 152 ++++++++++++++++++\n> > src/include/catalog/pg_proc.dat | 9 ++\n> > src/include/miscadmin.h | 2 +\n> > src/include/pgstat.h | 53 +++++++\n> > src/include/storage/buf_internals.h | 4 +-\n> > src/include/utils/backend_status.h | 80 ++++++++++\n> > src/test/regress/expected/rules.out | 8 +\n> > 16 files changed, 701 insertions(+), 31 deletions(-)\n>\n> This is a pretty large change, I wonder if there's a way to make it a bit more\n> granular.\n>\n\nI have done this. See v15 patch set attached.\n\n- Melanie",
"msg_date": "Wed, 24 Nov 2021 16:19:20 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Thanks for working on this. I was just trying to find something like\n\"pg_stat_checkpointer\".\n\nYou wrote beentry++ at the start of two loops, but I think that's wrong; it\nshould be at the end, as in the rest of the file (or as a loop increment).\nBackendStatusArray[0] is actually used (even though its backend has\nbackendId==1, not 0). \"MyBEEntry = &BackendStatusArray[MyBackendId - 1];\"\n\nYou could put *_NUM_TYPES as the last value in these enums, like\nNUM_AUXPROCTYPES, NUM_PMSIGNALS, and NUM_PROCSIGNALS:\n\n+#define IOOP_NUM_TYPES (IOOP_WRITE + 1)\n+#define IOPATH_NUM_TYPES (IOPATH_STRATEGY + 1)\n+#define BACKEND_NUM_TYPES (B_LOGGER + 1)\n\nThere's extraneous blank lines in these functions:\n\n+pgstat_sum_io_path_ops\n+pgstat_report_live_backend_io_path_ops\n+pgstat_recv_resetsharedcounter\n+GetIOPathDesc\n+StrategyRejectBuffer\n\nThis function is doubly-indented:\n\n+pgstat_send_buffers_reset\n\nAs support for C99 is now required by postgres, variables can be declared as\npart of various loops.\n\n+ int io_path;\n+ for (io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n\nRather than memset(), you could initialize msg like this.\nPgStat_MsgIOPathOps msg = {0};\n\n+pgstat_send_buffers(void)\n+{\n+ PgStat_MsgIOPathOps msg;\n+\n+ PgBackendStatus *beentry = MyBEEntry;\n+\n+ if (!beentry)\n+ return;\n+\n+ memset(&msg, 0, sizeof(msg));\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Nov 2021 19:15:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 07:15:59PM -0600, Justin Pryzby wrote:\n> There's extraneous blank lines in these functions:\n> \n> +pgstat_sum_io_path_ops\n> +pgstat_report_live_backend_io_path_ops\n> +pgstat_recv_resetsharedcounter\n> +GetIOPathDesc\n> +StrategyRejectBuffer\n\n+ an extra blank line in pgstat_reset_shared_counters.\n\nIn 0005:\n\nmonitoring.sgml says that the columns in pg_stat_buffers are integers, but\nthey're actually bigint.\n\n+ tupstore = tuplestore_begin_heap(true, false, work_mem);\n\nYou're passing a constant randomAccess=true to tuplestore_begin_heap ;)\n\n+Datum all_values[NROWS][COLUMN_LENGTH];\n\nIf you were to allocate this as an array, I think it could actually be 3-D:\nDatum all_values[BACKEND_NUM_TYPES-1][IOPATH_NUM_TYPES][COLUMN_LENGTH];\n\nBut I don't know if this is portable across postgres' supported platforms; I\nhaven't seen any place which allocates a multidimensional array on the stack,\nnor passes one to a function:\n\n+static inline Datum *\n+get_pg_stat_buffers_row(Datum all_values[NROWS][COLUMN_LENGTH], BackendType backend_type, IOPath io_path)\n\nMaybe the allocation half is okay (I think it's ~3kB), but it seems easier to\npalloc the required amount than to research compiler behavior.\n\nThat function is only used as a one-line helper, and doesn't use\nmultidimensional array access anyway:\n\n+ return all_values[(backend_type - 1) * IOPATH_NUM_TYPES + io_path];\n\nI think it'd be better as a macro, like (I think)\n#define ROW(backend_type, io_path) all_values[NROWS*(backend_type-1)+io_path]\n\nMaybe it should take the column type as a 3rd arg.\n\nThe enum with COLUMN_LENGTH should be named.\n\nOr maybe it should be removed, and the enum names moved to comments, like:\n\n+ /* backend_type */\n+ values[val++] = backend_type_desc;\n\n+ /* io_path */\n+ values[val++] = CStringGetTextDatum(GetIOPathDesc(io_path)); \n\n+ /* allocs */\n+ values[val++] += io_ops->allocs - resets->allocs;\n...\n\n*Note the use of += and 
not =.\n\nAlso:\nsrc/include/miscadmin.h:#define BACKEND_NUM_TYPES (B_LOGGER + 1)\n\nI think it's wrong to say NUM_TYPES = B_LOGGER + 1 (which would suggest using\nlessthan-or-equal instead of lessthan as you are).\n\nSince the valid backend types start at 1 , the \"count\" of backend types is\ncurrently B_LOGGER (13) - not 14. I think you should remove the \"+1\" here.\nThen NROWS (if it continued to exist at all) wouldn't need to subtract one.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 26 Nov 2021 15:16:36 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Thanks for the review!\n\nOn Wed, Nov 24, 2021 at 8:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> You wrote beentry++ at the start of two loops, but I think that's wrong; it\n> should be at the end, as in the rest of the file (or as a loop increment).\n> BackendStatusArray[0] is actually used (even though its backend has\n> backendId==1, not 0). \"MyBEEntry = &BackendStatusArray[MyBackendId - 1];\"\n\nI've fixed this in v16 which I will attach to the next email in the thread.\n\n> You could put *_NUM_TYPES as the last value in these enums, like\n> NUM_AUXPROCTYPES, NUM_PMSIGNALS, and NUM_PROCSIGNALS:\n>\n> +#define IOOP_NUM_TYPES (IOOP_WRITE + 1)\n> +#define IOPATH_NUM_TYPES (IOPATH_STRATEGY + 1)\n> +#define BACKEND_NUM_TYPES (B_LOGGER + 1)\n\nI originally had it as you describe, but based on this feedback upthread\nfrom Álvaro Herrera:\n\n> (It's weird to have enum values that are there just to indicate what's\n> the maximum value. I think that sort of thing is better done by having\n> a \"#define LAST_THING\" that takes the last valid value from the enum.\n> That would free you from having to handle the last value in switch\n> blocks, for example. LAST_OCLASS in dependency.h is a precedent on this.)\n\nSo, I changed it to use macros.\n\n> There's extraneous blank lines in these functions:\n>\n> +pgstat_sum_io_path_ops\n\nFixed\n\n> +pgstat_report_live_backend_io_path_ops\n\nI didn't see one here\n\n> +pgstat_recv_resetsharedcounter\n\nI didn't see one here\n\n> +GetIOPathDesc\n\nFixed\n\n> +StrategyRejectBuffer\n\nFixed\n\n> This function is doubly-indented:\n>\n> +pgstat_send_buffers_reset\n\nFixed. 
Thanks for catching this.\nI also ran pgindent and manually picked a few of the formatting fixes\nthat were relevant to code I added.\n\n>\n> As support for C99 is now required by postgres, variables can be declared as\n> part of various loops.\n>\n> + int io_path;\n> + for (io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n\nFixed this and all other occurrences in my code.\n\n> Rather than memset(), you could initialize msg like this.\n> PgStat_MsgIOPathOps msg = {0};\n>\n> +pgstat_send_buffers(void)\n> +{\n> + PgStat_MsgIOPathOps msg;\n> +\n> + PgBackendStatus *beentry = MyBEEntry;\n> +\n> + if (!beentry)\n> + return;\n> +\n> + memset(&msg, 0, sizeof(msg));\n>\n\nThough changing the initialization to universal zero initialization\nseems to be the correct way, I do get this compiler warning when I make\nthe change\n\npgstat.c:3212:29: warning: suggest braces around initialization of\nsubobject [-Wmissing-braces]\n PgStat_MsgIOPathOps msg = {0};\n ^\n {}\nI have seen some comments online that say that this is a spurious\nwarning present with some versions of both gcc and clang when using\n-Wmissing-braces to compile code with universal zero initialization, but\nI'm not sure what I should do.\n\nv16 attached in next message\n\n- Melanie\n\n\n",
"msg_date": "Wed, 1 Dec 2021 16:59:44 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v16 (also rebased) attached\n\nOn Fri, Nov 26, 2021 at 4:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 07:15:59PM -0600, Justin Pryzby wrote:\n> > There's extraneous blank lines in these functions:\n> >\n> > +pgstat_sum_io_path_ops\n> > +pgstat_report_live_backend_io_path_ops\n> > +pgstat_recv_resetsharedcounter\n> > +GetIOPathDesc\n> > +StrategyRejectBuffer\n>\n> + an extra blank line pgstat_reset_shared_counters.\n\nFixed\n\n>\n> In 0005:\n>\n> monitoring.sgml says that the columns in pg_stat_buffers are integers, but\n> they're actually bigint.\n\nFixed\n\n>\n> + tupstore = tuplestore_begin_heap(true, false, work_mem);\n>\n> You're passing a constant randomAccess=true to tuplestore_begin_heap ;)\n\nFixed\n\n>\n> +Datum all_values[NROWS][COLUMN_LENGTH];\n>\n> If you were to allocate this as an array, I think it could actually be 3-D:\n> Datum all_values[BACKEND_NUM_TYPES-1][IOPATH_NUM_TYPES][COLUMN_LENGTH];\n\nI've changed this to a 3D array as you suggested and removed the NROWS\nmacro.\n\n> But I don't know if this is portable across postgres' supported platforms; I\n> haven't seen any place which allocates a multidimensional array on the stack,\n> nor passes one to a function:\n>\n> +static inline Datum *\n> +get_pg_stat_buffers_row(Datum all_values[NROWS][COLUMN_LENGTH], BackendType backend_type, IOPath io_path)\n>\n> Maybe the allocation half is okay (I think it's ~3kB), but it seems easier to\n> palloc the required amount than to research compiler behavior.\n\nI think passing it to the function is okay. The parameter type would be\nadjusted from an array to a pointer.\nI am not sure if the allocation on the stack in the body of\npg_stat_get_buffers is too large. 
(left as is for now)\n\n> That function is only used as a one-line helper, and doesn't use\n> multidimensional array access anyway:\n>\n> + return all_values[(backend_type - 1) * IOPATH_NUM_TYPES + io_path];\n\nwith your suggested changes to a 3D array, it now does use multidimensional\narray access\n\n> I think it'd be better as a macro, like (I think)\n> #define ROW(backend_type, io_path) all_values[NROWS*(backend_type-1)+io_path]\n\nIf I am understanding the idea of the macro, it would change the call\nsite from this:\n\n+Datum *values = get_pg_stat_buffers_row(all_values,\nbeentry->st_backendType, io_path);\n\n+values[COLUMN_ALLOCS] += pg_atomic_read_u64(&io_ops->allocs);\n+values[COLUMN_FSYNCS] += pg_atomic_read_u64(&io_ops->fsyncs);\n\nto this:\n\n+Datum *row = ROW(beentry->st_backendType, io_path);\n\n+row[COLUMN_ALLOCS] += pg_atomic_read_u64(&io_ops->allocs);\n+row[COLUMN_FSYNCS] += pg_atomic_read_u64(&io_ops->fsyncs);\n\nI usually prefer functions to macros, but I am fine with changing it.\n(I did not change it in this version)\nI have changed all the local variables from \"values\" to \"row\" which\nI think is a bit clearer.\n\n> Maybe it should take the column type as a 3 arg.\n\nIf I am understanding this idea, the call site would look like this now:\n+CELL(beentry->st_backendType, io_path, COLUMN_FSYNCS) +=\npg_atomic_read_u64(&io_ops->fsyncs);\n+CELL(beentry->st_backendType, io_path, COLUMN_ALLOCS) +=\npg_atomic_read_u64(&io_ops->allocs);\n\nI don't like this as much. 
Since this code is inside of a loop, it kind\nof makes sense to me that you get a row at the top of the loop and then\nfill in all the cells in the row using that \"row\" variable.\n\n> The enum with COLUMN_LENGTH should be named.\n\nI only use the values in it, so it didn't need a name.\n\n> Or maybe it should be removed, and the enum names moved to comments, like:\n>\n> + /* backend_type */\n> + values[val++] = backend_type_desc;\n>\n> + /* io_path */\n> + values[val++] = CStringGetTextDatum(GetIOPathDesc(io_path));\n>\n> + /* allocs */\n> + values[val++] += io_ops->allocs - resets->allocs;\n> ...\n\nI find it easier to understand with it in code instead of as a comment.\n\n> *Note the use of += and not =.\n\nThanks for seeing this. I have changed this (to use +=).\n\n> Also:\n> src/include/miscadmin.h:#define BACKEND_NUM_TYPES (B_LOGGER + 1)\n>\n> I think it's wrong to say NUM_TYPES = B_LOGGER + 1 (which would suggest using\n> lessthan-or-equal instead of lessthan as you are).\n>\n> Since the valid backend types start at 1 , the \"count\" of backend types is\n> currently B_LOGGER (13) - not 14. I think you should remove the \"+1\" here.\n> Then NROWS (if it continued to exist at all) wouldn't need to subtract one.\n\nI think what I currently have is technically correct because I start at\n1 when I am using it as a loop condition. I do waste a spot in the\narrays I allocate with BACKEND_NUM_TYPES size.\n\nI was hesitant to make the value of BACKEND_NUM_TYPES == B_LOGGER\nbecause it seems kind of weird to have it have the same value as the\nB_LOGGER enum.\n\nI am open to changing it. (I didn't change it in this v16).\n\n- Melanie",
"msg_date": "Wed, 1 Dec 2021 17:00:14 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Dec 01, 2021 at 05:00:14PM -0500, Melanie Plageman wrote:\n> > Also:\n> > src/include/miscadmin.h:#define BACKEND_NUM_TYPES (B_LOGGER + 1)\n> >\n> > I think it's wrong to say NUM_TYPES = B_LOGGER + 1 (which would suggest using\n> > lessthan-or-equal instead of lessthan as you are).\n> >\n> > Since the valid backend types start at 1 , the \"count\" of backend types is\n> > currently B_LOGGER (13) - not 14. I think you should remove the \"+1\" here.\n> > Then NROWS (if it continued to exist at all) wouldn't need to subtract one.\n> \n> I think what I currently have is technically correct because I start at\n> 1 when I am using it as a loop condition. I do waste a spot in the\n> arrays I allocate with BACKEND_NUM_TYPES size.\n> \n> I was hesitant to make the value of BACKEND_NUM_TYPES == B_LOGGER\n> because it seems kind of weird to have it have the same value as the\n> B_LOGGER enum.\n\nI don't mean to say that the code is misbehaving - I mean \"num_x\" means \"the\nnumber of x's\" - how many there are. Since the first, valid backend type is 1,\nand they're numbered consecutively and without duplicates, then \"the number of\nbackend types\" is the same as the value of the last one (B_LOGGER). It's\nconfusing if there's a macro called BACKEND_NUM_TYPES which is greater than the\nnumber of backend types.\n\nMost loops say for (int i=0; i<NUM; ++i)\nIf it's 1-based, they say for (int i=1; i<=NUM; ++i)\nYou have two different loops like:\n\n+ for (int i = 0; i < BACKEND_NUM_TYPES - 1 ; i++)\n+ for (int backend_type = 1; backend_type < BACKEND_NUM_TYPES; backend_type++)\n\nBoth of these iterate over the correct number of backend types, but they both\n*look* wrong, which isn't desirable.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 1 Dec 2021 17:59:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Dec 01, 2021 at 04:59:44PM -0500, Melanie Plageman wrote:\n> Thanks for the review!\n> \n> On Wed, Nov 24, 2021 at 8:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > You wrote beentry++ at the start of two loops, but I think that's wrong; it\n> > should be at the end, as in the rest of the file (or as a loop increment).\n> > BackendStatusArray[0] is actually used (even though its backend has\n> > backendId==1, not 0). \"MyBEEntry = &BackendStatusArray[MyBackendId - 1];\"\n> \n> I've fixed this in v16 which I will attach to the next email in the thread.\n> \n> > You could put *_NUM_TYPES as the last value in these enums, like\n> > NUM_AUXPROCTYPES, NUM_PMSIGNALS, and NUM_PROCSIGNALS:\n> >\n> > +#define IOOP_NUM_TYPES (IOOP_WRITE + 1)\n> > +#define IOPATH_NUM_TYPES (IOPATH_STRATEGY + 1)\n> > +#define BACKEND_NUM_TYPES (B_LOGGER + 1)\n> \n> I originally had it as you describe, but based on this feedback upthread\n> from �lvaro Herrera:\n\nI saw that after I made my suggestion. Sorry for the noise.\nBoth ways already exist in postgres and seem to be acceptable.\n\n> > There's extraneous blank lines in these functions:\n> > +pgstat_recv_resetsharedcounter\n> I didn't see one here\n\n=> The extra blank line is after the RESET_BUFFERS memset.\n\n> + * Reset the global, bgwriter and checkpointer statistics for the\n> + * cluster.\n\nThe first comma in this comment was introduced in 1bc8e7b09, and seems to be\nextraneous, since bgwriter and checkpointer are both global. With the comma,\nit looks like it should be memsetting 3 things.\n\n> + /* Don't count dead backends. They should already be counted */\n\nMaybe this comment should say \".. 
they'll be added below\"\n\n> + row[COLUMN_BACKEND_TYPE] = backend_type_desc;\n> + row[COLUMN_IO_PATH] = CStringGetTextDatum(GetIOPathDesc(io_path));\n> + row[COLUMN_ALLOCS] += io_ops->allocs - resets->allocs;\n> + row[COLUMN_EXTENDS] += io_ops->extends - resets->extends;\n> + row[COLUMN_FSYNCS] += io_ops->fsyncs - resets->fsyncs;\n> + row[COLUMN_WRITES] += io_ops->writes - resets->writes;\n> + row[COLUMN_RESET_TIME] = reset_time;\n\nIt'd be clearer if RESET_TIME were set adjacent to BACKEND_TYPE and IO_PATH.\n\n> > Rather than memset(), you could initialize msg like this.\n> > PgStat_MsgIOPathOps msg = {0};\n>\n> though changing the initialization to universal zero initialization\n> seems to be the correct way, I do get this compiler warning when I make\n> the change\n> \n> pgstat.c:3212:29: warning: suggest braces around initialization of subobject [-Wmissing-braces]\n> \n> I have seem some comments online that say that this is a spurious\n> warning present with some versions of both gcc and clang when using\n> -Wmissing-braces to compile code with universal zero initialization, but\n> I'm not sure what I should do.\n\nI think gcc is suggesting to write something like {{0}}, and I think the online\ncommentary you found is saying that the warning is a false positive.\nSo I think you should ignore my suggestion - it's not worth the bother.\n\nThis message needs to be updated:\n\terrhint(\"Target must be \\\"archiver\\\", \\\"bgwriter\\\", or \\\"wal\\\".\")))\n\nWhen I query the view, I see reset times as: 1999-12-31 18:00:00-06.\nI guess it should be initialized like this one:\n\tglobalStats.bgwriter.stat_reset_timestamp = ts\n\nThe cfbot shows failures now (I thought it was passing with the previous patch,\nbut I suppose I'm wrong.)\n\nIt looks like running recovery during single user mode hits this assertion.\nTRAP: FailedAssertion(\"beentry\", File: \"../../../../src/include/utils/backend_status.h\", Line: 359, PID: 3499)\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 1 Dec 2021 21:31:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Dec 01, 2021 at 04:59:44PM -0500, Melanie Plageman wrote:\n> Thanks for the review!\n> \n> On Wed, Nov 24, 2021 at 8:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > You wrote beentry++ at the start of two loops, but I think that's wrong; it\n> > should be at the end, as in the rest of the file (or as a loop increment).\n> > BackendStatusArray[0] is actually used (even though its backend has\n> > backendId==1, not 0). \"MyBEEntry = &BackendStatusArray[MyBackendId - 1];\"\n> \n> I've fixed this in v16 which I will attach to the next email in the thread.\n\nI just noticed that since beentry++ is now at the end of the loop, it's being\nmissed when you \"continue\":\n\n+ if (beentry->st_procpid == 0)\n+ continue;\n\nAlso, I saw that pgindent messed up and added spaces after pointers in function\ndeclarations, due to new typedefs not in typedefs.list:\n\n-pgstat_send_buffers_reset(PgStat_MsgResetsharedcounter *msg)\n+pgstat_send_buffers_reset(PgStat_MsgResetsharedcounter * msg)\n\n-static inline void pg_atomic_inc_counter(pg_atomic_uint64 *counter)\n+static inline void\n+pg_atomic_inc_counter(pg_atomic_uint64 * counter)\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 1 Dec 2021 22:06:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Thanks again! I really appreciate the thorough review.\n\nI have combined responses to all three of your emails below.\nLet me know if it is more confusing to do it this way.\n\nOn Wed, Dec 1, 2021 at 6:59 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Dec 01, 2021 at 05:00:14PM -0500, Melanie Plageman wrote:\n> > > Also:\n> > > src/include/miscadmin.h:#define BACKEND_NUM_TYPES (B_LOGGER + 1)\n> > >\n> > > I think it's wrong to say NUM_TYPES = B_LOGGER + 1 (which would suggest using\n> > > lessthan-or-equal instead of lessthan as you are).\n> > >\n> > > Since the valid backend types start at 1 , the \"count\" of backend types is\n> > > currently B_LOGGER (13) - not 14. I think you should remove the \"+1\" here.\n> > > Then NROWS (if it continued to exist at all) wouldn't need to subtract one.\n> >\n> > I think what I currently have is technically correct because I start at\n> > 1 when I am using it as a loop condition. I do waste a spot in the\n> > arrays I allocate with BACKEND_NUM_TYPES size.\n> >\n> > I was hesitant to make the value of BACKEND_NUM_TYPES == B_LOGGER\n> > because it seems kind of weird to have it have the same value as the\n> > B_LOGGER enum.\n>\n> I don't mean to say that the code is misbehaving - I mean \"num_x\" means \"the\n> number of x's\" - how many there are. Since the first, valid backend type is 1,\n> and they're numbered consecutively and without duplicates, then \"the number of\n> backend types\" is the same as the value of the last one (B_LOGGER). 
It's\n> confusing if there's a macro called BACKEND_NUM_TYPES which is greater than the\n> number of backend types.\n>\n> Most loops say for (int i=0; i<NUM; ++i)\n> If it's 1-based, they say for (int i=1; i<=NUM; ++i)\n> You have two different loops like:\n>\n> + for (int i = 0; i < BACKEND_NUM_TYPES - 1 ; i++)\n> + for (int backend_type = 1; backend_type < BACKEND_NUM_TYPES; backend_type++)\n>\n> Both of these iterate over the correct number of backend types, but they both\n> *look* wrong, which isn't desirable.\n\nI've changed this and added comments wherever I could to make it clear.\nWhenever the parameter was of type BackendType, I tried to use the\ncorrect (not adjusted by subtracting 1) number and wherever the type was\nint and being used as an index into the array, I used the adjusted value\nand added the idx suffix to make it clear that the number does not\nreflect the actual BackendType:\n\nOn Wed, Dec 1, 2021 at 10:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Dec 01, 2021 at 04:59:44PM -0500, Melanie Plageman wrote:\n> > Thanks for the review!\n> >\n> > On Wed, Nov 24, 2021 at 8:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > There's extraneous blank lines in these functions:\n> > > +pgstat_recv_resetsharedcounter\n> > I didn't see one here\n>\n> => The extra blank line is after the RESET_BUFFERS memset.\n\nFixed.\n\n> > + * Reset the global, bgwriter and checkpointer statistics for the\n> > + * cluster.\n>\n> The first comma in this comment was introduced in 1bc8e7b09, and seems to be\n> extraneous, since bgwriter and checkpointer are both global. With the comma,\n> it looks like it should be memsetting 3 things.\n\nFixed.\n\n> > + /* Don't count dead backends. They should already be counted */\n>\n> Maybe this comment should say \".. 
they'll be added below\"\n\nFixed.\n\n> > + row[COLUMN_BACKEND_TYPE] = backend_type_desc;\n> > + row[COLUMN_IO_PATH] = CStringGetTextDatum(GetIOPathDesc(io_path));\n> > + row[COLUMN_ALLOCS] += io_ops->allocs - resets->allocs;\n> > + row[COLUMN_EXTENDS] += io_ops->extends - resets->extends;\n> > + row[COLUMN_FSYNCS] += io_ops->fsyncs - resets->fsyncs;\n> > + row[COLUMN_WRITES] += io_ops->writes - resets->writes;\n> > + row[COLUMN_RESET_TIME] = reset_time;\n>\n> It'd be clearer if RESET_TIME were set adjacent to BACKEND_TYPE and IO_PATH.\n\nIf you mean just in the order here (not in the column order in the\nview), then I have changed it as you recommended.\n\n> This message needs to be updated:\n> errhint(\"Target must be \\\"archiver\\\", \\\"bgwriter\\\", or \\\"wal\\\".\")))\n\nDone.\n\n> When I query the view, I see reset times as: 1999-12-31 18:00:00-06.\n> I guess it should be initialized like this one:\n> globalStats.bgwriter.stat_reset_timestamp = ts\n\nDone.\n\n> The cfbot shows failures now (I thought it was passing with the previous patch,\n> but I suppose I'm wrong.)\n>\n> It looks like running recovery during single user mode hits this assertion.\n> TRAP: FailedAssertion(\"beentry\", File: \"../../../../src/include/utils/backend_status.h\", Line: 359, PID: 3499)\n>\n\nYes, thank you for catching this.\nI have moved up pgstat_beinit and pgstat_bestart so that single user\nmode process will also have PgBackendStatus. 
I also have to guard\nagainst sending these stats to the collector since there is no room for\nB_INVALID backendtype in the array of IO Op values.\n\nWith this change `make check-world` passes on my machine.\n\nOn Wed, Dec 1, 2021 at 11:06 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Dec 01, 2021 at 04:59:44PM -0500, Melanie Plageman wrote:\n> > Thanks for the review!\n> >\n> > On Wed, Nov 24, 2021 at 8:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > You wrote beentry++ at the start of two loops, but I think that's wrong; it\n> > > should be at the end, as in the rest of the file (or as a loop increment).\n> > > BackendStatusArray[0] is actually used (even though its backend has\n> > > backendId==1, not 0). \"MyBEEntry = &BackendStatusArray[MyBackendId - 1];\"\n> >\n> > I've fixed this in v16 which I will attach to the next email in the thread.\n>\n> I just noticed that since beentry++ is now at the end of the loop, it's being\n> missed when you \"continue\":\n>\n> + if (beentry->st_procpid == 0)\n> + continue;\n\nFixed.\n\n> Also, I saw that pgindent messed up and added spaces after pointers in function\n> declarations, due to new typedefs not in typedefs.list:\n>\n> -pgstat_send_buffers_reset(PgStat_MsgResetsharedcounter *msg)\n> +pgstat_send_buffers_reset(PgStat_MsgResetsharedcounter * msg)\n>\n> -static inline void pg_atomic_inc_counter(pg_atomic_uint64 *counter)\n> +static inline void\n> +pg_atomic_inc_counter(pg_atomic_uint64 * counter)\n\nFixed.\n\n-- Melanie",
"msg_date": "Fri, 3 Dec 2021 15:02:24 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-03 15:02:24 -0500, Melanie Plageman wrote:\n> From e0f7f3dfd60a68fa01f3c023bcdb69305ade3738 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Mon, 11 Oct 2021 16:15:06 -0400\n> Subject: [PATCH v17 1/7] Read-only atomic backend write function\n> \n> For counters in shared memory which can be read by any backend but only\n> written to by one backend, an atomic is still needed to protect against\n> torn values, however, pg_atomic_fetch_add_u64() is overkill for\n> incrementing the counter. pg_atomic_inc_counter() is a helper function\n> which can be used to increment these values safely but without\n> unnecessary overhead.\n>\n> Author: Thomas Munro\n> ---\n> src/include/port/atomics.h | 11 +++++++++++\n> 1 file changed, 11 insertions(+)\n> \n> diff --git a/src/include/port/atomics.h b/src/include/port/atomics.h\n> index 856338f161..39ffff24dd 100644\n> --- a/src/include/port/atomics.h\n> +++ b/src/include/port/atomics.h\n> @@ -519,6 +519,17 @@ pg_atomic_sub_fetch_u64(volatile pg_atomic_uint64 *ptr, int64 sub_)\n> \treturn pg_atomic_sub_fetch_u64_impl(ptr, sub_);\n> }\n> \n> +/*\n> + * On modern systems this is really just *counter++. On some older systems\n> + * there might be more to it, due to inability to read and write 64 bit values\n> + * atomically.\n> + */\n> +static inline void\n> +pg_atomic_inc_counter(pg_atomic_uint64 *counter)\n> +{\n> +\tpg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n> +}\n\nI wonder if it's worth putting something in the name indicating that this is\nnot actual atomic RMW operation. 
Perhaps adding _unlocked?\n\n\n\n> From b0e193cfa08f0b8cf1be929f26fe38f06a39aeae Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Wed, 24 Nov 2021 10:32:56 -0500\n> Subject: [PATCH v17 2/7] Add IO operation counters to PgBackendStatus\n> \n> Add an array of counters in PgBackendStatus which count the buffers\n> allocated, extended, fsynced, and written by a given backend. Each \"IO\n> Op\" (alloc, fsync, extend, write) is counted per \"IO Path\" (direct,\n> local, shared, or strategy). \"local\" and \"shared\" IO Path counters count\n> operations on local and shared buffers. The \"strategy\" IO Path counts\n> buffers alloc'd/written/read/fsync'd as part of a BufferAccessStrategy.\n> The \"direct\" IO Path counts blocks of IO which are read, written, or\n> fsync'd using smgrwrite/extend/immedsync directly (as opposed to through\n> [Local]BufferAlloc()).\n> \n> With this commit, all backends increment a counter in their\n> PgBackendStatus when performing an IO operation. 
This is in preparation\n> for future commits which will persist these stats upon backend exit and\n> use the counters to provide observability of database IO operations.\n> \n> Note that this commit does not add code to increment the \"direct\" path.\n> A separate proposed patch [1] which would add wrappers for smgrwrite(),\n> smgrextend(), and smgrimmedsync() would provide a good location to call\n> pgstat_inc_ioop() for unbuffered IO and avoid regressions for future\n> users of these functions.\n>\n> [1] https://www.postgresql.org/message-id/CAAKRu_aw72w70X1P%3Dba20K8iGUvSkyz7Yk03wPPh3f9WgmcJ3g%40mail.gmail.com\n\nOn longer thread it's nice for committers to already have Reviewed-By: in the\ncommit message.\n\n> diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\n> index 7229598822..413cc605f8 100644\n> --- a/src/backend/utils/activity/backend_status.c\n> +++ b/src/backend/utils/activity/backend_status.c\n> @@ -399,6 +399,15 @@ pgstat_bestart(void)\n> \tlbeentry.st_progress_command = PROGRESS_COMMAND_INVALID;\n> \tlbeentry.st_progress_command_target = InvalidOid;\n> \tlbeentry.st_query_id = UINT64CONST(0);\n> +\tfor (int io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n> +\t{\n> +\t\tIOOps\t *io_ops = &lbeentry.io_path_stats[io_path];\n> +\n> +\t\tpg_atomic_init_u64(&io_ops->allocs, 0);\n> +\t\tpg_atomic_init_u64(&io_ops->extends, 0);\n> +\t\tpg_atomic_init_u64(&io_ops->fsyncs, 0);\n> +\t\tpg_atomic_init_u64(&io_ops->writes, 0);\n> +\t}\n> \n> \t/*\n> \t * we don't zero st_progress_param here to save cycles; nobody should\n\nnit: I think we nearly always have a blank line before loops\n\n\n> diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> index 646126edee..93f1b4bcfc 100644\n> --- a/src/backend/utils/init/postinit.c\n> +++ b/src/backend/utils/init/postinit.c\n> @@ -623,6 +623,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> 
\t\tRegisterTimeout(CLIENT_CONNECTION_CHECK_TIMEOUT, ClientCheckTimeoutHandler);\n> \t}\n> \n> +\tpgstat_beinit();\n> \t/*\n> \t * Initialize local process's access to XLOG.\n> \t */\n\nnit: same with multi-line comments.\n\n\n> @@ -649,6 +650,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> \t\t */\n> \t\tCreateAuxProcessResourceOwner();\n> \n> +\t\tpgstat_bestart();\n> \t\tStartupXLOG();\n> \t\t/* Release (and warn about) any buffer pins leaked in StartupXLOG */\n> \t\tReleaseAuxProcessResources(true);\n> @@ -676,7 +678,6 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> \tEnablePortalManager();\n> \n> \t/* Initialize status reporting */\n> -\tpgstat_beinit();\n\nI'd like to see changes like moving this kind of thing around broken around\nand committed separately. It's much easier to pinpoint breakage if the CF\nbreaks after moving just pgstat_beinit() around, rather than when committing\nthis considerably larger patch. And reordering subsystem initialization has\nthe habit of causing problems...\n\n\n> +/* ----------\n> + * IO Stats reporting utility types\n> + * ----------\n> + */\n> +\n> +typedef enum IOOp\n> +{\n> +\tIOOP_ALLOC,\n> +\tIOOP_EXTEND,\n> +\tIOOP_FSYNC,\n> +\tIOOP_WRITE,\n> +} IOOp;\n> [...]\n> +/*\n> + * Structure for counting all types of IOOps for a live backend.\n> + */\n> +typedef struct IOOps\n> +{\n> +\tpg_atomic_uint64 allocs;\n> +\tpg_atomic_uint64 extends;\n> +\tpg_atomic_uint64 fsyncs;\n> +\tpg_atomic_uint64 writes;\n> +} IOOps;\n\nTo me IOop and IOOps sound to much alike - even though they're really kind of\nseparate things. 
s/IOOps/IOOpCounters/ maybe?\n\n\n> @@ -3152,6 +3156,14 @@ pgstat_shutdown_hook(int code, Datum arg)\n> {\n> \tAssert(!pgstat_is_shutdown);\n> \n> +\t/*\n> +\t * Only need to send stats on IO Ops for IO Paths when a process exits.\n> +\t * Users requiring IO Ops for both live and exited backends can read from\n> +\t * live backends' PgBackendStatus and sum this with totals from exited\n> +\t * backends persisted by the stats collector.\n> +\t */\n> +\tpgstat_send_buffers();\n\nPerhaps something like this comment belongs somewhere at the top of the file,\nor in the header, or ...? It's a fairly central design piece, and it's not\nobvious one would need to look in the shutdown hook for it?\n\n\n> +/*\n> + * Before exiting, a backend sends its IO op statistics to the collector so\n> + * that they may be persisted.\n> + */\n> +void\n> +pgstat_send_buffers(void)\n> +{\n> +\tPgStat_MsgIOPathOps msg;\n> +\n> +\tPgBackendStatus *beentry = MyBEEntry;\n> +\n> +\t/*\n> +\t * Though some backends with type B_INVALID (such as the single-user mode\n> +\t * process) do initialize and increment IO operations stats, there is no\n> +\t * spot in the array of IO operations for backends of type B_INVALID. As\n> +\t * such, do not send these to the stats collector.\n> +\t */\n> +\tif (!beentry || beentry->st_backendType == B_INVALID)\n> +\t\treturn;\n\nWhy does single user mode use B_INVALID? That doesn't seem quite right.\n\n\n> +\tmemset(&msg, 0, sizeof(msg));\n> +\tmsg.backend_type = beentry->st_backendType;\n> +\n> +\tpgstat_sum_io_path_ops(msg.iop.io_path_ops,\n> +\t\t\t\t\t\t (IOOps *) &beentry->io_path_stats);\n> +\n> +\tpgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_IO_PATH_OPS);\n> +\tpgstat_send(&msg, sizeof(msg));\n> +}\n\nIt seems worth having a path skipping sending the message if there was no IO?\n\n\n\n> +/*\n> + * Helper function to sum all live IO Op stats for all IO Paths (e.g. shared,\n> + * local) to those in the equivalent stats structure for exited backends. 
Note\n> + * that this adds and doesn't set, so the destination stats structure should be\n> + * zeroed out by the caller initially. This would commonly be used to transfer\n> + * all IO Op stats for all IO Paths for a particular backend type to the\n> + * pgstats structure.\n> + */\n> +void\n> +pgstat_sum_io_path_ops(PgStatIOOps *dest, IOOps *src)\n> +{\n> +\tfor (int io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n> +\t{\n\nSacriligeous, but I find io_path a harder to understand variable name for the\ncounter than i (or io_path_off or ...) ;)\n\n\n> +static void\n> +pgstat_recv_io_path_ops(PgStat_MsgIOPathOps *msg, int len)\n> +{\n> +\tPgStatIOOps *src_io_path_ops;\n> +\tPgStatIOOps *dest_io_path_ops;\n> +\n> +\t/*\n> +\t * Subtract 1 from message's BackendType to get a valid index into the\n> +\t * array of IO Ops which does not include an entry for B_INVALID\n> +\t * BackendType.\n> +\t */\n> +\tAssert(msg->backend_type > B_INVALID);\n\nProbably worth also asserting the upper boundary?\n\n\n\n> From f972ea87270feaed464a74fb6541ac04b4fc7d98 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Wed, 24 Nov 2021 11:39:48 -0500\n> Subject: [PATCH v17 4/7] Add \"buffers\" to pgstat_reset_shared_counters\n> \n> Backends count IO operations for various IO paths in their PgBackendStatus.\n> Upon exit, they send these counts to the stats collector. 
Prior to this commit,\n> these IO Ops stats would have been reset when the target was \"bgwriter\".\n> \n> With this commit, target \"bgwriter\" no longer will cause the IO operations\n> stats to be reset, and the IO operations stats can be reset with new target,\n> \"buffers\".\n> ---\n> doc/src/sgml/monitoring.sgml | 2 +-\n> src/backend/postmaster/pgstat.c | 83 +++++++++++++++++++--\n> src/backend/utils/activity/backend_status.c | 29 +++++++\n> src/include/pgstat.h | 8 +-\n> src/include/utils/backend_status.h | 2 +\n> 5 files changed, 117 insertions(+), 7 deletions(-)\n> \n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> index 62f2a3332b..bda3eef309 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -3604,7 +3604,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> <structfield>stats_reset</structfield> <type>timestamp with time zone</type>\n> </para>\n> <para>\n> - Time at which these statistics were last reset\n> + Time at which these statistics were last reset.\n> </para></entry>\n> </row>\n> </tbody>\n\nHm?\n\nShouldn't this new reset target be documented?\n\n\n> +/*\n> + * Helper function to collect and send live backends' current IO operations\n> + * stats counters when a stats reset is initiated so that they may be deducted\n> + * from future totals.\n> + */\n> +static void\n> +pgstat_send_buffers_reset(PgStat_MsgResetsharedcounter *msg)\n> +{\n> +\tPgStatIOPathOps ops[BACKEND_NUM_TYPES];\n> +\n> +\tmemset(ops, 0, sizeof(ops));\n> +\tpgstat_report_live_backend_io_path_ops(ops);\n> +\n> +\t/*\n> +\t * Iterate through the array of IO Ops for all IO Paths for each\n> +\t * BackendType. 
Because the array does not include a spot for BackendType\n> +\t * B_INVALID, add 1 to the index when setting backend_type so that there is\n> +\t * no confusion as to the BackendType with which this reset message\n> +\t * corresponds.\n> +\t */\n> +\tfor (int backend_type_idx = 0; backend_type_idx < BACKEND_NUM_TYPES; backend_type_idx++)\n> +\t{\n> +\t\tmsg->m_backend_resets.backend_type = backend_type_idx + 1;\n> +\t\tmemcpy(&msg->m_backend_resets.iop, &ops[backend_type_idx],\n> +\t\t\t\tsizeof(msg->m_backend_resets.iop));\n> +\t\tpgstat_send(msg, sizeof(PgStat_MsgResetsharedcounter));\n> +\t}\n> +}\n\nProbably worth explaining why multiple messages are sent?\n\n\n> @@ -5583,10 +5621,45 @@ pgstat_recv_resetsharedcounter(PgStat_MsgResetsharedcounter *msg, int len)\n> {\n> \tif (msg->m_resettarget == RESET_BGWRITER)\n> \t{\n> -\t\t/* Reset the global, bgwriter and checkpointer statistics for the cluster. */\n> -\t\tmemset(&globalStats, 0, sizeof(globalStats));\n> +\t\t/*\n> +\t\t * Reset the global bgwriter and checkpointer statistics for the\n> +\t\t * cluster.\n> +\t\t */\n> +\t\tmemset(&globalStats.checkpointer, 0, sizeof(globalStats.checkpointer));\n> +\t\tmemset(&globalStats.bgwriter, 0, sizeof(globalStats.bgwriter));\n> \t\tglobalStats.bgwriter.stat_reset_timestamp = GetCurrentTimestamp();\n> \t}\n\nOh, is this a live bug?\n\n\n> +\t\t/*\n> +\t\t * Subtract 1 from the BackendType to arrive at a valid index in the\n> +\t\t * array, as it does not contain a spot for B_INVALID BackendType.\n> +\t\t */\n\nInstead of repeating a comment about +- 1 in a bunch of places, would it look\nbetter to have two helper inline functions for this purpose?\n\n\n\n> +/*\n> +* When adding a new column to the pg_stat_buffers view, add a new enum\n> +* value here above COLUMN_LENGTH.\n> +*/\n> +enum\n> +{\n> +\tCOLUMN_BACKEND_TYPE,\n> +\tCOLUMN_IO_PATH,\n> +\tCOLUMN_ALLOCS,\n> +\tCOLUMN_EXTENDS,\n> +\tCOLUMN_FSYNCS,\n> +\tCOLUMN_WRITES,\n> +\tCOLUMN_RESET_TIME,\n> 
+\tCOLUMN_LENGTH,\n> +};\n\nCOLUMN_LENGTH seems like a fairly generic name...\n\n\n\n> From 9f22da9041e1e1fbc0ef003f5f78f4e72274d438 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Wed, 24 Nov 2021 12:20:10 -0500\n> Subject: [PATCH v17 6/7] Remove superfluous bgwriter stats\n> \n> Remove stats from pg_stat_bgwriter which are now more clearly expressed\n> in pg_stat_buffers.\n> \n> TODO:\n> - make pg_stat_checkpointer view and move relevant stats into it\n> - add additional stats to pg_stat_bgwriter\n\nWhen do you think it makes sense to tackle these wrt committing some of the\npatches?\n\n\n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index 6926fc5742..67447f997a 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -2164,7 +2164,6 @@ BufferSync(int flags)\n> \t\t\tif (SyncOneBuffer(buf_id, false, &wb_context) & BUF_WRITTEN)\n> \t\t\t{\n> \t\t\t\tTRACE_POSTGRESQL_BUFFER_SYNC_WRITTEN(buf_id);\n> -\t\t\t\tPendingCheckpointerStats.m_buf_written_checkpoints++;\n> \t\t\t\tnum_written++;\n> \t\t\t}\n> \t\t}\n> @@ -2273,9 +2272,6 @@ BgBufferSync(WritebackContext *wb_context)\n> \t */\n> \tstrategy_buf_id = StrategySyncStart(&strategy_passes, &recent_alloc);\n> \n> -\t/* Report buffer alloc counts to pgstat */\n> -\tPendingBgWriterStats.m_buf_alloc += recent_alloc;\n> -\n> \t/*\n> \t * If we're not running the LRU scan, just stop after doing the stats\n> \t * stuff. We mark the saved state invalid so that we can recover sanely\n> @@ -2472,8 +2468,6 @@ BgBufferSync(WritebackContext *wb_context)\n> \t\t\treusable_buffers++;\n> \t}\n> \n> -\tPendingBgWriterStats.m_buf_written_clean += num_written;\n> -\n\nIsn't num_written unused now, unless tracepoints are enabled? I'd expect some\ncompilers to warn... Perhaps we should just remove information from the\ntracepoint?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Dec 2021 11:17:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v18 attached.\n\nOn Thu, Dec 9, 2021 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-12-03 15:02:24 -0500, Melanie Plageman wrote:\n> > From e0f7f3dfd60a68fa01f3c023bcdb69305ade3738 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Mon, 11 Oct 2021 16:15:06 -0400\n> > Subject: [PATCH v17 1/7] Read-only atomic backend write function\n> >\n> > For counters in shared memory which can be read by any backend but only\n> > written to by one backend, an atomic is still needed to protect against\n> > torn values, however, pg_atomic_fetch_add_u64() is overkill for\n> > incrementing the counter. pg_atomic_inc_counter() is a helper function\n> > which can be used to increment these values safely but without\n> > unnecessary overhead.\n> >\n> > Author: Thomas Munro\n> > ---\n> > src/include/port/atomics.h | 11 +++++++++++\n> > 1 file changed, 11 insertions(+)\n> >\n> > diff --git a/src/include/port/atomics.h b/src/include/port/atomics.h\n> > index 856338f161..39ffff24dd 100644\n> > --- a/src/include/port/atomics.h\n> > +++ b/src/include/port/atomics.h\n> > @@ -519,6 +519,17 @@ pg_atomic_sub_fetch_u64(volatile pg_atomic_uint64 *ptr, int64 sub_)\n> > return pg_atomic_sub_fetch_u64_impl(ptr, sub_);\n> > }\n> >\n> > +/*\n> > + * On modern systems this is really just *counter++. On some older systems\n> > + * there might be more to it, due to inability to read and write 64 bit values\n> > + * atomically.\n> > + */\n> > +static inline void\n> > +pg_atomic_inc_counter(pg_atomic_uint64 *counter)\n> > +{\n> > + pg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n> > +}\n>\n> I wonder if it's worth putting something in the name indicating that this is\n> not actual atomic RMW operation. 
Perhaps adding _unlocked?\n>\n\nDone.\n\n>\n> > From b0e193cfa08f0b8cf1be929f26fe38f06a39aeae Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Wed, 24 Nov 2021 10:32:56 -0500\n> > Subject: [PATCH v17 2/7] Add IO operation counters to PgBackendStatus\n> >\n> > Add an array of counters in PgBackendStatus which count the buffers\n> > allocated, extended, fsynced, and written by a given backend. Each \"IO\n> > Op\" (alloc, fsync, extend, write) is counted per \"IO Path\" (direct,\n> > local, shared, or strategy). \"local\" and \"shared\" IO Path counters count\n> > operations on local and shared buffers. The \"strategy\" IO Path counts\n> > buffers alloc'd/written/read/fsync'd as part of a BufferAccessStrategy.\n> > The \"direct\" IO Path counts blocks of IO which are read, written, or\n> > fsync'd using smgrwrite/extend/immedsync directly (as opposed to through\n> > [Local]BufferAlloc()).\n> >\n> > With this commit, all backends increment a counter in their\n> > PgBackendStatus when performing an IO operation. 
This is in preparation\n> > for future commits which will persist these stats upon backend exit and\n> > use the counters to provide observability of database IO operations.\n> >\n> > Note that this commit does not add code to increment the \"direct\" path.\n> > A separate proposed patch [1] which would add wrappers for smgrwrite(),\n> > smgrextend(), and smgrimmedsync() would provide a good location to call\n> > pgstat_inc_ioop() for unbuffered IO and avoid regressions for future\n> > users of these functions.\n> >\n> > [1] https://www.postgresql.org/message-id/CAAKRu_aw72w70X1P%3Dba20K8iGUvSkyz7Yk03wPPh3f9WgmcJ3g%40mail.gmail.com\n>\n> On longer thread it's nice for committers to already have Reviewed-By: in the\n> commit message.\n\nDone.\n\n> > diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\n> > index 7229598822..413cc605f8 100644\n> > --- a/src/backend/utils/activity/backend_status.c\n> > +++ b/src/backend/utils/activity/backend_status.c\n> > @@ -399,6 +399,15 @@ pgstat_bestart(void)\n> > lbeentry.st_progress_command = PROGRESS_COMMAND_INVALID;\n> > lbeentry.st_progress_command_target = InvalidOid;\n> > lbeentry.st_query_id = UINT64CONST(0);\n> > + for (int io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n> > + {\n> > + IOOps *io_ops = &lbeentry.io_path_stats[io_path];\n> > +\n> > + pg_atomic_init_u64(&io_ops->allocs, 0);\n> > + pg_atomic_init_u64(&io_ops->extends, 0);\n> > + pg_atomic_init_u64(&io_ops->fsyncs, 0);\n> > + pg_atomic_init_u64(&io_ops->writes, 0);\n> > + }\n> >\n> > /*\n> > * we don't zero st_progress_param here to save cycles; nobody should\n>\n> nit: I think we nearly always have a blank line before loops\n\nDone.\n\n> > diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> > index 646126edee..93f1b4bcfc 100644\n> > --- a/src/backend/utils/init/postinit.c\n> > +++ b/src/backend/utils/init/postinit.c\n> > @@ -623,6 +623,7 @@ InitPostgres(const char 
*in_dbname, Oid dboid, const char *username,\n> > RegisterTimeout(CLIENT_CONNECTION_CHECK_TIMEOUT, ClientCheckTimeoutHandler);\n> > }\n> >\n> > + pgstat_beinit();\n> > /*\n> > * Initialize local process's access to XLOG.\n> > */\n>\n> nit: same with multi-line comments.\n\nDone.\n\n> > @@ -649,6 +650,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> > */\n> > CreateAuxProcessResourceOwner();\n> >\n> > + pgstat_bestart();\n> > StartupXLOG();\n> > /* Release (and warn about) any buffer pins leaked in StartupXLOG */\n> > ReleaseAuxProcessResources(true);\n> > @@ -676,7 +678,6 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n> > EnablePortalManager();\n> >\n> > /* Initialize status reporting */\n> > - pgstat_beinit();\n>\n> I'd like to see changes like moving this kind of thing around broken around\n> and committed separately. It's much easier to pinpoint breakage if the CF\n> breaks after moving just pgstat_beinit() around, rather than when committing\n> this considerably larger patch. And reordering subsystem initialization has\n> the habit of causing problems...\n\nDone\n\n> > +/* ----------\n> > + * IO Stats reporting utility types\n> > + * ----------\n> > + */\n> > +\n> > +typedef enum IOOp\n> > +{\n> > + IOOP_ALLOC,\n> > + IOOP_EXTEND,\n> > + IOOP_FSYNC,\n> > + IOOP_WRITE,\n> > +} IOOp;\n> > [...]\n> > +/*\n> > + * Structure for counting all types of IOOps for a live backend.\n> > + */\n> > +typedef struct IOOps\n> > +{\n> > + pg_atomic_uint64 allocs;\n> > + pg_atomic_uint64 extends;\n> > + pg_atomic_uint64 fsyncs;\n> > + pg_atomic_uint64 writes;\n> > +} IOOps;\n>\n> To me IOop and IOOps sound to much alike - even though they're really kind of\n> separate things. 
s/IOOps/IOOpCounters/ maybe?\n\nDone.\n\n> > @@ -3152,6 +3156,14 @@ pgstat_shutdown_hook(int code, Datum arg)\n> > {\n> > Assert(!pgstat_is_shutdown);\n> >\n> > + /*\n> > + * Only need to send stats on IO Ops for IO Paths when a process exits.\n> > + * Users requiring IO Ops for both live and exited backends can read from\n> > + * live backends' PgBackendStatus and sum this with totals from exited\n> > + * backends persisted by the stats collector.\n> > + */\n> > + pgstat_send_buffers();\n>\n> Perhaps something like this comment belongs somewhere at the top of the file,\n> or in the header, or ...? It's a fairly central design piece, and it's not\n> obvious one would need to look in the shutdown hook for it?\n>\n\nnow in pgstat.h above the declaration of pgstat_send_buffers()\n\n> > +/*\n> > + * Before exiting, a backend sends its IO op statistics to the collector so\n> > + * that they may be persisted.\n> > + */\n> > +void\n> > +pgstat_send_buffers(void)\n> > +{\n> > + PgStat_MsgIOPathOps msg;\n> > +\n> > + PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + /*\n> > + * Though some backends with type B_INVALID (such as the single-user mode\n> > + * process) do initialize and increment IO operations stats, there is no\n> > + * spot in the array of IO operations for backends of type B_INVALID. As\n> > + * such, do not send these to the stats collector.\n> > + */\n> > + if (!beentry || beentry->st_backendType == B_INVALID)\n> > + return;\n>\n> Why does single user mode use B_INVALID? That doesn't seem quite right.\n\nI think PgBackendStatus->st_backendType is set from MyBackendType which\nisn't set for the single user mode process. 
What BackendType would you\nexpect to see?\n\n> > + memset(&msg, 0, sizeof(msg));\n> > + msg.backend_type = beentry->st_backendType;\n> > +\n> > + pgstat_sum_io_path_ops(msg.iop.io_path_ops,\n> > + (IOOps *) &beentry->io_path_stats);\n> > +\n> > + pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_IO_PATH_OPS);\n> > + pgstat_send(&msg, sizeof(msg));\n> > +}\n>\n> It seems worth having a path skipping sending the message if there was no IO?\n\nMakes sense. I've updated pgstat_send_buffers() to do a loop after calling\npgstat_sum_io_path_ops() and check if it should skip sending.\n\nI also thought about having pgstat_sum_io_path_ops() return a value to\nindicate if everything was 0 -- which could be useful to future callers\npotentially?\n\nI didn't do this because I am not sure what the return value would be.\nIt could be a bool and be true if any IO was done and false if none was\ndone -- but that doesn't really make sense given the function's name it\nwould be called like\nif (!pgstat_sum_io_path_ops())\n return\nwhich I'm not sure is very clear\n\n> > +/*\n> > + * Helper function to sum all live IO Op stats for all IO Paths (e.g. shared,\n> > + * local) to those in the equivalent stats structure for exited backends. Note\n> > + * that this adds and doesn't set, so the destination stats structure should be\n> > + * zeroed out by the caller initially. This would commonly be used to transfer\n> > + * all IO Op stats for all IO Paths for a particular backend type to the\n> > + * pgstats structure.\n> > + */\n> > +void\n> > +pgstat_sum_io_path_ops(PgStatIOOps *dest, IOOps *src)\n> > +{\n> > + for (int io_path = 0; io_path < IOPATH_NUM_TYPES; io_path++)\n> > + {\n>\n> Sacriligeous, but I find io_path a harder to understand variable name for the\n> counter than i (or io_path_off or ...) 
;)\n\nI've updated almost all my non-standard loop index variable names.\n\n> > +static void\n> > +pgstat_recv_io_path_ops(PgStat_MsgIOPathOps *msg, int len)\n> > +{\n> > + PgStatIOOps *src_io_path_ops;\n> > + PgStatIOOps *dest_io_path_ops;\n> > +\n> > + /*\n> > + * Subtract 1 from message's BackendType to get a valid index into the\n> > + * array of IO Ops which does not include an entry for B_INVALID\n> > + * BackendType.\n> > + */\n> > + Assert(msg->backend_type > B_INVALID);\n>\n> Probably worth also asserting the upper boundary?\n\nDone.\n\n> > From f972ea87270feaed464a74fb6541ac04b4fc7d98 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Wed, 24 Nov 2021 11:39:48 -0500\n> > Subject: [PATCH v17 4/7] Add \"buffers\" to pgstat_reset_shared_counters\n> >\n> > Backends count IO operations for various IO paths in their PgBackendStatus.\n> > Upon exit, they send these counts to the stats collector. Prior to this commit,\n> > these IO Ops stats would have been reset when the target was \"bgwriter\".\n> >\n> > With this commit, target \"bgwriter\" no longer will cause the IO operations\n> > stats to be reset, and the IO operations stats can be reset with new target,\n> > \"buffers\".\n> > ---\n> > doc/src/sgml/monitoring.sgml | 2 +-\n> > src/backend/postmaster/pgstat.c | 83 +++++++++++++++++++--\n> > src/backend/utils/activity/backend_status.c | 29 +++++++\n> > src/include/pgstat.h | 8 +-\n> > src/include/utils/backend_status.h | 2 +\n> > 5 files changed, 117 insertions(+), 7 deletions(-)\n> >\n> > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > index 62f2a3332b..bda3eef309 100644\n> > --- a/doc/src/sgml/monitoring.sgml\n> > +++ b/doc/src/sgml/monitoring.sgml\n> > @@ -3604,7 +3604,7 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> > <structfield>stats_reset</structfield> <type>timestamp with time zone</type>\n> > </para>\n> > <para>\n> > - Time at which these 
statistics were last reset\n> > + Time at which these statistics were last reset.\n> > </para></entry>\n> > </row>\n> > </tbody>\n>\n> Hm?\n>\n> Shouldn't this new reset target be documented?\n\nIt is in the commit adding the view. I didn't include it in this commit\nbecause the pg_stat_buffers view doesn't exist yet, as of this commit,\nand I thought it would be odd to mention it in the docs (in this\ncommit).\nAs an aside, I shouldn't have left this correction in this commit. I\nmoved it now to the other one.\n\n> > +/*\n> > + * Helper function to collect and send live backends' current IO operations\n> > + * stats counters when a stats reset is initiated so that they may be deducted\n> > + * from future totals.\n> > + */\n> > +static void\n> > +pgstat_send_buffers_reset(PgStat_MsgResetsharedcounter *msg)\n> > +{\n> > + PgStatIOPathOps ops[BACKEND_NUM_TYPES];\n> > +\n> > + memset(ops, 0, sizeof(ops));\n> > + pgstat_report_live_backend_io_path_ops(ops);\n> > +\n> > + /*\n> > + * Iterate through the array of IO Ops for all IO Paths for each\n> > + * BackendType. 
Because the array does not include a spot for BackendType\n> > + * B_INVALID, add 1 to the index when setting backend_type so that there is\n> > + * no confusion as to the BackendType with which this reset message\n> > + * corresponds.\n> > + */\n> > + for (int backend_type_idx = 0; backend_type_idx < BACKEND_NUM_TYPES; backend_type_idx++)\n> > + {\n> > + msg->m_backend_resets.backend_type = backend_type_idx + 1;\n> > + memcpy(&msg->m_backend_resets.iop, &ops[backend_type_idx],\n> > + sizeof(msg->m_backend_resets.iop));\n> > + pgstat_send(msg, sizeof(PgStat_MsgResetsharedcounter));\n> > + }\n> > +}\n>\n> Probably worth explaining why multiple messages are sent?\n\nDone.\n\n> > @@ -5583,10 +5621,45 @@ pgstat_recv_resetsharedcounter(PgStat_MsgResetsharedcounter *msg, int len)\n> > {\n> > if (msg->m_resettarget == RESET_BGWRITER)\n> > {\n> > - /* Reset the global, bgwriter and checkpointer statistics for the cluster. */\n> > - memset(&globalStats, 0, sizeof(globalStats));\n> > + /*\n> > + * Reset the global bgwriter and checkpointer statistics for the\n> > + * cluster.\n> > + */\n> > + memset(&globalStats.checkpointer, 0, sizeof(globalStats.checkpointer));\n> > + memset(&globalStats.bgwriter, 0, sizeof(globalStats.bgwriter));\n> > globalStats.bgwriter.stat_reset_timestamp = GetCurrentTimestamp();\n> > }\n>\n> Oh, is this a live bug?\n\nI don't think it is a bug. 
globalStats only contained bgwriter and\ncheckpointer stats and those were all only displayed in\npg_stat_bgwriter(), so memsetting the whole thing seems fine.\n\n> > + /*\n> > + * Subtract 1 from the BackendType to arrive at a valid index in the\n> > + * array, as it does not contain a spot for B_INVALID BackendType.\n> > + */\n>\n> Instead of repeating a comment about +- 1 in a bunch of places, would it look\n> better to have two helper inline functions for this purpose?\n\nDone.\n\n> > +/*\n> > +* When adding a new column to the pg_stat_buffers view, add a new enum\n> > +* value here above COLUMN_LENGTH.\n> > +*/\n> > +enum\n> > +{\n> > + COLUMN_BACKEND_TYPE,\n> > + COLUMN_IO_PATH,\n> > + COLUMN_ALLOCS,\n> > + COLUMN_EXTENDS,\n> > + COLUMN_FSYNCS,\n> > + COLUMN_WRITES,\n> > + COLUMN_RESET_TIME,\n> > + COLUMN_LENGTH,\n> > +};\n>\n> COLUMN_LENGTH seems like a fairly generic name...\n\nChanged.\n\n> > From 9f22da9041e1e1fbc0ef003f5f78f4e72274d438 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Wed, 24 Nov 2021 12:20:10 -0500\n> > Subject: [PATCH v17 6/7] Remove superfluous bgwriter stats\n> >\n> > Remove stats from pg_stat_bgwriter which are now more clearly expressed\n> > in pg_stat_buffers.\n> >\n> > TODO:\n> > - make pg_stat_checkpointer view and move relevant stats into it\n> > - add additional stats to pg_stat_bgwriter\n>\n> When do you think it makes sense to tackle these wrt committing some of the\n> patches?\n\nWell, the new stats are a superset of the old stats (no stats have been\nremoved that are not represented in the new or old views). 
So, I don't\nsee that as a blocker for committing these patches.\n\nSince it is weird that pg_stat_bgwriter had mostly checkpointer stats,\nI've edited this commit to rename that view to pg_stat_checkpointer.\n\nI have not made a separate view just for maxwritten_clean (presumably\ncalled pg_stat_bgwriter), but I would not be opposed to doing this if\nyou thought having a view with a single column isn't a problem (in the\nevent that we don't get around to adding more bgwriter stats right\naway).\n\nI noticed after changing the docs on the \"bgwriter\" target for\npg_stat_reset_shared to say \"checkpointer\", that it still said \"bgwriter\" in\n src/backend/po/ko.po\n src/backend/po/it.po\n ...\nI presume these are automatically updated with some incantation, but I wasn't\nsure what it was nor could I find documentation on this.\n\n> > diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> > index 6926fc5742..67447f997a 100644\n> > --- a/src/backend/storage/buffer/bufmgr.c\n> > +++ b/src/backend/storage/buffer/bufmgr.c\n> > @@ -2164,7 +2164,6 @@ BufferSync(int flags)\n> > if (SyncOneBuffer(buf_id, false, &wb_context) & BUF_WRITTEN)\n> > {\n> > TRACE_POSTGRESQL_BUFFER_SYNC_WRITTEN(buf_id);\n> > - PendingCheckpointerStats.m_buf_written_checkpoints++;\n> > num_written++;\n> > }\n> > }\n> > @@ -2273,9 +2272,6 @@ BgBufferSync(WritebackContext *wb_context)\n> > */\n> > strategy_buf_id = StrategySyncStart(&strategy_passes, &recent_alloc);\n> >\n> > - /* Report buffer alloc counts to pgstat */\n> > - PendingBgWriterStats.m_buf_alloc += recent_alloc;\n> > -\n> > /*\n> > * If we're not running the LRU scan, just stop after doing the stats\n> > * stuff. 
We mark the saved state invalid so that we can recover sanely\n> > @@ -2472,8 +2468,6 @@ BgBufferSync(WritebackContext *wb_context)\n> > reusable_buffers++;\n> > }\n> >\n> > - PendingBgWriterStats.m_buf_written_clean += num_written;\n> > -\n>\n> Isn't num_written unused now, unless tracepoints are enabled? I'd expect some\n> compilers to warn... Perhaps we should just remove information from the\n> tracepoint?\n\nThe local variable num_written is used in BgBufferSync() to determine\nwhether or not to increment maxwritten_clean which is still represented\nin the view pg_stat_checkpointer (formerly pg_stat_bgwriter).\n\nA local variable num_written is used in BufferSync() to increment\nCheckpointStats.ckpt_bufs_written which is logged in LogCheckpointEnd(),\nso I'm not sure that can be removed.\n\n- Melanie",
"msg_date": "Wed, 15 Dec 2021 16:40:27 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Dec 03, 2021 at 03:02:24PM -0500, Melanie Plageman wrote:\n> Thanks again! I really appreciate the thorough review.\n> \n> I have combined responses to all three of your emails below.\n> Let me know if it is more confusing to do it this way.\n\nOne email is better than three - I'm just not a model citizen ;)\n\nThanks for updating the patch. I checked that all my previous review comments\nwere addressed (except for the part about passing the 3D array to a function -\nI know that technically the pointer is being passed).\n\n+int backend_type_get_idx(BackendType backend_type) \n+BackendType idx_get_backend_type(int idx) \n\n=> I think it'd be desirable for these to be either static functions (which\nwon't work for your needs) or macros, or inline functions in the header.\n\n- if (strcmp(target, \"archiver\") == 0) \n+ pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETSHAREDCOUNTER); \n+ if (strcmp(target, \"buffers\") == 0) \n\n=> This should be added in alphabetical order. Which is unimportant, but it\nwill also make the patch 2 lines shorter. The doc patch should also be in\norder.\n\n+ * Don't count dead backends. They will be added below There are no \n\n=> Missing a period.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 15 Dec 2021 16:38:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-15 16:40:27 -0500, Melanie Plageman wrote:\n> > > +/*\n> > > + * Before exiting, a backend sends its IO op statistics to the collector so\n> > > + * that they may be persisted.\n> > > + */\n> > > +void\n> > > +pgstat_send_buffers(void)\n> > > +{\n> > > + PgStat_MsgIOPathOps msg;\n> > > +\n> > > + PgBackendStatus *beentry = MyBEEntry;\n> > > +\n> > > + /*\n> > > + * Though some backends with type B_INVALID (such as the single-user mode\n> > > + * process) do initialize and increment IO operations stats, there is no\n> > > + * spot in the array of IO operations for backends of type B_INVALID. As\n> > > + * such, do not send these to the stats collector.\n> > > + */\n> > > + if (!beentry || beentry->st_backendType == B_INVALID)\n> > > + return;\n> >\n> > Why does single user mode use B_INVALID? That doesn't seem quite right.\n> \n> I think PgBackendStatus->st_backendType is set from MyBackendType which\n> isn't set for the single user mode process. What BackendType would you\n> expect to see?\n\nEither B_BACKEND or something new like B_SINGLE_USER_BACKEND?\n\n\n\n> I also thought about having pgstat_sum_io_path_ops() return a value to\n> indicate if everything was 0 -- which could be useful to future callers\n> potentially?\n> \n> I didn't do this because I am not sure what the return value would be.\n> It could be a bool and be true if any IO was done and false if none was\n> done -- but that doesn't really make sense given the function's name it\n> would be called like\n> if (!pgstat_sum_io_path_ops())\n> return\n> which I'm not sure is very clear\n\nYea, I think it's ok to not do something fancier here for now.\n\n\n> > > From 9f22da9041e1e1fbc0ef003f5f78f4e72274d438 Mon Sep 17 00:00:00 2001\n> > > From: Melanie Plageman <melanieplageman@gmail.com>\n> > > Date: Wed, 24 Nov 2021 12:20:10 -0500\n> > > Subject: [PATCH v17 6/7] Remove superfluous bgwriter stats\n> > >\n> > > Remove stats from pg_stat_bgwriter which are now more 
clearly expressed\n> > > in pg_stat_buffers.\n> > >\n> > > TODO:\n> > > - make pg_stat_checkpointer view and move relevant stats into it\n> > > - add additional stats to pg_stat_bgwriter\n> >\n> > When do you think it makes sense to tackle these wrt committing some of the\n> > patches?\n> \n> Well, the new stats are a superset of the old stats (no stats have been\n> removed that are not represented in the new or old views). So, I don't\n> see that as a blocker for committing these patches.\n\n> Since it is weird that pg_stat_bgwriter had mostly checkpointer stats,\n> I've edited this commit to rename that view to pg_stat_checkpointer.\n\n> I have not made a separate view just for maxwritten_clean (presumably\n> called pg_stat_bgwriter), but I would not be opposed to doing this if\n> you thought having a view with a single column isn't a problem (in the\n> event that we don't get around to adding more bgwriter stats right\n> away).\n\nHow about keeping old bgwriter values in place in the view , but generated\nfrom the new stats stuff?\n\n\n> I noticed after changing the docs on the \"bgwriter\" target for\n> pg_stat_reset_shared to say \"checkpointer\", that it still said \"bgwriter\" in\n> src/backend/po/ko.po\n> src/backend/po/it.po\n> ...\n> I presume these are automatically updated with some incantation, but I wasn't\n> sure what it was nor could I find documentation on this.\n\nYes, they are - and often some languages lag updating things. There's a bit\nof docs at https://www.postgresql.org/docs/devel/nls.html\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Dec 2021 12:18:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On 2021-Dec-15, Melanie Plageman wrote:\n\n> I noticed after changing the docs on the \"bgwriter\" target for\n> pg_stat_reset_shared to say \"checkpointer\", that it still said \"bgwriter\" in\n> src/backend/po/ko.po\n> src/backend/po/it.po\n> ...\n> I presume these are automatically updated with some incantation, but I wasn't\n> sure what it was nor could I find documentation on this.\n\nYes, feel free to ignore those files completely. They are updated using\nan external workflow that you don't need to concern yourself with.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n",
"msg_date": "Thu, 16 Dec 2021 18:24:44 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Combined responses to both Justin and Andres here.\nv19 attached.\n\nOn Wed, Dec 15, 2021 at 5:38 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> +int backend_type_get_idx(BackendType backend_type)\n> +BackendType idx_get_backend_type(int idx)\n>\n> => I think it'd be desirable for these to be either static functions (which\n> won't work for your needs) or macros, or inline functions in the header.\n>\n> - if (strcmp(target, \"archiver\") == 0)\n> + pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETSHAREDCOUNTER);\n> + if (strcmp(target, \"buffers\") == 0)\n\nDone\n\n>\n> => This should be added in alphabetical order. Which is unimportant, but it\n> will also makes the patch 2 lines shorter. The doc patch should also be in\n> order.\n\nThanks for catching this.\nI've corrected the order in most locations. The exception is in\npgstat_reset_shared_counters():\n\n if (strcmp(target, \"buffers\") == 0)\n {\n msg.m_resettarget = RESET_BUFFERS;\n pgstat_send_buffers_reset(&msg);\n return;\n }\n\nBecause \"buffers\" is a special case\nwhich uses a different send function, I prefer to have it first.\n\n>\n> + * Don't count dead backends. They will be added below There are no\n>\n> => Missing a period.\n\nFixed.\n\nOn Thu, Dec 16, 2021 at 3:18 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-12-15 16:40:27 -0500, Melanie Plageman wrote:\n> > > > +/*\n> > > > + * Before exiting, a backend sends its IO op statistics to the collector so\n> > > > + * that they may be persisted.\n> > > > + */\n> > > > +void\n> > > > +pgstat_send_buffers(void)\n> > > > +{\n> > > > + PgStat_MsgIOPathOps msg;\n> > > > +\n> > > > + PgBackendStatus *beentry = MyBEEntry;\n> > > > +\n> > > > + /*\n> > > > + * Though some backends with type B_INVALID (such as the single-user mode\n> > > > + * process) do initialize and increment IO operations stats, there is no\n> > > > + * spot in the array of IO operations for backends of type B_INVALID. 
As\n> > > > + * such, do not send these to the stats collector.\n> > > > + */\n> > > > + if (!beentry || beentry->st_backendType == B_INVALID)\n> > > > + return;\n> > >\n> > > Why does single user mode use B_INVALID? That doesn't seem quite right.\n> >\n> > I think PgBackendStatus->st_backendType is set from MyBackendType which\n> > isn't set for the single user mode process. What BackendType would you\n> > expect to see?\n>\n> Either B_BACKEND or something new like B_SINGLE_USER_BACKEND?\n\nI added B_STANDALONE_BACKEND and set it in InitStandaloneBackend() (as\nopposed to in PostgresSingleUserMain()) so that the bootstrap process\ncould also use it.\n\n> > > > From 9f22da9041e1e1fbc0ef003f5f78f4e72274d438 Mon Sep 17 00:00:00 2001\n> > > > From: Melanie Plageman <melanieplageman@gmail.com>\n> > > > Date: Wed, 24 Nov 2021 12:20:10 -0500\n> > > > Subject: [PATCH v17 6/7] Remove superfluous bgwriter stats\n> > > >\n> > > > Remove stats from pg_stat_bgwriter which are now more clearly expressed\n> > > > in pg_stat_buffers.\n> > > >\n> > > > TODO:\n> > > > - make pg_stat_checkpointer view and move relevant stats into it\n> > > > - add additional stats to pg_stat_bgwriter\n> > >\n> > > When do you think it makes sense to tackle these wrt committing some of the\n> > > patches?\n> >\n> > Well, the new stats are a superset of the old stats (no stats have been\n> > removed that are not represented in the new or old views). 
So, I don't\n> > see that as a blocker for committing these patches.\n>\n> > Since it is weird that pg_stat_bgwriter had mostly checkpointer stats,\n> > I've edited this commit to rename that view to pg_stat_checkpointer.\n>\n> > I have not made a separate view just for maxwritten_clean (presumably\n> > called pg_stat_bgwriter), but I would not be opposed to doing this if\n> > you thought having a view with a single column isn't a problem (in the\n> > event that we don't get around to adding more bgwriter stats right\n> > away).\n>\n> How about keeping old bgwriter values in place in the view , but generated\n> from the new stats stuff?\n\nI tried this, but I actually don't think it is the right way to go. In\norder to maintain the old view with the new source code, I had to add\nnew code to maintain a separate resets array just for the bgwriter view.\nIt adds some fiddly code that will be annoying to maintain (the reset\nlogic is confusing enough as is).\nAnd, besides the implementation complexity, if a user resets\npg_stat_bgwriter and not pg_stat_buffers (or vice versa), they will\nsee totally different numbers for \"buffers_backend\" in pg_stat_bgwriter\nthan shared buffers written by B_BACKEND in pg_stat_buffers. I would\nfind that confusing.\n\nInstead, what I did was create the separate pg_stat_checkpointer view\nand move most of the old pg_stat_bgwriter stats over there.\n\nBecause that left us with a pg_stat_bgwriter view with one column, I\nadded a few stats to it which could later be expanded.\n\nIn pg_stat_bgwriter, I renamed \"maxwritten_clean\" to \"rounds_hit_limit\".\nI added \"rounds_cleaned_estimate\" and \"rounds_lapped_clock\" which are\nthe other two exit conditions from the LRU scan loop in BgBufferSync().\n\nThere are other stats related to bgwriter that might be more interesting\n(e.g. 
number of times bgwriter was woken up to clean, % of time bgwriter\nspends in hibernation vs cleaning, etc); however the stats I ended up\nadding were available in the same scope as maxwritten_clean and seemed\nlike a non-intrusive way to start building out pg_stat_bgwriter.\n\nBgBufferSync() has a *lot* of local variables that are all getting\nincremented and reset in a complicated way, so I'm not 100% sure that\nthe new stats I added are actually correct.\n\n> > I noticed after changing the docs on the \"bgwriter\" target for\n> > pg_stat_reset_shared to say \"checkpointer\", that it still said \"bgwriter\" in\n> > src/backend/po/ko.po\n> > src/backend/po/it.po\n> > ...\n> > I presume these are automatically updated with some incantation, but I wasn't\n> > sure what it was nor could I find documentation on this.\n>\n> Yes, they are - and often some languages lag updating things. There's a bit\n> of docs at https://www.postgresql.org/docs/devel/nls.html\n\nI noticed that the po files for pgstat.c are not updated (the msgid I am\nconcerned with has the old line number in pgstat.c and the old message).\nSo, I tried running `make update-po`, but it didn't have the documented\neffect:\n\n\"to be called if the messages in the program source have changed, in\norder to merge the changes into the existing .po files\"\n\nNo po.new files were created.\n\nI can look into it more, though if it is part of an external workflow, as\nÁlvaro suggested, perhaps I shouldn't?\n\n- Melanie",
"msg_date": "Tue, 21 Dec 2021 20:32:44 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Dec 21, 2021 at 8:32 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Thu, Dec 16, 2021 at 3:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > From 9f22da9041e1e1fbc0ef003f5f78f4e72274d438 Mon Sep 17 00:00:00 2001\n> > > > > From: Melanie Plageman <melanieplageman@gmail.com>\n> > > > > Date: Wed, 24 Nov 2021 12:20:10 -0500\n> > > > > Subject: [PATCH v17 6/7] Remove superfluous bgwriter stats\n> > > > >\n> > > > > Remove stats from pg_stat_bgwriter which are now more clearly expressed\n> > > > > in pg_stat_buffers.\n> > > > >\n> > > > > TODO:\n> > > > > - make pg_stat_checkpointer view and move relevant stats into it\n> > > > > - add additional stats to pg_stat_bgwriter\n> > > >\n> > > > When do you think it makes sense to tackle these wrt committing some of the\n> > > > patches?\n> > >\n> > > Well, the new stats are a superset of the old stats (no stats have been\n> > > removed that are not represented in the new or old views). So, I don't\n> > > see that as a blocker for committing these patches.\n> >\n> > > Since it is weird that pg_stat_bgwriter had mostly checkpointer stats,\n> > > I've edited this commit to rename that view to pg_stat_checkpointer.\n> >\n> > > I have not made a separate view just for maxwritten_clean (presumably\n> > > called pg_stat_bgwriter), but I would not be opposed to doing this if\n> > > you thought having a view with a single column isn't a problem (in the\n> > > event that we don't get around to adding more bgwriter stats right\n> > > away).\n> >\n> > How about keeping old bgwriter values in place in the view , but generated\n> > from the new stats stuff?\n>\n> I tried this, but I actually don't think it is the right way to go. 
In\n> order to maintain the old view with the new source code, I had to add\n> new code to maintain a separate resets array just for the bgwriter view.\n> It adds some fiddly code that will be annoying to maintain (the reset\n> logic is confusing enough as is).\n> And, besides the implementation complexity, if a user resets\n> pg_stat_bgwriter and not pg_stat_buffers (or vice versa), they will\n> see totally different numbers for \"buffers_backend\" in pg_stat_bgwriter\n> than shared buffers written by B_BACKEND in pg_stat_buffers. I would\n> find that confusing.\n\nIn a quick chat off-list, Andres suggested it might be okay to have a\nsingle reset target for both the pg_stat_buffers view and legacy\npg_stat_bgwriter view. So, I am planning to share a new patchset which\nhas only the new \"buffers\" target which will also reset the legacy\npg_stat_bgwriter view.\n\nI'll also remove the bgwriter stats I proposed and the\npg_stat_checkpointer view to keep things simple for now.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 30 Dec 2021 15:30:50 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Dec 30, 2021 at 3:30 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Tue, Dec 21, 2021 at 8:32 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > On Thu, Dec 16, 2021 at 3:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > From 9f22da9041e1e1fbc0ef003f5f78f4e72274d438 Mon Sep 17 00:00:00 2001\n> > > > > > From: Melanie Plageman <melanieplageman@gmail.com>\n> > > > > > Date: Wed, 24 Nov 2021 12:20:10 -0500\n> > > > > > Subject: [PATCH v17 6/7] Remove superfluous bgwriter stats\n> > > > > >\n> > > > > > Remove stats from pg_stat_bgwriter which are now more clearly expressed\n> > > > > > in pg_stat_buffers.\n> > > > > >\n> > > > > > TODO:\n> > > > > > - make pg_stat_checkpointer view and move relevant stats into it\n> > > > > > - add additional stats to pg_stat_bgwriter\n> > > > >\n> > > > > When do you think it makes sense to tackle these wrt committing some of the\n> > > > > patches?\n> > > >\n> > > > Well, the new stats are a superset of the old stats (no stats have been\n> > > > removed that are not represented in the new or old views). So, I don't\n> > > > see that as a blocker for committing these patches.\n> > >\n> > > > Since it is weird that pg_stat_bgwriter had mostly checkpointer stats,\n> > > > I've edited this commit to rename that view to pg_stat_checkpointer.\n> > >\n> > > > I have not made a separate view just for maxwritten_clean (presumably\n> > > > called pg_stat_bgwriter), but I would not be opposed to doing this if\n> > > > you thought having a view with a single column isn't a problem (in the\n> > > > event that we don't get around to adding more bgwriter stats right\n> > > > away).\n> > >\n> > > How about keeping old bgwriter values in place in the view , but generated\n> > > from the new stats stuff?\n> >\n> > I tried this, but I actually don't think it is the right way to go. 
In\n> > order to maintain the old view with the new source code, I had to add\n> > new code to maintain a separate resets array just for the bgwriter view.\n> > It adds some fiddly code that will be annoying to maintain (the reset\n> > logic is confusing enough as is).\n> > And, besides the implementation complexity, if a user resets\n> > pg_stat_bgwriter and not pg_stat_buffers (or vice versa), they will\n> > see totally different numbers for \"buffers_backend\" in pg_stat_bgwriter\n> > than shared buffers written by B_BACKEND in pg_stat_buffers. I would\n> > find that confusing.\n>\n> In a quick chat off-list, Andres suggested it might be okay to have a\n> single reset target for both the pg_stat_buffers view and legacy\n> pg_stat_bgwriter view. So, I am planning to share a new patchset which\n> has only the new \"buffers\" target which will also reset the legacy\n> pg_stat_bgwriter view.\n>\n> I'll also remove the bgwriter stats I proposed and the\n> pg_stat_checkpointer view to keep things simple for now.\n>\n\nI've done the above in v20, attached.\n\n- Melanie",
"msg_date": "Mon, 3 Jan 2022 20:39:56 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v21 rebased with compile errors fixed is attached.",
"msg_date": "Sat, 19 Feb 2022 11:06:18 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-19 11:06:18 -0500, Melanie Plageman wrote:\n> v21 rebased with compile errors fixed is attached.\n\nThis currently doesn't apply (mea culpa likely): http://cfbot.cputube.org/patch_37_3272.log\n\nCould you rebase? Marked as waiting-on-author for now.\n\n- Andres\n\n\n",
"msg_date": "Mon, 21 Mar 2022 17:15:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "I already rebased this in a local branch, so here it is.\nI don't expect it to survive the day.\n\nThis should be updated to use the tuplestore helper.",
"msg_date": "Wed, 6 Apr 2022 11:16:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 8:15 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-02-19 11:06:18 -0500, Melanie Plageman wrote:\n> > v21 rebased with compile errors fixed is attached.\n>\n> This currently doesn't apply (mea culpa likely):\n> http://cfbot.cputube.org/patch_37_3272.log\n>\n> Could you rebase? Marked as waiting-on-author for now.\n>\n>\n>\nAttached is the rebased/rewritten version of the pg_stat_buffers patch\nwhich uses the cumulative stats system instead of stats collector.\n\nI've moved to the model of backend-local pending stats which get\naccumulated into shared memory by pgstat_report_stat().\n\nIt is worth noting that, with this method, other backends will no longer\nhave access to each other's individual IO operation statistics. An\nargument could be made to keep the statistics in each backend in\nPgBackendStatus before accumulating them to the cumulative stats system\nso that they can be accessed at the per-backend level of detail.\n\nThere are two TODOs related to when pgstat_report_io_ops() should be\ncalled. pgstat_report_io_ops() is meant for backends that will not\ncommonly call pgstat_report_stat(). I was unsure if it made sense for\nBootstrapModeMain() to explicitly call pgstat_report_io_ops() and if\nauto vacuum worker should call it explicitly and, if so, if it was the\nright location to call it after do_autovacuum().\n\nArchiver and syslogger do not increment or report IO operations.\n\nI did not change pg_stat_bgwriter fields to derive from the IO\noperations statistics structures since the reset targets differ.\n\nAlso, I added one test, but I'm not sure if it will be flakey. It tests\nthat the \"writes\" for checkpointer are tracked when data is inserted\ninto a table and then CHECKPOINT is explicitly invoked directly after. 
I\ndon't know if this will have a problem if the checkpointer is busy and\nsomehow the backend which dirtied the buffer is forced to write out its\nown buffer, causing the test to potentially fail (even if the\ncheckpointer is doing other writes [causing it to be busy], it may not\ndo them in between the INSERT and the SELECT from pg_stat_buffers).\n\nI am wondering how to add a non-flakey test. For regular backends, I\ncouldn't think of a way to suspend checkpointer to make them do their\nown writes and fsyncs in the context of a regression or isolation test.\nIn fact for many of the dirty buffers it seems like it will be difficult\nto keep bgwriter, checkpointer, and regular backends from competing and\nsometimes causing test failures.\n\n- Melanie",
"msg_date": "Tue, 5 Jul 2022 13:24:55 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-05 13:24:55 -0400, Melanie Plageman wrote:\n> From 2d089e26236c55d1be5b93833baa0cf7667ba38d Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Tue, 28 Jun 2022 11:33:04 -0400\n> Subject: [PATCH v22 1/3] Add BackendType for standalone backends\n> \n> All backends should have a BackendType to enable statistics reporting\n> per BackendType.\n> \n> Add a new BackendType for standalone backends, B_STANDALONE_BACKEND (and\n> alphabetize the BackendTypes). Both the bootstrap backend and single\n> user mode backends will have BackendType B_STANDALONE_BACKEND.\n> \n> Author: Melanie Plageman <melanieplageman@gmail.com>\n> Discussion: https://www.postgresql.org/message-id/CAAKRu_aaq33UnG4TXq3S-OSXGWj1QGf0sU%2BECH4tNwGFNERkZA%40mail.gmail.com\n> ---\n> src/backend/utils/init/miscinit.c | 17 +++++++++++------\n> src/include/miscadmin.h | 5 +++--\n> 2 files changed, 14 insertions(+), 8 deletions(-)\n> \n> diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c\n> index eb43b2c5e5..07e6db1a1c 100644\n> --- a/src/backend/utils/init/miscinit.c\n> +++ b/src/backend/utils/init/miscinit.c\n> @@ -176,6 +176,8 @@ InitStandaloneProcess(const char *argv0)\n> {\n> \tAssert(!IsPostmasterEnvironment);\n> \n> +\tMyBackendType = B_STANDALONE_BACKEND;\n\nHm. This is used for singleuser mode as well as bootstrap. Should we\nsplit those? It's not like bootstrap mode really matters for stats, so\nI'm inclined not to.\n\n\n> @@ -375,6 +376,8 @@ BootstrapModeMain(int argc, char *argv[], bool check_only)\n> \t * out the initial relation mapping files.\n> \t */\n> \tRelationMapFinishBootstrap();\n> +\t// TODO: should this be done for bootstrap?\n> +\tpgstat_report_io_ops();\n\nHm. Not particularly useful, but also not harmful. But we don't need an\nexplicit call, because it'll be done at process exit too. 
At least I\nthink, it could be that it's different for bootstrap.\n\n\n\n> diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\n> index 2e146aac93..e6dbb1c4bb 100644\n> --- a/src/backend/postmaster/autovacuum.c\n> +++ b/src/backend/postmaster/autovacuum.c\n> @@ -1712,6 +1712,9 @@ AutoVacWorkerMain(int argc, char *argv[])\n> \t\trecentXid = ReadNextTransactionId();\n> \t\trecentMulti = ReadNextMultiXactId();\n> \t\tdo_autovacuum();\n> +\n> +\t\t// TODO: should this be done more often somewhere in do_autovacuum()?\n> +\t\tpgstat_report_io_ops();\n> \t}\n\nDon't think you need all these calls before process exit - it'll happen\nvia pgstat_shutdown_hook().\n\nIMO it'd be a good idea to add pgstat_report_io_ops() to\npgstat_report_vacuum()/analyze(), so that the stats for a longrunning\nautovac worker get updated more regularly.\n\n\n> diff --git a/src/backend/postmaster/bgwriter.c b/src/backend/postmaster/bgwriter.c\n> index 91e6f6ea18..87e4b9e9bd 100644\n> --- a/src/backend/postmaster/bgwriter.c\n> +++ b/src/backend/postmaster/bgwriter.c\n> @@ -242,6 +242,7 @@ BackgroundWriterMain(void)\n> \n> \t\t/* Report pending statistics to the cumulative stats system */\n> \t\tpgstat_report_bgwriter();\n> +\t\tpgstat_report_io_ops();\n> \n> \t\tif (FirstCallSinceLastCheckpoint())\n> \t\t{\n\nHow about moving the pgstat_report_io_ops() into\npgstat_report_bgwriter(), pgstat_report_autovacuum() etc? Seems\nunnecessary to have multiple pgstat_* calls in these places.\n\n\n\n> +/*\n> + * Flush out locally pending IO Operation statistics entries\n> + *\n> + * If nowait is true, this function returns false on lock failure. Otherwise\n> + * this function always returns true. Writer processes are mutually excluded\n> + * using LWLock, but readers are expected to use change-count protocol to avoid\n> + * interference with writers.\n> + *\n> + * If nowait is true, this function returns true if the lock could not be\n> + * acquired. 
Otherwise return false.\n> + *\n> + */\n> +bool\n> +pgstat_flush_io_ops(bool nowait)\n> +{\n> +\tPgStat_IOPathOps *dest_io_path_ops;\n> +\tPgStatShared_BackendIOPathOps *stats_shmem;\n> +\n> +\tPgBackendStatus *beentry = MyBEEntry;\n> +\n> +\tif (!have_ioopstats)\n> +\t\treturn false;\n> +\n> +\tif (!beentry || beentry->st_backendType == B_INVALID)\n> +\t\treturn false;\n> +\n> +\tstats_shmem = &pgStatLocal.shmem->io_ops;\n> +\n> +\tif (!nowait)\n> +\t\tLWLockAcquire(&stats_shmem->lock, LW_EXCLUSIVE);\n> +\telse if (!LWLockConditionalAcquire(&stats_shmem->lock, LW_EXCLUSIVE))\n> +\t\treturn true;\n\nWonder if it's worth making the lock specific to the backend type?\n\n\n> +\tdest_io_path_ops =\n> +\t\t&stats_shmem->stats[backend_type_get_idx(beentry->st_backendType)];\n> +\n\nThis could be done before acquiring the lock, right?\n\n\n> +void\n> +pgstat_io_ops_snapshot_cb(void)\n> +{\n> +\tPgStatShared_BackendIOPathOps *stats_shmem = &pgStatLocal.shmem->io_ops;\n> +\tPgStat_IOPathOps *snapshot_ops = pgStatLocal.snapshot.io_path_ops;\n> +\tPgStat_IOPathOps *reset_ops;\n> +\n> +\tPgStat_IOPathOps *reset_offset = stats_shmem->reset_offset;\n> +\tPgStat_IOPathOps reset[BACKEND_NUM_TYPES];\n> +\n> +\tpgstat_copy_changecounted_stats(snapshot_ops,\n> +\t\t\t&stats_shmem->stats, sizeof(stats_shmem->stats),\n> +\t\t\t&stats_shmem->changecount);\n\nThis doesn't make sense - with multiple writers you can't use the\nchangecount approach (and you don't in the flush part above).\n\n\n> +\tLWLockAcquire(&stats_shmem->lock, LW_SHARED);\n> +\tmemcpy(&reset, reset_offset, sizeof(stats_shmem->stats));\n> +\tLWLockRelease(&stats_shmem->lock);\n\nWhich then also means that you don't need the reset offset stuff. It's\nonly there because with the changecount approach we can't take a lock to\nreset the stats (since there is no lock). 
With a lock you can just reset\nthe shared state.\n\n\n> +void\n> +pgstat_count_io_op(IOOp io_op, IOPath io_path)\n> +{\n> +\tPgStat_IOOpCounters *pending_counters = &pending_IOOpStats.data[io_path];\n> +\tPgStat_IOOpCounters *cumulative_counters =\n> +\t\t\t&cumulative_IOOpStats.data[io_path];\n\nthe pending_/cumulative_ prefix before an uppercase-first camelcase name\nseems ugly...\n\n> +\tswitch (io_op)\n> +\t{\n> +\t\tcase IOOP_ALLOC:\n> +\t\t\tpending_counters->allocs++;\n> +\t\t\tcumulative_counters->allocs++;\n> +\t\t\tbreak;\n> +\t\tcase IOOP_EXTEND:\n> +\t\t\tpending_counters->extends++;\n> +\t\t\tcumulative_counters->extends++;\n> +\t\t\tbreak;\n> +\t\tcase IOOP_FSYNC:\n> +\t\t\tpending_counters->fsyncs++;\n> +\t\t\tcumulative_counters->fsyncs++;\n> +\t\t\tbreak;\n> +\t\tcase IOOP_WRITE:\n> +\t\t\tpending_counters->writes++;\n> +\t\t\tcumulative_counters->writes++;\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\thave_ioopstats = true;\n> +}\n\nDoing two math ops / memory accesses every time seems off. 
Seems better\nto maintain cumulative_counters whenever reporting stats, just before\nzeroing pending_counters?\n\n\n> +/*\n> + * Report IO operation statistics\n> + *\n> + * This works in much the same way as pgstat_flush_io_ops() but is meant for\n> + * BackendTypes like bgwriter for whom pgstat_report_stat() will not be called\n> + * frequently enough to keep shared memory stats fresh.\n> + * Backends not typically calling pgstat_report_stat() can invoke\n> + * pgstat_report_io_ops() explicitly.\n> + */\n> +void\n> +pgstat_report_io_ops(void)\n> +{\n\nThis shouldn't be needed - the flush function above can be used.\n\n\n> +\tPgStat_IOPathOps *dest_io_path_ops;\n> +\tPgStatShared_BackendIOPathOps *stats_shmem;\n> +\n> +\tPgBackendStatus *beentry = MyBEEntry;\n> +\n> +\tAssert(!pgStatLocal.shmem->is_shutdown);\n> +\tpgstat_assert_is_up();\n> +\n> +\tif (!have_ioopstats)\n> +\t\treturn;\n> +\n> +\tif (!beentry || beentry->st_backendType == B_INVALID)\n> +\t\treturn;\n\nIs there a case where this may be called where we have no beentry?\n\nWhy not just use MyBackendType?\n\n\n> +\tstats_shmem = &pgStatLocal.shmem->io_ops;\n> +\n> +\tdest_io_path_ops =\n> +\t\t&stats_shmem->stats[backend_type_get_idx(beentry->st_backendType)];\n> +\n> +\tpgstat_begin_changecount_write(&stats_shmem->changecount);\n\nAs mentioned before, the changecount stuff doesn't apply here. 
You need a\nlock.\n\n\n> +PgStat_IOPathOps *\n> +pgstat_fetch_backend_io_path_ops(void)\n> +{\n> +\tpgstat_snapshot_fixed(PGSTAT_KIND_IOOPS);\n> +\treturn pgStatLocal.snapshot.io_path_ops;\n> +}\n> +\n> +PgStat_Counter\n> +pgstat_fetch_cumulative_io_ops(IOPath io_path, IOOp io_op)\n> +{\n> +\tPgStat_IOOpCounters *counters = &cumulative_IOOpStats.data[io_path];\n> +\n> +\tswitch (io_op)\n> +\t{\n> +\t\tcase IOOP_ALLOC:\n> +\t\t\treturn counters->allocs;\n> +\t\tcase IOOP_EXTEND:\n> +\t\t\treturn counters->extends;\n> +\t\tcase IOOP_FSYNC:\n> +\t\t\treturn counters->fsyncs;\n> +\t\tcase IOOP_WRITE:\n> +\t\t\treturn counters->writes;\n> +\t\tdefault:\n> +\t\t\telog(ERROR, \"IO Operation %s for IO Path %s is undefined.\",\n> +\t\t\t\t\tpgstat_io_op_desc(io_op), pgstat_io_path_desc(io_path));\n> +\t}\n> +}\n\nThere's currently no user for this, right? Maybe let's just defer the\ncumulative stuff until we need it?\n\n\n> +const char *\n> +pgstat_io_path_desc(IOPath io_path)\n> +{\n> +\tconst char *io_path_desc = \"Unknown IO Path\";\n> +\n\nThis should be unreachable, right?\n\n\n> From f2b5b75f5063702cbc3c64efdc1e7ef3cf1acdb4 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Mon, 4 Jul 2022 15:44:17 -0400\n> Subject: [PATCH v22 3/3] Add system view tracking IO ops per backend type\n\n> Add pg_stat_buffers, a system view which tracks the number of IO\n> operations (allocs, writes, fsyncs, and extends) done through each IO\n> path (e.g. shared buffers, local buffers, unbuffered IO) by each type of\n> backend.\n\nI think I like pg_stat_io a bit better? Nearly everything in here seems\nto fit better in that.\n\nI guess we could split out buffers allocated, but that's actually\ninteresting in the context of the kind of IO too.\n\n\n> <row>\n> <entry><structname>pg_stat_wal</structname><indexterm><primary>pg_stat_wal</primary></indexterm></entry>\n> <entry>One row only, showing statistics about WAL activity. 
See\n> @@ -3595,7 +3604,102 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> <structfield>stats_reset</structfield> <type>timestamp with time zone</type>\n> </para>\n> <para>\n> - Time at which these statistics were last reset\n> + Time at which these statistics were last reset.\n> + </para></entry>\n\nGrammar critique time :)\n\n\n> +CREATE VIEW pg_stat_buffers AS\n> +SELECT\n> + b.backend_type,\n> + b.io_path,\n> + b.alloc,\n> + b.extend,\n> + b.fsync,\n> + b.write,\n> + b.stats_reset\n> +FROM pg_stat_get_buffers() b;\n\nDo we want to expose all data to all users? I guess pg_stat_bgwriter\ndoes? But this does split things out a lot more...\n\n\n\n> +\tfor (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> +\t{\n> +\t\tPgStat_IOOpCounters *counters = io_path_ops->data;\n> +\t\tDatum\t\tbackend_type_desc =\n> +\t\t\tCStringGetTextDatum(GetBackendTypeDesc(idx_get_backend_type(i)));\n> +\t\t\t/* const char *log_name = GetBackendTypeDesc(idx_get_backend_type(i)); */\n> +\n> +\t\tfor (int j = 0; j < IOPATH_NUM_TYPES; j++)\n> +\t\t{\n> +\t\t\tDatum values[BUFFERS_NUM_COLUMNS];\n> +\t\t\tbool nulls[BUFFERS_NUM_COLUMNS];\n> +\t\t\tmemset(values, 0, sizeof(values));\n> +\t\t\tmemset(nulls, 0, sizeof(nulls));\n> +\n> +\t\t\tvalues[BUFFERS_COLUMN_BACKEND_TYPE] = backend_type_desc;\n> +\t\t\tvalues[BUFFERS_COLUMN_IO_PATH] = CStringGetTextDatum(pgstat_io_path_desc(j));\n\nRandom musing: I wonder if we should start to use SQL level enums for\nthis kind of thing.\n\n\n> DROP TABLE trunc_stats_test, trunc_stats_test1, trunc_stats_test2, trunc_stats_test3, trunc_stats_test4;\n> DROP TABLE prevstats;\n> +SELECT pg_stat_reset_shared('buffers');\n> + pg_stat_reset_shared \n> +----------------------\n> + \n> +(1 row)\n> +\n> +SELECT pg_stat_force_next_flush();\n> + pg_stat_force_next_flush \n> +--------------------------\n> + \n> +(1 row)\n> +\n> +SELECT write = 0 FROM pg_stat_buffers WHERE io_path = 'Shared' and backend_type = 'checkpointer';\n> + 
?column? \n> +----------\n> + t\n> +(1 row)\n\n\nDon't think you can rely on that. The lookup of the view, functions\nmight have needed to load catalog data, which might have needed to evict\nbuffers. I think you can do something more reliable by checking that\nthere's more written buffers after a checkpoint than before, or such.\n\n\nWould be nice to have something testing that the ringbuffer stats stuff\ndoes something sensible - that feels not entirely trivial.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Jul 2022 12:20:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nIn the attached patch set, I've added in missing IO operations for\ncertain IO Paths as well as enumerating in the commit message which IO\nPaths and IO Operations are not currently counted and or not possible.\n\nThere is a TODO in HandleWalWriterInterrupts() about removing\npgstat_report_wal() since it is immediately before a proc_exit()\n\nI was wondering if LocalBufferAlloc() should increment the counter or if\nI should wait until GetLocalBufferStorage() to increment the counter.\n\nI also realized that I am not differentiating between IOPATH_SHARED and\nIOPATH_STRATEGY for IOOP_FSYNC. But, given that we don't know what type\nof buffer we are fsync'ing by the time we call register_dirty_segment(),\nI'm not sure how we would fix this.\n\nOn Wed, Jul 6, 2022 at 3:20 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-07-05 13:24:55 -0400, Melanie Plageman wrote:\n> > From 2d089e26236c55d1be5b93833baa0cf7667ba38d Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Tue, 28 Jun 2022 11:33:04 -0400\n> > Subject: [PATCH v22 1/3] Add BackendType for standalone backends\n> >\n> > All backends should have a BackendType to enable statistics reporting\n> > per BackendType.\n> >\n> > Add a new BackendType for standalone backends, B_STANDALONE_BACKEND (and\n> > alphabetize the BackendTypes). 
Both the bootstrap backend and single\n> > user mode backends will have BackendType B_STANDALONE_BACKEND.\n> >\n> > Author: Melanie Plageman <melanieplageman@gmail.com>\n> > Discussion:\n> https://www.postgresql.org/message-id/CAAKRu_aaq33UnG4TXq3S-OSXGWj1QGf0sU%2BECH4tNwGFNERkZA%40mail.gmail.com\n> > ---\n> > src/backend/utils/init/miscinit.c | 17 +++++++++++------\n> > src/include/miscadmin.h | 5 +++--\n> > 2 files changed, 14 insertions(+), 8 deletions(-)\n> >\n> > diff --git a/src/backend/utils/init/miscinit.c\n> b/src/backend/utils/init/miscinit.c\n> > index eb43b2c5e5..07e6db1a1c 100644\n> > --- a/src/backend/utils/init/miscinit.c\n> > +++ b/src/backend/utils/init/miscinit.c\n> > @@ -176,6 +176,8 @@ InitStandaloneProcess(const char *argv0)\n> > {\n> > Assert(!IsPostmasterEnvironment);\n> >\n> > + MyBackendType = B_STANDALONE_BACKEND;\n>\n> Hm. This is used for singleuser mode as well as bootstrap. Should we\n> split those? It's not like bootstrap mode really matters for stats, so\n> I'm inclined not to.\n>\n>\nI have no opinion currently.\nIt depends on how commonly you think developers might want separate\nbootstrap and single user mode IO stats.\n\n\n>\n> > @@ -375,6 +376,8 @@ BootstrapModeMain(int argc, char *argv[], bool\n> check_only)\n> > * out the initial relation mapping files.\n> > */\n> > RelationMapFinishBootstrap();\n> > + // TODO: should this be done for bootstrap?\n> > + pgstat_report_io_ops();\n>\n> Hm. Not particularly useful, but also not harmful. But we don't need an\n> explicit call, because it'll be done at process exit too. At least I\n> think, it could be that it's different for bootstrap.\n>\n>\n>\nI've removed this and other occurrences which were before proc_exit()\n(and thus redundant). 
(Though I did not explicitly check if it was\ndifferent for bootstrap.)\n\n\n>\n> > diff --git a/src/backend/postmaster/autovacuum.c\n> b/src/backend/postmaster/autovacuum.c\n> > index 2e146aac93..e6dbb1c4bb 100644\n> > --- a/src/backend/postmaster/autovacuum.c\n> > +++ b/src/backend/postmaster/autovacuum.c\n> > @@ -1712,6 +1712,9 @@ AutoVacWorkerMain(int argc, char *argv[])\n> > recentXid = ReadNextTransactionId();\n> > recentMulti = ReadNextMultiXactId();\n> > do_autovacuum();\n> > +\n> > + // TODO: should this be done more often somewhere in\n> do_autovacuum()?\n> > + pgstat_report_io_ops();\n> > }\n>\n> Don't think you need all these calls before process exit - it'll happen\n> via pgstat_shutdown_hook().\n>\n> IMO it'd be a good idea to add pgstat_report_io_ops() to\n> pgstat_report_vacuum()/analyze(), so that the stats for a longrunning\n> autovac worker get updated more regularly.\n>\n\nnoted and fixed.\n\n\n>\n>\n> > diff --git a/src/backend/postmaster/bgwriter.c\n> b/src/backend/postmaster/bgwriter.c\n> > index 91e6f6ea18..87e4b9e9bd 100644\n> > --- a/src/backend/postmaster/bgwriter.c\n> > +++ b/src/backend/postmaster/bgwriter.c\n> > @@ -242,6 +242,7 @@ BackgroundWriterMain(void)\n> >\n> > /* Report pending statistics to the cumulative stats\n> system */\n> > pgstat_report_bgwriter();\n> > + pgstat_report_io_ops();\n> >\n> > if (FirstCallSinceLastCheckpoint())\n> > {\n>\n> How about moving the pgstat_report_io_ops() into\n> pgstat_report_bgwriter(), pgstat_report_autovacuum() etc? Seems\n> unnecessary to have multiple pgstat_* calls in these places.\n>\n>\n>\nnoted and fixed.\n\n\n>\n> > +/*\n> > + * Flush out locally pending IO Operation statistics entries\n> > + *\n> > + * If nowait is true, this function returns false on lock failure.\n> Otherwise\n> > + * this function always returns true. 
Writer processes are mutually\n> excluded\n> > + * using LWLock, but readers are expected to use change-count protocol\n> to avoid\n> > + * interference with writers.\n> > + *\n> > + * If nowait is true, this function returns true if the lock could not\n> be\n> > + * acquired. Otherwise return false.\n> > + *\n> > + */\n> > +bool\n> > +pgstat_flush_io_ops(bool nowait)\n> > +{\n> > + PgStat_IOPathOps *dest_io_path_ops;\n> > + PgStatShared_BackendIOPathOps *stats_shmem;\n> > +\n> > + PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + if (!have_ioopstats)\n> > + return false;\n> > +\n> > + if (!beentry || beentry->st_backendType == B_INVALID)\n> > + return false;\n> > +\n> > + stats_shmem = &pgStatLocal.shmem->io_ops;\n> > +\n> > + if (!nowait)\n> > + LWLockAcquire(&stats_shmem->lock, LW_EXCLUSIVE);\n> > + else if (!LWLockConditionalAcquire(&stats_shmem->lock,\n> LW_EXCLUSIVE))\n> > + return true;\n>\n> Wonder if it's worth making the lock specific to the backend type?\n>\n\nI've added another Lock into PgStat_IOPathOps so that each BackendType\ncan be locked separately. 
But, I've also kept the lock in\nPgStatShared_BackendIOPathOps so that reset_all and snapshot could be\ndone easily.\n\n\n>\n>\n> > + dest_io_path_ops =\n> > +\n> &stats_shmem->stats[backend_type_get_idx(beentry->st_backendType)];\n> > +\n>\n> This could be done before acquiring the lock, right?\n>\n>\n> > +void\n> > +pgstat_io_ops_snapshot_cb(void)\n> > +{\n> > + PgStatShared_BackendIOPathOps *stats_shmem =\n> &pgStatLocal.shmem->io_ops;\n> > + PgStat_IOPathOps *snapshot_ops = pgStatLocal.snapshot.io_path_ops;\n> > + PgStat_IOPathOps *reset_ops;\n> > +\n> > + PgStat_IOPathOps *reset_offset = stats_shmem->reset_offset;\n> > + PgStat_IOPathOps reset[BACKEND_NUM_TYPES];\n> > +\n> > + pgstat_copy_changecounted_stats(snapshot_ops,\n> > + &stats_shmem->stats, sizeof(stats_shmem->stats),\n> > + &stats_shmem->changecount);\n>\n> This doesn't make sense - with multiple writers you can't use the\n> changecount approach (and you don't in the flush part above).\n>\n>\n> > + LWLockAcquire(&stats_shmem->lock, LW_SHARED);\n> > + memcpy(&reset, reset_offset, sizeof(stats_shmem->stats));\n> > + LWLockRelease(&stats_shmem->lock);\n>\n> Which then also means that you don't need the reset offset stuff. It's\n> only there because with the changecount approach we can't take a lock to\n> reset the stats (since there is no lock). With a lock you can just reset\n> the shared state.\n>\n\nYes, I believe I have cleaned up all of this embarrassing mess. 
I use the\nlock in PgStatShared_BackendIOPathOps for reset all and snapshot and the\nlocks in PgStat_IOPathOps for flush.\n\n\n>\n>\n> > +void\n> > +pgstat_count_io_op(IOOp io_op, IOPath io_path)\n> > +{\n> > + PgStat_IOOpCounters *pending_counters =\n> &pending_IOOpStats.data[io_path];\n> > + PgStat_IOOpCounters *cumulative_counters =\n> > + &cumulative_IOOpStats.data[io_path];\n>\n> the pending_/cumultive_ prefix before an uppercase-first camelcase name\n> seems ugly...\n>\n> > + switch (io_op)\n> > + {\n> > + case IOOP_ALLOC:\n> > + pending_counters->allocs++;\n> > + cumulative_counters->allocs++;\n> > + break;\n> > + case IOOP_EXTEND:\n> > + pending_counters->extends++;\n> > + cumulative_counters->extends++;\n> > + break;\n> > + case IOOP_FSYNC:\n> > + pending_counters->fsyncs++;\n> > + cumulative_counters->fsyncs++;\n> > + break;\n> > + case IOOP_WRITE:\n> > + pending_counters->writes++;\n> > + cumulative_counters->writes++;\n> > + break;\n> > + }\n> > +\n> > + have_ioopstats = true;\n> > +}\n>\n> Doing two math ops / memory accesses every time seems off. 
Seems better\n> to maintain cumultive_counters whenever reporting stats, just before\n> zeroing pending_counters?\n>\n\nI've gone ahead and cut the cumulative counters concept.\n\n\n>\n>\n> > +/*\n> > + * Report IO operation statistics\n> > + *\n> > + * This works in much the same way as pgstat_flush_io_ops() but is\n> meant for\n> > + * BackendTypes like bgwriter for whom pgstat_report_stat() will not be\n> called\n> > + * frequently enough to keep shared memory stats fresh.\n> > + * Backends not typically calling pgstat_report_stat() can invoke\n> > + * pgstat_report_io_ops() explicitly.\n> > + */\n> > +void\n> > +pgstat_report_io_ops(void)\n> > +{\n>\n> This shouldn't be needed - the flush function above can be used.\n>\n\nFixed.\n\n\n>\n>\n> > + PgStat_IOPathOps *dest_io_path_ops;\n> > + PgStatShared_BackendIOPathOps *stats_shmem;\n> > +\n> > + PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + Assert(!pgStatLocal.shmem->is_shutdown);\n> > + pgstat_assert_is_up();\n> > +\n> > + if (!have_ioopstats)\n> > + return;\n> > +\n> > + if (!beentry || beentry->st_backendType == B_INVALID)\n> > + return;\n>\n> Is there a case where this may be called where we have no beentry?\n>\n> Why not just use MyBackendType?\n>\n\nFixed.\n\n\n>\n>\n> > + stats_shmem = &pgStatLocal.shmem->io_ops;\n> > +\n> > + dest_io_path_ops =\n> > +\n> &stats_shmem->stats[backend_type_get_idx(beentry->st_backendType)];\n> > +\n> > + pgstat_begin_changecount_write(&stats_shmem->changecount);\n>\n> A mentioned before, the changecount stuff doesn't apply here. 
You need a\n> lock.\n>\n\nFixed.\n\n\n>\n>\n> > +PgStat_IOPathOps *\n> > +pgstat_fetch_backend_io_path_ops(void)\n> > +{\n> > + pgstat_snapshot_fixed(PGSTAT_KIND_IOOPS);\n> > + return pgStatLocal.snapshot.io_path_ops;\n> > +}\n> > +\n> > +PgStat_Counter\n> > +pgstat_fetch_cumulative_io_ops(IOPath io_path, IOOp io_op)\n> > +{\n> > + PgStat_IOOpCounters *counters =\n> &cumulative_IOOpStats.data[io_path];\n> > +\n> > + switch (io_op)\n> > + {\n> > + case IOOP_ALLOC:\n> > + return counters->allocs;\n> > + case IOOP_EXTEND:\n> > + return counters->extends;\n> > + case IOOP_FSYNC:\n> > + return counters->fsyncs;\n> > + case IOOP_WRITE:\n> > + return counters->writes;\n> > + default:\n> > + elog(ERROR, \"IO Operation %s for IO Path %s is\n> undefined.\",\n> > + pgstat_io_op_desc(io_op),\n> pgstat_io_path_desc(io_path));\n> > + }\n> > +}\n>\n> There's currently no user for this, right? Maybe let's just defer the\n> cumulative stuff until we need it?\n>\n\nRemoved.\n\n\n>\n>\n> > +const char *\n> > +pgstat_io_path_desc(IOPath io_path)\n> > +{\n> > + const char *io_path_desc = \"Unknown IO Path\";\n> > +\n>\n> This should be unreachable, right?\n>\n\nChanged it to an error.\n\n\n>\n>\n> > From f2b5b75f5063702cbc3c64efdc1e7ef3cf1acdb4 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Mon, 4 Jul 2022 15:44:17 -0400\n> > Subject: [PATCH v22 3/3] Add system view tracking IO ops per backend type\n>\n> > Add pg_stat_buffers, a system view which tracks the number of IO\n> > operations (allocs, writes, fsyncs, and extends) done through each IO\n> > path (e.g. shared buffers, local buffers, unbuffered IO) by each type of\n> > backend.\n>\n> I think I like pg_stat_io a bit better? 
Nearly everything in here seems\n> to fit better in that.\n>\n> I guess we could split out buffers allocated, but that's actually\n> interesting in the context of the kind of IO too.\n>\n\nchanged it to pg_stat_io\n\n\n>\n> > +CREATE VIEW pg_stat_buffers AS\n> > +SELECT\n> > + b.backend_type,\n> > + b.io_path,\n> > + b.alloc,\n> > + b.extend,\n> > + b.fsync,\n> > + b.write,\n> > + b.stats_reset\n> > +FROM pg_stat_get_buffers() b;\n>\n> Do we want to expose all data to all users? I guess pg_stat_bgwriter\n> does? But this does split things out a lot more...\n>\n>\nI didn't see another similar example limiting access.\n\n\n> > DROP TABLE trunc_stats_test, trunc_stats_test1, trunc_stats_test2,\n> trunc_stats_test3, trunc_stats_test4;\n> > DROP TABLE prevstats;\n> > +SELECT pg_stat_reset_shared('buffers');\n> > + pg_stat_reset_shared\n> > +----------------------\n> > +\n> > +(1 row)\n> > +\n> > +SELECT pg_stat_force_next_flush();\n> > + pg_stat_force_next_flush\n> > +--------------------------\n> > +\n> > +(1 row)\n> > +\n> > +SELECT write = 0 FROM pg_stat_buffers WHERE io_path = 'Shared' and\n> backend_type = 'checkpointer';\n> > + ?column?\n> > +----------\n> > + t\n> > +(1 row)\n>\n>\n> Don't think you can rely on that. The lookup of the view, functions\n> might have needed to load catalog data, which might have needed to evict\n> buffers. I think you can do something more reliable by checking that\n> there's more written buffers after a checkpoint than before, or such.\n>\n>\nYes, per an off list suggestion by you, I have changed the tests to use a\nsum of writes. 
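In rough shape (an illustrative sketch using psql's \gset; table name assumed, not the committed test verbatim), the sum-based check is:

```sql
-- Record the total, dirty some buffers, checkpoint, then compare totals.
SELECT sum(write) AS writes_before FROM pg_stat_buffers \gset
CREATE TABLE test_io_sum (a int);
INSERT INTO test_io_sum SELECT generate_series(1, 1000);
CHECKPOINT;
SELECT pg_stat_force_next_flush();
SELECT sum(write) > :writes_before FROM pg_stat_buffers;
```

Summing across all backend types and IO paths sidesteps the question of which process actually performed each write. 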
I've also added a test for IOPATH_LOCAL and fixed some of\nthe missing calls to count IO Operations for IOPATH_LOCAL and\nIOPATH_STRATEGY.\n\nI struggled to come up with a way to test writes for a particular\ntype of backend are counted correctly since a dirty buffer could be\nwritten out by another type of backend before the target BackendType has\na chance to write it out.\n\nI also struggled to come up with a way to test IO operations for\nbackground workers. I'm not sure of a way to deterministically have a\nbackground worker do a particular kind of IO in a test scenario.\n\nI'm not sure how to cause a strategy \"extend\" for testing.\n\n\n>\n> Would be nice to have something testing that the ringbuffer stats stuff\n> does something sensible - that feels not entirely trivial.\n>\n>\nI've added a test to test that reused strategy buffers are counted as\nallocs. I would like to add a test which checks that if a buffer in the\nring is pinned and thus not reused, that it is not counted as a strategy\nalloc, but I found it challenging without a way to pause vacuuming, pin\na buffer, then resume vacuuming.\n\nThanks,\nMelanie",
"msg_date": "Mon, 11 Jul 2022 22:22:28 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Mon, 11 Jul 2022 22:22:28 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in \n> Hi,\n> \n> In the attached patch set, I've added in missing IO operations for\n> certain IO Paths as well as enumerating in the commit message which IO\n> Paths and IO Operations are not currently counted and or not possible.\n> \n> There is a TODO in HandleWalWriterInterrupts() about removing\n> pgstat_report_wal() since it is immediately before a proc_exit()\n\nRight. walwriter does that without needing the explicit call.\n\n> I was wondering if LocalBufferAlloc() should increment the counter or if\n> I should wait until GetLocalBufferStorage() to increment the counter.\n\nDepends on what \"allocate\" means. Different from shared buffers, local\nbuffers are taken from OS then allocated to page. OS-allcoated pages\nare restricted by num_temp_buffers so I think what we're interested in\nis the count incremented by LocalBuferAlloc(). (And it is the parallel\nof alloc for shared-buffers)\n\n> I also realized that I am not differentiating between IOPATH_SHARED and\n> IOPATH_STRATEGY for IOOP_FSYNC. But, given that we don't know what type\n> of buffer we are fsync'ing by the time we call register_dirty_segment(),\n> I'm not sure how we would fix this.\n\nI think there scarcely happens flush for strategy-loaded buffers. If\nthat is sensible, IOOP_FSYNC would not make much sense for\nIOPATH_STRATEGY.\n\n> On Wed, Jul 6, 2022 at 3:20 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > On 2022-07-05 13:24:55 -0400, Melanie Plageman wrote:\n> > > @@ -176,6 +176,8 @@ InitStandaloneProcess(const char *argv0)\n> > > {\n> > > Assert(!IsPostmasterEnvironment);\n> > >\n> > > + MyBackendType = B_STANDALONE_BACKEND;\n> >\n> > Hm. This is used for singleuser mode as well as bootstrap. Should we\n> > split those? 
It's not like bootstrap mode really matters for stats, so\n> > I'm inclined not to.\n> >\n> >\n> I have no opinion currently.\n> It depends on how commonly you think developers might want separate\n> bootstrap and single user mode IO stats.\n\nRegarding stats, I don't think separating them makes much sense.\n\n> > > @@ -375,6 +376,8 @@ BootstrapModeMain(int argc, char *argv[], bool\n> > check_only)\n> > > * out the initial relation mapping files.\n> > > */\n> > > RelationMapFinishBootstrap();\n> > > + // TODO: should this be done for bootstrap?\n> > > + pgstat_report_io_ops();\n> >\n> > Hm. Not particularly useful, but also not harmful. But we don't need an\n> > explicit call, because it'll be done at process exit too. At least I\n> > think, it could be that it's different for bootstrap.\n>\n> I've removed this and other occurrences which were before proc_exit()\n> (and thus redundant). (Though I did not explicitly check if it was\n> different for bootstrap.)\n\npgstat_report_stat(true) is supposed to be called as needed via\na before_shmem_exit hook, so I think that's the right thing.\n\n> > IMO it'd be a good idea to add pgstat_report_io_ops() to\n> > pgstat_report_vacuum()/analyze(), so that the stats for a longrunning\n> > autovac worker get updated more regularly.\n> >\n> \n> noted and fixed.\n\n> > How about moving the pgstat_report_io_ops() into\n> > pgstat_report_bgwriter(), pgstat_report_autovacuum() etc? Seems\n> > unnecessary to have multiple pgstat_* calls in these places.\n> >\n> >\n> noted and fixed.\n\n+\t * Also report IO Operations statistics\n\nI think that the function comment also should mention this.\n\n> > Wonder if it's worth making the lock specific to the backend type?\n> >\n> \n> I've added another Lock into PgStat_IOPathOps so that each BackendType\n> can be locked separately. 
But, I've also kept the lock in\n> PgStatShared_BackendIOPathOps so that reset_all and snapshot could be\n> done easily.\n\nLooks fine about the lock separation.\nBy the way, in the following line:\n\n+\t\t&pgStatLocal.shmem->io_ops.stats[backend_type_get_idx(MyBackendType)];\n\nbackend_type_get_idx(x) is actually (x - 1) plus an assertion on the\nvalue range, and the only use-case is here. There's a reverse\nfunction, also used in only one place.\n\n+\t\tDatum\t\tbackend_type_desc =\n+\t\t\tCStringGetTextDatum(GetBackendTypeDesc(idx_get_backend_type(i)));\n\nIn this usage GetBackendTypeDesc() gracefully treats out-of-domain\nvalues but idx_get_backend_type outright kills the process for the\nsame. This is inconsistent.\n\nMy humble opinion on this is that we don't define the two functions and\nreplace the calls to them with (x +/- 1). In addition to that, I think\nwe should not abort() on invalid backend types. In that sense, I\nwonder if we could use the B_INVALIDth element for this purpose.\n\n> > > + LWLockAcquire(&stats_shmem->lock, LW_SHARED);\n> > > + memcpy(&reset, reset_offset, sizeof(stats_shmem->stats));\n> > > + LWLockRelease(&stats_shmem->lock);\n> >\n> > Which then also means that you don't need the reset offset stuff. It's\n> > only there because with the changecount approach we can't take a lock to\n> > reset the stats (since there is no lock). With a lock you can just reset\n> > the shared state.\n> >\n> \n> Yes, I believe I have cleaned up all of this embarrassing mess. 
I use the\n> lock in PgStatShared_BackendIOPathOps for reset all and snapshot and the\n> locks in PgStat_IOPathOps for flush.\n\nLooks fine, but I think pgstat_flush_io_ops() needs more comments like\nother pgstat_flush_* functions.\n\n+\tfor (int i = 0; i < BACKEND_NUM_TYPES; i++)\n+\t\tstats_shmem->stats[i].stat_reset_timestamp = ts;\n\nI'm not sure we need a separate reset timestamp for each backend type\nbut the SLRU counters do the same thing..\n\n> > > +pgstat_report_io_ops(void)\n> > > +{\n> >\n> > This shouldn't be needed - the flush function above can be used.\n> >\n> \n> Fixed.\n\nThe commit message of 0002 contains that name:p\n\n> > > +const char *\n> > > +pgstat_io_path_desc(IOPath io_path)\n> > > +{\n> > > + const char *io_path_desc = \"Unknown IO Path\";\n> > > +\n> >\n> > This should be unreachable, right?\n> >\n> \n> Changed it to an error.\n\n+\telog(ERROR, \"Attempt to describe an unknown IOPath\");\n\nI think we usually spell it as (\"unrecognized IOPath value: %d\", io_path).\n\n> > > From f2b5b75f5063702cbc3c64efdc1e7ef3cf1acdb4 Mon Sep 17 00:00:00 2001\n> > > From: Melanie Plageman <melanieplageman@gmail.com>\n> > > Date: Mon, 4 Jul 2022 15:44:17 -0400\n> > > Subject: [PATCH v22 3/3] Add system view tracking IO ops per backend type\n> >\n> > > Add pg_stat_buffers, a system view which tracks the number of IO\n> > > operations (allocs, writes, fsyncs, and extends) done through each IO\n> > > path (e.g. shared buffers, local buffers, unbuffered IO) by each type of\n> > > backend.\n> >\n> > I think I like pg_stat_io a bit better? Nearly everything in here seems\n> > to fit better in that.\n> >\n> > I guess we could split out buffers allocated, but that's actually\n> > interesting in the context of the kind of IO too.\n> >\n> \n> changed it to pg_stat_io\n\nA slightly different thing, but I felt a little uneasy about some uses of\n\"pgstat_io_ops\". IOOp looks like a neighbouring word of IOPath. 
On the\nother hand, actually iopath is used as an attribute of io_ops in many\nplaces. Couldn't we be more consistent about the relationship between\nthe names?\n\nIOOp -> PgStat_IOOpType\nIOPath -> PgStat_IOPath\nPgStat_IOOpCounters -> PgStat_IOCounters\nPgStat_IOPathOps -> PgStat_IO\npgstat_count_io_op -> pgstat_count_io\n...\n\n(Better wordings are welcome.)\n\n> > > +CREATE VIEW pg_stat_buffers AS\n> > > +SELECT\n> > > + b.backend_type,\n> > > + b.io_path,\n> > > + b.alloc,\n> > > + b.extend,\n> > > + b.fsync,\n> > > + b.write,\n> > > + b.stats_reset\n> > > +FROM pg_stat_get_buffers() b;\n> >\n> > Do we want to expose all data to all users? I guess pg_stat_bgwriter\n> > does? But this does split things out a lot more...\n> >\n> >\n> I didn't see another similar example limiting access.\n\n(The doc told me that) pg_buffercache view is restricted to\npg_monitor. But other activity-stats(aka stats collector:)-related\npg_stat_* views are not restricted to pg_monitor.\n\ndoc> pg_monitor\tRead/execute various monitoring views and functions. \n\nHmm....\n\n> > Don't think you can rely on that. The lookup of the view, functions\n> > might have needed to load catalog data, which might have needed to evict\n> > buffers. I think you can do something more reliable by checking that\n> > there's more written buffers after a checkpoint than before, or such.\n> >\n> >\n> Yes, per an off list suggestion by you, I have changed the tests to use a\n> sum of writes. I've also added a test for IOPATH_LOCAL and fixed some of\n> the missing calls to count IO Operations for IOPATH_LOCAL and\n> IOPATH_STRATEGY.\n> \n> I struggled to come up with a way to test writes for a particular\n> type of backend are counted correctly since a dirty buffer could be\n> written out by another type of backend before the target BackendType has\n> a chance to write it out.\n> \n> I also struggled to come up with a way to test IO operations for\n> background workers. 
I'm not sure of a way to deterministically have a\n> background worker do a particular kind of IO in a test scenario.\n> \n> I'm not sure how to cause a strategy \"extend\" for testing.\n\nI'm not sure what you are expecting, but for example, \"create table t\nas select generate_series(0, 99999)\" increments Strategy-extend by\nabout 400. (I'm surprised that autovac worker-shared-extend has a\nnon-zero number)\n\n\n> > Would be nice to have something testing that the ringbuffer stats stuff\n> > does something sensible - that feels not entirely trivial.\n> >\n> >\n> I've added a test to test that reused strategy buffers are counted as\n> allocs. I would like to add a test which checks that if a buffer in the\n> ring is pinned and thus not reused, that it is not counted as a strategy\n> alloc, but I found it challenging without a way to pause vacuuming, pin\n> a buffer, then resume vacuuming.\n\n===\n\nIf I'm not missing something, in BufferAlloc, when strategy is not\nused and the victim is dirty, iopath is determined based on the\nuninitialized from_ring. It seems to me from_ring is equivalent to\nstrategy_current_was_in_ring. And if StrategyGetBuffer has set\nfrom_ring to false, StrategyRejectBuffer may set it to true, which\nis wrong. The logic around there seems to need a rethink.\n\nWhat can we read from the values separated into Shared and Strategy?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 12 Jul 2022 17:06:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Thanks for the review!\n\nOn Tue, Jul 12, 2022 at 4:06 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Mon, 11 Jul 2022 22:22:28 -0400, Melanie Plageman <\n> melanieplageman@gmail.com> wrote in\n> > Hi,\n> >\n> > In the attached patch set, I've added in missing IO operations for\n> > certain IO Paths as well as enumerating in the commit message which IO\n> > Paths and IO Operations are not currently counted and or not possible.\n> >\n> > There is a TODO in HandleWalWriterInterrupts() about removing\n> > pgstat_report_wal() since it is immediately before a proc_exit()\n>\n> Right. walwriter does that without needing the explicit call.\n>\n\nI have deleted it.\n\n\n>\n> > I was wondering if LocalBufferAlloc() should increment the counter or if\n> > I should wait until GetLocalBufferStorage() to increment the counter.\n>\n> Depends on what \"allocate\" means. Different from shared buffers, local\n> buffers are taken from OS then allocated to page. OS-allcoated pages\n> are restricted by num_temp_buffers so I think what we're interested in\n> is the count incremented by LocalBuferAlloc(). (And it is the parallel\n> of alloc for shared-buffers)\n>\n\nI've left it in LocalBufferAlloc().\n\n\n>\n> > I also realized that I am not differentiating between IOPATH_SHARED and\n> > IOPATH_STRATEGY for IOOP_FSYNC. But, given that we don't know what type\n> > of buffer we are fsync'ing by the time we call register_dirty_segment(),\n> > I'm not sure how we would fix this.\n>\n> I think there scarcely happens flush for strategy-loaded buffers. 
If\n> that is sensible, IOOP_FSYNC would not make much sense for\n> IOPATH_STRATEGY.\n>\n\nWhy would it be less likely for a backend to do its own fsync when\nflushing a dirty strategy buffer than a regular dirty shared buffer?\n\n\n>\n> > > IMO it'd be a good idea to add pgstat_report_io_ops() to\n> > > pgstat_report_vacuum()/analyze(), so that the stats for a longrunning\n> > > autovac worker get updated more regularly.\n> > >\n> >\n> > noted and fixed.\n>\n> > > How about moving the pgstat_report_io_ops() into\n> > > pgstat_report_bgwriter(), pgstat_report_autovacuum() etc? Seems\n> > > unnecessary to have multiple pgstat_* calls in these places.\n> > >\n> > >\n> > >\n> > noted and fixed.\n>\n> + * Also report IO Operations statistics\n>\n> I think that the function comment also should mention this.\n>\n\nI've added comments at the top of all these functions.\n\n\n>\n> > > Wonder if it's worth making the lock specific to the backend type?\n> > >\n> >\n> > I've added another Lock into PgStat_IOPathOps so that each BackendType\n> > can be locked separately. But, I've also kept the lock in\n> > PgStatShared_BackendIOPathOps so that reset_all and snapshot could be\n> > done easily.\n>\n> Looks fine about the lock separation.\n>\n\nActually, I think it is not safe to use both of these locks. So for\npicking one method, it is probably better to go with the locks in\nPgStat_IOPathOps, it will be more efficient for flush (and not for\nfetching and resetting), so that is probably the way to go, right?\n\n\n> By the way, in the following line:\n>\n> +\n> &pgStatLocal.shmem->io_ops.stats[backend_type_get_idx(MyBackendType)];\n>\n> backend_type_get_idx(x) is actually (x - 1) plus assertion on the\n> value range. And the only use-case is here. 
There's an reverse\n> function and also used only at one place.\n>\n> + Datum backend_type_desc =\n> +\n> CStringGetTextDatum(GetBackendTypeDesc(idx_get_backend_type(i)));\n>\n> In this usage GetBackendTypeDesc() gracefully treats out-of-domain\n> values but idx_get_backend_type keenly kills the process for the\n> same. This is inconsistent.\n>\n> My humbel opinion on this is we don't define the two functions and\n> replace the calls to them with (x +/- 1). Addition to that, I think\n> we should not abort() by invalid backend types. In that sense, I\n> wonder if we could use B_INVALIDth element for this purpose.\n>\n\nI think that GetBackendTypeDesc() should probably also error out for an\nunknown value.\n\nI would be open to not using the helper functions. I thought it would be\nless error-prone, but since it is limited to the code in\npgstat_io_ops.c, it is probably okay. Let me think a bit more.\n\nCould you explain more about what you mean about using B_INVALID\nBackendType?\n\n\n>\n> > > > + LWLockAcquire(&stats_shmem->lock, LW_SHARED);\n> > > > + memcpy(&reset, reset_offset, sizeof(stats_shmem->stats));\n> > > > + LWLockRelease(&stats_shmem->lock);\n> > >\n> > > Which then also means that you don't need the reset offset stuff. It's\n> > > only there because with the changecount approach we can't take a lock\n> to\n> > > reset the stats (since there is no lock). With a lock you can just\n> reset\n> > > the shared state.\n> > >\n> >\n> > Yes, I believe I have cleaned up all of this embarrassing mess. 
I use the\n> > lock in PgStatShared_BackendIOPathOps for reset all and snapshot and the\n> > locks in PgStat_IOPathOps for flush.\n>\n> Looks fine, but I think pgstat_flush_io_ops() need more comments like\n> other pgstat_flush_* functions.\n>\n> + for (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> + stats_shmem->stats[i].stat_reset_timestamp = ts;\n>\n> I'm not sure we need a separate reset timestamp for each backend type\n> but SLRU counter does the same thing..\n>\n\nYes, I think for SLRU stats it is because you can reset individual SLRU\nstats. Also there is no wrapper data structure to put it in. I could\nkeep it in PgStatShared_BackendIOPathOps since you have to reset all IO\noperation stats at once, but I am thinking of getting rid of\nPgStatShared_BackendIOPathOps since it is not needed if I only keep the\nlocks in PgStat_IOPathOps and make the global shared value an array of\nPgStat_IOPathOps.\n\n\n>\n> > > > +pgstat_report_io_ops(void)\n> > > > +{\n> > >\n> > > This shouldn't be needed - the flush function above can be used.\n> > >\n> >\n> > Fixed.\n>\n> The commit message of 0002 contains that name:p\n>\n\nThanks! 
Fixed.\n\n\n>\n> > > > +const char *\n> > > > +pgstat_io_path_desc(IOPath io_path)\n> > > > +{\n> > > > + const char *io_path_desc = \"Unknown IO Path\";\n> > > > +\n> > >\n> > > This should be unreachable, right?\n> > >\n> >\n> > Changed it to an error.\n>\n> + elog(ERROR, \"Attempt to describe an unknown IOPath\");\n>\n> I think we usually spell it as (\"unrecognized IOPath value: %d\", io_path).\n>\n\nI have changed to this.\n\n\n>\n> > > > From f2b5b75f5063702cbc3c64efdc1e7ef3cf1acdb4 Mon Sep 17 00:00:00\n> 2001\n> > > > From: Melanie Plageman <melanieplageman@gmail.com>\n> > > > Date: Mon, 4 Jul 2022 15:44:17 -0400\n> > > > Subject: [PATCH v22 3/3] Add system view tracking IO ops per backend\n> type\n> > >\n> > > > Add pg_stat_buffers, a system view which tracks the number of IO\n> > > > operations (allocs, writes, fsyncs, and extends) done through each IO\n> > > > path (e.g. shared buffers, local buffers, unbuffered IO) by each\n> type of\n> > > > backend.\n> > >\n> > > I think I like pg_stat_io a bit better? Nearly everything in here seems\n> > > to fit better in that.\n> > >\n> > > I guess we could split out buffers allocated, but that's actually\n> > > interesting in the context of the kind of IO too.\n> > >\n> >\n> > changed it to pg_stat_io\n>\n> A bit different thing, but I felt a little uneasy about some uses of\n> \"pgstat_io_ops\". IOOp looks like a neighbouring word of IOPath. On the\n> other hand, actually iopath is used as an attribute of io_ops in many\n> places. 
Couldn't we be more consistent about the relationship between\n> the names?\n>\n> IOOp -> PgStat_IOOpType\n> IOPath -> PgStat_IOPath\n> PgStat_IOOpCOonters -> PgStat_IOCounters\n> PgStat_IOPathOps -> PgStat_IO\n> pgstat_count_io_op -> pgstat_count_io\n> ...\n>\n> (Better wordings are welcome.)\n>\n\nLet me think about naming and make changes in the next version.\n\n\n> > > Would be nice to have something testing that the ringbuffer stats stuff\n> > > does something sensible - that feels not entirely trivial.\n> > >\n> > >\n> > I've added a test to test that reused strategy buffers are counted as\n> > allocs. I would like to add a test which checks that if a buffer in the\n> > ring is pinned and thus not reused, that it is not counted as a strategy\n> > alloc, but I found it challenging without a way to pause vacuuming, pin\n> > a buffer, then resume vacuuming.\n>\n> ===\n>\n> If I'm not missing something, in BufferAlloc, when strategy is not\n> used and the victim is dirty, iopath is determined based on the\n> uninitialized from_ring. It seems to me from_ring is equivalent to\n> strategy_current_was_in_ring. And if StrategyGetBuffer has set\n> from_ring to false, StratetgyRejectBuffer may set it to true, which is\n> is wrong. The logic around there seems to need a rethink.\n>\n> What can we read from the values separated to Shared and Strategy?\n>\n\nI have changed this local variable to only be used for communicating if\nthe buffer which was not rejected by StrategyRejectBuffer() was from the\nring or not for the purposes of counting strategy writes. I could add an\naccessor for this member (strategy->current_was_in_ring) if that makes\nmore sense? 
For strategy allocs, I just use\nstrategy->current_was_in_ring inside of StrategyGetBuffer() since this\nhas access to that member of the struct.\n\nCurrently, strategy allocs count only reuses of a strategy buffer (not\ninitial shared buffers which are added to the ring).\nstrategy writes count only the writing out of dirty buffers which are\nalready in the ring and are being reused.\n\nAlternatively, we could also count as strategy allocs all those buffers\nwhich are added to the ring and count as strategy writes all those\nshared buffers which are dirty when initially added to the ring.\n\n- Melanie",
"msg_date": "Tue, 12 Jul 2022 12:19:06 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-12 12:19:06 -0400, Melanie Plageman wrote:\n> > > I also realized that I am not differentiating between IOPATH_SHARED and\n> > > IOPATH_STRATEGY for IOOP_FSYNC. But, given that we don't know what type\n> > > of buffer we are fsync'ing by the time we call register_dirty_segment(),\n> > > I'm not sure how we would fix this.\n> >\n> > I think there scarcely happens flush for strategy-loaded buffers. If\n> > that is sensible, IOOP_FSYNC would not make much sense for\n> > IOPATH_STRATEGY.\n> >\n> \n> Why would it be less likely for a backend to do its own fsync when\n> flushing a dirty strategy buffer than a regular dirty shared buffer?\n\nWe really just don't expect a backend to do many segment fsyncs at\nall. Otherwise there's something wrong with the forwarding mechanism.\n\nIt'd be different if we tracked WAL fsyncs more granularly - which would be\nquite interesting - but that's something for another day^Wpatch.\n\n\n> > > > Wonder if it's worth making the lock specific to the backend type?\n> > > >\n> > >\n> > > I've added another Lock into PgStat_IOPathOps so that each BackendType\n> > > can be locked separately. But, I've also kept the lock in\n> > > PgStatShared_BackendIOPathOps so that reset_all and snapshot could be\n> > > done easily.\n> >\n> > Looks fine about the lock separation.\n> >\n> \n> Actually, I think it is not safe to use both of these locks. So for\n> picking one method, it is probably better to go with the locks in\n> PgStat_IOPathOps, it will be more efficient for flush (and not for\n> fetching and resetting), so that is probably the way to go, right?\n\nI think it's good to just use one kind of lock, and efficiency of snapshotting\n/ resetting is nearly irrelevant. 
But I don't see why it's not safe to use\nboth kinds of locks?\n\n\n> > Looks fine, but I think pgstat_flush_io_ops() need more comments like\n> > other pgstat_flush_* functions.\n> >\n> > + for (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> > + stats_shmem->stats[i].stat_reset_timestamp = ts;\n> >\n> > I'm not sure we need a separate reset timestamp for each backend type\n> > but SLRU counter does the same thing..\n> >\n> \n> Yes, I think for SLRU stats it is because you can reset individual SLRU\n> stats. Also there is no wrapper data structure to put it in. I could\n> keep it in PgStatShared_BackendIOPathOps since you have to reset all IO\n> operation stats at once, but I am thinking of getting rid of\n> PgStatShared_BackendIOPathOps since it is not needed if I only keep the\n> locks in PgStat_IOPathOps and make the global shared value an array of\n> PgStat_IOPathOps.\n\nI'm strongly against introducing super granular reset timestamps. I think that\nwas a mistake for SLRU stats, but we can't fix that as easily.\n\n\n> Currently, strategy allocs count only reuses of a strategy buffer (not\n> initial shared buffers which are added to the ring).\n> strategy writes count only the writing out of dirty buffers which are\n> already in the ring and are being reused.\n\nThat seems right to me.\n\n\n> Alternatively, we could also count as strategy allocs all those buffers\n> which are added to the ring and count as strategy writes all those\n> shared buffers which are dirty when initially added to the ring.\n\nI don't think that'd provide valuable information. The whole reason that\nstrategy writes are interesting is that they can lead to writing out data a\nlot sooner than they would be written out without a strategy being used.\n\n\n> Subject: [PATCH v24 2/3] Track IO operation statistics\n> \n> Introduce \"IOOp\", an IO operation done by a backend, and \"IOPath\", the\n> location or type of IO done by a backend. 
For example, the checkpointer\n> may write a shared buffer out. This would be counted as an IOOp write on\n> an IOPath IOPATH_SHARED by BackendType \"checkpointer\".\n\nI'm still not 100% happy with IOPath - seems a bit too easy to confuse with\nthe file path. What about 'origin'?\n\n\n> Each IOOp (alloc, fsync, extend, write) is counted per IOPath\n> (direct, local, shared, or strategy) through a call to\n> pgstat_count_io_op().\n\nIt seems we should track reads too - it's quite interesting to know whether\nreads happened because of a strategy, for example. You do reference reads in a\nlater part of the commit message even :)\n\n\n> The primary concern of these statistics is IO operations on data blocks\n> during the course of normal database operations. IO done by, for\n> example, the archiver or syslogger is not counted in these statistics.\n\nWe could extend this at a later stage, if we really want to. But I'm not sure\nit's interesting or fully possible. E.g. the archiver's writes are largely not\ndone by the archiver itself, but by a command (or module these days) it shells\nout to.\n\n\n> Note that this commit does not add code to increment IOPATH_DIRECT. A\n> future patch adding wrappers for smgrwrite(), smgrextend(), and\n> smgrimmedsync() would provide a good location to call\n> pgstat_count_io_op() for unbuffered IO and avoid regressions for future\n> users of these functions.\n\nHm. 
Perhaps we should defer introducing IOPATH_DIRECT for now then?\n\n\n> Stats on IOOps for all IOPaths for a backend are initially accumulated\n> locally.\n> \n> Later they are flushed to shared memory and accumulated with those from\n> all other backends, exited and live.\n\nPerhaps mention here that this later could be extended to make per-connection\nstats visible?\n\n\n> Some BackendTypes will not execute pgstat_report_stat() and thus must\n> explicitly call pgstat_flush_io_ops() in order to flush their backend\n> local IO operation statistics to shared memory.\n\nMaybe add \"flush ... during ongoing operation\" or such? Because they'd all\nflush at commit, IIRC.\n\n\n> diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c\n> index 088556ab54..963b05321e 100644\n> --- a/src/backend/bootstrap/bootstrap.c\n> +++ b/src/backend/bootstrap/bootstrap.c\n> @@ -33,6 +33,7 @@\n> #include \"miscadmin.h\"\n> #include \"nodes/makefuncs.h\"\n> #include \"pg_getopt.h\"\n> +#include \"pgstat.h\"\n> #include \"storage/bufmgr.h\"\n> #include \"storage/bufpage.h\"\n> #include \"storage/condition_variable.h\"\n\nHm?\n\n\n> diff --git a/src/backend/postmaster/walwriter.c b/src/backend/postmaster/walwriter.c\n> index e926f8c27c..beb46dcb55 100644\n> --- a/src/backend/postmaster/walwriter.c\n> +++ b/src/backend/postmaster/walwriter.c\n> @@ -293,18 +293,7 @@ HandleWalWriterInterrupts(void)\n> \t}\n> \n> \tif (ShutdownRequestPending)\n> -\t{\n> -\t\t/*\n> -\t\t * Force reporting remaining WAL statistics at process exit.\n> -\t\t *\n> -\t\t * Since pgstat_report_wal is invoked with 'force' is false in main\n> -\t\t * loop to avoid overloading the cumulative stats system, there may\n> -\t\t * exist unreported stats counters for the WAL writer.\n> -\t\t */\n> -\t\tpgstat_report_wal(true);\n> -\n> \t\tproc_exit(0);\n> -\t}\n> \n> \t/* Perform logging of memory contexts of this process */\n> \tif (LogMemoryContextPending)\n\nLet's do this in a separate commit and 
get it out of the way...\n\n\n> @@ -682,16 +694,37 @@ AddBufferToRing(BufferAccessStrategy strategy, BufferDesc *buf)\n> * if this buffer should be written and re-used.\n> */\n> bool\n> -StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf)\n> +StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf, bool *write_from_ring)\n> {\n> -\t/* We only do this in bulkread mode */\n> +\n> +\t/*\n> +\t * We only reject reusing and writing out the strategy buffer this in\n> +\t * bulkread mode.\n> +\t */\n> \tif (strategy->btype != BAS_BULKREAD)\n> +\t{\n> +\t\t/*\n> +\t\t * If the buffer was from the ring and we are not rejecting it, consider it\n> +\t\t * a write of a strategy buffer.\n> +\t\t */\n> +\t\tif (strategy->current_was_in_ring)\n> +\t\t\t*write_from_ring = true;\n\nHm. This is set even if the buffer wasn't dirty? I guess we don't expect\nStrategyRejectBuffer() to be called for clean buffers...\n\n\n> /*\n> diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c\n> index d9275611f0..d3963f59d0 100644\n> --- a/src/backend/utils/activity/pgstat_database.c\n> +++ b/src/backend/utils/activity/pgstat_database.c\n> @@ -47,7 +47,8 @@ pgstat_drop_database(Oid databaseid)\n> }\n> \n> /*\n> - * Called from autovacuum.c to report startup of an autovacuum process.\n> + * Called from autovacuum.c to report startup of an autovacuum process and\n> + * flush IO Operation statistics.\n> * We are called before InitPostgres is done, so can't rely on MyDatabaseId;\n> * the db OID must be passed in, instead.\n> */\n> @@ -72,6 +73,11 @@ pgstat_report_autovac(Oid dboid)\n> \tdbentry->stats.last_autovac_time = GetCurrentTimestamp();\n> \n> \tpgstat_unlock_entry(entry_ref);\n> +\n> +\t/*\n> +\t * Report IO Operation statistics\n> +\t */\n> +\tpgstat_flush_io_ops(false);\n> }\n\nHm. 
I suspect this will always be zero - at this point we haven't connected to\na database, so there really can't have been much, if any, IO. I think I\nsuggested doing something here, but on a second look it really doesn't make\nmuch sense.\n\nNote that that's different from doing something in\npgstat_report_(vacuum|analyze) - clearly we've done something at that point.\n\n\n> /*\n> - * Report that the table was just vacuumed.\n> + * Report that the table was just vacuumed and flush IO Operation statistics.\n> */\n> void\n> pgstat_report_vacuum(Oid tableoid, bool shared,\n> @@ -257,10 +257,15 @@ pgstat_report_vacuum(Oid tableoid, bool shared,\n> \t}\n> \n> \tpgstat_unlock_entry(entry_ref);\n> +\n> +\t/*\n> +\t * Report IO Operations statistics\n> +\t */\n> +\tpgstat_flush_io_ops(false);\n> }\n> \n> /*\n> - * Report that the table was just analyzed.\n> + * Report that the table was just analyzed and flush IO Operation statistics.\n> *\n> * Caller must provide new live- and dead-tuples estimates, as well as a\n> * flag indicating whether to reset the changes_since_analyze counter.\n> @@ -340,6 +345,11 @@ pgstat_report_analyze(Relation rel,\n> \t}\n> \n> \tpgstat_unlock_entry(entry_ref);\n> +\n> +\t/*\n> +\t * Report IO Operations statistics\n> +\t */\n> +\tpgstat_flush_io_ops(false);\n> }\n\nThink it'd be good to amend these comments to say that otherwise stats would\nonly get flushed after a multi-relation autovacuum cycle is done / a\nVACUUM/ANALYZE command processed all tables. 
Perhaps add the comment to one\nof the two functions, and just reference it in the other place?\n\n\n> --- a/src/include/utils/backend_status.h\n> +++ b/src/include/utils/backend_status.h\n> @@ -306,6 +306,40 @@ extern const char *pgstat_get_crashed_backend_activity(int pid, char *buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t int buflen);\n> extern uint64 pgstat_get_my_query_id(void);\n> \n> +/* Utility functions */\n> +\n> +/*\n> + * When maintaining an array of information about all valid BackendTypes, in\n> + * order to avoid wasting the 0th spot, use this helper to convert a valid\n> + * BackendType to a valid location in the array (given that no spot is\n> + * maintained for B_INVALID BackendType).\n> + */\n> +static inline int backend_type_get_idx(BackendType backend_type)\n> +{\n> +\t/*\n> +\t * backend_type must be one of the valid backend types. If caller is\n> +\t * maintaining backend information in an array that includes B_INVALID,\n> +\t * this function is unnecessary.\n> +\t */\n> +\tAssert(backend_type > B_INVALID && backend_type <= BACKEND_NUM_TYPES);\n> +\treturn backend_type - 1;\n> +}\n\nIn function definitions (vs declarations) we put the 'static inline int' in a\nseparate line from the rest of the function signature.\n\n> +/*\n> + * When using a value from an array of information about all valid\n> + * BackendTypes, add 1 to the index before using it as a BackendType to adjust\n> + * for not maintaining a spot for B_INVALID BackendType.\n> + */\n> +static inline BackendType idx_get_backend_type(int idx)\n> +{\n> +\tint backend_type = idx + 1;\n> +\t/*\n> +\t * If the array includes a spot for B_INVALID BackendType this function is\n> +\t * not required.\n\nThe comments around this seem a bit over the top, but I also don't mind them\nmuch.\n\n\n> Add pg_stat_io, a system view which tracks the number of IOOp (allocs,\n> writes, fsyncs, and extends) done through each IOPath (e.g. 
shared\n> buffers, local buffers, unbuffered IO) by each type of backend.\n\nAnnoying question: pg_stat_io vs pg_statio? I'd not think of suggesting the\nlatter, except that we already have a bunch of views with that prefix.\n\n\n> Some of these should always be zero. For example, checkpointer does not\n> use a BufferAccessStrategy (currently), so the \"strategy\" IOPath for\n> checkpointer will be 0 for all IOOps.\n\nWhat do you think about returning NULL for the values that we expect to never\nbe non-zero? Perhaps with an assert against non-zero values? Seems like it\nmight be helpful for understanding the view.\n\n\n> +/*\n> +* When adding a new column to the pg_stat_io view, add a new enum\n> +* value here above IO_NUM_COLUMNS.\n> +*/\n> +enum\n> +{\n> +\tIO_COLUMN_BACKEND_TYPE,\n> +\tIO_COLUMN_IO_PATH,\n> +\tIO_COLUMN_ALLOCS,\n> +\tIO_COLUMN_EXTENDS,\n> +\tIO_COLUMN_FSYNCS,\n> +\tIO_COLUMN_WRITES,\n> +\tIO_COLUMN_RESET_TIME,\n> +\tIO_NUM_COLUMNS,\n> +};\n\nWe typedef pretty much every enum so the enum can be referenced without the\n'enum' prefix. I'd do that here, even if we don't need it.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:01:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-11 22:22:28 -0400, Melanie Plageman wrote:\n> Yes, per an off list suggestion by you, I have changed the tests to use a\n> sum of writes. I've also added a test for IOPATH_LOCAL and fixed some of\n> the missing calls to count IO Operations for IOPATH_LOCAL and\n> IOPATH_STRATEGY.\n> \n> I struggled to come up with a way to test writes for a particular\n> type of backend are counted correctly since a dirty buffer could be\n> written out by another type of backend before the target BackendType has\n> a chance to write it out.\n\nI guess temp file writes would be reliably done by one backend... Don't have a\ngood idea otherwise.\n\n\n> I also struggled to come up with a way to test IO operations for\n> background workers. I'm not sure of a way to deterministically have a\n> background worker do a particular kind of IO in a test scenario.\n\nI think it's perfectly fine to not test that - for it to be broken we'd have\nto somehow screw up setting the backend type. Everything else is the same as\nother types of backends anyway.\n\nIf you *do* want to test it, you probably could use\nSET parallel_leader_participation = false;\nSET force_parallel_mode = 'regress';\nSELECT something_triggering_io;\n\n\n> I'm not sure how to cause a strategy \"extend\" for testing.\n\nCOPY into a table should work. But might be unattractive due to the size of\nthe COPY ringbuffer.\n\n\n> > Would be nice to have something testing that the ringbuffer stats stuff\n> > does something sensible - that feels not entirely trivial.\n> >\n> >\n> I've added a test to test that reused strategy buffers are counted as\n> allocs. 
I would like to add a test which checks that if a buffer in the\n> ring is pinned and thus not reused, that it is not counted as a strategy\n> alloc, but I found it challenging without a way to pause vacuuming, pin\n> a buffer, then resume vacuuming.\n\nYea, that's probably too hard to make reliable to be worth it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:18:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Tue, 12 Jul 2022 12:19:06 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in \n> > +\n> > &pgStatLocal.shmem->io_ops.stats[backend_type_get_idx(MyBackendType)];\n> >\n> > backend_type_get_idx(x) is actually (x - 1) plus assertion on the\n> > value range. And the only use-case is here. There's an reverse\n> > function and also used only at one place.\n> >\n> > + Datum backend_type_desc =\n> > +\n> > CStringGetTextDatum(GetBackendTypeDesc(idx_get_backend_type(i)));\n> >\n> > In this usage GetBackendTypeDesc() gracefully treats out-of-domain\n> > values but idx_get_backend_type keenly kills the process for the\n> > same. This is inconsistent.\n> >\n> > My humbel opinion on this is we don't define the two functions and\n> > replace the calls to them with (x +/- 1). Addition to that, I think\n> > we should not abort() by invalid backend types. In that sense, I\n> > wonder if we could use B_INVALIDth element for this purpose.\n> >\n> \n> I think that GetBackendTypeDesc() should probably also error out for an\n> unknown value.\n> \n> I would be open to not using the helper functions. I thought it would be\n> less error-prone, but since it is limited to the code in\n> pgstat_io_ops.c, it is probably okay. Let me think a bit more.\n> \n> Could you explain more about what you mean about using B_INVALID\n> BackendType?\n\nI imagined using B_INVALID as a kind of \"default\" partition, which\naccepts all unknown backend types. We can just ignore those values, but\nthen we lose the clue to a malfunction of the stats machinery. I thought\nof that backend type as a sentinel for malfunctions. Thus we can\nemit logs instead.\n\nI feel that the stats machinery shouldn't stop the server if at all possible,\nand I think it is an overreaction to abort for invalid values that can be\neasily coped with.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Jul 2022 11:00:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-13 11:00:07 +0900, Kyotaro Horiguchi wrote:\n> I imagined to use B_INVALID as a kind of \"default\" partition, which\n> accepts all unknown backend types.\n\nThere shouldn't be any unknown backend types. Something has gone wrong if we\nget this far without a backend type set.\n\n\n> We can just ignore that values but then we lose the clue for malfunction of\n> stats machinery. I thought that that backend-type as the sentinel for\n> malfunctions. Thus we can emit logs instead.\n> \n> I feel that the stats machinery shouldn't stop the server as possible,\n> or I think it is overreaction to abort for invalid values that can be\n> easily coped with.\n\nI strongly disagree. That just ends up with hard-to-find bugs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 19:18:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Tue, 12 Jul 2022 19:18:22 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-07-13 11:00:07 +0900, Kyotaro Horiguchi wrote:\n> > I imagined to use B_INVALID as a kind of \"default\" partition, which\n> > accepts all unknown backend types.\n> \n> There shouldn't be any unknown backend types. Something has gone wrong if we\n> get far without a backend type set.\n> \n> \n> > We can just ignore that values but then we lose the clue for malfunction of\n> > stats machinery. I thought that that backend-type as the sentinel for\n> > malfunctions. Thus we can emit logs instead.\n> > \n> > I feel that the stats machinery shouldn't stop the server as possible,\n> > or I think it is overreaction to abort for invalid values that can be\n> > easily coped with.\n> \n> I strongly disagree. That just ends up with hard to find bugs.\n\nI was not sure about the policy on that since, as Melanie (and I)\nmentioned, GetBackendTypeDesc() treats invalid values gracefully.\n\nSince both of you agree on this point, I'm fine with\nAssert()ing, assuming that GetBackendTypeDesc() (or other places where\nbackend-type is handled) is modified to behave the same way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Jul 2022 11:41:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Attached patch set is substantially different enough from previous\nversions that I kept it as a new patch set.\nNote that local buffer allocations are now correctly tracked.\n\nOn Tue, Jul 12, 2022 at 1:01 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-07-12 12:19:06 -0400, Melanie Plageman wrote:\n> > > > I also realized that I am not differentiating between IOPATH_SHARED\n> and\n> > > > IOPATH_STRATEGY for IOOP_FSYNC. But, given that we don't know what\n> type\n> > > > of buffer we are fsync'ing by the time we call\n> register_dirty_segment(),\n> > > > I'm not sure how we would fix this.\n> > >\n> > > I think there scarcely happens flush for strategy-loaded buffers. If\n> > > that is sensible, IOOP_FSYNC would not make much sense for\n> > > IOPATH_STRATEGY.\n> > >\n> >\n> > Why would it be less likely for a backend to do its own fsync when\n> > flushing a dirty strategy buffer than a regular dirty shared buffer?\n>\n> We really just don't expect a backend to do many segment fsyncs at\n> all. 
Otherwise there's something wrong with the forwarding mechanism.\n>\n\nWhen a dirty strategy buffer is written out, if the pendingOps sync queue is\nfull and the backend has to fsync the segment itself instead of relying\non the checkpointer, this will show up in the statistics as an IOOP_FSYNC\nfor IOPATH_SHARED, not IOPATH_STRATEGY.\nIOPATH_STRATEGY + IOOP_FSYNC will always be 0 for all BackendTypes.\nDoes this seem right?\n\n\n>\n> It'd be different if we tracked WAL fsyncs more granularly - which would be\n> quite interesting - but that's something for another day^Wpatch.\n>\n>\nI do have a question about this.\nSo, if we were to start tracking WAL IO, would it fit within this\nparadigm to have a new IOPATH_WAL for WAL, or would it add a separate\ndimension?\n\nI was thinking that we might want to consider calling this view\npg_stat_io_data because we might want to have a separate view,\npg_stat_io_wal, and then, maybe eventually, convert pg_stat_slru to\npg_stat_io_slru (or a subset of what is in pg_stat_slru).\nAnd maybe then later add pg_stat_io_[archiver/other].\n\nIs pg_stat_io_data a good name that gives us flexibility to\nintroduce views which expose per-backend IO operation stats (maybe that\ngoes in pg_stat_activity, though [or maybe not, because it wouldn't\ninclude exited backends?]) and per-query IO operation stats?\n\nI would like to add roughly the same additional columns to all of\nthese during AIO development (basically the columns from iostat):\n- average block size (will usually be 8kB for pg_stat_io_data but won't\nnecessarily for the others)\n- IOPS/BW\n- avg read/write wait time\n- demand rate/completion rate\n- merges\n- maybe queue depth\n\nAnd I would like to be able to see all of these per query, per backend,\nper relation, per BackendType, per IOPath, per SLRU type, etc.\n\nBasically, what I'm asking is:\n1) what can we name the view to enable these future stats to exist with\nthe least confusing/wordy view names?\n2) will the current view layout 
and column titles work with minimal\nchanges for future stats extensions like what I mention above?\n\n\n>\n> > > > > Wonder if it's worth making the lock specific to the backend type?\n> > > > >\n> > > >\n> > > > I've added another Lock into PgStat_IOPathOps so that each\n> BackendType\n> > > > can be locked separately. But, I've also kept the lock in\n> > > > PgStatShared_BackendIOPathOps so that reset_all and snapshot could be\n> > > > done easily.\n> > >\n> > > Looks fine about the lock separation.\n> > >\n> >\n> > Actually, I think it is not safe to use both of these locks. So for\n> > picking one method, it is probably better to go with the locks in\n> > PgStat_IOPathOps, it will be more efficient for flush (and not for\n> > fetching and resetting), so that is probably the way to go, right?\n>\n> I think it's good to just use one kind of lock, and efficiency of\n> snapshotting\n> / resetting is nearly irrelevant. But I don't see why it's not safe to use\n> both kinds of locks?\n>\n>\nThe way I implemented it was not safe because I didn't use both locks\nwhen resetting the stats.\n\nIn this new version of the patch, I've done the following: In shared\nmemory I've put the lock in PgStatShared_IOPathOps -- the data structure\nwhich contains an array of PgStat_IOOpCounters for all IOOp types for\nall IOPaths. 
Thus, different BackendType + IOPath combinations can be\nupdated concurrently without contending for the same lock.\n\nTo make this work, I made two versions of the PgStat_IOPathOps -- one\nthat has the lock, PgStatShared_IOPathOps, and one without,\nPgStat_IOPathOps, so that I can persist it to the stats file without\nwriting and reading the LWLock and can have a local and snapshot version\nof the data structure without the lock.\n\nThis also necessitated two versions of the data structure wrapping\nPgStat_IOPathOps, PgStat_BackendIOPathOps, which contains an array with\na PgStat_IOPathOps for each BackendType, and\nPgStatShared_BackendIOPathOps, containing an array of\nPgStatShared_IOPathOps.\n\n\n>\n> > > Looks fine, but I think pgstat_flush_io_ops() need more comments like\n> > > other pgstat_flush_* functions.\n> > >\n> > > + for (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> > > + stats_shmem->stats[i].stat_reset_timestamp = ts;\n> > >\n> > > I'm not sure we need a separate reset timestamp for each backend type\n> > > but SLRU counter does the same thing..\n> > >\n> >\n> > Yes, I think for SLRU stats it is because you can reset individual SLRU\n> > stats. Also there is no wrapper data structure to put it in. I could\n> > keep it in PgStatShared_BackendIOPathOps since you have to reset all IO\n> > operation stats at once, but I am thinking of getting rid of\n> > PgStatShared_BackendIOPathOps since it is not needed if I only keep the\n> > locks in PgStat_IOPathOps and make the global shared value an array of\n> > PgStat_IOPathOps.\n>\n> I'm strongly against introducing super granular reset timestamps. 
I think\n> that\n> was a mistake for SLRU stats, but we can't fix that as easily.\n>\n>\nSince all stats in pg_stat_io must be reset at the same time, I've put\nthe reset timestamp in the PgStat[Shared]_BackendIOPathOps and\nremoved it from each PgStat[Shared]_IOPathOps.\n\n\n>\n> > Currently, strategy allocs count only reuses of a strategy buffer (not\n> > initial shared buffers which are added to the ring).\n> > strategy writes count only the writing out of dirty buffers which are\n> > already in the ring and are being reused.\n>\n> That seems right to me.\n>\n>\n> > Alternatively, we could also count as strategy allocs all those buffers\n> > which are added to the ring and count as strategy writes all those\n> > shared buffers which are dirty when initially added to the ring.\n>\n> I don't think that'd provide valuable information. The whole reason that\n> strategy writes are interesting is that they can lead to writing out data a\n> lot sooner than they would be written out without a strategy being used.\n>\n>\nThen I agree that strategy writes should only count strategy buffers\nthat are written out in order to reuse the buffer (which is in lieu of\ngetting a new, potentially clean, shared buffer). This patch implements\nthat behavior.\n\nHowever, for strategy allocs, it seems like we would want to count all\ndemand for buffers as part of a BufferAccessStrategy. So, that would\ninclude allocating buffers to initially fill the ring, allocations of\nnew shared buffers after the ring was already full that are added to the\nring because all existing buffers in the ring are pinned, and buffers\nalready in the ring which are being reused.\n\nThis version of the patch only counts the third scenario as a strategy\nallocation, but I think it would make more sense to count all three as\nstrategy allocs.\n\nThe downside of this behavior is that strategy allocs count different\nscenarios than strategy writes, reads, and extends. 
But, I think that\nthis is okay.\n\nI'll clarify it in the docs once there is a decision.\n\nAlso, note that, as stated above, there will never be any strategy\nfsyncs (that is, IOPATH_STRATEGY + IOOP_FSYNC will always be 0) because\nthe code path starting with register_dirty_segment() which ends with a\nregular backend doing its own fsync when pendingOps is full does not\nknow what the current IOPATH is and checkpointer does not use a\nBufferAccessStrategy.\n\n\n>\n> > Subject: [PATCH v24 2/3] Track IO operation statistics\n> >\n> > Introduce \"IOOp\", an IO operation done by a backend, and \"IOPath\", the\n> > location or type of IO done by a backend. For example, the checkpointer\n> > may write a shared buffer out. This would be counted as an IOOp write on\n> > an IOPath IOPATH_SHARED by BackendType \"checkpointer\".\n>\n> I'm still not 100% happy with IOPath - seems a bit too easy to confuse with\n> the file path. What about 'origin'?\n>\n>\nEnough has changed in this version of the patch that I decided to defer\nrenaming until some of the other issues are resolved.\n\n\n>\n> > Each IOOp (alloc, fsync, extend, write) is counted per IOPath\n> > (direct, local, shared, or strategy) through a call to\n> > pgstat_count_io_op().\n>\n> It seems we should track reads too - it's quite interesting to know whether\n> reads happened because of a strategy, for example. You do reference reads\n> in a\n> later part of the commit message even :)\n>\n\nI've added reads to what is counted.\n\n\n>\n> > The primary concern of these statistics is IO operations on data blocks\n> > during the course of normal database operations. IO done by, for\n> > example, the archiver or syslogger is not counted in these statistics.\n>\n> We could extend this at a later stage, if we really want to. But I'm not\n> sure\n> it's interesting or fully possible. E.g. 
the archiver's write are largely\n> not\n> done by the archiver itself, but by a command (or module these days) it\n> shells\n> out to.\n>\n\nI've added note of this to some of the comments and the commit message.\nI also omit rows for these BackendTypes from the view. See my later\ncomment in this email for more detail on that.\n\n\n>\n> > Note that this commit does not add code to increment IOPATH_DIRECT. A\n> > future patch adding wrappers for smgrwrite(), smgrextend(), and\n> > smgrimmedsync() would provide a good location to call\n> > pgstat_count_io_op() for unbuffered IO and avoid regressions for future\n> > users of these functions.\n>\n> Hm. Perhaps we should defer introducing IOPATH_DIRECT for now then?\n>\n>\nIt's gone.\n\n\n>\n> > Stats on IOOps for all IOPaths for a backend are initially accumulated\n> > locally.\n> >\n> > Later they are flushed to shared memory and accumulated with those from\n> > all other backends, exited and live.\n>\n> Perhaps mention here that this later could be extended to make\n> per-connection\n> stats visible?\n>\n>\nMentioned.\n\n\n>\n> > Some BackendTypes will not execute pgstat_report_stat() and thus must\n> > explicitly call pgstat_flush_io_ops() in order to flush their backend\n> > local IO operation statistics to shared memory.\n>\n> Maybe add \"flush ... during ongoing operation\" or such? 
Because they'd all\n> flush at commit, IIRC.\n>\n>\nAdded.\n\n\n>\n> > diff --git a/src/backend/bootstrap/bootstrap.c\n> b/src/backend/bootstrap/bootstrap.c\n> > index 088556ab54..963b05321e 100644\n> > --- a/src/backend/bootstrap/bootstrap.c\n> > +++ b/src/backend/bootstrap/bootstrap.c\n> > @@ -33,6 +33,7 @@\n> > #include \"miscadmin.h\"\n> > #include \"nodes/makefuncs.h\"\n> > #include \"pg_getopt.h\"\n> > +#include \"pgstat.h\"\n> > #include \"storage/bufmgr.h\"\n> > #include \"storage/bufpage.h\"\n> > #include \"storage/condition_variable.h\"\n>\n> Hm?\n>\n\nRemoved\n\n\n>\n> > diff --git a/src/backend/postmaster/walwriter.c\n> b/src/backend/postmaster/walwriter.c\n> > index e926f8c27c..beb46dcb55 100644\n> > --- a/src/backend/postmaster/walwriter.c\n> > +++ b/src/backend/postmaster/walwriter.c\n> > @@ -293,18 +293,7 @@ HandleWalWriterInterrupts(void)\n> > }\n> >\n> > if (ShutdownRequestPending)\n> > - {\n> > - /*\n> > - * Force reporting remaining WAL statistics at process\n> exit.\n> > - *\n> > - * Since pgstat_report_wal is invoked with 'force' is\n> false in main\n> > - * loop to avoid overloading the cumulative stats system,\n> there may\n> > - * exist unreported stats counters for the WAL writer.\n> > - */\n> > - pgstat_report_wal(true);\n> > -\n> > proc_exit(0);\n> > - }\n> >\n> > /* Perform logging of memory contexts of this process */\n> > if (LogMemoryContextPending)\n>\n> Let's do this in a separate commit and get it out of the way...\n>\n>\nI've put it in a separate commit.\n\n\n>\n> > @@ -682,16 +694,37 @@ AddBufferToRing(BufferAccessStrategy strategy,\n> BufferDesc *buf)\n> > * if this buffer should be written and re-used.\n> > */\n> > bool\n> > -StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf)\n> > +StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf,\n> bool *write_from_ring)\n> > {\n> > - /* We only do this in bulkread mode */\n> > +\n> > + /*\n> > + * We only reject reusing and writing out the strategy 
buffer this\n> in\n> > + * bulkread mode.\n> > + */\n> > if (strategy->btype != BAS_BULKREAD)\n> > + {\n> > + /*\n> > + * If the buffer was from the ring and we are not\n> rejecting it, consider it\n> > + * a write of a strategy buffer.\n> > + */\n> > + if (strategy->current_was_in_ring)\n> > + *write_from_ring = true;\n>\n> Hm. This is set even if the buffer wasn't dirty? I guess we don't expect\n> StrategyRejectBuffer() to be called for clean buffers...\n>\n>\nYes, we do not expect it to be called for clean buffers.\nI've added a comment about this assumption.\n\n\n>\n> > /*\n> > diff --git a/src/backend/utils/activity/pgstat_database.c\n> b/src/backend/utils/activity/pgstat_database.c\n> > index d9275611f0..d3963f59d0 100644\n> > --- a/src/backend/utils/activity/pgstat_database.c\n> > +++ b/src/backend/utils/activity/pgstat_database.c\n> > @@ -47,7 +47,8 @@ pgstat_drop_database(Oid databaseid)\n> > }\n> >\n> > /*\n> > - * Called from autovacuum.c to report startup of an autovacuum process.\n> > + * Called from autovacuum.c to report startup of an autovacuum process\n> and\n> > + * flush IO Operation statistics.\n> > * We are called before InitPostgres is done, so can't rely on\n> MyDatabaseId;\n> > * the db OID must be passed in, instead.\n> > */\n> > @@ -72,6 +73,11 @@ pgstat_report_autovac(Oid dboid)\n> > dbentry->stats.last_autovac_time = GetCurrentTimestamp();\n> >\n> > pgstat_unlock_entry(entry_ref);\n> > +\n> > + /*\n> > + * Report IO Operation statistics\n> > + */\n> > + pgstat_flush_io_ops(false);\n> > }\n>\n> Hm. I suspect this will always be zero - at this point we haven't\n> connected to\n> a database, so there really can't have been much, if any, IO. 
I think I\n> suggested doing something here, but on a second look it really doesn't make\n> much sense.\n>\n> Note that that's different from doing something in\n> pgstat_report_(vacuum|analyze) - clearly we've done something at that\n> point.\n>\n\nI've removed this.\n\n\n>\n> > /*\n> > - * Report that the table was just vacuumed.\n> > + * Report that the table was just vacuumed and flush IO Operation\n> statistics.\n> > */\n> > void\n> > pgstat_report_vacuum(Oid tableoid, bool shared,\n> > @@ -257,10 +257,15 @@ pgstat_report_vacuum(Oid tableoid, bool shared,\n> > }\n> >\n> > pgstat_unlock_entry(entry_ref);\n> > +\n> > + /*\n> > + * Report IO Operations statistics\n> > + */\n> > + pgstat_flush_io_ops(false);\n> > }\n> >\n> > /*\n> > - * Report that the table was just analyzed.\n> > + * Report that the table was just analyzed and flush IO Operation\n> statistics.\n> > *\n> > * Caller must provide new live- and dead-tuples estimates, as well as a\n> > * flag indicating whether to reset the changes_since_analyze counter.\n> > @@ -340,6 +345,11 @@ pgstat_report_analyze(Relation rel,\n> > }\n> >\n> > pgstat_unlock_entry(entry_ref);\n> > +\n> > + /*\n> > + * Report IO Operations statistics\n> > + */\n> > + pgstat_flush_io_ops(false);\n> > }\n>\n> Think it'd be good to amend these comments to say that otherwise stats\n> would\n> only get flushed after a multi-relatio autovacuum cycle is done / a\n> VACUUM/ANALYZE command processed all tables. 
Perhaps add the comment to\n> one\n> of the two functions, and just reference it in the other place?\n>\n\nDone\n\n\n>\n>\n> > --- a/src/include/utils/backend_status.h\n> > +++ b/src/include/utils/backend_status.h\n> > @@ -306,6 +306,40 @@ extern const char\n> *pgstat_get_crashed_backend_activity(int pid, char *buffer,\n> >\n> int buflen);\n> > extern uint64 pgstat_get_my_query_id(void);\n> >\n> > +/* Utility functions */\n> > +\n> > +/*\n> > + * When maintaining an array of information about all valid\n> BackendTypes, in\n> > + * order to avoid wasting the 0th spot, use this helper to convert a\n> valid\n> > + * BackendType to a valid location in the array (given that no spot is\n> > + * maintained for B_INVALID BackendType).\n> > + */\n> > +static inline int backend_type_get_idx(BackendType backend_type)\n> > +{\n> > + /*\n> > + * backend_type must be one of the valid backend types. If caller\n> is\n> > + * maintaining backend information in an array that includes\n> B_INVALID,\n> > + * this function is unnecessary.\n> > + */\n> > + Assert(backend_type > B_INVALID && backend_type <=\n> BACKEND_NUM_TYPES);\n> > + return backend_type - 1;\n> > +}\n>\n> In function definitions (vs declarations) we put the 'static inline int'\n> in a\n> separate line from the rest of the function signature.\n>\n\nFixed.\n\n\n>\n> > +/*\n> > + * When using a value from an array of information about all valid\n> > + * BackendTypes, add 1 to the index before using it as a BackendType to\n> adjust\n> > + * for not maintaining a spot for B_INVALID BackendType.\n> > + */\n> > +static inline BackendType idx_get_backend_type(int idx)\n> > +{\n> > + int backend_type = idx + 1;\n> > + /*\n> > + * If the array includes a spot for B_INVALID BackendType this\n> function is\n> > + * not required.\n>\n> The comments around this seem a bit over the top, but I also don't mind\n> them\n> much.\n>\n\nFeel free to change them to something shorter. 
I couldn't think of\nsomething I liked.\n\n\n>\n>\n> > Add pg_stat_io, a system view which tracks the number of IOOp (allocs,\n> > writes, fsyncs, and extends) done through each IOPath (e.g. shared\n> > buffers, local buffers, unbuffered IO) by each type of backend.\n>\n> Annoying question: pg_stat_io vs pg_statio? I'd not think of suggesting the\n> latter, except that we already have a bunch of views with that prefix.\n>\n>\nI have thoughts on this but thought it best deferred until after the _data\ndecision.\n\n\n>\n> > Some of these should always be zero. For example, checkpointer does not\n> > use a BufferAccessStrategy (currently), so the \"strategy\" IOPath for\n> > checkpointer will be 0 for all IOOps.\n>\n> What do you think about returning NULL for the values that we except to\n> never\n> be non-zero? Perhaps with an assert against non-zero values? Seems like it\n> might be helpful for understanding the view.\n>\n\nYes, I like this idea.\n\nBeyond just setting individual cells to NULL, if an entire row would be\nNULL, I have now dropped it from the view.\n\nSo far, I have omitted from the view all rows for BackendTypes\nB_ARCHIVER, B_LOGGER, and B_STARTUP.\n\nShould I also omit rows for B_WAL_RECEIVER and B_WAL_WRITER for now?\n\nI have also omitted rows for IOPATH_STRATEGY for all BackendTypes\n*except* B_AUTOVAC_WORKER, B_BACKEND, B_STANDALONE_BACKEND, and\nB_BG_WORKER.\n\nDo these seem correct?\n\nI think there are some BackendTypes which will never do IO Operations on\nIOPATH_LOCAL but I am not sure which. Do you know which?\n\nAs for individual cells which should be NULL, so far what I have is:\n- IOPATH_LOCAL + IOOP_FSYNC\nI am sure there are others as well. 
Can you think of any?\n\n\n>\n> > +/*\n> > +* When adding a new column to the pg_stat_io view, add a new enum\n> > +* value here above IO_NUM_COLUMNS.\n> > +*/\n> > +enum\n> > +{\n> > + IO_COLUMN_BACKEND_TYPE,\n> > + IO_COLUMN_IO_PATH,\n> > + IO_COLUMN_ALLOCS,\n> > + IO_COLUMN_EXTENDS,\n> > + IO_COLUMN_FSYNCS,\n> > + IO_COLUMN_WRITES,\n> > + IO_COLUMN_RESET_TIME,\n> > + IO_NUM_COLUMNS,\n> > +};\n>\n> We typedef pretty much every enum so the enum can be referenced without the\n> 'enum' prefix. I'd do that here, even if we don't need it.\n>\n>\nSo, I left it anonymous because I didn't want it being used as a type\nor referenced anywhere else.\n\nI am interested to hear more about your SQL enums idea from upthread.\n\n- Melanie",
"msg_date": "Wed, 13 Jul 2022 13:14:52 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "In addition to adding several new tests, the attached version 26 fixes a\nmajor bug in constructing the view.\n\nThe only valid combination of IOPATH/IOOP that is not tested now is\nIOPATH_STRATEGY + IOOP_WRITE. In most cases when I ran this in regress,\nthe checkpointer wrote out the dirty strategy buffer before VACUUM got\naround to reusing and writing it out.\n\nI've also changed the BACKEND_NUM_TYPES definition. Now arrays will have\nthat dead spot for B_INVALID, but I feel like it is much easier to\nunderstand without trying to skip that spot and use those special helper\nfunctions.\n\nI also skip adding rows to the view for WAL_RECEIVER and WAL_WRITER,\nand rows for IOPATH_LOCAL for all BackendTypes except B_BACKEND and\nWAL_SENDER.\n\nOn Tue, Jul 12, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-07-11 22:22:28 -0400, Melanie Plageman wrote:\n> > Yes, per an off list suggestion by you, I have changed the tests to use a\n> > sum of writes. I've also added a test for IOPATH_LOCAL and fixed some of\n> > the missing calls to count IO Operations for IOPATH_LOCAL and\n> > IOPATH_STRATEGY.\n> >\n> > I struggled to come up with a way to test writes for a particular\n> > type of backend are counted correctly since a dirty buffer could be\n> > written out by another type of backend before the target BackendType has\n> > a chance to write it out.\n>\n> I guess temp file writes would be reliably done by one backend... Don't\n> have a\n> good idea otherwise.\n>\n>\nThis was mainly an issue for IOPATH_STRATEGY writes as I mentioned. I\nstill have not solved this.\n\n\n>\n> > I'm not sure how to cause a strategy \"extend\" for testing.\n>\n> COPY into a table should work. 
But might be unattractive due to the size\n> of of\n> the COPY ringbuffer.\n>\n\nDid it with a CTAS as Horiguchi-san suggested.\n\n\n>\n> > > Would be nice to have something testing that the ringbuffer stats stuff\n> > > does something sensible - that feels not entirely trivial.\n> > >\n> > >\n> > I've added a test to test that reused strategy buffers are counted as\n> > allocs. I would like to add a test which checks that if a buffer in the\n> > ring is pinned and thus not reused, that it is not counted as a strategy\n> > alloc, but I found it challenging without a way to pause vacuuming, pin\n> > a buffer, then resume vacuuming.\n>\n> Yea, that's probably too hard to make reliable to be worth it.\n>\n>\nYes, I have skipped this.\n\n- Melanie",
"msg_date": "Thu, 14 Jul 2022 18:44:48 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "I am consolidating the various naming points from this thread into one\nemail:\n\n From Horiguchi-san:\n\n> A bit different thing, but I felt a little uneasy about some uses of\n> \"pgstat_io_ops\". IOOp looks like a neighbouring word of IOPath. On the\n> other hand, actually iopath is used as an attribute of io_ops in many\n> places. Couldn't we be more consistent about the relationship between\n> the names?\n>\n> IOOp -> PgStat_IOOpType\n> IOPath -> PgStat_IOPath\n> PgStat_IOOpCOonters -> PgStat_IOCounters\n> PgStat_IOPathOps -> PgStat_IO\n> pgstat_count_io_op -> pgstat_count_io\n\nSo, because of the way the data structures contain arrays of each other\nthe naming was meant to specify all the information contained in the\ndata structure:\n\nPgStat_IOOpCounters are all IOOp (I could see removing the word\n\"counters\" from the name for more consistency)\n\nPgStat_IOPathOps are all IOOp for all IOPath\n\nPgStat_BackendIOPathOps are all IOOp for all IOPath for all BackendType\n\nThe downside of this naming is that, when choosing a local variable name\nfor all of the IOOp for all IOPath for a single BackendType,\n\"backend_io_path_ops\" seems accurate but is actually confusing if the\ntype name for all IOOp for all IOPath for all BackendType is\nPgStat_BackendIOPathOps.\n\nI would be open to changing PgStat_BackendIOPathOps to PgStat_IO, but I\ndon't see how I could omit Path or Op from PgStat_IOPathOps without\nmaking its meaning unclear.\n\nI'm not sure about the idea of prefixing the IOOp and IOPath enums with\nPg_Stat. I could see them being used outside of statistics (though they\nare defined in pgstat.h) and could see myself using them in, for\nexample, calculations for the prefetcher.\n\n From Andres:\n\nQuoting me (Melanie):\n> > Introduce \"IOOp\", an IO operation done by a backend, and \"IOPath\", the\n> > location or type of IO done by a backend. For example, the checkpointer\n> > may write a shared buffer out. 
This would be counted as an IOOp write on\n> > an IOPath IOPATH_SHARED by BackendType \"checkpointer\".\n\n> I'm still not 100% happy with IOPath - seems a bit too easy to confuse\nwith\n> the file path. What about 'origin'?\n\nI can see the point about IOPATH.\nI'm not wild about origin mostly because of the number of O's given that\nIO Operation already has two O's. It gets kind of hard to read when\nusing Pascal Case: IOOrigin and IOOp.\nAlso, it doesn't totally make sense for alloc. I could be convinced,\nthough.\n\nIOSOURCE doesn't have the O problem but does still not make sense for\nalloc. I also thought of IOSITE and IOVENUE.\n\n> Annoying question: pg_stat_io vs pg_statio? I'd not think of suggesting\nthe\n> latter, except that we already have a bunch of views with that prefix.\n\nAs far as pg_stat_io vs pg_statio, they are the only stats views which\ndon't have an underscore between stat and the rest of the view name, so\nperhaps we should move away from statio to stat_io going forward anyway.\nI am imagining adding to them with other iostat type metrics once direct\nIO is introduced, so they may well be changing soon anyway.\n\n- Melanie",
"msg_date": "Fri, 15 Jul 2022 11:59:41 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-15 11:59:41 -0400, Melanie Plageman wrote:\n> I'm not sure about the idea of prefixing the IOOp and IOPath enums with\n> Pg_Stat. I could see them being used outside of statistics (though they\n> are defined in pgstat.h)\n\n+1\n\n\n> From Andres:\n> \n> Quoting me (Melanie):\n> > > Introduce \"IOOp\", an IO operation done by a backend, and \"IOPath\", the\n> > > location or type of IO done by a backend. For example, the checkpointer\n> > > may write a shared buffer out. This would be counted as an IOOp write on\n> > > an IOPath IOPATH_SHARED by BackendType \"checkpointer\".\n> \n> > I'm still not 100% happy with IOPath - seems a bit too easy to confuse\n> with\n> > the file path. What about 'origin'?\n> \n> I can see the point about IOPATH.\n> I'm not wild about origin mostly because of the number of O's given that\n> IO Operation already has two O's. It gets kind of hard to read when\n> using Pascal Case: IOOrigin and IOOp.\n> Also, it doesn't totally make sense for alloc. I could be convinced,\n> though.\n> \n> IOSOURCE doesn't have the O problem but does still not make sense for\n> alloc. I also thought of IOSITE and IOVENUE.\n\nI like \"source\" - not too bothered by the alloc aspect. I can also see\n\"context\" working.\n\n\n> > Annoying question: pg_stat_io vs pg_statio? I'd not think of suggesting\n> the\n> > latter, except that we already have a bunch of views with that prefix.\n> \n> As far as pg_stat_io vs pg_statio, they are the only stats views which\n> don't have an underscore between stat and the rest of the view name, so\n> perhaps we should move away from statio to stat_io going forward anyway.\n> I am imagining adding to them with other iostat type metrics once direct\n> IO is introduced, so they may well be changing soon anyway.\n\nI don't think I have strong opinions on this one. I can see arguments for\neither naming.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 11:52:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-14 18:44:48 -0400, Melanie Plageman wrote:\n> Subject: [PATCH v26 1/4] Add BackendType for standalone backends\n> Subject: [PATCH v26 2/4] Remove unneeded call to pgstat_report_wal()\n\nLGTM.\n\n\n> Subject: [PATCH v26 3/4] Track IO operation statistics\n\n> @@ -978,8 +979,17 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \n> \tbufBlock = isLocalBuf ? LocalBufHdrGetBlock(bufHdr) : BufHdrGetBlock(bufHdr);\n> \n> +\tif (isLocalBuf)\n> +\t\tio_path = IOPATH_LOCAL;\n> +\telse if (strategy != NULL)\n> +\t\tio_path = IOPATH_STRATEGY;\n> +\telse\n> +\t\tio_path = IOPATH_SHARED;\n\nSeems a bit ugly to have an if (isLocalBuf) just after an isLocalBuf ?.\n\n\n> +\t\t\t/*\n> +\t\t\t * When a strategy is in use, reused buffers from the strategy ring will\n> +\t\t\t * be counted as allocations for the purposes of IO Operation statistics\n> +\t\t\t * tracking.\n> +\t\t\t *\n> +\t\t\t * However, even when a strategy is in use, if a new buffer must be\n> +\t\t\t * allocated from shared buffers and added to the ring, this is counted\n> +\t\t\t * as a IOPATH_SHARED allocation.\n> +\t\t\t */\n\nThere's a bit too much duplication between the paragraphs...\n\n> @@ -628,6 +637,9 @@ pgstat_report_stat(bool force)\n> \t/* flush database / relation / function / ... 
stats */\n> \tpartial_flush |= pgstat_flush_pending_entries(nowait);\n> \n> +\t/* flush IO Operations stats */\n> +\tpartial_flush |= pgstat_flush_io_ops(nowait);\n\nCould you either add a note to the commit message that the stats file\nversion needs to be increased, or just include that in the patch.\n\n\n\n\n> @@ -1427,8 +1445,10 @@ pgstat_read_statsfile(void)\n> \tFILE\t *fpin;\n> \tint32\t\tformat_id;\n> \tbool\t\tfound;\n> +\tPgStat_BackendIOPathOps io_stats;\n> \tconst char *statfile = PGSTAT_STAT_PERMANENT_FILENAME;\n> \tPgStat_ShmemControl *shmem = pgStatLocal.shmem;\n> +\tPgStatShared_BackendIOPathOps *io_stats_shmem = &shmem->io_ops;\n> \n> \t/* shouldn't be called from postmaster */\n> \tAssert(IsUnderPostmaster || !IsPostmasterEnvironment);\n> @@ -1486,6 +1506,22 @@ pgstat_read_statsfile(void)\n> \tif (!read_chunk_s(fpin, &shmem->checkpointer.stats))\n> \t\tgoto error;\n> \n> +\t/*\n> +\t * Read IO Operations stats struct\n> +\t */\n> +\tif (!read_chunk_s(fpin, &io_stats))\n> +\t\tgoto error;\n> +\n> +\tio_stats_shmem->stat_reset_timestamp = io_stats.stat_reset_timestamp;\n> +\n> +\tfor (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> +\t{\n> +\t\tPgStat_IOPathOps *stats = &io_stats.stats[i];\n> +\t\tPgStatShared_IOPathOps *stats_shmem = &io_stats_shmem->stats[i];\n> +\n> +\t\tmemcpy(stats_shmem->data, stats->data, sizeof(stats->data));\n> +\t}\n\nWhy can't the data be read directly into shared memory?\n\n\n> \t/*\n\n\n> +void\n> +pgstat_io_ops_snapshot_cb(void)\n> +{\n> +\tPgStatShared_BackendIOPathOps *all_backend_stats_shmem = &pgStatLocal.shmem->io_ops;\n> +\tPgStat_BackendIOPathOps *all_backend_stats_snap = &pgStatLocal.snapshot.io_ops;\n> +\n> +\tfor (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> +\t{\n> +\t\tPgStatShared_IOPathOps *stats_shmem = &all_backend_stats_shmem->stats[i];\n> +\t\tPgStat_IOPathOps *stats_snap = &all_backend_stats_snap->stats[i];\n> +\n> +\t\tLWLockAcquire(&stats_shmem->lock, LW_EXCLUSIVE);\n\nWhy acquire the same lock repeatedly for 
each type, rather than once for\nthe whole?\n\n\n> +\t\t/*\n> +\t\t * Use the lock in the first BackendType's PgStat_IOPathOps to protect the\n> +\t\t * reset timestamp as well.\n> +\t\t */\n> +\t\tif (i == 0)\n> +\t\t\tall_backend_stats_snap->stat_reset_timestamp = all_backend_stats_shmem->stat_reset_timestamp;\n\nWhich also would make this look a bit less awkward.\n\nStarting to look pretty good...\n\n- Andres\n\n\n",
"msg_date": "Wed, 20 Jul 2022 09:50:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 12:50 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> On 2022-07-14 18:44:48 -0400, Melanie Plageman wrote:\n>\n> > @@ -1427,8 +1445,10 @@ pgstat_read_statsfile(void)\n> > FILE *fpin;\n> > int32 format_id;\n> > bool found;\n> > + PgStat_BackendIOPathOps io_stats;\n> > const char *statfile = PGSTAT_STAT_PERMANENT_FILENAME;\n> > PgStat_ShmemControl *shmem = pgStatLocal.shmem;\n> > + PgStatShared_BackendIOPathOps *io_stats_shmem = &shmem->io_ops;\n> >\n> > /* shouldn't be called from postmaster */\n> > Assert(IsUnderPostmaster || !IsPostmasterEnvironment);\n> > @@ -1486,6 +1506,22 @@ pgstat_read_statsfile(void)\n> > if (!read_chunk_s(fpin, &shmem->checkpointer.stats))\n> > goto error;\n> >\n> > + /*\n> > + * Read IO Operations stats struct\n> > + */\n> > + if (!read_chunk_s(fpin, &io_stats))\n> > + goto error;\n> > +\n> > + io_stats_shmem->stat_reset_timestamp =\n> io_stats.stat_reset_timestamp;\n> > +\n> > + for (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> > + {\n> > + PgStat_IOPathOps *stats = &io_stats.stats[i];\n> > + PgStatShared_IOPathOps *stats_shmem =\n> &io_stats_shmem->stats[i];\n> > +\n> > + memcpy(stats_shmem->data, stats->data,\n> sizeof(stats->data));\n> > + }\n>\n> Why can't the data be read directly into shared memory?\n>\n>\nIt is not the same lock. Each PgStatShared_IOPathOps has a lock so that\nthey can be accessed individually (per BackendType in\nPgStatShared_BackendIOPathOps). 
It is optimized for the more common\noperation of flushing at the expense of the snapshot operation (which\nshould be less common) and reset operation.\n\n\n> > +void\n> > +pgstat_io_ops_snapshot_cb(void)\n> > +{\n> > + PgStatShared_BackendIOPathOps *all_backend_stats_shmem =\n> &pgStatLocal.shmem->io_ops;\n> > + PgStat_BackendIOPathOps *all_backend_stats_snap =\n> &pgStatLocal.snapshot.io_ops;\n> > +\n> > + for (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> > + {\n> > + PgStatShared_IOPathOps *stats_shmem =\n> &all_backend_stats_shmem->stats[i];\n> > + PgStat_IOPathOps *stats_snap =\n> &all_backend_stats_snap->stats[i];\n> > +\n> > + LWLockAcquire(&stats_shmem->lock, LW_EXCLUSIVE);\n>\n> Why acquire the same lock repeatedly for each type, rather than once for\n> the whole?\n>\n>\nThis is also because of having a LWLock in each PgStatShared_IOPathOps.\nBecause I don't want a lock in the backend local stats, I have two data\nstructures PgStatShared_IOPathOps and PgStat_IOPathOps. I thought it was\nodd to write out the lock to the file, so when persisting the stats, I\nwrite out the relevant data only and when reading it back in to shared\nmemory, I read in the data member of PgStatShared_IOPathOps.",
"msg_date": "Wed, 20 Jul 2022 13:40:40 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "I've attached v27 of the patch.\n\nI've renamed IOPATH to IOCONTEXT. I also have added assertions to\nconfirm that unexpected statistics are not being accumulated.\n\nThere are also assorted other cleanups and changes.\n\nIt would be good to confirm that the rows being skipped and cells that\nare NULL in the view are the correct ones.\nThe startup process will never use a BufferAccessStrategy, right?\n\n\nOn Wed, Jul 20, 2022 at 12:50 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > Subject: [PATCH v26 3/4] Track IO operation statistics\n>\n> > @@ -978,8 +979,17 @@ ReadBuffer_common(SMgrRelation smgr, char\n> relpersistence, ForkNumber forkNum,\n> >\n> > bufBlock = isLocalBuf ? LocalBufHdrGetBlock(bufHdr) :\n> BufHdrGetBlock(bufHdr);\n> >\n> > + if (isLocalBuf)\n> > + io_path = IOPATH_LOCAL;\n> > + else if (strategy != NULL)\n> > + io_path = IOPATH_STRATEGY;\n> > + else\n> > + io_path = IOPATH_SHARED;\n>\n> Seems a bit ugly to have an if (isLocalBuf) just after an isLocalBuf ?.\n>\n\nChanged this.\n\n\n>\n>\n> > + /*\n> > + * When a strategy is in use, reused buffers from\n> the strategy ring will\n> > + * be counted as allocations for the purposes of\n> IO Operation statistics\n> > + * tracking.\n> > + *\n> > + * However, even when a strategy is in use, if a\n> new buffer must be\n> > + * allocated from shared buffers and added to the\n> ring, this is counted\n> > + * as a IOPATH_SHARED allocation.\n> > + */\n>\n> There's a bit too much duplication between the paragraphs...\n>\n\nI actually think the two paragraphs are making separate points. I've\nedited this, so see if you like it better now.\n\n\n>\n> > @@ -628,6 +637,9 @@ pgstat_report_stat(bool force)\n> > /* flush database / relation / function / ... 
stats */\n> > partial_flush |= pgstat_flush_pending_entries(nowait);\n> >\n> > + /* flush IO Operations stats */\n> > + partial_flush |= pgstat_flush_io_ops(nowait);\n>\n> Could you either add a note to the commit message that the stats file\n> version needs to be increased, or just iclude that in the patch.\n>\n>\nBumped the stats file version in attached patchset.\n\n- Melanie",
"msg_date": "Thu, 11 Aug 2022 19:53:09 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v28 attached.\n\nI've added the new structs I added to typedefs.list.\n\nI've split the commit which adds all of the logic to track\nIO operation statistics into two commits -- one which includes all of\nthe code to count IOOps for IOContexts locally in a backend and a second\nwhich includes all of the code to accumulate and manage these with the\ncumulative stats system.\n\nA few notes about the commit which adds local IO Operation stats:\n\n- There is a comment above pgstat_io_op_stats_collected() which mentions\nthe cumulative stats system even though this commit doesn't engage the\ncumulative stats system. I wasn't sure if it was more or less\nconfusing to have two different versions of this comment.\n\n- should pgstat_count_io_op() take BackendType as a parameter instead of\nusing MyBackendType internally?\n\n- pgstat_count_io_op() Assert()s that the passed-in IOOp and IOContext\nare valid for this BackendType, but it doesn't check that all of the\npending stats which should be zero are zero. I thought this was okay\nbecause if I did add that zero-check, it would be added to\npgstat_count_ioop() as well, and we already Assert() there that we can\ncount the op. Thus, it doesn't seem like checking that the stats are\nzero would add any additional regression protection.\n\n- I've kept pgstat_io_context_desc() and pgstat_io_op_desc() in the\ncommit which adds those types (the local stats commit), however they\nare not used in that commit. I wasn't sure if I should keep them in\nthat commit or move them to the first commit using them (the commit\nadding the new view).\n\nNotes on the commit which accumulates IO Operation stats in shared\nmemory:\n\n- I've extended the usage of the Assert()s that IO Operation stats that\nshould be zero are. Previously we only checked the stats validity when\nquerying the view. 
Now we check it when flushing pending stats and\nwhen reading the stats file into shared memory.\n\nNote that the three locations with these validity checks (when\nflushing pending stats, when reading stats file into shared memory,\nand when querying the view) have similar looking code to loop through\nand validate the stats. However, the actual action they perform if the\nstats are valid is different for each site (adding counters together,\ndoing a read, setting nulls in a tuple column to true). Also, some of\nthese instances have other code interspersed in the loops which would\nrequire additional looping if separated from this logic. So it was\ndifficult to see a way of combining these into a single helper\nfunction.\n\n- I've left pgstat_fetch_backend_io_context_ops() in the shared stats\ncommit, however it is not used until the commit which adds the view in\npg_stat_get_io(). I wasn't sure which way seemed better.\n\n- Melanie",
"msg_date": "Mon, 22 Aug 2022 13:15:18 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 13:15:18 -0400, Melanie Plageman wrote:\n> v28 attached.\n\nPushed 0001, 0002. Thanks!\n\n- Andres\n\n\n",
"msg_date": "Mon, 22 Aug 2022 20:31:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 13:15:18 -0400, Melanie Plageman wrote:\n> v28 attached.\n> \n> I've added the new structs I added to typedefs.list.\n> \n> I've split the commit which adds all of the logic to track\n> IO operation statistics into two commits -- one which includes all of\n> the code to count IOOps for IOContexts locally in a backend and a second\n> which includes all of the code to accumulate and manage these with the\n> cumulative stats system.\n\nThanks!\n\n\n> A few notes about the commit which adds local IO Operation stats:\n> \n> - There is a comment above pgstat_io_op_stats_collected() which mentions\n> the cumulative stats system even though this commit doesn't engage the\n> cumulative stats system. I wasn't sure if it was more or less\n> confusing to have two different versions of this comment.\n\nNot worth being worried about...\n\n\n> - should pgstat_count_io_op() take BackendType as a parameter instead of\n> using MyBackendType internally?\n\nI don't foresee a case where a different value would be passed in.\n\n\n> - pgstat_count_io_op() Assert()s that the passed-in IOOp and IOContext\n> are valid for this BackendType, but it doesn't check that all of the\n> pending stats which should be zero are zero. I thought this was okay\n> because if I did add that zero-check, it would be added to\n> pgstat_count_ioop() as well, and we already Assert() there that we can\n> count the op. Thus, it doesn't seem like checking that the stats are\n> zero would add any additional regression protection.\n\nIt's probably ok.\n\n\n> - I've kept pgstat_io_context_desc() and pgstat_io_op_desc() in the\n> commit which adds those types (the local stats commit), however they\n> are not used in that commit. 
I wasn't sure if I should keep them in\n> that commit or move them to the first commit using them (the commit\n> adding the new view).\n\n> - I've left pgstat_fetch_backend_io_context_ops() in the shared stats\n> commit, however it is not used until the commit which adds the view in\n> pg_stat_get_io(). I wasn't sure which way seemed better.\n\n\nThink that's fine.\n\n\n> Notes on the commit which accumulates IO Operation stats in shared\n> memory:\n> \n> - I've extended the usage of the Assert()s that IO Operation stats that\n> should be zero are. Previously we only checked the stats validity when\n> querying the view. Now we check it when flushing pending stats and\n> when reading the stats file into shared memory.\n\n> Note that the three locations with these validity checks (when\n> flushing pending stats, when reading stats file into shared memory,\n> and when querying the view) have similar looking code to loop through\n> and validate the stats. However, the actual action they perform if the\n> stats are valid is different for each site (adding counters together,\n> doing a read, setting nulls in a tuple column to true). Also, some of\n> these instances have other code interspersed in the loops which would\n> require additional looping if separated from this logic. So it was\n> difficult to see a way of combining these into a single helper\n> function.\n\nAll of them seem to repeat something like\n\n> +\t\t\t\tif (!pgstat_bktype_io_op_valid(bktype, io_op) ||\n> +\t\t\t\t\t!pgstat_io_context_io_op_valid(io_context, io_op))\n\nperhaps those could be combined? Afaics nothing uses pgstat_bktype_io_op_valid\nseparately.\n\n\n> Subject: [PATCH v28 3/5] Track IO operation statistics locally\n> \n> Introduce \"IOOp\", an IO operation done by a backend, and \"IOContext\",\n> the IO location source or target or IO type done by a backend. For\n> example, the checkpointer may write a shared buffer out. 
This would be\n> counted as an IOOp \"write\" on an IOContext IOCONTEXT_SHARED by\n> BackendType \"checkpointer\".\n> \n> Each IOOp (alloc, extend, fsync, read, write) is counted per IOContext\n> (local, shared, or strategy) through a call to pgstat_count_io_op().\n> \n> The primary concern of these statistics is IO operations on data blocks\n> during the course of normal database operations. IO done by, for\n> example, the archiver or syslogger is not counted in these statistics.\n\ns/is/are/?\n\n\n> Stats on IOOps for all IOContexts for a backend are counted in a\n> backend's local memory. This commit does not expose any functions for\n> aggregating or viewing these stats.\n\ns/This commit does not/A subsequent commit will expose/...\n\n\n> @@ -823,6 +823,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \tBufferDesc *bufHdr;\n> \tBlock\t\tbufBlock;\n> \tbool\t\tfound;\n> +\tIOContext\tio_context;\n> \tbool\t\tisExtend;\n> \tbool\t\tisLocalBuf = SmgrIsTemp(smgr);\n> \n> @@ -986,10 +987,25 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t */\n> \tAssert(!(pg_atomic_read_u32(&bufHdr->state) & BM_VALID));\t/* spinlock not needed */\n> \n> -\tbufBlock = isLocalBuf ? LocalBufHdrGetBlock(bufHdr) : BufHdrGetBlock(bufHdr);\n> +\tif (isLocalBuf)\n> +\t{\n> +\t\tbufBlock = LocalBufHdrGetBlock(bufHdr);\n> +\t\tio_context = IOCONTEXT_LOCAL;\n> +\t}\n> +\telse\n> +\t{\n> +\t\tbufBlock = BufHdrGetBlock(bufHdr);\n> +\n> +\t\tif (strategy != NULL)\n> +\t\t\tio_context = IOCONTEXT_STRATEGY;\n> +\t\telse\n> +\t\t\tio_context = IOCONTEXT_SHARED;\n> +\t}\n\nThere's a isLocalBuf block earlier on, couldn't we just determine the context\nthere? 
I guess there's a branch here already, so it's probably fine as is.\n\n\n> \tif (isExtend)\n> \t{\n> +\n> +\t\tpgstat_count_io_op(IOOP_EXTEND, io_context);\n\nSpurious newline.\n\n\n> @@ -2820,9 +2857,12 @@ BufferGetTag(Buffer buffer, RelFileLocator *rlocator, ForkNumber *forknum,\n> *\n> * If the caller has an smgr reference for the buffer's relation, pass it\n> * as the second parameter. If not, pass NULL.\n> + *\n> + * IOContext will always be IOCONTEXT_SHARED except when a buffer access strategy is\n> + * used and the buffer being flushed is a buffer from the strategy ring.\n> */\n> static void\n> -FlushBuffer(BufferDesc *buf, SMgrRelation reln)\n> +FlushBuffer(BufferDesc *buf, SMgrRelation reln, IOContext io_context)\n\nToo long line?\n\nBut also, why document the possible values here? Seems likely to get out of\ndate at some point, and it doesn't seem important to know?\n\n\n> @@ -3549,6 +3591,8 @@ FlushRelationBuffers(Relation rel)\n> \t\t\t\t\t\t localpage,\n> \t\t\t\t\t\t false);\n> \n> +\t\t\t\tpgstat_count_io_op(IOOP_WRITE, IOCONTEXT_LOCAL);\n> +\n> \t\t\t\tbuf_state &= ~(BM_DIRTY | BM_JUST_DIRTIED);\n> \t\t\t\tpg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n> \n\nProbably not worth doing, but these made me wonder whether there should be a\nfunction for counting N operations at once.\n\n\n\n> @@ -212,8 +215,23 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state)\n> \tif (strategy != NULL)\n> \t{\n> \t\tbuf = GetBufferFromRing(strategy, buf_state);\n> -\t\tif (buf != NULL)\n> +\t\t*from_ring = buf != NULL;\n> +\t\tif (*from_ring)\n> +\t\t{\n\nDon't really like the if (*from_ring) - why not keep it as buf != NULL? 
Seems\na bit confusing this way, making it less obvious what's being changed.\n\n\n> diff --git a/src/backend/storage/buffer/localbuf.c b/src/backend/storage/buffer/localbuf.c\n> index 014f644bf9..a3d76599bf 100644\n> --- a/src/backend/storage/buffer/localbuf.c\n> +++ b/src/backend/storage/buffer/localbuf.c\n> @@ -15,6 +15,7 @@\n> */\n> #include \"postgres.h\"\n> \n> +#include \"pgstat.h\"\n> #include \"access/parallel.h\"\n> #include \"catalog/catalog.h\"\n> #include \"executor/instrument.h\"\n\nDo most other places not put pgstat.h in the alphabetical order of headers?\n\n\n> @@ -432,6 +432,15 @@ ProcessSyncRequests(void)\n> \t\t\t\t\ttotal_elapsed += elapsed;\n> \t\t\t\t\tprocessed++;\n> \n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * Note that if a backend using a BufferAccessStrategy is\n> +\t\t\t\t\t * forced to do its own fsync (as opposed to the\n> +\t\t\t\t\t * checkpointer doing it), it will not be counted as an\n> +\t\t\t\t\t * IOCONTEXT_STRATEGY IOOP_FSYNC and instead will be\n> +\t\t\t\t\t * counted as an IOCONTEXT_SHARED IOOP_FSYNC.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tpgstat_count_io_op(IOOP_FSYNC, IOCONTEXT_SHARED);\n\nWhy is this noted here? Perhaps just point to the place where that happens\ninstead? I think it's also documented in ForwardSyncRequest()? Or just only\nmention it there...\n\n\n> @@ -0,0 +1,191 @@\n> +/* -------------------------------------------------------------------------\n> + *\n> + * pgstat_io_ops.c\n> + *\t Implementation of IO operation statistics.\n> + *\n> + * This file contains the implementation of IO operation statistics. 
It is kept\n> + * separate from pgstat.c to enforce the line between the statistics access /\n> + * storage implementation and the details about individual types of\n> + * statistics.\n> + *\n> + * Copyright (c) 2001-2022, PostgreSQL Global Development Group\n\nArguably this would just be 2021-2022\n\n\n> +void\n> +pgstat_count_io_op(IOOp io_op, IOContext io_context)\n> +{\n> +\tPgStat_IOOpCounters *pending_counters = &pending_IOOpStats.data[io_context];\n> +\n> +\tAssert(pgstat_expect_io_op(MyBackendType, io_context, io_op));\n> +\n> +\tswitch (io_op)\n> +\t{\n> +\t\tcase IOOP_ALLOC:\n> +\t\t\tpending_counters->allocs++;\n> +\t\t\tbreak;\n> +\t\tcase IOOP_EXTEND:\n> +\t\t\tpending_counters->extends++;\n> +\t\t\tbreak;\n> +\t\tcase IOOP_FSYNC:\n> +\t\t\tpending_counters->fsyncs++;\n> +\t\t\tbreak;\n> +\t\tcase IOOP_READ:\n> +\t\t\tpending_counters->reads++;\n> +\t\t\tbreak;\n> +\t\tcase IOOP_WRITE:\n> +\t\t\tpending_counters->writes++;\n> +\t\t\tbreak;\n> +\t}\n> +\n> +}\n\nHow about replacing the breaks with a return and then erroring out if we reach\nthe end of the function? You did that below, and I think it makes sense.\n\n\n> +bool\n> +pgstat_bktype_io_context_valid(BackendType bktype, IOContext io_context)\n> +{\n\nMaybe add a tiny comment about what 'valid' means here? 
Something like\n'return whether the backend type counts io in io_context'.\n\n\n> +\t/*\n> +\t * Only regular backends and WAL Sender processes executing queries should\n> +\t * use local buffers.\n> +\t */\n> +\tno_local = bktype == B_AUTOVAC_LAUNCHER || bktype ==\n> +\t\tB_BG_WRITER || bktype == B_CHECKPOINTER || bktype ==\n> +\t\tB_AUTOVAC_WORKER || bktype == B_BG_WORKER || bktype ==\n> +\t\tB_STANDALONE_BACKEND || bktype == B_STARTUP;\n\nI think BG_WORKERS could end up using local buffers, extensions can do just\nabout everything in them.\n\n\n> +bool\n> +pgstat_bktype_io_op_valid(BackendType bktype, IOOp io_op)\n> +{\n> +\tif ((bktype == B_BG_WRITER || bktype == B_CHECKPOINTER) && io_op ==\n> +\t\tIOOP_READ)\n> +\t\treturn false;\n\nPerhaps we should add an assertion about the backend type making sense here?\nI.e. that it's not archiver, walwriter etc?\n\n\n> +bool\n> +pgstat_io_context_io_op_valid(IOContext io_context, IOOp io_op)\n> +{\n> +\t/*\n> +\t * Temporary tables using local buffers are not logged and thus do not\n> +\t * require fsync'ing. 
Set this cell to NULL to differentiate between an\n> +\t * invalid combination and 0 observed IO Operations.\n\nThis comment feels a bit out of place?\n\n\n> +bool\n> +pgstat_expect_io_op(BackendType bktype, IOContext io_context, IOOp io_op)\n> +{\n> +\tif (!pgstat_io_op_stats_collected(bktype))\n> +\t\treturn false;\n> +\n> +\tif (!pgstat_bktype_io_context_valid(bktype, io_context))\n> +\t\treturn false;\n> +\n> +\tif (!pgstat_bktype_io_op_valid(bktype, io_op))\n> +\t\treturn false;\n> +\n> +\tif (!pgstat_io_context_io_op_valid(io_context, io_op))\n> +\t\treturn false;\n> +\n> +\t/*\n> +\t * There are currently no cases of a BackendType, IOContext, IOOp\n> +\t * combination that are specifically invalid.\n> +\t */\n\n\"specifically\"?\n\n\n> From 0f141fa7f97a57b8628b1b6fd6029bd3782f16a1 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Mon, 22 Aug 2022 11:35:20 -0400\n> Subject: [PATCH v28 4/5] Aggregate IO operation stats per BackendType\n> \n> Stats on IOOps for all IOContexts for a backend are tracked locally. 
Add\n> functionality for backends to flush these stats to shared memory and\n> accumulate them with those from all other backends, exited and live.\n> Also add reset and snapshot functions used by cumulative stats system\n> for management of these statistics.\n> \n> The aggregated stats in shared memory could be extended in the future\n> with per-backend stats -- useful for per connection IO statistics and\n> monitoring.\n> \n> Some BackendTypes will not flush their pending statistics at regular\n> intervals and explicitly call pgstat_flush_io_ops() during the course of\n> normal operations to flush their backend-local IO Operation statistics\n> to shared memory in a timely manner.\n\n> Because not all BackendType, IOOp, IOContext combinations are valid, the\n> validity of the stats are checked before flushing pending stats and\n> before reading in the existing stats file to shared memory.\n\ns/are checked/is checked/?\n\n\n\n> @@ -1486,6 +1507,42 @@ pgstat_read_statsfile(void)\n> \tif (!read_chunk_s(fpin, &shmem->checkpointer.stats))\n> \t\tgoto error;\n> \n> +\t/*\n> +\t * Read IO Operations stats struct\n> +\t */\n> +\tif (!read_chunk_s(fpin, &shmem->io_ops.stat_reset_timestamp))\n> +\t\tgoto error;\n> +\n> +\tfor (int backend_type = 0; backend_type < BACKEND_NUM_TYPES; backend_type++)\n> +\t{\n> +\t\tPgStatShared_IOContextOps *backend_io_context_ops = &shmem->io_ops.stats[backend_type];\n> +\t\tbool\t\texpect_backend_stats = true;\n> +\n> +\t\tif (!pgstat_io_op_stats_collected(backend_type))\n> +\t\t\texpect_backend_stats = false;\n> +\n> +\t\tfor (int io_context = 0; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> +\t\t{\n> +\t\t\tif (!expect_backend_stats ||\n> +\t\t\t\t!pgstat_bktype_io_context_valid(backend_type, io_context))\n> +\t\t\t{\n> +\t\t\t\tpgstat_io_context_ops_assert_zero(&backend_io_context_ops->data[io_context]);\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tfor (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> +\t\t\t{\n> +\t\t\t\tif 
(!pgstat_bktype_io_op_valid(backend_type, io_op) ||\n> +\t\t\t\t\t!pgstat_io_context_io_op_valid(io_context, io_op))\n> +\t\t\t\t\tpgstat_io_op_assert_zero(&backend_io_context_ops->data[io_context],\n> +\t\t\t\t\t\t\t\t\t\t\t io_op);\n> +\t\t\t}\n> +\t\t}\n> +\n> +\t\tif (!read_chunk_s(fpin, &backend_io_context_ops->data))\n> +\t\t\tgoto error;\n> +\t}\n\nCould we put the validation out of line? That's a lot of io stats specific\ncode to be in pgstat_read_statsfile().\n\n> +/*\n> + * Helper function to accumulate PgStat_IOOpCounters. If either of the\n> + * passed-in PgStat_IOOpCounters are members of PgStatShared_IOContextOps, the\n> + * caller is responsible for ensuring that the appropriate lock is held. This\n> + * is not asserted because this function could plausibly be used to accumulate\n> + * two local/pending PgStat_IOOpCounters.\n\nWhat's \"this\" here?\n\n\n> + */\n> +static void\n> +pgstat_accum_io_op(PgStat_IOOpCounters *shared, PgStat_IOOpCounters *local, IOOp io_op)\n\nGiven that the comment above says both of them may be local, it's a bit odd to\ncall it 'shared' here...\n\n\n> +PgStat_BackendIOContextOps *\n> +pgstat_fetch_backend_io_context_ops(void)\n> +{\n> +\tpgstat_snapshot_fixed(PGSTAT_KIND_IOOPS);\n> +\n> +\treturn &pgStatLocal.snapshot.io_ops;\n> +}\n\nNot for this patch series, but we really should replace this set of functions\nwith storing the relevant offset in the kind_info.\n\n\n> @@ -496,6 +503,8 @@ extern PgStat_CheckpointerStats *pgstat_fetch_stat_checkpointer(void);\n> */\n> \n> extern void pgstat_count_io_op(IOOp io_op, IOContext io_context);\n> +extern PgStat_BackendIOContextOps *pgstat_fetch_backend_io_context_ops(void);\n> +extern bool pgstat_flush_io_ops(bool nowait);\n> extern const char *pgstat_io_context_desc(IOContext io_context);\n> extern const char *pgstat_io_op_desc(IOOp io_op);\n> \n\nIs there any call to pgstat_flush_io_ops() from outside pgstat*.c? So possibly\nit could be in pgstat_internal.h? 
Not that it's particularly important...\n\n\n> @@ -506,6 +515,43 @@ extern bool pgstat_bktype_io_op_valid(BackendType bktype, IOOp io_op);\n> extern bool pgstat_io_context_io_op_valid(IOContext io_context, IOOp io_op);\n> extern bool pgstat_expect_io_op(BackendType bktype, IOContext io_context, IOOp io_op);\n> \n> +/*\n> + * Functions to assert that invalid IO Operation counters are zero. Used with\n> + * the validation functions in pgstat_io_ops.c\n> + */\n> +static inline void\n> +pgstat_io_context_ops_assert_zero(PgStat_IOOpCounters *counters)\n> +{\n> +\tAssert(counters->allocs == 0 && counters->extends == 0 &&\n> +\t\t counters->fsyncs == 0 && counters->reads == 0 &&\n> +\t\t counters->writes == 0);\n> +}\n> +\n> +static inline void\n> +pgstat_io_op_assert_zero(PgStat_IOOpCounters *counters, IOOp io_op)\n> +{\n> +\tswitch (io_op)\n> +\t{\n> +\t\tcase IOOP_ALLOC:\n> +\t\t\tAssert(counters->allocs == 0);\n> +\t\t\treturn;\n> +\t\tcase IOOP_EXTEND:\n> +\t\t\tAssert(counters->extends == 0);\n> +\t\t\treturn;\n> +\t\tcase IOOP_FSYNC:\n> +\t\t\tAssert(counters->fsyncs == 0);\n> +\t\t\treturn;\n> +\t\tcase IOOP_READ:\n> +\t\t\tAssert(counters->reads == 0);\n> +\t\t\treturn;\n> +\t\tcase IOOP_WRITE:\n> +\t\t\tAssert(counters->writes == 0);\n> +\t\t\treturn;\n> +\t}\n> +\n> +\telog(ERROR, \"unrecognized IOOp value: %d\", io_op);\n\nHm. This means it'll emit code even in non-assertion builds - this should\nprobably just be an Assert(false) or pg_unreachable().\n\n\n> Subject: [PATCH v28 5/5] Add system view tracking IO ops per backend type\n\n> View stats are fetched from statistics incremented when a backend\n> performs an IO Operation and maintained by the cumulative statistics\n> subsystem.\n\n\"fetched from statistics incremented\"?\n\n\n> Each row of the view is stats for a particular BackendType for a\n> particular IOContext (e.g. 
shared buffer accesses by checkpointer) and\n> each column in the view is the total number of IO Operations done (e.g.\n> writes).\n\ns/is/shows/?\n\ns/for a particular BackendType for a particular IOContext/for a particular\nBackendType and IOContext/? Somehow the repetition is weird.\n\n\n> Note that some of the cells in the view are redundant with fields in\n> pg_stat_bgwriter (e.g. buffers_backend), however these have been kept in\n> pg_stat_bgwriter for backwards compatibility. Deriving the redundant\n> pg_stat_bgwriter stats from the IO operations stats structures was also\n> problematic due to the separate reset targets for 'bgwriter' and\n> 'io'.\n\nI suspect we should still consider doing that in the future, perhaps by\ndocumenting that the relevant fields in pg_stat_bgwriter aren't reset by the\n'bgwriter' target anymore? And noting that reliance on those fields is\n\"deprecated\" and that pg_stat_io should be used instead?\n\n\n> Suggested by Andres Freund\n> \n> Author: Melanie Plageman <melanieplageman@gmail.com>\n> Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Discussion: https://www.postgresql.org/message-id/flat/20200124195226.lth52iydq2n2uilq%40alap3.anarazel.de\n> ---\n> doc/src/sgml/monitoring.sgml | 115 ++++++++++++++-\n> src/backend/catalog/system_views.sql | 12 ++\n> src/backend/utils/adt/pgstatfuncs.c | 100 +++++++++++++\n> src/include/catalog/pg_proc.dat | 9 ++\n> src/test/regress/expected/rules.out | 9 ++\n> src/test/regress/expected/stats.out | 201 +++++++++++++++++++++++++++\n> src/test/regress/sql/stats.sql | 103 ++++++++++++++\n> 7 files changed, 548 insertions(+), 1 deletion(-)\n> \n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> index 9440b41770..9949011ba3 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -448,6 +448,15 @@ postgres 27093 0.0 0.0 30096 2752 ? 
Ss 11:34 0:00 postgres: ser\n> </entry>\n> </row>\n> \n> + <row>\n> + <entry><structname>pg_stat_io</structname><indexterm><primary>pg_stat_io</primary></indexterm></entry>\n> + <entry>A row for each IO Context for each backend type showing\n> + statistics about backend IO operations. See\n> + <link linkend=\"monitoring-pg-stat-io-view\">\n> + <structname>pg_stat_io</structname></link> for details.\n> + </entry>\n> + </row>\n\nThe \"for each for each\" thing again :)\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>io_context</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + IO Context used (e.g. shared buffers, direct).\n> + </para></entry>\n> + </row>\n\nWrong list of contexts.\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>alloc</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of buffers allocated.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>extend</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of blocks extended.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>fsync</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of blocks fsynced.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>read</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of blocks read.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>write</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of blocks written.\n> + </para></entry>\n> + </row>\n\n> + <row>\n> + <entry 
role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>stats_reset</structfield> <type>timestamp with time zone</type>\n> + </para>\n> + <para>\n> + Time at which these statistics were last reset.\n> </para></entry>\n> </row>\n> </tbody>\n\nPart of me thinks it'd be nicer if it were \"allocated, read, written, extended,\nfsynced, stats_reset\", instead of alphabetical order. The order already isn't\nalphabetical.\n\n\n> +\t/*\n> +\t * When adding a new column to the pg_stat_io view, add a new enum value\n> +\t * here above IO_NUM_COLUMNS.\n> +\t */\n> +\tenum\n> +\t{\n> +\t\tIO_COLUMN_BACKEND_TYPE,\n> +\t\tIO_COLUMN_IO_CONTEXT,\n> +\t\tIO_COLUMN_ALLOCS,\n> +\t\tIO_COLUMN_EXTENDS,\n> +\t\tIO_COLUMN_FSYNCS,\n> +\t\tIO_COLUMN_READS,\n> +\t\tIO_COLUMN_WRITES,\n> +\t\tIO_COLUMN_RESET_TIME,\n> +\t\tIO_NUM_COLUMNS,\n> +\t};\n\nGiven it's local and some of the lines are long, maybe just use COL?\n\n\n> +#define IO_COLUMN_IOOP_OFFSET (IO_COLUMN_IO_CONTEXT + 1)\n\nUndef'ing it probably worth doing.\n\n\n> +\tSetSingleFuncCall(fcinfo, 0);\n> +\trsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> +\n> +\tbackends_io_stats = pgstat_fetch_backend_io_context_ops();\n> +\n> +\treset_time = TimestampTzGetDatum(backends_io_stats->stat_reset_timestamp);\n> +\n> +\tfor (int bktype = 0; bktype < BACKEND_NUM_TYPES; bktype++)\n> +\t{\n> +\t\tDatum\t\tbktype_desc = CStringGetTextDatum(GetBackendTypeDesc(bktype));\n> +\t\tbool\t\texpect_backend_stats = true;\n> +\t\tPgStat_IOContextOps *io_context_ops = &backends_io_stats->stats[bktype];\n> +\n> +\t\t/*\n> +\t\t * For those BackendTypes without IO Operation stats, skip\n> +\t\t * representing them in the view altogether.\n> +\t\t */\n> +\t\tif (!pgstat_io_op_stats_collected(bktype))\n> +\t\t\texpect_backend_stats = false;\n\nWhy not just expect_backend_stats = pgstat_io_op_stats_collected()?\n\n\n> +\t\tfor (int io_context = 0; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> +\t\t{\n> +\t\t\tPgStat_IOOpCounters 
*counters = &io_context_ops->data[io_context];\n> +\t\t\tDatum\t\tvalues[IO_NUM_COLUMNS];\n> +\t\t\tbool\t\tnulls[IO_NUM_COLUMNS];\n> +\n> +\t\t\t/*\n> +\t\t\t * Some combinations of IOCONTEXT and BackendType are not valid\n> +\t\t\t * for any type of IO Operation. In such cases, omit the entire\n> +\t\t\t * row from the view.\n> +\t\t\t */\n> +\t\t\tif (!expect_backend_stats ||\n> +\t\t\t\t!pgstat_bktype_io_context_valid(bktype, io_context))\n> +\t\t\t{\n> +\t\t\t\tpgstat_io_context_ops_assert_zero(counters);\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tmemset(values, 0, sizeof(values));\n> +\t\t\tmemset(nulls, 0, sizeof(nulls));\n\nI'd replace the memset with values[...] = {0} etc.\n\n\n> +\t\t\tvalues[IO_COLUMN_BACKEND_TYPE] = bktype_desc;\n> +\t\t\tvalues[IO_COLUMN_IO_CONTEXT] = CStringGetTextDatum(\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t pgstat_io_context_desc(io_context));\n\nPgindent, I hate you.\n\nPerhaps put it the context desc in a local var, so it doesn't look quite this\nugly?\n\n\n> +\t\t\tvalues[IO_COLUMN_ALLOCS] = Int64GetDatum(counters->allocs);\n> +\t\t\tvalues[IO_COLUMN_EXTENDS] = Int64GetDatum(counters->extends);\n> +\t\t\tvalues[IO_COLUMN_FSYNCS] = Int64GetDatum(counters->fsyncs);\n> +\t\t\tvalues[IO_COLUMN_READS] = Int64GetDatum(counters->reads);\n> +\t\t\tvalues[IO_COLUMN_WRITES] = Int64GetDatum(counters->writes);\n> +\t\t\tvalues[IO_COLUMN_RESET_TIME] = TimestampTzGetDatum(reset_time);\n> +\n> +\n> +\t\t\t/*\n> +\t\t\t * Some combinations of BackendType and IOOp and of IOContext and\n> +\t\t\t * IOOp are not valid. 
Set these cells in the view NULL and assert\n> +\t\t\t * that these stats are zero as expected.\n> +\t\t\t */\n> +\t\t\tfor (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> +\t\t\t{\n> +\t\t\t\tif (!pgstat_bktype_io_op_valid(bktype, io_op) ||\n> +\t\t\t\t\t!pgstat_io_context_io_op_valid(io_context, io_op))\n> +\t\t\t\t{\n> +\t\t\t\t\tpgstat_io_op_assert_zero(counters, io_op);\n> +\t\t\t\t\tnulls[io_op + IO_COLUMN_IOOP_OFFSET] = true;\n> +\t\t\t\t}\n> +\t\t\t}\n\nA bit weird that we first assign a value and then set nulls separately. But\nit's not obvious how to make it look nice otherwise.\n\n> +-- Test that allocs, extends, reads, and writes to Shared Buffers and fsyncs\n> +-- done to ensure durability of Shared Buffers are tracked in pg_stat_io.\n> +SELECT sum(alloc) AS io_sum_shared_allocs_before FROM pg_stat_io WHERE io_context = 'Shared' \\gset\n> +SELECT sum(extend) AS io_sum_shared_extends_before FROM pg_stat_io WHERE io_context = 'Shared' \\gset\n> +SELECT sum(fsync) AS io_sum_shared_fsyncs_before FROM pg_stat_io WHERE io_context = 'Shared' \\gset\n> +SELECT sum(read) AS io_sum_shared_reads_before FROM pg_stat_io WHERE io_context = 'Shared' \\gset\n> +SELECT sum(write) AS io_sum_shared_writes_before FROM pg_stat_io WHERE io_context = 'Shared' \\gset\n> +-- Create a regular table and insert some data to generate IOCONTEXT_SHARED allocs and extends.\n> +CREATE TABLE test_io_shared(a int);\n> +INSERT INTO test_io_shared SELECT i FROM generate_series(1,100)i;\n> +SELECT pg_stat_force_next_flush();\n> + pg_stat_force_next_flush \n> +--------------------------\n> + \n> +(1 row)\n> +\n> +-- After a checkpoint, there should be some additional IOCONTEXT_SHARED writes and fsyncs.\n> +CHECKPOINT;\n\nDoes that work reliably? A checkpoint could have started just before the\nCREATE TABLE, I think? Then it'd not have flushed those writes yet. 
I think\ndoing two checkpoints would protect against that.\n\n\n> +DROP TABLE test_io_shared;\n> +DROP TABLESPACE test_io_shared_stats_tblspc;\n\nTablespace creation is somewhat expensive, do we really need that? There\nshould be one set up in setup.sql or such.\n\n\n> +-- Test that allocs, extends, reads, and writes of temporary tables are tracked\n> +-- in pg_stat_io.\n> +CREATE TEMPORARY TABLE test_io_local(a int, b TEXT);\n> +SELECT sum(alloc) AS io_sum_local_allocs_before FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +SELECT sum(extend) AS io_sum_local_extends_before FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +SELECT sum(read) AS io_sum_local_reads_before FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +SELECT sum(write) AS io_sum_local_writes_before FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +-- Insert enough values that we need to reuse and write out dirty local\n> +-- buffers.\n> +INSERT INTO test_io_local SELECT generate_series(1, 80000) as id,\n> +'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';\n\nCould be abbreviated with repeat('a', some-number) :P\n\nCan the table be smaller than this? That might show up on a slow machine.\n\n\n> +SELECT sum(alloc) AS io_sum_local_allocs_after FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +SELECT sum(extend) AS io_sum_local_extends_after FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +SELECT sum(read) AS io_sum_local_reads_after FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +SELECT sum(write) AS io_sum_local_writes_after FROM pg_stat_io WHERE io_context = 'Local' \\gset\n> +SELECT :io_sum_local_allocs_after > :io_sum_local_allocs_before;\n\nRandom q: Why are we uppercasing the first letter of the context?\n\n\n\n> +CREATE TABLE test_io_strategy(a INT, b INT);\n> +ALTER TABLE test_io_strategy SET (autovacuum_enabled = 'false');\n\nI think you can specify that as part of the CREATE TABLE. 
Not sure if\notherwise there's not a race where autovac could start before you do the ALTER.\n\n\n> +INSERT INTO test_io_strategy SELECT i, i from generate_series(1, 8000)i;\n> +-- Ensure that the next VACUUM will need to perform IO by rewriting the table\n> +-- first with VACUUM (FULL).\n\n... because VACUUM FULL currently doesn't set all-visible etc on the pages,\nwhich the subsequent vacuum will then do.\n\n\n> +-- Hope that the previous value of wal_skip_threshold was the default. We\n> +-- can't use BEGIN...SET LOCAL since VACUUM can't be run inside a transaction\n> +-- block.\n> +RESET wal_skip_threshold;\n\nNothing in this file set it before, so that's a pretty sure-to-be-fulfilled\nhope.\n\n\n> +-- Test that, when using a Strategy, if creating a relation, Strategy extends\n\ns/if/when/?\n\n\nLooks good!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Aug 2022 12:15:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v29 attached\n\nOn Thu, Aug 25, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-08-22 13:15:18 -0400, Melanie Plageman wrote:\n>\n> > Notes on the commit which accumulates IO Operation stats in shared\n> > memory:\n> >\n> > - I've extended the usage of the Assert()s that IO Operation stats that\n> > should be zero are. Previously we only checked the stats validity when\n> > querying the view. Now we check it when flushing pending stats and\n> > when reading the stats file into shared memory.\n>\n> > Note that the three locations with these validity checks (when\n> > flushing pending stats, when reading stats file into shared memory,\n> > and when querying the view) have similar looking code to loop through\n> > and validate the stats. However, the actual action they perform if the\n> > stats are valid is different for each site (adding counters together,\n> > doing a read, setting nulls in a tuple column to true). Also, some of\n> > these instances have other code interspersed in the loops which would\n> > require additional looping if separated from this logic. So it was\n> > difficult to see a way of combining these into a single helper\n> > function.\n>\n> All of them seem to repeat something like\n>\n> > + if (!pgstat_bktype_io_op_valid(bktype,\n> io_op) ||\n> > +\n> !pgstat_io_context_io_op_valid(io_context, io_op))\n>\n> perhaps those could be combined? Afaics nothing uses\n> pgstat_bktype_io_op_valid\n> separately.\n>\n\nI've combined these into pgstat_io_op_valid().\n\n\n>\n> > Subject: [PATCH v28 3/5] Track IO operation statistics locally\n> >\n> > Introduce \"IOOp\", an IO operation done by a backend, and \"IOContext\",\n> > the IO location source or target or IO type done by a backend. For\n> > example, the checkpointer may write a shared buffer out. 
This would be\n> > counted as an IOOp \"write\" on an IOContext IOCONTEXT_SHARED by\n> > BackendType \"checkpointer\".\n> >\n> > Each IOOp (alloc, extend, fsync, read, write) is counted per IOContext\n> > (local, shared, or strategy) through a call to pgstat_count_io_op().\n> >\n> > The primary concern of these statistics is IO operations on data blocks\n> > during the course of normal database operations. IO done by, for\n> > example, the archiver or syslogger is not counted in these statistics.\n>\n> s/is/are/?\n>\n\nchanged\n\n\n>\n> > Stats on IOOps for all IOContexts for a backend are counted in a\n> > backend's local memory. This commit does not expose any functions for\n> > aggregating or viewing these stats.\n>\n> s/This commit does not/A subsequent commit will expose/...\n>\n\nchanged\n\n\n>\n> > @@ -823,6 +823,7 @@ ReadBuffer_common(SMgrRelation smgr, char\n> relpersistence, ForkNumber forkNum,\n> > BufferDesc *bufHdr;\n> > Block bufBlock;\n> > bool found;\n> > + IOContext io_context;\n> > bool isExtend;\n> > bool isLocalBuf = SmgrIsTemp(smgr);\n> >\n> > @@ -986,10 +987,25 @@ ReadBuffer_common(SMgrRelation smgr, char\n> relpersistence, ForkNumber forkNum,\n> > */\n> > Assert(!(pg_atomic_read_u32(&bufHdr->state) & BM_VALID)); /*\n> spinlock not needed */\n> >\n> > - bufBlock = isLocalBuf ? LocalBufHdrGetBlock(bufHdr) :\n> BufHdrGetBlock(bufHdr);\n> > + if (isLocalBuf)\n> > + {\n> > + bufBlock = LocalBufHdrGetBlock(bufHdr);\n> > + io_context = IOCONTEXT_LOCAL;\n> > + }\n> > + else\n> > + {\n> > + bufBlock = BufHdrGetBlock(bufHdr);\n> > +\n> > + if (strategy != NULL)\n> > + io_context = IOCONTEXT_STRATEGY;\n> > + else\n> > + io_context = IOCONTEXT_SHARED;\n> > + }\n>\n> There's a isLocalBuf block earlier on, couldn't we just determine the\n> context\n> there? I guess there's a branch here already, so it's probably fine as is.\n>\n\nI've added this as close as possible to the code where we use the\nio_context. 
If I were to move it, it would make sense to move it all the\nway to the top of ReadBuffer_common() where we first define isLocalBuf.\nI've left it as is.\n\n\n>\n> > if (isExtend)\n> > {\n> > +\n> > + pgstat_count_io_op(IOOP_EXTEND, io_context);\n>\n> Spurious newline.\n>\n\nfixed\n\n\n>\n> > @@ -2820,9 +2857,12 @@ BufferGetTag(Buffer buffer, RelFileLocator\n> *rlocator, ForkNumber *forknum,\n> > *\n> > * If the caller has an smgr reference for the buffer's relation, pass\n> it\n> > * as the second parameter. If not, pass NULL.\n> > + *\n> > + * IOContext will always be IOCONTEXT_SHARED except when a buffer\n> access strategy is\n> > + * used and the buffer being flushed is a buffer from the strategy ring.\n> > */\n> > static void\n> > -FlushBuffer(BufferDesc *buf, SMgrRelation reln)\n> > +FlushBuffer(BufferDesc *buf, SMgrRelation reln, IOContext io_context)\n>\n> Too long line?\n>\n> But also, why document the possible values here? Seems likely to get out of\n> date at some point, and it doesn't seem important to know?\n>\n\nDeleted.\n\n\n>\n> > @@ -3549,6 +3591,8 @@ FlushRelationBuffers(Relation rel)\n> > localpage,\n> > false);\n> >\n> > + pgstat_count_io_op(IOOP_WRITE,\n> IOCONTEXT_LOCAL);\n> > +\n> > buf_state &= ~(BM_DIRTY | BM_JUST_DIRTIED);\n> >\n> pg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n> >\n>\n> Probably not worth doing, but these made me wonder whether there should be\n> a\n> function for counting N operations at once.\n>\n>\nWould it be worth it here? We would need a local variable to track how\nmany local buffers we end up writing. Do you think that\npgstat_count_io_op() will not be inlined and thus we will end up with\nlots of extra function calls if we do a pgstat_count_io_op() on every\niteration? 
And that it will matter in FlushRelationBuffers()?\nThe other times that pgstat_count_io_op() is used in a loop, it is\npart of the branch that will exit the loop and only be called once-ish.\n\nOr are you thinking that just generally it might be nice to have?\n\n\n>\n> > @@ -212,8 +215,23 @@ StrategyGetBuffer(BufferAccessStrategy strategy,\n> uint32 *buf_state)\n> > if (strategy != NULL)\n> > {\n> > buf = GetBufferFromRing(strategy, buf_state);\n> > - if (buf != NULL)\n> > + *from_ring = buf != NULL;\n> > + if (*from_ring)\n> > + {\n>\n> Don't really like the if (*from_ring) - why not keep it as buf != NULL?\n> Seems\n> a bit confusing this way, making it less obvious what's being changed.\n>\n>\nChanged\n\n\n>\n> > diff --git a/src/backend/storage/buffer/localbuf.c\n> b/src/backend/storage/buffer/localbuf.c\n> > index 014f644bf9..a3d76599bf 100644\n> > --- a/src/backend/storage/buffer/localbuf.c\n> > +++ b/src/backend/storage/buffer/localbuf.c\n> > @@ -15,6 +15,7 @@\n> > */\n> > #include \"postgres.h\"\n> >\n> > +#include \"pgstat.h\"\n> > #include \"access/parallel.h\"\n> > #include \"catalog/catalog.h\"\n> > #include \"executor/instrument.h\"\n>\n> Do most other places not put pgstat.h in the alphabetical order of headers?\n>\n\nFixed\n\n\n>\n> > @@ -432,6 +432,15 @@ ProcessSyncRequests(void)\n> > total_elapsed += elapsed;\n> > processed++;\n> >\n> > + /*\n> > + * Note that if a backend using a\n> BufferAccessStrategy is\n> > + * forced to do its own fsync (as\n> opposed to the\n> > + * checkpointer doing it), it will\n> not be counted as an\n> > + * IOCONTEXT_STRATEGY IOOP_FSYNC\n> and instead will be\n> > + * counted as an IOCONTEXT_SHARED\n> IOOP_FSYNC.\n> > + */\n> > + pgstat_count_io_op(IOOP_FSYNC,\n> IOCONTEXT_SHARED);\n>\n> Why is this noted here? Perhaps just point to the place where that happens\n> instead? I think it's also documented in ForwardSyncRequest()? 
Or just only\n> mention it there...\n>\n\nRemoved\n\n\n>\n> > @@ -0,0 +1,191 @@\n> > +/*\n> -------------------------------------------------------------------------\n> > + *\n> > + * pgstat_io_ops.c\n> > + * Implementation of IO operation statistics.\n> > + *\n> > + * This file contains the implementation of IO operation statistics. It\n> is kept\n> > + * separate from pgstat.c to enforce the line between the statistics\n> access /\n> > + * storage implementation and the details about individual types of\n> > + * statistics.\n> > + *\n> > + * Copyright (c) 2001-2022, PostgreSQL Global Development Group\n>\n> Arguably this would just be 2021-2022\n>\n\nChanged\n\n\n>\n> > +void\n> > +pgstat_count_io_op(IOOp io_op, IOContext io_context)\n> > +{\n> > + PgStat_IOOpCounters *pending_counters =\n> &pending_IOOpStats.data[io_context];\n> > +\n> > + Assert(pgstat_expect_io_op(MyBackendType, io_context, io_op));\n> > +\n> > + switch (io_op)\n> > + {\n> > + case IOOP_ALLOC:\n> > + pending_counters->allocs++;\n> > + break;\n> > + case IOOP_EXTEND:\n> > + pending_counters->extends++;\n> > + break;\n> > + case IOOP_FSYNC:\n> > + pending_counters->fsyncs++;\n> > + break;\n> > + case IOOP_READ:\n> > + pending_counters->reads++;\n> > + break;\n> > + case IOOP_WRITE:\n> > + pending_counters->writes++;\n> > + break;\n> > + }\n> > +\n> > +}\n>\n> How about replacing the breaks with a return and then erroring out if we\n> reach\n> the end of the function? 
You did that below, and I think it makes sense.\n>\n>\nI used breaks because in the subsequent commit I introduce the variable\n\"have_ioopstats\", and I set have_ioopstats to false in\npgstat_count_io_op() after counting.\nIt is probably safe to set have_ioopstats to true before incrementing it\nsince this backend is the only one that can see have_ioopstats and it\nshouldn't fail while incrementing the counter but it seems less clear\nthan doing it after.\n\nInstead of erroring out for an unknown IOOp, I decided to add Asserts\nabout the IOContext and IOOp being valid and that the combination of\nMyBackendType, IOContext, and IOOp are valid. I think it will be good to\nassert that the IOContext is valid before using it as an array index for\nlookup in pending stats.\n\n\n> > +bool\n> > +pgstat_bktype_io_context_valid(BackendType bktype, IOContext io_context)\n> > +{\n>\n> Maybe add a tiny comment about what 'valid' means here? Something like\n> 'return whether the backend type counts io in io_context'.\n>\n>\nChanged\n\n\n>\n> > + /*\n> > + * Only regular backends and WAL Sender processes executing\n> queries should\n> > + * use local buffers.\n> > + */\n> > + no_local = bktype == B_AUTOVAC_LAUNCHER || bktype ==\n> > + B_BG_WRITER || bktype == B_CHECKPOINTER || bktype ==\n> > + B_AUTOVAC_WORKER || bktype == B_BG_WORKER || bktype ==\n> > + B_STANDALONE_BACKEND || bktype == B_STARTUP;\n>\n> I think BG_WORKERS could end up using local buffers, extensions can do just\n> about everything in them.\n>\n\nFixed and added comment.\n\n\n>\n> > +bool\n> > +pgstat_bktype_io_op_valid(BackendType bktype, IOOp io_op)\n> > +{\n> > + if ((bktype == B_BG_WRITER || bktype == B_CHECKPOINTER) && io_op ==\n> > + IOOP_READ)\n> > + return false;\n>\n> Perhaps we should add an assertion about the backend type making sense\n> here?\n> I.e. 
that it's not archiver, walwriter etc?\n>\n\nDone\n\n\n>\n> > +bool\n> > +pgstat_io_context_io_op_valid(IOContext io_context, IOOp io_op)\n> > +{\n> > + /*\n> > + * Temporary tables using local buffers are not logged and thus do\n> not\n> > + * require fsync'ing. Set this cell to NULL to differentiate\n> between an\n> > + * invalid combination and 0 observed IO Operations.\n>\n> This comment feels a bit out of place?\n>\n\nDeleted\n\n\n>\n> > +bool\n> > +pgstat_expect_io_op(BackendType bktype, IOContext io_context, IOOp\n> io_op)\n> > +{\n> > + if (!pgstat_io_op_stats_collected(bktype))\n> > + return false;\n> > +\n> > + if (!pgstat_bktype_io_context_valid(bktype, io_context))\n> > + return false;\n> > +\n> > + if (!pgstat_bktype_io_op_valid(bktype, io_op))\n> > + return false;\n> > +\n> > + if (!pgstat_io_context_io_op_valid(io_context, io_op))\n> > + return false;\n> > +\n> > + /*\n> > + * There are currently no cases of a BackendType, IOContext, IOOp\n> > + * combination that are specifically invalid.\n> > + */\n>\n> \"specifically\"?\n>\n\nI removed this and mentioned it (rephrased) above pgstat_io_op_valid()\n\n\n>\n> > From 0f141fa7f97a57b8628b1b6fd6029bd3782f16a1 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Mon, 22 Aug 2022 11:35:20 -0400\n> > Subject: [PATCH v28 4/5] Aggregate IO operation stats per BackendType\n> >\n> > Stats on IOOps for all IOContexts for a backend are tracked locally. 
Add\n> > functionality for backends to flush these stats to shared memory and\n> > accumulate them with those from all other backends, exited and live.\n> > Also add reset and snapshot functions used by cumulative stats system\n> > for management of these statistics.\n> >\n> > The aggregated stats in shared memory could be extended in the future\n> > with per-backend stats -- useful for per connection IO statistics and\n> > monitoring.\n> >\n> > Some BackendTypes will not flush their pending statistics at regular\n> > intervals and explicitly call pgstat_flush_io_ops() during the course of\n> > normal operations to flush their backend-local IO Operation statistics\n> > to shared memory in a timely manner.\n>\n> > Because not all BackendType, IOOp, IOContext combinations are valid, the\n> > validity of the stats are checked before flushing pending stats and\n> > before reading in the existing stats file to shared memory.\n>\n> s/are checked/is checked/?\n>\n>\nFixed\n\n\n>\n> > @@ -1486,6 +1507,42 @@ pgstat_read_statsfile(void)\n> > if (!read_chunk_s(fpin, &shmem->checkpointer.stats))\n> > goto error;\n> >\n> > + /*\n> > + * Read IO Operations stats struct\n> > + */\n> > + if (!read_chunk_s(fpin, &shmem->io_ops.stat_reset_timestamp))\n> > + goto error;\n> > +\n> > + for (int backend_type = 0; backend_type < BACKEND_NUM_TYPES;\n> backend_type++)\n> > + {\n> > + PgStatShared_IOContextOps *backend_io_context_ops =\n> &shmem->io_ops.stats[backend_type];\n> > + bool expect_backend_stats = true;\n> > +\n> > + if (!pgstat_io_op_stats_collected(backend_type))\n> > + expect_backend_stats = false;\n> > +\n> > + for (int io_context = 0; io_context < IOCONTEXT_NUM_TYPES;\n> io_context++)\n> > + {\n> > + if (!expect_backend_stats ||\n> > +\n> !pgstat_bktype_io_context_valid(backend_type, io_context))\n> > + {\n> > +\n> pgstat_io_context_ops_assert_zero(&backend_io_context_ops->data[io_context]);\n> > + continue;\n> > + }\n> > +\n> > + for (int io_op = 0; io_op < 
IOOP_NUM_TYPES;\n> io_op++)\n> > + {\n> > + if\n> (!pgstat_bktype_io_op_valid(backend_type, io_op) ||\n> > +\n> !pgstat_io_context_io_op_valid(io_context, io_op))\n> > +\n> pgstat_io_op_assert_zero(&backend_io_context_ops->data[io_context],\n> > +\n> io_op);\n> > + }\n> > + }\n> > +\n> > + if (!read_chunk_s(fpin, &backend_io_context_ops->data))\n> > + goto error;\n> > + }\n>\n> Could we put the validation out of line? That's a lot of io stats specific\n> code to be in pgstat_read_statsfile().\n>\n\nDone.\n\n\n>\n> > +/*\n> > + * Helper function to accumulate PgStat_IOOpCounters. If either of the\n> > + * passed-in PgStat_IOOpCounters are members of\n> PgStatShared_IOContextOps, the\n> > + * caller is responsible for ensuring that the appropriate lock is\n> held. This\n> > + * is not asserted because this function could plausibly be used to\n> accumulate\n> > + * two local/pending PgStat_IOOpCounters.\n>\n> What's \"this\" here?\n>\n\nI rephrased it.\n\n\n>\n> > @@ -496,6 +503,8 @@ extern PgStat_CheckpointerStats\n> *pgstat_fetch_stat_checkpointer(void);\n> > */\n> >\n> > extern void pgstat_count_io_op(IOOp io_op, IOContext io_context);\n> > +extern PgStat_BackendIOContextOps\n> *pgstat_fetch_backend_io_context_ops(void);\n> > +extern bool pgstat_flush_io_ops(bool nowait);\n> > extern const char *pgstat_io_context_desc(IOContext io_context);\n> > extern const char *pgstat_io_op_desc(IOOp io_op);\n> >\n>\n> Is there any call to pgstat_flush_io_ops() from outside pgstat*.c? So\n> possibly\n> it could be in pgstat_internal.h? 
Not that it's particularly important...\n>\n\nMoved it.\n\n\n>\n> > @@ -506,6 +515,43 @@ extern bool pgstat_bktype_io_op_valid(BackendType\n> bktype, IOOp io_op);\n> > extern bool pgstat_io_context_io_op_valid(IOContext io_context, IOOp\n> io_op);\n> > extern bool pgstat_expect_io_op(BackendType bktype, IOContext\n> io_context, IOOp io_op);\n> >\n> > +/*\n> > + * Functions to assert that invalid IO Operation counters are zero.\n> Used with\n> > + * the validation functions in pgstat_io_ops.c\n> > + */\n> > +static inline void\n> > +pgstat_io_context_ops_assert_zero(PgStat_IOOpCounters *counters)\n> > +{\n> > + Assert(counters->allocs == 0 && counters->extends == 0 &&\n> > + counters->fsyncs == 0 && counters->reads == 0 &&\n> > + counters->writes == 0);\n> > +}\n> > +\n> > +static inline void\n> > +pgstat_io_op_assert_zero(PgStat_IOOpCounters *counters, IOOp io_op)\n> > +{\n> > + switch (io_op)\n> > + {\n> > + case IOOP_ALLOC:\n> > + Assert(counters->allocs == 0);\n> > + return;\n> > + case IOOP_EXTEND:\n> > + Assert(counters->extends == 0);\n> > + return;\n> > + case IOOP_FSYNC:\n> > + Assert(counters->fsyncs == 0);\n> > + return;\n> > + case IOOP_READ:\n> > + Assert(counters->reads == 0);\n> > + return;\n> > + case IOOP_WRITE:\n> > + Assert(counters->writes == 0);\n> > + return;\n> > + }\n> > +\n> > + elog(ERROR, \"unrecognized IOOp value: %d\", io_op);\n>\n> Hm. This means it'll emit code even in non-assertion builds - this should\n> probably just be an Assert(false) or pg_unreachable().\n>\n\nFixed.\n\n\n>\n> > Subject: [PATCH v28 5/5] Add system view tracking IO ops per backend type\n>\n> > View stats are fetched from statistics incremented when a backend\n> > performs an IO Operation and maintained by the cumulative statistics\n> > subsystem.\n>\n> \"fetched from statistics incremented\"?\n>\n\nRephrased it.\n\n\n>\n> > Each row of the view is stats for a particular BackendType for a\n> > particular IOContext (e.g. 
shared buffer accesses by checkpointer) and\n> > each column in the view is the total number of IO Operations done (e.g.\n> > writes).\n>\n> s/is/shows/?\n>\n> s/for a particular BackendType for a particular IOContext/for a particularl\n> BackendType and IOContext/? Somehow the repetition is weird.\n>\n\nBoth of the above wordings are now changed.\n\n\n>\n> > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > index 9440b41770..9949011ba3 100644\n> > --- a/doc/src/sgml/monitoring.sgml\n> > +++ b/doc/src/sgml/monitoring.sgml\n> > @@ -448,6 +448,15 @@ postgres 27093 0.0 0.0 30096 2752 ?\n> Ss 11:34 0:00 postgres: ser\n> > </entry>\n> > </row>\n> >\n> > + <row>\n> > +\n> <entry><structname>pg_stat_io</structname><indexterm><primary>pg_stat_io</primary></indexterm></entry>\n> > + <entry>A row for each IO Context for each backend type showing\n> > + statistics about backend IO operations. See\n> > + <link linkend=\"monitoring-pg-stat-io-view\">\n> > + <structname>pg_stat_io</structname></link> for details.\n> > + </entry>\n> > + </row>\n>\n> The \"for each for each\" thing again :)\n>\n\nChanged it.\n\n\n>\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>io_context</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + IO Context used (e.g. 
shared buffers, direct).\n> > + </para></entry>\n> > + </row>\n>\n> Wrong list of contexts.\n>\n\nFixed it.\n\n\n>\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>alloc</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of buffers allocated.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>extend</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of blocks extended.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>fsync</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of blocks fsynced.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>read</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of blocks read.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>write</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of blocks written.\n> > + </para></entry>\n> > + </row>\n>\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>stats_reset</structfield> <type>timestamp with time\n> zone</type>\n> > + </para>\n> > + <para>\n> > + Time at which these statistics were last reset.\n> > </para></entry>\n> > </row>\n> > </tbody>\n>\n> Part of me thinks it'd be nicer if it were \"allocated, read, written,\n> extended,\n> fsynced, stats_reset\", instead of alphabetical order. 
The order already\n> isn't\n> alphabetical.\n>\n\nI've updated the order in the view and docs.\n\n\n>\n> > + /*\n> > + * When adding a new column to the pg_stat_io view, add a new enum\n> value\n> > + * here above IO_NUM_COLUMNS.\n> > + */\n> > + enum\n> > + {\n> > + IO_COLUMN_BACKEND_TYPE,\n> > + IO_COLUMN_IO_CONTEXT,\n> > + IO_COLUMN_ALLOCS,\n> > + IO_COLUMN_EXTENDS,\n> > + IO_COLUMN_FSYNCS,\n> > + IO_COLUMN_READS,\n> > + IO_COLUMN_WRITES,\n> > + IO_COLUMN_RESET_TIME,\n> > + IO_NUM_COLUMNS,\n> > + };\n>\n> Given it's local and some of the lines are long, maybe just use COL?\n>\n>\nI've shortened COLUMN to COL. However, I've also moved this enum outside\nof the function and typedef'd it. I did this because, upon changing the\norder of the columns in the view, I could no longer use\nIO_COLUMN_IOOP_OFFSET and the IOOp value in the loop at the bottom of\npg_stat_get_io() to set the correct column to NULL. So, I created a\nhelper function which translates IOOp to io_stat_col.\n\n\n>\n> > +#define IO_COLUMN_IOOP_OFFSET (IO_COLUMN_IO_CONTEXT + 1)\n>\n> Undef'ing it probably worth doing.\n>\n\nIt's gone now anyway.\n\n\n>\n> > + SetSingleFuncCall(fcinfo, 0);\n> > + rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > +\n> > + backends_io_stats = pgstat_fetch_backend_io_context_ops();\n> > +\n> > + reset_time =\n> TimestampTzGetDatum(backends_io_stats->stat_reset_timestamp);\n> > +\n> > + for (int bktype = 0; bktype < BACKEND_NUM_TYPES; bktype++)\n> > + {\n> > + Datum bktype_desc =\n> CStringGetTextDatum(GetBackendTypeDesc(bktype));\n> > + bool expect_backend_stats = true;\n> > + PgStat_IOContextOps *io_context_ops =\n> &backends_io_stats->stats[bktype];\n> > +\n> > + /*\n> > + * For those BackendTypes without IO Operation stats, skip\n> > + * representing them in the view altogether.\n> > + */\n> > + if (!pgstat_io_op_stats_collected(bktype))\n> > + expect_backend_stats = false;\n>\n> Why not just expect_backend_stats = pgstat_io_op_stats_collected()?\n>\n\nUpdated this 
everywhere it occurred.\n\n\n>\n> > + for (int io_context = 0; io_context < IOCONTEXT_NUM_TYPES;\n> io_context++)\n> > + {\n> > + PgStat_IOOpCounters *counters =\n> &io_context_ops->data[io_context];\n> > + Datum values[IO_NUM_COLUMNS];\n> > + bool nulls[IO_NUM_COLUMNS];\n> > +\n> > + /*\n> > + * Some combinations of IOCONTEXT and BackendType\n> are not valid\n> > + * for any type of IO Operation. In such cases,\n> omit the entire\n> > + * row from the view.\n> > + */\n> > + if (!expect_backend_stats ||\n> > + !pgstat_bktype_io_context_valid(bktype,\n> io_context))\n> > + {\n> > +\n> pgstat_io_context_ops_assert_zero(counters);\n> > + continue;\n> > + }\n> > +\n> > + memset(values, 0, sizeof(values));\n> > + memset(nulls, 0, sizeof(nulls));\n>\n> I'd replace the memset with values[...] = {0} etc.\n>\n\nDone.\n\n\n>\n> > + values[IO_COLUMN_BACKEND_TYPE] = bktype_desc;\n> > + values[IO_COLUMN_IO_CONTEXT] = CStringGetTextDatum(\n> > +\n>\n> pgstat_io_context_desc(io_context));\n>\n> Pgindent, I hate you.\n>\n> Perhaps put it the context desc in a local var, so it doesn't look quite\n> this\n> ugly?\n>\n\nDid this.\n\n\n>\n> > +-- Test that allocs, extends, reads, and writes to Shared Buffers and\n> fsyncs\n> > +-- done to ensure durability of Shared Buffers are tracked in\n> pg_stat_io.\n> > +SELECT sum(alloc) AS io_sum_shared_allocs_before FROM pg_stat_io WHERE\n> io_context = 'Shared' \\gset\n> > +SELECT sum(extend) AS io_sum_shared_extends_before FROM pg_stat_io\n> WHERE io_context = 'Shared' \\gset\n> > +SELECT sum(fsync) AS io_sum_shared_fsyncs_before FROM pg_stat_io WHERE\n> io_context = 'Shared' \\gset\n> > +SELECT sum(read) AS io_sum_shared_reads_before FROM pg_stat_io WHERE\n> io_context = 'Shared' \\gset\n> > +SELECT sum(write) AS io_sum_shared_writes_before FROM pg_stat_io WHERE\n> io_context = 'Shared' \\gset\n> > +-- Create a regular table and insert some data to generate\n> IOCONTEXT_SHARED allocs and extends.\n> > +CREATE TABLE test_io_shared(a int);\n> 
> +INSERT INTO test_io_shared SELECT i FROM generate_series(1,100)i;\n> > +SELECT pg_stat_force_next_flush();\n> > + pg_stat_force_next_flush\n> > +--------------------------\n> > +\n> > +(1 row)\n> > +\n> > +-- After a checkpoint, there should be some additional IOCONTEXT_SHARED\n> writes and fsyncs.\n> > +CHECKPOINT;\n>\n> Does that work reliably? A checkpoint could have started just before the\n> CREATE TABLE, I think? Then it'd not have flushed those writes yet. I think\n> doing two checkpoints would protect against that.\n>\n\nIf the first checkpoint starts just before creating the table and those\nbuffers are dirtied during that checkpoint and thus not written out by\ncheckpointer during that checkpoint, then the test's (single) explicit\ncheckpoint would end up picking up those dirty buffers and writing them\nout, right?\n\n\n>\n> > +DROP TABLE test_io_shared;\n> > +DROP TABLESPACE test_io_shared_stats_tblspc;\n>\n> Tablespace creation is somewhat expensive, do we really need that? 
There\n> should be one set up in setup.sql or such.\n>\n\nThe only ones I see in regress are for tablespace.sql which drops them\nin the same test and is testing dropping tablespaces.\n\n\n>\n> > +-- Test that allocs, extends, reads, and writes of temporary tables are\n> tracked\n> > +-- in pg_stat_io.\n> > +CREATE TEMPORARY TABLE test_io_local(a int, b TEXT);\n> > +SELECT sum(alloc) AS io_sum_local_allocs_before FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +SELECT sum(extend) AS io_sum_local_extends_before FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +SELECT sum(read) AS io_sum_local_reads_before FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +SELECT sum(write) AS io_sum_local_writes_before FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +-- Insert enough values that we need to reuse and write out dirty local\n> > +-- buffers.\n> > +INSERT INTO test_io_local SELECT generate_series(1, 80000) as id,\n> >\n> +'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa';\n>\n> Could be abbreviated with repeat('a', some-number) :P\n>\n\nDone.\n\n\n>\n> Can the table be smaller than this? That might show up on a slow machine.\n>\n>\nSetting temp_buffers to 1MB, 7500 tuples of this width seem like enough.\nI inserted 8000 to be safe -- seems like an order of magnitude less\nshould be good.\n\n\n>\n> > +SELECT sum(alloc) AS io_sum_local_allocs_after FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +SELECT sum(extend) AS io_sum_local_extends_after FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +SELECT sum(read) AS io_sum_local_reads_after FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +SELECT sum(write) AS io_sum_local_writes_after FROM pg_stat_io WHERE\n> io_context = 'Local' \\gset\n> > +SELECT :io_sum_local_allocs_after > :io_sum_local_allocs_before;\n>\n> Random q: Why are we uppercasing the first letter of the context?\n>\n>\nhmm. dunno. 
I changed it to be lowercase now.\n\n\n>\n> > +CREATE TABLE test_io_strategy(a INT, b INT);\n> > +ALTER TABLE test_io_strategy SET (autovacuum_enabled = 'false');\n>\n> I think you can specify that as part of the CREATE TABLE. Not sure if\n> otherwise there's not a race where autovac coul start before you do the\n> ALTER.\n>\n>\nDone.\n\n\n>\n> > +INSERT INTO test_io_strategy SELECT i, i from generate_series(1, 8000)i;\n> > +-- Ensure that the next VACUUM will need to perform IO by rewriting the\n> table\n> > +-- first with VACUUM (FULL).\n>\n> ... because VACUUM FULL currently doesn't set all-visible etc on the pages,\n> which the subsequent vacuum will then do.\n>\n\nIt is true that the second VACUUM will set all-visible while VACUUM FULL\nwill not. However, I didn't think that that writing was what allowed us\nto test strategy reads and allocs. It would theoretically allow us to\ntest strategy writes, however, in practice, checkpointer or background\nwriter often wrote out these dirty pages with all-visible set before\nthis backend had a chance to reuse them and write them out itself.\n\nUnless you are saying that the subsequent VACUUM would be a no-op were\nVACUUM FULL to set all-visible on the rewritten pages?\n\n\n>\n> > +-- Hope that the previous value of wal_skip_threshold was the default.\n> We\n> > +-- can't use BEGIN...SET LOCAL since VACUUM can't be run inside a\n> transaction\n> > +-- block.\n> > +RESET wal_skip_threshold;\n>\n> Nothing in this file set it before, so that's a pretty sure-to-be-fulfilled\n> hope.\n>\n\nI've removed the comment.\n\n\n>\n> > +-- Test that, when using a Strategy, if creating a relation, Strategy\n> extends\n>\n> s/if/when/?\n>\n\nChanged this.\n\nThanks for the detailed review!\n\n- Melanie",
"msg_date": "Fri, 26 Aug 2022 15:34:06 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v30 attached\nrebased and pgstat_io_ops.c builds with meson now\nalso, I tested with pgstat_report_stat() only flushing when forced and\ntests still pass",
"msg_date": "Tue, 27 Sep 2022 14:20:44 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 11:20 AM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n> v30 attached\n> rebased and pgstat_io_ops.c builds with meson now\n> also, I tested with pgstat_report_stat() only flushing when forced and\n> tests still pass\n>\n\nFirst of all, I'm excited about this patch, and I think it will be a big\nhelp to understand better which part of Postgres is producing I/O (and why).\n\nI've paired up with Maciek (CCed) on a review of this patch and had a few\ncomments, focused on the user experience:\n\nThe term \"strategy\" as an \"io_context\" is hard to understand, as its not a\nconcept an end-user / DBA would be familiar with. Since this comes from\nBufferAccessStrategyType (i.e. anything not NULL/BAS_NORMAL is treated as\n\"strategy\"), maybe we could instead split this out into the individual\nstrategy types? i.e. making \"strategy\" three different I/O contexts\ninstead: \"shared_bulkread\", \"shared_bulkwrite\" and \"shared_vacuum\",\nretaining \"shared\" to mean NULL / BAS_NORMAL.\n\nSeparately, could we also track buffer hits without incurring extra\noverhead? (not just allocs and reads) -- Whilst we already have shared read\nand hit counters in a few other places, this would help make the common\n\"What's my cache hit ratio\" question more accurate to answer in the\npresence of different shared buffer access strategies. Tracking hits could\nalso help for local buffers (e.g. to tune temp_buffers based on seeing a\nlow cache hit ratio).\n\nAdditionally, some minor notes:\n\n- Since the stats are counting blocks, it would make sense to prefix the\nview columns with \"blks_\", and word them in the past tense (to match\ncurrent style), i.e. \"blks_written\", \"blks_read\", \"blks_extended\",\n\"blks_fsynced\" (realistically one would combine this new view with other\ndata e.g. 
from pg_stat_database or pg_stat_statements, which all use the\n\"blks_\" prefix, and stop using pg_stat_bgwriter for this which does not use\nsuch a prefix)\n\n- \"alloc\" as a name doesn't seem intuitive (and it may be confused with\nmemory allocations) - whilst this is already named this way in\npg_stat_bgwriter, it feels like this is an opportunity to eventually\ndeprecate the column there and make this easier to understand -\nspecifically, maybe we can clarify that this means buffer *acquisitions*?\n(either by renaming the field to \"blks_acquired\", or clarifying in the\ndocumentation)\n\n- Assuming we think this view could realistically cover all I/O produced by\nPostgres in the future (thus warranting the name \"pg_stat_io\"), it may be\nbest to have an explicit list of things that are not currently tracked in\nthe documentation, to reduce user confusion (i.e. WAL writes are not\ntracked, temporary files are not tracked, and some forms of direct writes\nare not tracked, e.g. when a table moves to a different tablespace)\n\n- In the view documentation, it would be good to explain the different\nvalues for \"io_strategy\" (and what they mean)\n\n- Overall it would be helpful if we had a dedicated documentation page on\nI/O statistics that's linked from the pg_stat_io view description, and\nexplains how the I/O statistics tie into the various concepts of shared\nbuffers / buffer access strategies / etc (and what is not tracked today)\n\nThanks,\nLukas\n\n-- \nLukas Fittl",
"msg_date": "Fri, 30 Sep 2022 16:17:25 -0700",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-27 14:20:44 -0400, Melanie Plageman wrote:\n> v30 attached\n> rebased and pgstat_io_ops.c builds with meson now\n> also, I tested with pgstat_report_stat() only flushing when forced and\n> tests still pass\n\nUnfortunately tests fail in CI / cfbot. E.g.,\nhttps://cirrus-ci.com/task/5816109319323648\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5816109319323648/testrun/build/testrun/main/regress/regression.diffs\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/stats.out /tmp/cirrus-ci-build/build/testrun/main/regress/results/stats.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/stats.out\t2022-10-01 12:07:47.779183501 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/main/regress/results/stats.out\t2022-10-01 12:11:38.686433303 +0000\n@@ -997,6 +997,8 @@\n -- Set temp_buffers to a low value so that we can trigger writes with fewer\n -- inserted tuples.\n SET temp_buffers TO '1MB';\n+ERROR: invalid value for parameter \"temp_buffers\": 128\n+DETAIL: \"temp_buffers\" cannot be changed after any temporary tables have been accessed in the session.\n CREATE TEMPORARY TABLE test_io_local(a int, b TEXT);\n SELECT sum(alloc) AS io_sum_local_allocs_before FROM pg_stat_io WHERE io_context = 'local' \\gset\n SELECT sum(read) AS io_sum_local_reads_before FROM pg_stat_io WHERE io_context = 'local' \\gset\n@@ -1037,7 +1039,7 @@\n SELECT :io_sum_local_writes_after > :io_sum_local_writes_before;\n ?column? 
\n ----------\n- t\n+ f\n (1 row)\n \n SELECT :io_sum_local_extends_after > :io_sum_local_extends_before;\n\n\nSo the problem is just that something else accesses temp buffers earlier in\nthe same test.\n\nThat's likely because since you sent your email\n\ncommit d7e39d72ca1c6f188b400d7d58813ff5b5b79064\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2022-09-29 12:14:39 -0400\n \n Use actual backend IDs in pg_stat_get_backend_idset() and friends.\n\nwas applied, which adds a temp table earlier in the same session.\n\n\nI think the easiest way to make this robust would be to just add a reconnect\nbefore the place you need to set temp_buffers, that way additional temp tables\nwon't cause a problem.\n\nSetting the patch to waiting-for-author for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 10:24:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v31 attached\nI've also addressed failing test mentioned by Andres in [1]\n\nOn Fri, Sep 30, 2022 at 7:18 PM Lukas Fittl <lukas@fittl.com> wrote:\n>\n> On Tue, Sep 27, 2022 at 11:20 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n>\n> First of all, I'm excited about this patch, and I think it will be a big help to understand better which part of Postgres is producing I/O (and why).\n>\n\nThanks! I'm happy to hear that.\n\n> I've paired up with Maciek (CCed) on a review of this patch and had a few comments, focused on the user experience:\n>\n\nThanks for taking the time to review!\n\n> The term \"strategy\" as an \"io_context\" is hard to understand, as its not a concept an end-user / DBA would be familiar with. Since this comes from BufferAccessStrategyType (i.e. anything not NULL/BAS_NORMAL is treated as \"strategy\"), maybe we could instead split this out into the individual strategy types? i.e. making \"strategy\" three different I/O contexts instead: \"shared_bulkread\", \"shared_bulkwrite\" and \"shared_vacuum\", retaining \"shared\" to mean NULL / BAS_NORMAL.\n\nI have split strategy out into \"vacuum\", \"bulkread\", and \"bulkwrite\". I\nthought it was less clear with shared as a prefix. If we were to have\nBufferAccessStrategies in the future which acquire local buffers (for\nexample), we could start prefixing the columns to differentiate.\n\nThis opened up some new questions about which BufferAccessStrategies\nwill be employed by which BackendTypes and which IOOps will be valid in\na given BufferAccessStrategy.\n\nI've excluded IOCONTEXT_BULKREAD and IOCONTEXT_BULKWRITE for autovacuum\nworker -- though those may not be inherently invalid, they seem not to\nbe done now and added extra rows to the view.\n\nI've also disallowed IOOP_EXTEND for IOCONTEXT_BULKREAD.\n\n> Separately, could we also track buffer hits without incurring extra overhead? 
(not just allocs and reads) -- Whilst we already have shared read and hit counters in a few other places, this would help make the common \"What's my cache hit ratio\" question more accurate to answer in the presence of different shared buffer access strategies. Tracking hits could also help for local buffers (e.g. to tune temp_buffers based on seeing a low cache hit ratio).\n\nI've started tracking hits and added \"hit\" to the view.\nI added IOOP_HIT and IOOP_ACQUIRE to those IOOps disallowed for\ncheckpointer and bgwriter.\n\nI have added tests for hit, but I'm not sure I can keep them. It seems\nlike they might fail if the blocks are evicted between the first and\nsecond time I try to read them.\n\n> Additionally, some minor notes:\n>\n> - Since the stats are counting blocks, it would make sense to prefix the view columns with \"blks_\", and word them in the past tense (to match current style), i.e. \"blks_written\", \"blks_read\", \"blks_extended\", \"blks_fsynced\" (realistically one would combine this new view with other data e.g. from pg_stat_database or pg_stat_statements, which all use the \"blks_\" prefix, and stop using pg_stat_bgwriter for this which does not use such a prefix)\n\nI have changed the column names to be in the past tense.\n\nThere are no columns equivalent to \"dirty\" or \"misses\" from the other\nviews containing information on buffer hits/block reads/writes/etc. I'm\nnot sure whether or not those make sense in this context.\n\nBecause we want to add non-block-oriented IO in the future (like\ntemporary file IO) to this view and want to use the same \"read\",\n\"written\", \"extended\" columns, I would prefer not to prefix the columns\nwith \"blks_\". I have added a column \"unit\" which would contain the unit\nin which read, written, and extended are in. Unfortunately, fsyncs are\nnot per block, so \"unit\" doesn't really work for this. 
I documented\nthis.\n\nThe most correct thing to do to accommodate block-oriented and\nnon-block-oriented IO would be to specify all the values in bytes.\nHowever, I would like this view to be usable visually (as opposed to\njust in scripts and by tools). The only current value of unit is\n\"block_size\" which could potentially be combined with the value of the\nGUC to get bytes.\n\nI've hard-coded the string \"block_size\" into the view generation\nfunction pg_stat_get_io(), so, if this idea makes sense, perhaps I\nshould do something better there.\n\n> - \"alloc\" as a name doesn't seem intuitive (and it may be confused with memory allocations) - whilst this is already named this way in pg_stat_bgwriter, it feels like this is an opportunity to eventually deprecate the column there and make this easier to understand - specifically, maybe we can clarify that this means buffer *acquisitions*? (either by renaming the field to \"blks_acquired\", or clarifying in the documentation)\n\nI have renamed it to acquired. It doesn't overlap completely with\nbuffers_alloc in pg_stat_bgwriter, so I didn't mention that in docs.\n\n> - Assuming we think this view could realistically cover all I/O produced by Postgres in the future (thus warranting the name \"pg_stat_io\"), it may be best to have an explicit list of things that are not currently tracked in the documentation, to reduce user confusion (i.e. WAL writes are not tracked, temporary files are not tracked, and some forms of direct writes are not tracked, e.g. when a table moves to a different tablespace)\n\nI have added this to the docs. 
The list is not exhaustive, so I would\nlove to get feedback on if there are other specific examples of IO which\nis using smgr* directly that users will wonder about and I should call\nout.\n\n> - In the view documentation, it would be good to explain the different values for \"io_strategy\" (and what they mean)\n\nI have added this and would love feedback on my docs additions.\n\n> - Overall it would be helpful if we had a dedicated documentation page on I/O statistics that's linked from the pg_stat_io view description, and explains how the I/O statistics tie into the various concepts of shared buffers / buffer access strategies / etc (and what is not tracked today)\n\nI haven't done this yet. How specific were you thinking -- like\ninterpretations of all the combinations and what to do with what you\nsee? Like you should run pg_prewarm if you see X? Specific checkpointer\nor bgwriter GUCs to change? Or just links to other docs pages on\nrecommended tunings?\n\nWere you imagining the other IO statistics views (like\npg_statio_all_tables and pg_stat_database) also being included in this\npage? Like would it be a comprehensive guide to IO statistics and what\ntheir significance/purposes are?\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20221002172404.xyzhftbedh4zpio2%40awork3.anarazel.de",
"msg_date": "Thu, 6 Oct 2022 13:42:09 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v31 failed in CI, so\nI've attached v32 which has a few issues fixed:\n- addressed some compiler warnings I hadn't noticed locally\n- autovac launcher and worker do indeed use bulkread strategy if they\n end up starting before critical indexes have loaded and end up doing a\n sequential scan of some catalog tables, so I have changed the\n restrictions on BackendTypes allowed to track IO Operations in\n IOCONTEXT_BULKREAD\n- changed the name of the column \"fsynced\" to \"files_synced\" to make it\n more clear what unit it is in (and that the unit differs from that of\n the \"unit\" column)\n\nIn an off-list discussion with Andres, he mentioned that he thought\nbuffers reused by a BufferAccessStrategy should be split from buffers\n\"acquired\" and that \"acquired\" should be renamed \"clocksweeps\".\n\nI have started doing this, but for BufferAccessStrategy IO there are a\nfew choices about how we want to count the clocksweeps:\n\nCurrently the following situations are counted under the following\nIOContexts and IOOps:\n\nIOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_ACQUIRE\n- reuse a buffer from the ring\n\nIOCONTEXT_SHARED, IOOP_ACQUIRE\n- add a buffer to the strategy ring initially\n- add a new shared buffer to the ring when all the existing buffers in\n the ring are pinned\n\nAnd in the new paradigm, I think these are two good options:\n\n1)\nIOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_CLOCKSWEEP\n- add a buffer to the strategy ring initially\n- add a new shared buffer to the ring when all the existing buffers in\n the ring are pinned\n\nIOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_REUSE\n- reuse a buffer from the ring\n\n2)\nIOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_CLOCKSWEEP\n- add a buffer to the strategy ring initially\n\nIOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_REUSE\n- reuse a buffer from the ring\n\nIOCONTEXT SHARED, IOOP_CLOCKSWEEP\n- add a new shared buffer to the ring when all the existing buffers in\n the ring are 
pinned\n\nHowever, if we want to differentiate between buffers initially added to\nthe ring and buffers taken from shared buffers and added to the ring\nbecause all strategy ring buffers are pinned or have a usage count above\none, then we would need to either do so inside of GetBufferFromRing() or\npropagate this distinction out somehow (easy enough if we care to do\nit).\n\nThere are other combinations that I could come up with a justification\nfor as well, but I wanted to know what other people thought made sense\n(and would make sense to users).\n\n- Melanie",
"msg_date": "Thu, 6 Oct 2022 18:23:53 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "I've gone ahead and implemented option 1 (commented below).\n\nOn Thu, Oct 6, 2022 at 6:23 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> v31 failed in CI, so\n> I've attached v32 which has a few issues fixed:\n> - addressed some compiler warnings I hadn't noticed locally\n> - autovac launcher and worker do indeed use bulkread strategy if they\n> end up starting before critical indexes have loaded and end up doing a\n> sequential scan of some catalog tables, so I have changed the\n> restrictions on BackendTypes allowed to track IO Operations in\n> IOCONTEXT_BULKREAD\n> - changed the name of the column \"fsynced\" to \"files_synced\" to make it\n> more clear what unit it is in (and that the unit differs from that of\n> the \"unit\" column)\n>\n> In an off-list discussion with Andres, he mentioned that he thought\n> buffers reused by a BufferAccessStrategy should be split from buffers\n> \"acquired\" and that \"acquired\" should be renamed \"clocksweeps\".\n>\n> I have started doing this, but for BufferAccessStrategy IO there are a\n> few choices about how we want to count the clocksweeps:\n>\n> Currently the following situations are counted under the following\n> IOContexts and IOOps:\n>\n> IOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_ACQUIRE\n> - reuse a buffer from the ring\n>\n> IOCONTEXT_SHARED, IOOP_ACQUIRE\n> - add a buffer to the strategy ring initially\n> - add a new shared buffer to the ring when all the existing buffers in\n> the ring are pinned\n>\n> And in the new paradigm, I think these are two good options:\n>\n> 1)\n> IOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_CLOCKSWEEP\n> - add a buffer to the strategy ring initially\n> - add a new shared buffer to the ring when all the existing buffers in\n> the ring are pinned\n>\n> IOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_REUSE\n> - reuse a buffer from the ring\n>\n\nI've implemented this option in attached v33.\n\n> 2)\n> IOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], 
IOOP_CLOCKSWEEP\n> - add a buffer to the strategy ring initially\n>\n> IOCONTEXT_[VACUUM,BULKREAD,BULKWRITE], IOOP_REUSE\n> - reuse a buffer from the ring\n>\n> IOCONTEXT SHARED, IOOP_CLOCKSWEEP\n> - add a new shared buffer to the ring when all the existing buffers in\n> the ring are pinned\n\n\n- Melanie",
"msg_date": "Mon, 10 Oct 2022 14:48:49 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Thanks for working on this! Like Lukas, I'm excited to see more\nvisibility into important parts of the system like this.\n\nOn Mon, Oct 10, 2022 at 11:49 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> I've gone ahead and implemented option 1 (commented below).\n\nNo strong opinion on 1 versus 2, but I guess at least partly because I\ndon't understand the implications (I do understand the difference,\njust not when it might be important in terms of stats). Can we think\nof a situation where combining stats about initial additions with\npinned additions hides some behavior that might be good to understand\nand hard to pinpoint otherwise?\n\nI took a look at the latest docs (as someone mostly familiar with\ninternals at only a pretty high level, so probably somewhat close to\nthe target audience) and have some feedback.\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para\nrole=\"column_definition\">\n+ <structfield>backend_type</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Type of backend (e.g. 
background worker, autovacuum worker).\n+ </para></entry>\n+ </row>\n\nNot critical, but is there a list of backend types we could\ncross-reference elsewhere in the docs?\n\n From the io_context column description:\n\n+ The autovacuum daemon, explicit <command>VACUUM</command>,\nexplicit\n+ <command>ANALYZE</command>, many bulk reads, and many bulk\nwrites use a\n+ fixed amount of memory, acquiring the equivalent number of\nshared\n+ buffers and reusing them circularly to avoid occupying an\nundue portion\n+ of the main shared buffer pool.\n+ </para></entry>\n\nI don't understand how this is relevant to the io_context column.\nCould you expand on that, or am I just missing something obvious?\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para\nrole=\"column_definition\">\n+ <structfield>extended</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Extends of relations done by this\n<varname>backend_type</varname> in\n+ order to write data in this <varname>io_context</varname>.\n+ </para></entry>\n+ </row>\n\nI understand what this is, but not why this is something I might want\nto know about.\n\nAnd from your earlier e-mail:\n\nOn Thu, Oct 6, 2022 at 10:42 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Because we want to add non-block-oriented IO in the future (like\n> temporary file IO) to this view and want to use the same \"read\",\n> \"written\", \"extended\" columns, I would prefer not to prefix the columns\n> with \"blks_\". I have added a column \"unit\" which would contain the unit\n> in which read, written, and extended are in. Unfortunately, fsyncs are\n> not per block, so \"unit\" doesn't really work for this. I documented\n> this.\n>\n> The most correct thing to do to accommodate block-oriented and\n> non-block-oriented IO would be to specify all the values in bytes.\n> However, I would like this view to be usable visually (as opposed to\n> just in scripts and by tools). 
The only current value of unit is\n> \"block_size\" which could potentially be combined with the value of the\n> GUC to get bytes.\n>\n> I've hard-coded the string \"block_size\" into the view generation\n> function pg_stat_get_io(), so, if this idea makes sense, perhaps I\n> should do something better there.\n\nThat seems broadly reasonable, but pg_settings also has a 'unit'\nfield, and in that view, unit is '8kB' on my system--i.e., it\n(presumably) reflects the block size. Is that something we should try\nto be consistent with (not sure if that's a good idea, but thought it\nwas worth asking)?\n\n> On Fri, Sep 30, 2022 at 7:18 PM Lukas Fittl <lukas@fittl.com> wrote:\n> > - Overall it would be helpful if we had a dedicated documentation page on I/O statistics that's linked from the pg_stat_io view description, and explains how the I/O statistics tie into the various concepts of shared buffers / buffer access strategies / etc (and what is not tracked today)\n>\n> I haven't done this yet. How specific were you thinking -- like\n> interpretations of all the combinations and what to do with what you\n> see? Like you should run pg_prewarm if you see X? Specific checkpointer\n> or bgwriter GUCs to change? Or just links to other docs pages on\n> recommended tunings?\n>\n> Were you imagining the other IO statistics views (like\n> pg_statio_all_tables and pg_stat_database) also being included in this\n> page? Like would it be a comprehensive guide to IO statistics and what\n> their significance/purposes are?\n\nI can't speak for Lukas here, but I encouraged him to suggest more\nthorough documentation in general, so I can speak to my concerns: in\ngeneral, these stats should be usable for someone who does not know\nmuch about Postgres internals. It's pretty low-level information,\nsure, so I think you need some understanding of how the system broadly\nworks to make sense of it. 
But ideally you should be able to find what\nyou need to understand the concepts involved within the docs.\n\nI think your updated docs are much clearer (with the caveats of my\nspecific comments above). It would still probably be helpful to have a\ndedicated page on I/O stats (and yeah, something with a broad scope,\nalong the lines of a comprehensive guide), but I think that can wait\nuntil a future patch.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Mon, 10 Oct 2022 16:42:55 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Oct 10, 2022 at 7:43 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> Thanks for working on this! Like Lukas, I'm excited to see more\n> visibility into important parts of the system like this.\n\nThanks for taking another look!\n\n>\n> On Mon, Oct 10, 2022 at 11:49 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > I've gone ahead and implemented option 1 (commented below).\n>\n> No strong opinion on 1 versus 2, but I guess at least partly because I\n> don't understand the implications (I do understand the difference,\n> just not when it might be important in terms of stats). Can we think\n> of a situation where combining stats about initial additions with\n> pinned additions hides some behavior that might be good to understand\n> and hard to pinpoint otherwise?\n\nI think that it makes sense to count both the initial buffers added to\nthe ring and subsequent shared buffers added to the ring (either when\nthe current strategy buffer is pinned or in use or when a bulkread\nrejects dirty strategy buffers in favor of new shared buffers) as\nstrategy clocksweeps because of how the statistic would be used.\n\nClocksweeps give you an idea of how much of your working set is cached\n(setting aside initially reading data into shared buffers when you are\nwarming up the db). You may use clocksweeps to determine if you need to\nmake shared buffers larger.\n\nDistinguishing strategy buffer clocksweeps from shared buffer\nclocksweeps allows us to avoid enlarging shared buffers if most of the\nclocksweeps are to bring in blocks for the strategy operation.\n\nHowever, I could see an argument that discounting strategy clocksweeps\ndone because the current strategy buffer is pinned makes the number of\nshared buffer clocksweeps artificially low since those other queries\nusing the buffer would have suffered a cache miss were it not for the\nstrategy. 
And, in this case, you would take strategy clocksweeps\ntogether with shared clocksweeps to make your decision. And if we\ninclude buffers initially added to the strategy ring in the strategy\nclocksweep statistic, this number may be off because those blocks may\nnot be needed in the main shared working set. But you won't know that\nuntil you try to reuse the buffer and it is pinned. So, I think we don't\nhave a better option than counting initial buffers added to the ring as\nstrategy clocksweeps (as opposed to as reuses).\n\nSo, in answer to your question, no, I cannot think of a scenario like\nthat.\n\nSitting down and thinking about that for a long time did, however, help\nme realize that some of my code comments were misleading (and some\nincorrect). I will update these in the next version once we agree on\nupdated docs.\n\nIt also made me remember that I am incorrectly counting rejected buffers\nas reused. I'm not sure if it is a good idea to subtract from reuses\nwhen a buffer is rejected. Waiting until after it is rejected to count\nthe reuse will take some other code changes. Perhaps we could also count\nrejections in the stats?\n\n>\n> I took a look at the latest docs (as someone mostly familiar with\n> internals at only a pretty high level, so probably somewhat close to\n> the target audience) and have some feedback.\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para\n> role=\"column_definition\">\n> + <structfield>backend_type</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Type of backend (e.g. background worker, autovacuum worker).\n> + </para></entry>\n> + </row>\n>\n> Not critical, but is there a list of backend types we could\n> cross-reference elsewhere in the docs?\n\nThe most I could find was this longer explanation (with exhaustive list\nof types) in pg_stat_activity docs [1]. 
I could duplicate what it says\nor I could link to the view and say \"see pg_stat_activity for a\ndescription of backend_type\" or something like that (to keep them from\ngetting out of sync as new backend_types are added). I suppose I could\nalso add docs on backend_types, but I'm not sure where something like\nthat would go.\n\n>\n> From the io_context column description:\n>\n> + The autovacuum daemon, explicit <command>VACUUM</command>,\n> explicit\n> + <command>ANALYZE</command>, many bulk reads, and many bulk\n> writes use a\n> + fixed amount of memory, acquiring the equivalent number of\n> shared\n> + buffers and reusing them circularly to avoid occupying an\n> undue portion\n> + of the main shared buffer pool.\n> + </para></entry>\n>\n> I don't understand how this is relevant to the io_context column.\n> Could you expand on that, or am I just missing something obvious?\n>\n\nI'm trying to explain why those other IO Contexts exist (bulkread,\nbulkwrite, vacuum) and why they are separate from shared buffers.\nShould I cut it altogether or preface it with something like: these are\ncounted separate from shared buffers because...?\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para\n> role=\"column_definition\">\n> + <structfield>extended</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Extends of relations done by this\n> <varname>backend_type</varname> in\n> + order to write data in this <varname>io_context</varname>.\n> + </para></entry>\n> + </row>\n>\n> I understand what this is, but not why this is something I might want\n> to know about.\n\nUnlike writes, backends largely have to do their own extends, so\nseparating this from writes lets us determine whether or not we need to\nchange checkpointer/bgwriter to be more aggressive using the writes\nwithout the distraction of the extends. Should I mention this in the\ndocs? 
The other stats views don't seem to editorialize at all, and I\nwasn't sure if this was an objective enough point to include in docs.\n\n>\n> And from your earlier e-mail:\n>\n> On Thu, Oct 6, 2022 at 10:42 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > Because we want to add non-block-oriented IO in the future (like\n> > temporary file IO) to this view and want to use the same \"read\",\n> > \"written\", \"extended\" columns, I would prefer not to prefix the columns\n> > with \"blks_\". I have added a column \"unit\" which would contain the unit\n> > in which read, written, and extended are in. Unfortunately, fsyncs are\n> > not per block, so \"unit\" doesn't really work for this. I documented\n> > this.\n> >\n> > The most correct thing to do to accommodate block-oriented and\n> > non-block-oriented IO would be to specify all the values in bytes.\n> > However, I would like this view to be usable visually (as opposed to\n> > just in scripts and by tools). The only current value of unit is\n> > \"block_size\" which could potentially be combined with the value of the\n> > GUC to get bytes.\n> >\n> > I've hard-coded the string \"block_size\" into the view generation\n> > function pg_stat_get_io(), so, if this idea makes sense, perhaps I\n> > should do something better there.\n>\n> That seems broadly reasonable, but pg_settings also has a 'unit'\n> field, and in that view, unit is '8kB' on my system--i.e., it\n> (presumably) reflects the block size. Is that something we should try\n> to be consistent with (not sure if that's a good idea, but thought it\n> was worth asking)?\n>\n\nI think this idea is a good option. I am wondering if it would be clear\nwhen mixed with non-block-oriented IO. Block-oriented IO would say 8kB\n(or whatever the build-time value of a block was) and non-block-oriented\nIO would say B or kB. 
The math would work out.\n\nLooking at pg_settings now though, I am confused about\nhow the units for wal_buffers is 8kB but then the value of wal_buffers\nwhen I show it in psql is \"16MB\"...\n\nThough the units for the pg_stat_io view for block-oriented IO would be\nthe build-time values for block size, so it wouldn't line up exactly\nwith pg_settings. However, I do like the idea of having a unit column\nthat reflects the value and not the name of the GUC/setting which\ndetermined the unit. I can update this in the next version.\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/15/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW\n\n\n",
"msg_date": "Thu, 13 Oct 2022 13:29:32 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Oct 13, 2022 at 10:29 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I think that it makes sense to count both the initial buffers added to\n> the ring and subsequent shared buffers added to the ring (either when\n> the current strategy buffer is pinned or in use or when a bulkread\n> rejects dirty strategy buffers in favor of new shared buffers) as\n> strategy clocksweeps because of how the statistic would be used.\n>\n> Clocksweeps give you an idea of how much of your working set is cached\n> (setting aside initially reading data into shared buffers when you are\n> warming up the db). You may use clocksweeps to determine if you need to\n> make shared buffers larger.\n>\n> Distinguishing strategy buffer clocksweeps from shared buffer\n> clocksweeps allows us to avoid enlarging shared buffers if most of the\n> clocksweeps are to bring in blocks for the strategy operation.\n>\n> However, I could see an argument that discounting strategy clocksweeps\n> done because the current strategy buffer is pinned makes the number of\n> shared buffer clocksweeps artificially low since those other queries\n> using the buffer would have suffered a cache miss were it not for the\n> strategy. And, in this case, you would take strategy clocksweeps\n> together with shared clocksweeps to make your decision. And if we\n> include buffers initially added to the strategy ring in the strategy\n> clocksweep statistic, this number may be off because those blocks may\n> not be needed in the main shared working set. But you won't know that\n> until you try to reuse the buffer and it is pinned. So, I think we don't\n> have a better option than counting initial buffers added to the ring as\n> strategy clocksweeps (as opposed to as reuses).\n>\n> So, in answer to your question, no, I cannot think of a scenario like\n> that.\n\nThat analysis makes sense to me; thanks.\n\n> It also made me remember that I am incorrectly counting rejected buffers\n> as reused. 
I'm not sure if it is a good idea to subtract from reuses\n> when a buffer is rejected. Waiting until after it is rejected to count\n> the reuse will take some other code changes. Perhaps we could also count\n> rejections in the stats?\n\nI'm not sure what makes sense here.\n\n> > Not critical, but is there a list of backend types we could\n> > cross-reference elsewhere in the docs?\n>\n> The most I could find was this longer explanation (with exhaustive list\n> of types) in pg_stat_activity docs [1]. I could duplicate what it says\n> or I could link to the view and say \"see pg_stat_activity\" for a\n> description of backend_type\" or something like that (to keep them from\n> getting out of sync as new backend_types are added. I suppose I could\n> also add docs on backend_types, but I'm not sure where something like\n> that would go.\n\nI think linking pg_stat_activity is reasonable for now. A separate\nsection for this might be nice at some point, but that seems out of\nscope.\n\n> > From the io_context column description:\n> >\n> > + The autovacuum daemon, explicit <command>VACUUM</command>,\n> > explicit\n> > + <command>ANALYZE</command>, many bulk reads, and many bulk\n> > writes use a\n> > + fixed amount of memory, acquiring the equivalent number of\n> > shared\n> > + buffers and reusing them circularly to avoid occupying an\n> > undue portion\n> > + of the main shared buffer pool.\n> > + </para></entry>\n> >\n> > I don't understand how this is relevant to the io_context column.\n> > Could you expand on that, or am I just missing something obvious?\n> >\n>\n> I'm trying to explain why those other IO Contexts exist (bulkread,\n> bulkwrite, vacuum) and why they are separate from shared buffers.\n> Should I cut it altogether or preface it with something like: these are\n> counted separate from shared buffers because...?\n\nOh I see. That makes sense; it just wasn't obvious to me this was\ntalking about the last three values of io_context. 
I think a brief\npreface like that would be helpful (maybe explicitly with \"these last\nthree values\", and I think \"counted separately\").\n\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para\n> > role=\"column_definition\">\n> > + <structfield>extended</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Extends of relations done by this\n> > <varname>backend_type</varname> in\n> > + order to write data in this <varname>io_context</varname>.\n> > + </para></entry>\n> > + </row>\n> >\n> > I understand what this is, but not why this is something I might want\n> > to know about.\n>\n> Unlike writes, backends largely have to do their own extends, so\n> separating this from writes lets us determine whether or not we need to\n> change checkpointer/bgwriter to be more aggressive using the writes\n> without the distraction of the extends. Should I mention this in the\n> docs? The other stats views don't seems to editorialize at all, and I\n> wasn't sure if this was an objective enough point to include in docs.\n\nThanks for the clarification. Just to make sure I understand, you mean\nthat if I see a high extended count, that may be interesting in terms\nof write activity, but I can't fix that by tuning--it's just the\nnature of my workload?\n\nI think you're right that this is not objective enough. It's\nunfortunate that there's not a good place in the docs for info like\nthat, since stats like this are hard to interpret without that\ncontext, but I admit that it's not really this patch's job to solve\nthat larger issue.\n\n> > That seems broadly reasonable, but pg_settings also has a 'unit'\n> > field, and in that view, unit is '8kB' on my system--i.e., it\n> > (presumably) reflects the block size. Is that something we should try\n> > to be consistent with (not sure if that's a good idea, but thought it\n> > was worth asking)?\n> >\n>\n> I think this idea is a good option. 
I am wondering if it would be clear\n> when mixed with non-block-oriented IO. Block-oriented IO would say 8kB\n> (or whatever the build-time value of a block was) and non-block-oriented\n> IO would say B or kB. The math would work out.\n\nRight, yeah. Although maybe that's a little confusing? When you\noriginally added \"unit\", you had said:\n\n>The most correct thing to do to accommodate block-oriented and\n>non-block-oriented IO would be to specify all the values in bytes.\n>However, I would like this view to be usable visually (as opposed to\n>just in scripts and by tools). The only current value of unit is\n>\"block_size\" which could potentially be combined with the value of the\n>GUC to get bytes.\n\nIs this still usable visually if you have to compare values across\nunits? I don't really have any great ideas here (and maybe this is\nstill the best option), just pointing it out.\n\n> Looking at pg_settings now though, I am confused about\n> how the units for wal_buffers is 8kB but then the value of wal_buffers\n> when I show it in psql is \"16MB\"...\n\nYou mean the difference between\n\nmaciek=# select setting, unit from pg_settings where name = 'wal_buffers';\n setting | unit\n---------+------\n 512 | 8kB\n(1 row)\n\nand\n\nmaciek=# show wal_buffers;\n wal_buffers\n-------------\n 4MB\n(1 row)\n\n?\n\nPoking around, I think it looks like that's due to\nconvert_int_from_base_unit (indirectly called from SHOW /\ncurrent_setting):\n\n/*\n * Convert an integer value in some base unit to a human-friendly\nunit.\n *\n * The output unit is chosen so that it's the greatest unit that can\nrepresent\n * the value without loss. For example, if the base unit is\nGUC_UNIT_KB, 1024\n * is converted to 1 MB, but 1025 is represented as 1025 kB.\n */\n\n> Though the units for the pg_stat_io view for block-oriented IO would be\n> the build-time values for block size, so it wouldn't line up exactly\n> with pg_settings.\n\nI don't follow--what would be the discrepancy?\n\n\n",
"msg_date": "Sun, 16 Oct 2022 22:28:34 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v34 is attached.\nI think the column names need discussion. Also, the docs need more work\n(I added a lot of new content there). I could use feedback on the column\nnames and definitions and review/rephrasing ideas for the docs\nadditions.\n\nOn Mon, Oct 17, 2022 at 1:28 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Thu, Oct 13, 2022 at 10:29 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I think that it makes sense to count both the initial buffers added to\n> > the ring and subsequent shared buffers added to the ring (either when\n> > the current strategy buffer is pinned or in use or when a bulkread\n> > rejects dirty strategy buffers in favor of new shared buffers) as\n> > strategy clocksweeps because of how the statistic would be used.\n> >\n> > Clocksweeps give you an idea of how much of your working set is cached\n> > (setting aside initially reading data into shared buffers when you are\n> > warming up the db). You may use clocksweeps to determine if you need to\n> > make shared buffers larger.\n> >\n> > Distinguishing strategy buffer clocksweeps from shared buffer\n> > clocksweeps allows us to avoid enlarging shared buffers if most of the\n> > clocksweeps are to bring in blocks for the strategy operation.\n> >\n> > However, I could see an argument that discounting strategy clocksweeps\n> > done because the current strategy buffer is pinned makes the number of\n> > shared buffer clocksweeps artificially low since those other queries\n> > using the buffer would have suffered a cache miss were it not for the\n> > strategy. And, in this case, you would take strategy clocksweeps\n> > together with shared clocksweeps to make your decision. And if we\n> > include buffers initially added to the strategy ring in the strategy\n> > clocksweep statistic, this number may be off because those blocks may\n> > not be needed in the main shared working set. 
But you won't know that\n> > until you try to reuse the buffer and it is pinned. So, I think we don't\n> > have a better option than counting initial buffers added to the ring as\n> > strategy clocksweeps (as opposed to as reuses).\n> >\n> > So, in answer to your question, no, I cannot think of a scenario like\n> > that.\n>\n> That analysis makes sense to me; thanks.\n\nI have made some major changes in this area to make the columns more\nuseful. I have renamed and split \"clocksweeps\". It is now \"evicted\" and\n\"freelist acquired\". This makes it clear when a block must be evicted\nfrom a shared buffer must be and may help to identify misconfiguration\nof shared buffers.\n\nThere is some nuance here that I tried to make clear in the docs.\n\"freelist acquired\" in a shared context is straightforward.\n\"freelist acquired\" in a strategy context is counted when a shared\nbuffer is added to the strategy ring (not when it is reused).\n\n\"freelist acquired\" in the local buffer context is actually the initial\nallocation of a local buffer (in contrast with reuse).\n\n\"evicted\" in the shared IOContext is a block being evicted from a shared\nbuffer in order to reuse that buffer when not using a strategy.\n\n\"evicted\" in a strategy IOContext is a block being evicted from\na shared buffer in order to add that shared buffer to the strategy ring.\n\nThis is in contrast with \"reused\" in a strategy IOContext which is when\nan existing buffer in the strategy ring has a block evicted in order to\nreuse that buffer in a strategy context.\n\n\"evicted\" in a local IOContext is when an existing local buffer has a\nblock evicted in order to reuse that local buffer.\n\n\"freelist_acquired\" is confusing for local buffers but I wanted to\ndistinguish between reuse/eviction of local buffers and initial\nallocation. 
\"freelist_acquired\" seemed more fitting because there is a\nclocksweep to find a local buffer and if it hasn't been allocated yet it\nis allocated in a place similar to where shared buffers acquire a buffer\nfrom the freelist. If I didn't count it here, I would need to make a new\ncolumn only for local buffers called \"allocated\" or something like that.\n\nI chose not to call \"evicted\" \"sb_evicted\"\nbecause then we would need a separate \"local_evicted\". I could instead\nmake \"local_evicted\", \"sb_evicted\", and rename \"reused\" to\n\"strat_evicted\". If I did that we would end up with separate columns for\nevery IO Context describing behavior when a buffer is initially acquired\nvs when it is reused.\n\nIt would look something like this:\n\nshared buffers:\n initial: freelist_acquired\n reused: sb_evicted\n\nlocal buffers:\n initial: allocated\n reused: local_evicted\n\nstrategy buffers:\n initial: sb_evicted | freelist_acquired\n reused: strat_evicted\n replaced: sb_evicted | freelist_acquired\n\nThis seems not too bad at first, but if you consider that later we will\nadd other kinds of IO -- eg WAL IO or temporary file IO, we won't be\nable to use these existing columns and will need to add even more\ncolumns describing the exact behavior in those cases.\n\nI wanted to devise a paradigm which allowed for reuse of columns across\nIOContexts even if with slightly different meanings.\n\nI have also added the columns \"repossessed\" and \"rejected\". \"rejected\"\nis when a bulkread rejects a strategy buffer because it is dirty and\nrequires flush. Seeing a lot of rejections could indicate you need to\nvacuum. \"repossessed\" is the number of times a strategy buffer was\npinned or in use by another backend and had to be removed from the\nstrategy ring and replaced with a new shared buffer. 
This gives you some\nindication that there is contention on blocks recently used by a\nstrategy.\n\nI've also added some descriptions to the docs of how these columns might\nbe used or what a large value in one of them may mean.\n\nI haven't added tests for repossessed or rejected yet. I can add tests\nfor repossessed if we decide to keep it. Rejected is hard to write a\ntest for because we can't guarantee checkpointer won't clean up the\nbuffer before we can reject it.\n\n>\n> > It also made me remember that I am incorrectly counting rejected buffers\n> > as reused. I'm not sure if it is a good idea to subtract from reuses\n> > when a buffer is rejected. Waiting until after it is rejected to count\n> > the reuse will take some other code changes. Perhaps we could also count\n> > rejections in the stats?\n>\n> I'm not sure what makes sense here.\n\nI have fixed the counting of rejected and have made a new column\ndedicated to rejected.\n\n>\n> > > From the io_context column description:\n> > >\n> > > + The autovacuum daemon, explicit <command>VACUUM</command>,\n> > > explicit\n> > > + <command>ANALYZE</command>, many bulk reads, and many bulk\n> > > writes use a\n> > > + fixed amount of memory, acquiring the equivalent number of\n> > > shared\n> > > + buffers and reusing them circularly to avoid occupying an\n> > > undue portion\n> > > + of the main shared buffer pool.\n> > > + </para></entry>\n> > >\n> > > I don't understand how this is relevant to the io_context column.\n> > > Could you expand on that, or am I just missing something obvious?\n> > >\n> >\n> > I'm trying to explain why those other IO Contexts exist (bulkread,\n> > bulkwrite, vacuum) and why they are separate from shared buffers.\n> > Should I cut it altogether or preface it with something like: these are\n> > counted separate from shared buffers because...?\n>\n> Oh I see. That makes sense; it just wasn't obvious to me this was\n> talking about the last three values of io_context. 
I think a brief\n> preface like that would be helpful (maybe explicitly with \"these last\n> three values\", and I think \"counted separately\").\n\nI've done this. Thanks for the suggested wording.\n\n>\n> > > + <row>\n> > > + <entry role=\"catalog_table_entry\"><para\n> > > role=\"column_definition\">\n> > > + <structfield>extended</structfield> <type>bigint</type>\n> > > + </para>\n> > > + <para>\n> > > + Extends of relations done by this\n> > > <varname>backend_type</varname> in\n> > > + order to write data in this <varname>io_context</varname>.\n> > > + </para></entry>\n> > > + </row>\n> > >\n> > > I understand what this is, but not why this is something I might want\n> > > to know about.\n> >\n> > Unlike writes, backends largely have to do their own extends, so\n> > separating this from writes lets us determine whether or not we need to\n> > change checkpointer/bgwriter to be more aggressive using the writes\n> > without the distraction of the extends. Should I mention this in the\n> > docs? The other stats views don't seems to editorialize at all, and I\n> > wasn't sure if this was an objective enough point to include in docs.\n>\n> Thanks for the clarification. Just to make sure I understand, you mean\n> that if I see a high extended count, that may be interesting in terms\n> of write activity, but I can't fix that by tuning--it's just the\n> nature of my workload?\n\nThat is correct.\n\n>\n> > > That seems broadly reasonable, but pg_settings also has a 'unit'\n> > > field, and in that view, unit is '8kB' on my system--i.e., it\n> > > (presumably) reflects the block size. Is that something we should try\n> > > to be consistent with (not sure if that's a good idea, but thought it\n> > > was worth asking)?\n> > >\n> >\n> > I think this idea is a good option. I am wondering if it would be clear\n> > when mixed with non-block-oriented IO. 
Block-oriented IO would say 8kB\n> > (or whatever the build-time value of a block was) and non-block-oriented\n> > IO would say B or kB. The math would work out.\n>\n> Right, yeah. Although maybe that's a little confusing? When you\n> originally added \"unit\", you had said:\n>\n> >The most correct thing to do to accommodate block-oriented and\n> >non-block-oriented IO would be to specify all the values in bytes.\n> >However, I would like this view to be usable visually (as opposed to\n> >just in scripts and by tools). The only current value of unit is\n> >\"block_size\" which could potentially be combined with the value of the\n> >GUC to get bytes.\n>\n> Is this still usable visually if you have to compare values across\n> units? I don't really have any great ideas here (and maybe this is\n> still the best option), just pointing it out.\n>\n> > Looking at pg_settings now though, I am confused about\n> > how the units for wal_buffers is 8kB but then the value of wal_buffers\n> > when I show it in psql is \"16MB\"...\n>\n> You mean the difference between\n>\n> maciek=# select setting, unit from pg_settings where name = 'wal_buffers';\n> setting | unit\n> ---------+------\n> 512 | 8kB\n> (1 row)\n>\n> and\n>\n> maciek=# show wal_buffers;\n> wal_buffers\n> -------------\n> 4MB\n> (1 row)\n>\n> ?\n>\n> Poking around, I think it looks like that's due to\n> convert_int_from_base_unit (indirectly called from SHOW /\n> current_setting):\n>\n> /*\n> * Convert an integer value in some base unit to a human-friendly\n> unit.\n> *\n> * The output unit is chosen so that it's the greatest unit that can\n> represent\n> * the value without loss. For example, if the base unit is\n> GUC_UNIT_KB, 1024\n> * is converted to 1 MB, but 1025 is represented as 1025 kB.\n> */\n\nI've implemented a change using the same function pg_settings uses to\nturn the build-time parameter BLCKSZ into 8kB (get_config_unit_name())\nusing the flag GUC_UNIT_BLOCKS. 
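(As a quick sanity check of that arithmetic, using the numbers quoted\nabove and assuming the default 8kB block size: the stored setting is in\nunits of blocks, so\n\n    select setting::bigint * 8192 as bytes\n    from pg_settings where name = 'wal_buffers';\n\ngives 512 * 8192 = 4194304 bytes, which convert_int_from_base_unit then\nrenders as the 4MB that SHOW reports.)\n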
I am unsure if this is better or worse\nthan \"block_size\". I am feeling very conflicted about this column.\n\n>\n> > Though the units for the pg_stat_io view for block-oriented IO would be\n> > the build-time values for block size, so it wouldn't line up exactly\n> > with pg_settings.\n>\n> I don't follow--what would be the discrepancy?\n\nI got confused.\nYou are right -- pg_settings does seem to use the build-time value of\nBLCKSZ to derive this. I was confused because the description of\npg_settings says:\n\n\"The view pg_settings provides access to run-time parameters of the server.\"\n\n- Melanie",
"msg_date": "Wed, 19 Oct 2022 15:26:51 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\n- we shouldn't do pgstat_count_io_op() while the buffer header lock is held,\n if possible.\n\n I wonder if we should add a \"source\" output argument to\n StrategyGetBuffer(). Then nearly all the counting can happen in\n BufferAlloc().\n\n- \"repossession\" is a very unintuitive name for me. If we want something like\n it, can't we just name it reuse_failed or such?\n\n- Wonder if the column names should be reads, writes, extends, etc instead of\n the current naming pattern\n\n- Is it actually correct to count evictions in StrategyGetBuffer()? What if we\n then decide to not use that buffer in BufferAlloc()? Yes, that'll be counted\n via rejected, but that still leaves the eviction count to be \"misleading\"?\n\n\nOn 2022-10-19 15:26:51 -0400, Melanie Plageman wrote:\n> I have made some major changes in this area to make the columns more\n> useful. I have renamed and split \"clocksweeps\". It is now \"evicted\" and\n> \"freelist acquired\". This makes it clear when a block must be evicted\n> from a shared buffer must be and may help to identify misconfiguration\n> of shared buffers.\n\nI'm not sure freelist acquired is really that useful? If we don't add it, we\nshould however definitely not count buffers from the freelist as evictions.\n\n\n> There is some nuance here that I tried to make clear in the docs.\n> \"freelist acquired\" in a shared context is straightforward.\n> \"freelist acquired\" in a strategy context is counted when a shared\n> buffer is added to the strategy ring (not when it is reused).\n\nNot sure what the second half here means - why would a buffer that's not from\nthe freelist ever be counted as being from the freelist?\n\n\n> \"freelist_acquired\" is confusing for local buffers but I wanted to\n> distinguish between reuse/eviction of local buffers and initial\n> allocation. 
\"freelist_acquired\" seemed more fitting because there is a\n> clocksweep to find a local buffer and if it hasn't been allocated yet it\n> is allocated in a place similar to where shared buffers acquire a buffer\n> from the freelist. If I didn't count it here, I would need to make a new\n> column only for local buffers called \"allocated\" or something like that.\n\nI think you're making this too granular. We need to have more detail than\ntoday. But we don't necessarily need to catch every nuance.\n\n\n> I chose not to call \"evicted\" \"sb_evicted\"\n> because then we would need a separate \"local_evicted\". I could instead\n> make \"local_evicted\", \"sb_evicted\", and rename \"reused\" to\n> \"strat_evicted\". If I did that we would end up with separate columns for\n> every IO Context describing behavior when a buffer is initially acquired\n> vs when it is reused.\n>\n> It would look something like this:\n>\n> shared buffers:\n> initial: freelist_acquired\n> reused: sb_evicted\n>\n> local buffers:\n> initial: allocated\n> reused: local_evicted\n>\n> strategy buffers:\n> initial: sb_evicted | freelist_acquired\n> reused: strat_evicted\n> replaced: sb_evicted | freelist_acquired\n>\n> This seems not too bad at first, but if you consider that later we will\n> add other kinds of IO -- eg WAL IO or temporary file IO, we won't be\n> able to use these existing columns and will need to add even more\n> columns describing the exact behavior in those cases.\n\nI think it's clearly not the right direction.\n\n\n\n> I have also added the columns \"repossessed\" and \"rejected\". \"rejected\"\n> is when a bulkread rejects a strategy buffer because it is dirty and\n> requires flush. Seeing a lot of rejections could indicate you need to\n> vacuum. \"repossessed\" is the number of times a strategy buffer was\n> pinned or in use by another backend and had to be removed from the\n> strategy ring and replaced with a new shared buffer. 
This gives you some\n> indication that there is contention on blocks recently used by a\n> strategy.\n\nI don't immediately see a real use case for repossessed. Why isn't it\nsufficient to count it as part of rejected?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 20 Oct 2022 10:31:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 12:27 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> v34 is attached.\n> I think the column names need discussion. Also, the docs need more work\n> (I added a lot of new content there). I could use feedback on the column\n> names and definitions and review/rephrasing ideas for the docs\n> additions.\n\nNice! I think the expanded docs are great, and make this information\nmuch easier to interpret.\n\n>+ <varname>io_context</varname> <literal>bulkread</literal>, existing\n>+ dirty buffers in the ring requirng flush are\n\n\"requiring\"\n\n>+ shared buffers were acquired from the freelist and added to the\n>+ fixed-size strategy ring buffer. Shared buffers are added to the\n>+ strategy ring lazily. If the current buffer in the ring is pinned or in\n\nThis is the first mention of the term \"strategy\" in these docs. It's\nnot totally opaque, since there's some context, but maybe we should\neither try to avoid that term or define it more explicitly?\n\n>+ <varname>io_context</varname>s. This is equivalent to\n>+ <varname>evicted</varname> for shared buffers in\n>+ <varname>io_context</varname> <literal>shared</literal>, as the contents\n>+ of the buffer are <quote>evicted</quote> but refers to the case when the\n\nI don't quite follow this: does this mean that I should expect\n'reused' and 'evicted' to be equal in the 'shared' context, because\nthey represent the same thing? Or will 'reused' just be null because\nit's not distinct from 'evicted'? It looks like it's null right now,\nbut I find the wording here confusing.\n\n>+ future with a new shared buffer. A high number of\n>+ <literal>bulkread</literal> rejections can indicate a need for more\n>+ frequent vacuuming or more aggressive autovacuum settings, as buffers are\n>+ dirtied during a bulkread operation when updating the hint bit or when\n>+ performing on-access pruning.\n\nThis is great. 
Just wanted to re-iterate that notes like this are\nreally helpful to understanding this view.\n\n> I've implemented a change using the same function pg_settings uses to\n> turn the build-time parameter BLCKSZ into 8kB (get_config_unit_name())\n> using the flag GUC_UNIT_BLOCKS. I am unsure if this is better or worse\n> than \"block_size\". I am feeling very conflicted about this column.\n\nYeah, I guess it feels less natural here than in pg_settings, but it\nstill kind of feels like one way of doing this is better than two...\n\n\n",
"msg_date": "Sun, 23 Oct 2022 15:35:38 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 10:31 AM Andres Freund <andres@anarazel.de> wrote:\n> - \"repossession\" is a very unintuitive name for me. If we want something like\n> it, can't we just name it reuse_failed or such?\n\n+1, I think \"repossessed\" is awkward. I think \"reuse_failed\" works,\nbut no strong opinions on an alternate name.\n\n> - Wonder if the column names should be reads, writes, extends, etc instead of\n> the current naming pattern\n\nWhy? Lukas suggested alignment with existing views like\npg_stat_database and pg_stat_statements. It doesn't make sense to use\nthe blks_ prefix since it's not all blocks, but otherwise it seems\nlike we should be consistent, no?\n\n> > \"freelist_acquired\" is confusing for local buffers but I wanted to\n> > distinguish between reuse/eviction of local buffers and initial\n> > allocation. \"freelist_acquired\" seemed more fitting because there is a\n> > clocksweep to find a local buffer and if it hasn't been allocated yet it\n> > is allocated in a place similar to where shared buffers acquire a buffer\n> > from the freelist. If I didn't count it here, I would need to make a new\n> > column only for local buffers called \"allocated\" or something like that.\n>\n> I think you're making this too granular. We need to have more detail than\n> today. But we don't necessarily need to catch every nuance.\n\nIn general I agree that coarser granularity here may be easier to use.\nI do think the current docs explain what's going on pretty well,\nthough, and I worry if merging too many concepts will make that harder\nto follow. 
But if a less detailed breakdown still communicates\npotential problems, +1.\n\n> > This seems not too bad at first, but if you consider that later we will\n> > add other kinds of IO -- eg WAL IO or temporary file IO, we won't be\n> > able to use these existing columns and will need to add even more\n> > columns describing the exact behavior in those cases.\n>\n> I think it's clearly not the right direction.\n\n+1, I think the existing approach makes more sense.\n\n\n",
"msg_date": "Sun, 23 Oct 2022 15:48:09 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 1:31 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> - we shouldn't do pgstat_count_io_op() while the buffer header lock is held,\n> if possible.\n\nI've changed this locally. It will be fixed in the next version I share.\n\n>\n> I wonder if we should add a \"source\" output argument to\n> StrategyGetBuffer(). Then nearly all the counting can happen in\n> BufferAlloc().\n\nI think we can just check for BM_VALID being set before invalidating it\nin order to claim the buffer at the end of BufferAlloc(). Then we can\ncount it as an eviction or reuse.\n\n>\n> - \"repossession\" is a very unintuitive name for me. If we want something like\n> it, can't we just name it reuse_failed or such?\n\nRepossession could be called eviction_failed or reuse_failed.\nDo we think we will ever want to use it to count buffers we released\nin other IOContexts (thus making the name eviction_failed better than\nreuse_failed)?\n\n> - Is it actually correct to count evictions in StrategyGetBuffer()? What if we\n> then decide to not use that buffer in BufferAlloc()? Yes, that'll be counted\n> via rejected, but that still leaves the eviction count to be \"misleading\"?\n\nI agree that counting evictions in StrategyGetBuffer() is incorrect.\nChecking BM_VALID at bottom of BufferAlloc() should be better.\n\n> On 2022-10-19 15:26:51 -0400, Melanie Plageman wrote:\n> > I have made some major changes in this area to make the columns more\n> > useful. I have renamed and split \"clocksweeps\". It is now \"evicted\" and\n> > \"freelist acquired\". This makes it clear when a block must be evicted\n> > from a shared buffer must be and may help to identify misconfiguration\n> > of shared buffers.\n>\n> I'm not sure freelist acquired is really that useful? 
If we don't add it, we\n> should however definitely not count buffers from the freelist as evictions.\n>\n>\n> > There is some nuance here that I tried to make clear in the docs.\n> > \"freelist acquired\" in a shared context is straightforward.\n> > \"freelist acquired\" in a strategy context is counted when a shared\n> > buffer is added to the strategy ring (not when it is reused).\n>\n> Not sure what the second half here means - why would a buffer that's not from\n> the freelist ever be counted as being from the freelist?\n>\n>\n> > \"freelist_acquired\" is confusing for local buffers but I wanted to\n> > distinguish between reuse/eviction of local buffers and initial\n> > allocation. \"freelist_acquired\" seemed more fitting because there is a\n> > clocksweep to find a local buffer and if it hasn't been allocated yet it\n> > is allocated in a place similar to where shared buffers acquire a buffer\n> > from the freelist. If I didn't count it here, I would need to make a new\n> > column only for local buffers called \"allocated\" or something like that.\n>\n> I think you're making this too granular. We need to have more detail than\n> today. But we don't necessarily need to catch every nuance.\n>\n\nI am fine with cutting freelist_acquired. The same actionable\ninformation that it could provide could be provided by \"read\", right?\nAlso, removing it means I can remove the complicated explanation of how\nfreelist_acquired should be interpreted in IOCONTEXT_LOCAL.\n\nSpeaking of IOCONTEXT_LOCAL, I was wondering if it is confusing to call\nit IOCONTEXT_LOCAL since it refers to IO done for temporary tables. What\nif, in the future, we want to track other IO done using data in local\nmemory? Also, what if we want to track other IO done using data from\nshared memory that is not in shared buffers? Would IOCONTEXT_SB and\nIOCONTEXT_TEMP be better? 
Should IOContext literally describe the\ncontext of the IO being done and there be a separate column which\nindicates the source of the data for the IO?\nLike wal_buffer, local_buffer, shared_buffer? Then if it is not\nblock-oriented, it could be shared_mem, local_mem, or bypass?\n\nIf we had another dimension to the matrix \"data_src\" which, with\nblock-oriented IO is equivalent to \"buffer type\", this could help with\nsome of the clarity problems.\n\nWe could remove the \"reused\" column and that becomes:\n\nIOCONTEXT | DATA_SRC | IOOP\n----------------------------------------\nstrategy | strategy_buffer | EVICT\n\nHaving data_src and iocontext simplifies the meaning of all io\noperations involving a strategy. Some operations are done on shared\nbuffers and some on existing strategy buffers and this would be more\nclear without the addition of special columns for strategies.\n\n\n> > I have also added the columns \"repossessed\" and \"rejected\". \"rejected\"\n> > is when a bulkread rejects a strategy buffer because it is dirty and\n> > requires flush. Seeing a lot of rejections could indicate you need to\n> > vacuum. \"repossessed\" is the number of times a strategy buffer was\n> > pinned or in use by another backend and had to be removed from the\n> > strategy ring and replaced with a new shared buffer. This gives you some\n> > indication that there is contention on blocks recently used by a\n> > strategy.\n>\n> I don't immediately see a real use case for repossessed. Why isn't it\n> sufficient to count it as part of rejected?\n\nI'm still on the fence about combining rejection and reuse_failed. A\nbuffer rejected by a bulkread for being dirty may indicate the need to\nvacuum but doesn't say anything about contention.\nWhereas, failed reuses indicate contention for the blocks operated on by\nthe strategy. You would react to them differently. 
And you could have a\nbulkread racking up both failed reuses and rejections.\n\nIf this seems like an unlikely or niche case, I would be okay with\ncombining rejections with reuse_failed. But it would be nice if we could\nhelp with interpreting the column. I wonder if there is a rule of thumb\nfor determining which scenario you have. For example, how likely is it\nthat if you see a high number of reuse_rejected in a bulkread IOContext\nthat you would see any reused if the rejections are due to the bulkread\ndirtying its own buffers? I suppose it would depend on your workload and\nhow random your updates/deletes were? If there is some way to use\nreuse_rejected in combination with another column to determine the cause\nof the rejections, it would be easier to combine them.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 24 Oct 2022 14:38:52 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Sun, Oct 23, 2022 at 6:35 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 12:27 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > v34 is attached.\n> > I think the column names need discussion. Also, the docs need more work\n> > (I added a lot of new content there). I could use feedback on the column\n> > names and definitions and review/rephrasing ideas for the docs\n> > additions.\n>\n> Nice! I think the expanded docs are great, and make this information\n> much easier to interpret.\n>\n> >+ <varname>io_context</varname> <literal>bulkread</literal>, existing\n> >+ dirty buffers in the ring requirng flush are\n>\n> \"requiring\"\n\nThanks!\n\n>\n> >+ shared buffers were acquired from the freelist and added to the\n> >+ fixed-size strategy ring buffer. Shared buffers are added to the\n> >+ strategy ring lazily. If the current buffer in the ring is pinned or in\n>\n> This is the first mention of the term \"strategy\" in these docs. It's\n> not totally opaque, since there's some context, but maybe we should\n> either try to avoid that term or define it more explicitly?\n>\n\nI am thinking it might be good to define the term strategy for use in\nthis view documentation.\nIn the IOContext column documentation, I've added this\n...\n avoid occupying an undue portion of the main shared buffer pool. This\n pattern is called a Buffer Access Strategy and the fixed-size ring\n buffer can be referred to as a <quote>strategy ring buffer</quote>.\n</para></entry>\n\nI was thinking this would allow me to refer to the strategy ring buffer\nmore easily. I fear simply referring to \"the\" ring buffer throughout\nthis view documentation will be confusing.\n\n> >+ <varname>io_context</varname>s. 
This is equivalent to\n> >+ <varname>evicted</varname> for shared buffers in\n> >+ <varname>io_context</varname> <literal>shared</literal>, as the contents\n> >+ of the buffer are <quote>evicted</quote> but refers to the case when the\n>\n> I don't quite follow this: does this mean that I should expect\n> 'reused' and 'evicted' to be equal in the 'shared' context, because\n> they represent the same thing? Or will 'reused' just be null because\n> it's not distinct from 'evicted'? It looks like it's null right now,\n> but I find the wording here confusing.\n\nYou should only see evictions when the strategy evicts shared buffers\nand reuses when the strategy evicts existing strategy buffers.\n\nHow about this instead in the docs?\n\nThe number of times an existing buffer in the strategy ring was reused\nas part of an operation in the <literal>bulkread</literal>,\n<literal>bulkwrite</literal>, or <literal>vacuum</literal>\n<varname>io_context</varname>s. When a buffer access strategy\n<quote>reuses</quote> a buffer in the strategy ring, it must evict its\ncontents, incrementing <varname>reused</varname>. When a buffer access\nstrategy adds a new shared buffer to the strategy ring and this shared\nbuffer is occupied, the buffer access strategy must evict the contents\nof the shared buffer, incrementing <varname>evicted</varname>.\n\n\n> > I've implemented a change using the same function pg_settings uses to\n> > turn the build-time parameter BLCKSZ into 8kB (get_config_unit_name())\n> > using the flag GUC_UNIT_BLOCKS. I am unsure if this is better or worse\n> > than \"block_size\". I am feeling very conflicted about this column.\n>\n> Yeah, I guess it feels less natural here than in pg_settings, but it\n> still kind of feels like one way of doing this is better than two...\n\nSo, Andres pointed out that it would be nice to be able to multiply the\nunit column by the operation column (e.g. select unit * reused from\npg_stat_io...) and get a number of bytes. 
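Something like this sketch (using the still-provisional column names):\n\n    select backend_type, io_context, reused * unit as reused_bytes\n    from pg_stat_io;\n\n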
Then you can use\npg_size_pretty to convert it to something more human readable.\n\nIt probably shouldn't be called unit, then, since that would be the same\nname as pg_settings but a different meaning. I thought of\n\"bytes_conversion\". Then, non-block-oriented IO also wouldn't have to be\nin bytes. They could put 1000 or 10000 for bytes_conversion.\n\nWhat do you think?\n\n- Melanie\n\n\n",
"msg_date": "Mon, 24 Oct 2022 15:39:01 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v35 is attached\n\nOn Mon, Oct 24, 2022 at 2:38 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Oct 20, 2022 at 1:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > I wonder if we should add a \"source\" output argument to\n> > StrategyGetBuffer(). Then nearly all the counting can happen in\n> > BufferAlloc().\n>\n> I think we can just check for BM_VALID being set before invalidating it\n> in order to claim the buffer at the end of BufferAlloc(). Then we can\n> count it as an eviction or reuse.\n\nDone this in attached version\n\n>\n> > On 2022-10-19 15:26:51 -0400, Melanie Plageman wrote:\n> > > I have made some major changes in this area to make the columns more\n> > > useful. I have renamed and split \"clocksweeps\". It is now \"evicted\" and\n> > > \"freelist acquired\". This makes it clear when a block must be evicted\n> > > from a shared buffer must be and may help to identify misconfiguration\n> > > of shared buffers.\n> >\n> > I'm not sure freelist acquired is really that useful? If we don't add it, we\n> > should however definitely not count buffers from the freelist as evictions.\n> >\n> >\n> > > There is some nuance here that I tried to make clear in the docs.\n> > > \"freelist acquired\" in a shared context is straightforward.\n> > > \"freelist acquired\" in a strategy context is counted when a shared\n> > > buffer is added to the strategy ring (not when it is reused).\n> >\n> > Not sure what the second half here means - why would a buffer that's not from\n> > the freelist ever be counted as being from the freelist?\n> >\n> >\n> > > \"freelist_acquired\" is confusing for local buffers but I wanted to\n> > > distinguish between reuse/eviction of local buffers and initial\n> > > allocation. 
\"freelist_acquired\" seemed more fitting because there is a\n> > > clocksweep to find a local buffer and if it hasn't been allocated yet it\n> > > is allocated in a place similar to where shared buffers acquire a buffer\n> > > from the freelist. If I didn't count it here, I would need to make a new\n> > > column only for local buffers called \"allocated\" or something like that.\n> >\n> > I think you're making this too granular. We need to have more detail than\n> > today. But we don't necessarily need to catch every nuance.\n\nI cut freelist_acquired in attached version.\n\n> I am fine with cutting freelist_acquired. The same actionable\n> information that it could provide could be provided by \"read\", right?\n> Also, removing it means I can remove the complicated explanation of how\n> freelist_acquired should be interpreted in IOCONTEXT_LOCAL.\n>\n> Speaking of IOCONTEXT_LOCAL, I was wondering if it is confusing to call\n> it IOCONTEXT_LOCAL since it refers to IO done for temporary tables. What\n> if, in the future, we want to track other IO done using data in local\n> memory? Also, what if we want to track other IO done using data from\n> shared memory that is not in shared buffers? Would IOCONTEXT_SB and\n> IOCONTEXT_TEMP be better? Should IOContext literally describe the\n> context of the IO being done and there be a separate column which\n> indicates the source of the data for the IO?\n> Like wal_buffer, local_buffer, shared_buffer? 
Then if it is not\n> block-oriented, it could be shared_mem, local_mem, or bypass?\n\npg_stat_statements uses local_blks_read and temp_blks_read for local\nbuffers for temp tables and temp file IO respectively -- so perhaps we\nshould stick to that\n\nOther updates in this version:\n\nI've also updated the unit column to bytes_conversion.\n\nI've made quite a few updates to the docs including more information\non overlaps between pg_stat_database, pg_statio_*, and\npg_stat_statements.\n\nLet me know if there are other configuration tip resources from the\nexisting docs that I could link in the column \"files_synced\".\n\nI still need to look at the docs with fresh eyes and do another round of\ncleanup (probably).\n\n- Melanie",
"msg_date": "Tue, 25 Oct 2022 23:15:06 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Okay, so I realized v35 had an issue where I wasn't counting strategy\nevictions correctly. Fixed in attached v36. This made me wonder if there\nis actually a way to add a test for evictions (in strategy and shared\ncontexts) that is not flaky.\n\nOn Sun, Oct 23, 2022 at 6:48 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Thu, Oct 20, 2022 at 10:31 AM Andres Freund <andres@anarazel.de> wrote:\n> > - \"repossession\" is a very unintuitive name for me. If we want something like\n> > it, can't we just name it reuse_failed or such?\n>\n> +1, I think \"repossessed\" is awkward. I think \"reuse_failed\" works,\n> but no strong opinions on an alternate name.\n\nAlso, re: repossessed, I can change it to reuse_failed but I do think it\nis important to give users a way to distinguish between bulkread\nrejections of dirty buffers and strategies failing to reuse buffers due\nto concurrent pinning (since the reaction to these two scenarios would\nlikely be different).\n\nIf we added another column called something like \"claim_failed\" which\ncounts buffers which we failed to reuse because of concurrent pinning or\nusage, we could recommend use of this column together with\n\"reuse_failed\" to determine the cause of the failed reuses for a\nbulkread. We could also use \"claim_failed\" in IOContext shared to\nprovide information on shared buffer contention.\n\n- Melanie",
"msg_date": "Wed, 26 Oct 2022 13:54:44 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-24 14:38:52 -0400, Melanie Plageman wrote:\n> > - \"repossession\" is a very unintuitive name for me. If we want something like\n> > it, can't we just name it reuse_failed or such?\n>\n> Repossession could be called eviction_failed or reuse_failed.\n> Do we think we will ever want to use it to count buffers we released\n> in other IOContexts (thus making the name eviction_failed better than\n> reuse_failed)?\n\nI've a somewhat radical proposal: Let's just not count any of this in the\ninitial version. I think we want something, but clearly it's one of the harder\naspects of this patch. Let's get the rest in, and then work on this in\nisolation.\n\n\n> Speaking of IOCONTEXT_LOCAL, I was wondering if it is confusing to call\n> it IOCONTEXT_LOCAL since it refers to IO done for temporary tables. What\n> if, in the future, we want to track other IO done using data in local\n> memory?\n\nFair point. However, I think 'tmp' or 'temp' would be worse, because there's\nother sources of temporary files that would be worth counting, consider\ne.g. tuplestore temporary files. 'temptable' isn't good because it's not just\ntables. 'temprel'? On balance I think local is better, but not sure.\n\n\n> Also, what if we want to track other IO done using data from shared memory\n> that is not in shared buffers? Would IOCONTEXT_SB and IOCONTEXT_TEMP be\n> better? Should IOContext literally describe the context of the IO being done\n> and there be a separate column which indicates the source of the data for\n> the IO? Like wal_buffer, local_buffer, shared_buffer? Then if it is not\n> block-oriented, it could be shared_mem, local_mem, or bypass?\n\nHm. 
I don't think we'd need _buffer for WAL or such, because there's nothing\nelse.\n\n\n> If we had another dimension to the matrix \"data_src\" which, with\n> block-oriented IO is equivalent to \"buffer type\", this could help with\n> some of the clarity problems.\n>\n> We could remove the \"reused\" column and that becomes:\n>\n> IOCONTEXT | DATA_SRC | IOOP\n> ----------------------------------------\n> strategy | strategy_buffer | EVICT\n\n> Having data_src and iocontext simplifies the meaning of all io\n> operations involving a strategy. Some operations are done on shared\n> buffers and some on existing strategy buffers and this would be more\n> clear without the addition of special columns for strategies.\n\n-1, I think this just blows up the complexity further, without providing much\nbenefit. But:\n\nPerhaps a somewhat similar idea could be used to address the concerns in the\npreceding paragraphs. How about the following set of columns:\n\nbackend_type:\nobject: relation, temp_relation[, WAL, tempfiles, ...]\niocontext: buffer_pool, bulkread, bulkwrite, vacuum[, bypass]\nread:\nwritten:\nextended:\nbytes_conversion:\nevicted:\nreused:\nfiles_synced:\nstats_reset:\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 Oct 2022 11:58:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Oct 26, 2022 at 10:55 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n\n+ The <structname>pg_statio_</structname> and\n+ <structname>pg_stat_io</structname> views are primarily useful to determine\n+ the effectiveness of the buffer cache. When the number of actual disk reads\n\nTotally nitpicking, but this reads a little funny to me. Previously\nthe trailing underscore suggested this is a group, and now with\npg_stat_io itself added (stupid question: should this be\n\"pg_statio\"?), it sounds like we're talking about two views:\npg_stat_io and \"pg_statio_\". Maybe something like \"The pg_stat_io view\nand the pg_statio_ set of views are primarily...\"?\n\n+ by that backend type in that IO context. Currently only a subset of IO\n+ operations are tracked here. WAL IO, IO on temporary files, and some forms\n+ of IO outside of shared buffers (such as when building indexes or moving a\n+ table from one tablespace to another) could be added in the future.\n\nAgain nitpicking, but should this be \"may be added\"? I think \"could\"\nsuggests the possibility of implementation, whereas \"may\" feels more\nlike a hint as to how the feature could evolve.\n\n+ portion of the main shared buffer pool. This pattern is called a\n+ <quote>Buffer Access Strategy</quote> in the\n+ <productname>PostgreSQL</productname> source code and the fixed-size\n+ ring buffer is referred to as a <quote>strategy ring buffer</quote> for\n+ the purposes of this view's documentation.\n+ </para></entry>\n\nNice, I think this explanation is very helpful. You also use the term\n\"strategy context\" and \"strategy operation\" below. 
I think it's fairly\nobvious what those mean, but pointing it out in case we want to note\nthat here, too.\n\n+ <varname>read</varname> and <varname>extended</varname> for\n\nMaybe \"plus\" instead of \"and\" here for clarity (I'm assuming that's\nwhat the \"and\" means)?\n\n+ <varname>backend_type</varname>s <literal>autovacuum launcher</literal>,\n+ <literal>autovacuum worker</literal>, <literal>client backend</literal>,\n+ <literal>standalone backend</literal>, <literal>background\n+ worker</literal>, and <literal>walsender</literal> for all\n+ <varname>io_context</varname>s is similar to the sum of\n\nI'm reviewing the rendered docs now, and I noticed sentences like this\nare a bit hard to scan: they force the reader to parse a big list of\nbackend types before even getting to the meat of what this is talking\nabout. Should we maybe reword this so that the backend list comes at\nthe end of the sentence? Or maybe even use a list (e.g., like in the\n\"state\" column description in pg_stat_activity)?\n\n+ <varname>heap_blks_read</varname>, <varname>idx_blks_read</varname>,\n+ <varname>tidx_blks_read</varname>, and\n+ <varname>toast_blks_read</varname> in <link\n+ linkend=\"monitoring-pg-statio-all-tables-view\">\n+ <structname>pg_statio_all_tables</structname></link>. 
and\n+ <varname>blks_read</varname> from <link\n\nI think that's a stray period before the \"and.\"\n\n+ <para>If using the <productname>PostgreSQL</productname> extension,\n+ <xref linkend=\"pgstatstatements\"/>,\n+ <varname>read</varname> for\n+ <varname>backend_type</varname>s <literal>autovacuum launcher</literal>,\n+ <literal>autovacuum worker</literal>, <literal>client backend</literal>,\n+ <literal>standalone backend</literal>, <literal>background\n+ worker</literal>, and <literal>walsender</literal> for all\n+ <varname>io_context</varname>s is equivalent to\n\nSame comment as above re: the lengthy list.\n\n+ Normal client backends should be able to rely on maintenance processes\n+ like the checkpointer and background writer to write out dirty data as\n\nNice--it's great to see this mentioned. But I think these are\ngenerally referred to as \"auxiliary\" not \"maintenance\" processes, no?\n\n+ <para>If using the <productname>PostgreSQL</productname> extension,\n+ <xref linkend=\"pgstatstatements\"/>, <varname>written</varname> and\n+ <varname>extended</varname> for <varname>backend_type</varname>s\n\nAgain, should this be \"plus\" instead of \"and\"?\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>bytes_conversion</structfield> <type>bigint</type>\n+ </para>\n\nI think this general approach works (instead of unit). I'm not wild\nabout the name, but I don't really have a better suggestion. Maybe\n\"op_bytes\" (since each cell is counting the number of I/O operations)?\nBut I think bytes_conversion is okay.\n\nAlso, is this (in the middle of the table) the right place for this\ncolumn? I would have expected to see it before or after all the actual\nI/O op cells.\n\n+ <varname>io_context</varname>s. When a <quote>Buffer Access\n+ Strategy</quote> reuses a buffer in the strategy ring, it must evict its\n+ contents, incrementing <varname>reused</varname>. 
When a <quote>Buffer\n+ Access Strategy</quote> adds a new shared buffer to the strategy ring\n+ and this shared buffer is occupied, the <quote>Buffer Access\n+ Strategy</quote> must evict the contents of the shared buffer,\n+ incrementing <varname>evicted</varname>.\n\nI think the parallel phrasing here makes this a little hard to follow.\nSpecifically, I think \"must evict its contents\" for the strategy case\nsounds like a bad thing, but in fact this is a totally normal thing\nthat happens as part of strategy access, no? The idea is you probably\nwon't need that buffer again, so it's fine to evict it. I'm not sure\nhow to reword, but I think the current phrasing is misleading.\n\n+ The number of times a <literal>bulkread</literal> found the current\n+ buffer in the fixed-size strategy ring dirty and requiring flush.\n\nMaybe \"...found ... to be dirty...\"?\n\n+ frequent vacuuming or more aggressive autovacuum settings, as buffers are\n+ dirtied during a bulkread operation when updating the hint bit or when\n+ performing on-access pruning.\n\nAre there docs to cross-reference here, especially for pruning? I\ncouldn't find much except a few un-explained mentions in the page\nlayout docs [2], and most of the search results refer to partition\npruning. Searching for hint bits at least gives some info in blog\nposts and the wiki.\n\n+ again. A high number of repossessions is a sign of contention for the\n+ blocks operated on by the strategy operation.\n\nThis (and in general the repossession description) makes sense, but\nI'm not sure what to do with the information. Maybe Andres is right\nthat we could skip this in the first version?\n\nOn Mon, Oct 24, 2022 at 12:39 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> > I don't quite follow this: does this mean that I should expect\n> > 'reused' and 'evicted' to be equal in the 'shared' context, because\n> > they represent the same thing? 
Or will 'reused' just be null because\n> > it's not distinct from 'evicted'? It looks like it's null right now,\n> > but I find the wording here confusing.\n>\n> You should only see evictions when the strategy evicts shared buffers\n> and reuses when the strategy evicts existing strategy buffers.\n>\n> How about this instead in this docs?\n>\n> the number of times an existing buffer in the strategy ring was reused\n> as part of an operation in the <literal>bulkread</literal>,\n> <literal>bulkwrite</literal>, or <literal>vacuum</literal>\n> <varname>io_context</varname>s. when a buffer access strategy\n> <quote>reuses</quote> a buffer in the strategy ring, it must evict its\n> contents, incrementing <varname>reused</varname>. when a buffer access\n> strategy adds a new shared buffer to the strategy ring and this shared\n> buffer is occupied, the buffer access strategy must evict the contents\n> of the shared buffer, incrementing <varname>evicted</varname>.\n\nIt looks like you ended up with different wording in the patch, but\nboth this explanation and what's in the patch now make sense to me.\nThanks for clarifying.\n\nAlso, I noticed that the commit message explains missing rows for some\nbackend_type / io_context combinations and NULL (versus 0) in some\ncells, but the docs don't really talk about that. Do you think that\nshould be in there as well?\n\nThanks,\nMaciek\n\n[1]: https://www.postgresql.org/docs/15/glossary.html#GLOSSARY-AUXILIARY-PROC\n[2]: https://www.postgresql.org/docs/15/storage-page-layout.html\n\n\n",
"msg_date": "Sun, 30 Oct 2022 18:08:55 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v37 attached\n\nOn Sun, Oct 30, 2022 at 9:09 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Wed, Oct 26, 2022 at 10:55 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>\n> + The <structname>pg_statio_</structname> and\n> + <structname>pg_stat_io</structname> views are primarily useful to determine\n> + the effectiveness of the buffer cache. When the number of actual disk reads\n>\n> Totally nitpicking, but this reads a little funny to me. Previously\n> the trailing underscore suggested this is a group, and now with\n> pg_stat_io itself added (stupid question: should this be\n> \"pg_statio\"?), it sounds like we're talking about two views:\n> pg_stat_io and \"pg_statio_\". Maybe something like \"The pg_stat_io view\n> and the pg_statio_ set of views are primarily...\"?\n\nI decided not to call it pg_statio because all of the other stats views\nhave an underscore after stat and I thought it was an opportunity to be\nconsistent with them.\n\n> + by that backend type in that IO context. Currently only a subset of IO\n> + operations are tracked here. WAL IO, IO on temporary files, and some forms\n> + of IO outside of shared buffers (such as when building indexes or moving a\n> + table from one tablespace to another) could be added in the future.\n>\n> Again nitpicking, but should this be \"may be added\"? I think \"could\"\n> suggests the possibility of implementation, whereas \"may\" feels more\n> like a hint as to how the feature could evolve.\n\nI've adopted the wording you suggested.\n\n> + portion of the main shared buffer pool. This pattern is called a\n> + <quote>Buffer Access Strategy</quote> in the\n> + <productname>PostgreSQL</productname> source code and the fixed-size\n> + ring buffer is referred to as a <quote>strategy ring buffer</quote> for\n> + the purposes of this view's documentation.\n> + </para></entry>\n>\n> Nice, I think this explanation is very helpful. 
You also use the term\n> \"strategy context\" and \"strategy operation\" below. I think it's fairly\n> obvious what those mean, but pointing it out in case we want to note\n> that here, too.\n\nThanks! I've added definitions of those as well.\n\n> + <varname>read</varname> and <varname>extended</varname> for\n>\n> Maybe \"plus\" instead of \"and\" here for clarity (I'm assuming that's\n> what the \"and\" means)?\n\nModified this -- in some cases by adding the lists mentioned below\n\n> + <varname>backend_type</varname>s <literal>autovacuum launcher</literal>,\n> + <literal>autovacuum worker</literal>, <literal>client backend</literal>,\n> + <literal>standalone backend</literal>, <literal>background\n> + worker</literal>, and <literal>walsender</literal> for all\n> + <varname>io_context</varname>s is similar to the sum of\n>\n> I'm reviewing the rendered docs now, and I noticed sentences like this\n> are a bit hard to scan: they force the reader to parse a big list of\n> backend types before even getting to the meat of what this is talking\n> about. Should we maybe reword this so that the backend list comes at\n> the end of the sentence? Or maybe even use a list (e.g., like in the\n> \"state\" column description in pg_stat_activity)?\n\nGood idea with the bullet points.\nFor the lengthy lists, I've added bullet point lists to the docs for\nseveral of the columns. It is quite long now but, hopefully, clearer?\nLet me know if you think it improves the readability.\n\n> + <varname>heap_blks_read</varname>, <varname>idx_blks_read</varname>,\n> + <varname>tidx_blks_read</varname>, and\n> + <varname>toast_blks_read</varname> in <link\n> + linkend=\"monitoring-pg-statio-all-tables-view\">\n> + <structname>pg_statio_all_tables</structname></link>. 
and\n> + <varname>blks_read</varname> from <link\n>\n> I think that's a stray period before the \"and.\"\n\nFixed!\n\n> + Normal client backends should be able to rely on maintenance processes\n> + like the checkpointer and background writer to write out dirty data as\n>\n> Nice--it's great to see this mentioned. But I think these are\n> generally referred to as \"auxiliary\" not \"maintenance\" processes, no?\n\nThanks! Fixed.\n\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>bytes_conversion</structfield> <type>bigint</type>\n> + </para>\n>\n> I think this general approach works (instead of unit). I'm not wild\n> about the name, but I don't really have a better suggestion. Maybe\n> \"op_bytes\" (since each cell is counting the number of I/O operations)?\n> But I think bytes_conversion is okay.\n\nI really like op_bytes and have changed it to this. Thanks for the\nsuggestion!\n\n> Also, is this (in the middle of the table) the right place for this\n> column? I would have expected to see it before or after all the actual\n> I/O op cells.\n\nI put it after read, write, and extend columns because it applies to\nthem. It doesn't apply to files_synced. For reused and evicted, I didn't\nthink bytes reused and evicted made sense. Also, when we add non-block\noriented IO, reused and evicted won't be used but op_bytes will be. So I\nthought it made more sense to place it after the operations it applies\nto.\n\n> + <varname>io_context</varname>s. When a <quote>Buffer Access\n> + Strategy</quote> reuses a buffer in the strategy ring, it must evict its\n> + contents, incrementing <varname>reused</varname>. 
When a <quote>Buffer\n> + Access Strategy</quote> adds a new shared buffer to the strategy ring\n> + and this shared buffer is occupied, the <quote>Buffer Access\n> + Strategy</quote> must evict the contents of the shared buffer,\n> + incrementing <varname>evicted</varname>.\n>\n> I think the parallel phrasing here makes this a little hard to follow.\n> Specifically, I think \"must evict its contents\" for the strategy case\n> sounds like a bad thing, but in fact this is a totally normal thing\n> that happens as part of strategy access, no? The idea is you probably\n> won't need that buffer again, so it's fine to evict it. I'm not sure\n> how to reword, but I think the current phrasing is misleading.\n\nI had trouble rephrasing this. I changed a few words. I see what you\nmean. It is worth noting that reusing strategy buffers when there are\nbuffers on the freelist may not be the best behavior, so I wouldn't\nnecessarily consider \"reused\" a good thing. However, I'm not sure how\nmuch the user could really do about this. I would at least like this\nphrasing to be clear (evicted is for shared buffers, reused is for\nstrategy buffers), so, perhaps this section requires more work.\n\n> + The number of times a <literal>bulkread</literal> found the current\n> + buffer in the fixed-size strategy ring dirty and requiring flush.\n>\n> Maybe \"...found ... to be dirty...\"?\n\nChanged to this wording.\n\n> + frequent vacuuming or more aggressive autovacuum settings, as buffers are\n> + dirtied during a bulkread operation when updating the hint bit or when\n> + performing on-access pruning.\n>\n> Are there docs to cross-reference here, especially for pruning? I\n> couldn't find much except a few un-explained mentions in the page\n> layout docs [2], and most of the search results refer to partition\n> pruning. 
Searching for hint bits at least gives some info in blog\n> posts and the wiki.\n\nYes, I don't see anything explaining this either -- below the page\nlayout it discusses tuple layout but that doesn't mention hint bits.\n\n> + again. A high number of repossessions is a sign of contention for the\n> + blocks operated on by the strategy operation.\n>\n> This (and in general the repossession description) makes sense, but\n> I'm not sure what to do with the information. Maybe Andres is right\n> that we could skip this in the first version?\n\nI've removed repossessed and rejected in attached v37. I am a bit sad\nabout this because I don't see a good way forward and I think those\ncould be useful for users.\n\nI have added the new column Andres recommended in [1] (\"io_object\") to\nclarify temp and local buffers and pave the way for bypass IO (IO not\ndone through a buffer pool), which can be done on temp or permanent\nfiles for temp or permanent relations, and spill file IO which is done\non temporary files but isn't related to temporary tables.\n\nIOObject has increased the memory footprint and complexity of the code\naround tracking and accumulating the statistics, though it has not\nincreased the number of rows in the view.\n\nOne question I still have about this additional dimension is how much\nenumeration we need of the various combinations of IO contexts, IO\nobjects, IO ops, and backend types which are allowed and not allowed.\nCurrently, because it is only valid to operate on both IOOBJECT_RELATION\nand IOOBJECT_TEMP_RELATION in IOCONTEXT_BUFFER_POOL, the changes to the\nvarious functions asserting and validating what is \"allowed\" in terms of\ncombinations of ops, objects, contexts, and backend types aren't much\ndifferent than they were without IO Object. However, once we begin\nadding other objects and contexts, we will need to make this logic more\ncomprehensive. 
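Roughly the shape I have in mind for that validation is sketched below. To be clear, the enum members and the table contents here are made up for illustration; they are not the definitions in the patch:

```c
#include <stdbool.h>

/* Illustrative sketch only: enum members and the table contents are
 * invented to show the shape of a combination-validation table, not the
 * actual rules in the patch. */
typedef enum { IOCTX_BUFFER_POOL, IOCTX_BULKREAD, IOCTX_VACUUM, IOCTX_NUM } IoCtx;
typedef enum { IOOBJ_RELATION, IOOBJ_TEMP_RELATION, IOOBJ_NUM } IoObj;

/* One entry per (context, object) pair. Adding a new context or object
 * later means extending this table rather than touching every assertion
 * site. Today only the buffer pool context operates on temp relations. */
static const bool valid_combo[IOCTX_NUM][IOOBJ_NUM] = {
	[IOCTX_BUFFER_POOL] = {[IOOBJ_RELATION] = true, [IOOBJ_TEMP_RELATION] = true},
	[IOCTX_BULKREAD]    = {[IOOBJ_RELATION] = true, [IOOBJ_TEMP_RELATION] = false},
	[IOCTX_VACUUM]      = {[IOOBJ_RELATION] = true, [IOOBJ_TEMP_RELATION] = false},
};

static bool
tracks_io_combo(IoCtx ctx, IoObj obj)
{
	return valid_combo[ctx][obj];
}
```

The point is just that a single table would keep the allowed combinations in one place instead of spread across the various asserting and validating functions.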
I'm not sure whether or not I should do that\npreemptively.\n\n> On Mon, Oct 24, 2022 at 12:39 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > > I don't quite follow this: does this mean that I should expect\n> > > 'reused' and 'evicted' to be equal in the 'shared' context, because\n> > > they represent the same thing? Or will 'reused' just be null because\n> > > it's not distinct from 'evicted'? It looks like it's null right now,\n> > > but I find the wording here confusing.\n> >\n> > You should only see evictions when the strategy evicts shared buffers\n> > and reuses when the strategy evicts existing strategy buffers.\n> >\n> > How about this instead in this docs?\n> >\n> > the number of times an existing buffer in the strategy ring was reused\n> > as part of an operation in the <literal>bulkread</literal>,\n> > <literal>bulkwrite</literal>, or <literal>vacuum</literal>\n> > <varname>io_context</varname>s. when a buffer access strategy\n> > <quote>reuses</quote> a buffer in the strategy ring, it must evict its\n> > contents, incrementing <varname>reused</varname>. when a buffer access\n> > strategy adds a new shared buffer to the strategy ring and this shared\n> > buffer is occupied, the buffer access strategy must evict the contents\n> > of the shared buffer, incrementing <varname>evicted</varname>.\n>\n> It looks like you ended up with different wording in the patch, but\n> both this explanation and what's in the patch now make sense to me.\n> Thanks for clarifying.\n\nYes, I tried to rework it and your suggestion and feedback was very\nhelpful.\n\n> Also, I noticed that the commit message explains missing rows for some\n> backend_type / io_context combinations and NULL (versus 0) in some\n> cells, but the docs don't really talk about that. Do you think that\n> should be in there as well?\n\nThanks for pointing this out. 
I have added notes about this to the\nrelevant columns in the docs.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20221026185808.4qnxowtn35x43u7u%40awork3.anarazel.de",
"msg_date": "Thu, 3 Nov 2022 13:00:24 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Nov 3, 2022 at 10:00 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> I decided not to call it pg_statio because all of the other stats views\n> have an underscore after stat and I thought it was an opportunity to be\n> consistent with them.\n\nOh, got it. Makes sense.\n\n> > I'm reviewing the rendered docs now, and I noticed sentences like this\n> > are a bit hard to scan: they force the reader to parse a big list of\n> > backend types before even getting to the meat of what this is talking\n> > about. Should we maybe reword this so that the backend list comes at\n> > the end of the sentence? Or maybe even use a list (e.g., like in the\n> > \"state\" column description in pg_stat_activity)?\n>\n> Good idea with the bullet points.\n> For the lengthy lists, I've added bullet point lists to the docs for\n> several of the columns. It is quite long now but, hopefully, clearer?\n> Let me know if you think it improves the readability.\n\nHmm, I should have tried this before suggesting it. I think the lists\nbreak up the flow of the column description too much. What do you\nthink about the attached (on top of your patches--attaching it as a\n.diff to hopefully not confuse cfbot)? I kept the lists for backend\ntypes but inlined the others as a middle ground. I also added a few\nomitted periods and reworded \"read plus extended\" to avoid starting\nthe sentence with a (lowercase) varname (I think in general it's fine\nto do that, but the more complicated sentence structure here makes it\neasier to follow if the sentence starts with a capital).\n\nAlternately, what do you think about pulling equivalencies to existing\nviews out of the main column descriptions, and adding them after the\nmain table as a sort of footnote? Most view docs don't have anything\nlike that, but pg_stat_replication does and it might be a good pattern\nto follow.\n\nThoughts?\n\n> > Also, is this (in the middle of the table) the right place for this\n> > column? 
I would have expected to see it before or after all the actual\n> > I/O op cells.\n>\n> I put it after read, write, and extend columns because it applies to\n> them. It doesn't apply to files_synced. For reused and evicted, I didn't\n> think bytes reused and evicted made sense. Also, when we add non-block\n> oriented IO, reused and evicted won't be used but op_bytes will be. So I\n> thought it made more sense to place it after the operations it applies\n> to.\n\nGot it, makes sense.\n\n> > + <varname>io_context</varname>s. When a <quote>Buffer Access\n> > + Strategy</quote> reuses a buffer in the strategy ring, it must evict its\n> > + contents, incrementing <varname>reused</varname>. When a <quote>Buffer\n> > + Access Strategy</quote> adds a new shared buffer to the strategy ring\n> > + and this shared buffer is occupied, the <quote>Buffer Access\n> > + Strategy</quote> must evict the contents of the shared buffer,\n> > + incrementing <varname>evicted</varname>.\n> >\n> > I think the parallel phrasing here makes this a little hard to follow.\n> > Specifically, I think \"must evict its contents\" for the strategy case\n> > sounds like a bad thing, but in fact this is a totally normal thing\n> > that happens as part of strategy access, no? The idea is you probably\n> > won't need that buffer again, so it's fine to evict it. I'm not sure\n> > how to reword, but I think the current phrasing is misleading.\n>\n> I had trouble rephrasing this. I changed a few words. I see what you\n> mean. It is worth noting that reusing strategy buffers when there are\n> buffers on the freelist may not be the best behavior, so I wouldn't\n> necessarily consider \"reused\" a good thing. However, I'm not sure how\n> much the user could really do about this. I would at least like this\n> phrasing to be clear (evicted is for shared buffers, reused is for\n> strategy buffers), so, perhaps this section requires more work.\n\nOh, I see. I think the updated wording works better. 
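Just to check my own mental model, the two counters seem to behave like this toy sketch (plain C with invented names, not the patch's actual code):

```c
#include <stdbool.h>

/* Toy model of the two counters (invented names, not the patch's code).
 * "from_ring" means the victim buffer was already part of the strategy
 * ring; otherwise a shared buffer is being pulled into the ring. */
static long n_reused;	/* strategy recycled one of its own ring buffers */
static long n_evicted;	/* strategy evicted an occupied shared buffer */

static void
count_strategy_victim(bool from_ring, bool victim_was_occupied)
{
	if (from_ring)
		n_reused++;
	else if (victim_was_occupied)
		n_evicted++;
	/* adding an unoccupied shared buffer to the ring increments neither */
}
```

In other words, a given victim buffer bumps at most one of the two counters: reused is only for existing strategy buffers, evicted only for shared buffers, which matches my reading of the updated wording.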
Although I think\nwe can drop the quotes around \"Buffer Access Strategy\" here. They're\nuseful when defining the term originally, but after that I think it's\nclearer to use the term unquoted.\n\nJust to understand this better myself, though: can you clarify when\n\"reused\" is not a normal, expected part of the strategy execution? I\nwas under the impression that a ring buffer is used because each page\nis needed only \"once\" (i.e., for one set of operations) for the\ncommand using the strategy ring buffer. Naively, in that situation, it\nseems better to reuse a no-longer-needed buffer than to claim another\nbuffer from the freelist (where other commands may eventually make\nbetter use of it).\n\n> > + again. A high number of repossessions is a sign of contention for the\n> > + blocks operated on by the strategy operation.\n> >\n> > This (and in general the repossession description) makes sense, but\n> > I'm not sure what to do with the information. Maybe Andres is right\n> > that we could skip this in the first version?\n>\n> I've removed repossessed and rejected in attached v37. I am a bit sad\n> about this because I don't see a good way forward and I think those\n> could be useful for users.\n\nI can see that, but I think as long as we're not doing anything to\npreclude adding this in the future, it's better to get something out\nthere and expand it later. 
For what it's worth, I don't feel it needs\nto be excluded, just that it's not worth getting hung up on.\n\n> I have added the new column Andres recommended in [1] (\"io_object\") to\n> clarify temp and local buffers and pave the way for bypass IO (IO not\n> done through a buffer pool), which can be done on temp or permanent\n> files for temp or permanent relations, and spill file IO which is done\n> on temporary files but isn't related to temporary tables.\n>\n> IOObject has increased the memory footprint and complexity of the code\n> around tracking and accumulating the statistics, though it has not\n> increased the number of rows in the view.\n>\n> One question I still have about this additional dimension is how much\n> enumeration we need of the various combinations of IO operations, IO\n> objects, IO ops, and backend types which are allowed and not allowed.\n> Currently because it is only valid to operate on both IOOBJECT_RELATION\n> and IOOBJECT_TEMP_RELATION in IOCONTEXT_BUFFER_POOL, the changes to the\n> various functions asserting and validating what is \"allowed\" in terms of\n> combinations of ops, objects, contexts, and backend types aren't much\n> different than they were without IO Object. However, once we begin\n> adding other objects and contexts, we will need to make this logic more\n> comprehensive. 
I'm not sure whether or not I should do that\n> preemptively.\n\nIt's definitely something to consider, but I have no useful input here.\n\nSome more notes on the docs patch:\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>io_context</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ The context or location of an IO operation.\n+ </para>\n+ <itemizedlist>\n+ <listitem>\n+ <para>\n+ <varname>io_context</varname> <literal>buffer pool</literal> refers to\n+ IO operations on data in both the shared buffer pool and process-local\n+ buffer pools used for temporary relation data.\n+ </para>\n+ <para>\n+ Operations on temporary relations are tracked in\n+ <varname>io_context</varname> <literal>buffer pool</literal> and\n+ <varname>io_object</varname> <literal>temp relation</literal>.\n+ </para>\n+ <para>\n+ Operations on permanent relations are tracked in\n+ <varname>io_context</varname> <literal>buffer pool</literal> and\n+ <varname>io_object</varname> <literal>relation</literal>.\n+ </para>\n+ </listitem>\n\nFor this column, you repeat \"io_context\" in the list describing the\npossible values of the column. Enum-style columns in other tables\ndon't do that (e.g., the pg_stat_activty \"state\" column). I think it\nmight read better to omit \"io_context\" from the list.\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>io_object</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Object operated on in a given <varname>io_context</varname> by a given\n+ <varname>backend_type</varname>.\n+ </para>\n\nIs this a fixed set of objects we should list, like for io_context?\n\nThanks,\nMaciek",
"msg_date": "Mon, 7 Nov 2022 10:26:06 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\n\nOne good follow-up patch would be to rip out the accounting for\npg_stat_bgwriter's buffers_backend, buffers_backend_fsync and perhaps\nbuffers_alloc and replace it with a subselect getting the equivalent data from\npg_stat_io. It might not be quite worth doing for buffers_alloc because of\nthe way that's tied into bgwriter pacing.\n\n\nOn 2022-11-03 13:00:24 -0400, Melanie Plageman wrote:\n> > + again. A high number of repossessions is a sign of contention for the\n> > + blocks operated on by the strategy operation.\n> >\n> > This (and in general the repossession description) makes sense, but\n> > I'm not sure what to do with the information. Maybe Andres is right\n> > that we could skip this in the first version?\n>\n> I've removed repossessed and rejected in attached v37. I am a bit sad\n> about this because I don't see a good way forward and I think those\n> could be useful for users.\n\nLet's get the basic patch in and then check whether we can find a way to have\nsomething providing at least some more information like repossessed and\nrejected. I think it'll be easier to analyze in isolation.\n\n\n> I have added the new column Andres recommended in [1] (\"io_object\") to\n> clarify temp and local buffers and pave the way for bypass IO (IO not\n> done through a buffer pool), which can be done on temp or permanent\n> files for temp or permanent relations, and spill file IO which is done\n> on temporary files but isn't related to temporary tables.\n\n> IOObject has increased the memory footprint and complexity of the code\n> around tracking and accumulating the statistics, though it has not\n> increased the number of rows in the view.\n\nIt doesn't look too bad from here. 
Is there a specific portion of the code\nwhere it concerns you the most?\n\n\n> One question I still have about this additional dimension is how much\n> enumeration we need of the various combinations of IO operations, IO\n> objects, IO ops, and backend types which are allowed and not allowed.\n>\n> Currently because it is only valid to operate on both IOOBJECT_RELATION\n> and IOOBJECT_TEMP_RELATION in IOCONTEXT_BUFFER_POOL, the changes to the\n> various functions asserting and validating what is \"allowed\" in terms of\n> combinations of ops, objects, contexts, and backend types aren't much\n> different than they were without IO Object. However, once we begin\n> adding other objects and contexts, we will need to make this logic more\n> comprehensive. I'm not sure whether or not I should do that\n> preemptively.\n\nI'd not do it preemptively.\n\n\n\n> @@ -833,6 +836,22 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n>\n> \tisExtend = (blockNum == P_NEW);\n>\n> +\tif (isLocalBuf)\n> +\t{\n> +\t\t/*\n> +\t\t * Though a strategy object may be passed in, no strategy is employed\n> +\t\t * when using local buffers. This could happen when doing, for example,\n> +\t\t * CREATE TEMPORRARY TABLE AS ...\n> +\t\t */\n> +\t\tio_context = IOCONTEXT_BUFFER_POOL;\n> +\t\tio_object = IOOBJECT_TEMP_RELATION;\n> +\t}\n> +\telse\n> +\t{\n> +\t\tio_context = IOContextForStrategy(strategy);\n> +\t\tio_object = IOOBJECT_RELATION;\n> +\t}\n\nI think given how frequently ReadBuffer_common() is called in some workloads,\nit'd be good to make IOContextForStrategy inlinable. 
But I guess that's not\neasily doable, because struct BufferAccessStrategyData is only defined in\nfreelist.c.\n\nCould we defer this until later, given that we don't currently need this in\ncase of buffer hits afaict?\n\n\n> @@ -1121,6 +1144,8 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t\t\tBufferAccessStrategy strategy,\n> \t\t\tbool *foundPtr)\n> {\n> +\tbool\t\tfrom_ring;\n> +\tIOContext\tio_context;\n> \tBufferTag\tnewTag;\t\t\t/* identity of requested block */\n> \tuint32\t\tnewHash;\t\t/* hash value for newTag */\n> \tLWLock\t *newPartitionLock;\t/* buffer partition lock for it */\n> @@ -1187,9 +1212,12 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t */\n> \tLWLockRelease(newPartitionLock);\n>\n> +\tio_context = IOContextForStrategy(strategy);\n\nHm - doesn't this mean we do IOContextForStrategy() twice? Once in\nReadBuffer_common() and then again here?\n\n\n> \t/* Loop here in case we have to try another victim buffer */\n> \tfor (;;)\n> \t{\n> +\n> \t\t/*\n> \t\t * Ensure, while the spinlock's not yet held, that there's a free\n> \t\t * refcount entry.\n> @@ -1200,7 +1228,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t\t * Select a victim buffer. 
The buffer is returned with its header\n> \t\t * spinlock still held!\n> \t\t */\n> -\t\tbuf = StrategyGetBuffer(strategy, &buf_state);\n> +\t\tbuf = StrategyGetBuffer(strategy, &buf_state, &from_ring);\n>\n> \t\tAssert(BUF_STATE_GET_REFCOUNT(buf_state) == 0);\n>\n\nI think patch 0001 relies on this change already having been made, If I am not misunderstanding?\n\n\n> @@ -1263,13 +1291,34 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t\t\t\t\t}\n> \t\t\t\t}\n>\n> +\t\t\t\t/*\n> +\t\t\t\t * When a strategy is in use, only flushes of dirty buffers\n> +\t\t\t\t * already in the strategy ring are counted as strategy writes\n> +\t\t\t\t * (IOCONTEXT [BULKREAD|BULKWRITE|VACUUM] IOOP_WRITE) for the\n> +\t\t\t\t * purpose of IO operation statistics tracking.\n> +\t\t\t\t *\n> +\t\t\t\t * If a shared buffer initially added to the ring must be\n> +\t\t\t\t * flushed before being used, this is counted as an\n> +\t\t\t\t * IOCONTEXT_BUFFER_POOL IOOP_WRITE.\n> +\t\t\t\t *\n> +\t\t\t\t * If a shared buffer added to the ring later because the\n\nMissing word?\n\n\n> +\t\t\t\t * current strategy buffer is pinned or in use or because all\n> +\t\t\t\t * strategy buffers were dirty and rejected (for BAS_BULKREAD\n> +\t\t\t\t * operations only) requires flushing, this is counted as an\n> +\t\t\t\t * IOCONTEXT_BUFFER_POOL IOOP_WRITE (from_ring will be false).\n\nI think this makes sense for now, but it'd be good if somebody else could\nchime in on this...\n\n> +\t\t\t\t *\n> +\t\t\t\t * When a strategy is not in use, the write can only be a\n> +\t\t\t\t * \"regular\" write of a dirty shared buffer (IOCONTEXT_BUFFER_POOL\n> +\t\t\t\t * IOOP_WRITE).\n> +\t\t\t\t */\n> +\n> \t\t\t\t/* OK, do the I/O */\n> \t\t\t\tTRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_START(forkNum, blockNum,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.spcOid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.dbOid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t 
smgr->smgr_rlocator.locator.relNumber);\n>\n> -\t\t\t\tFlushBuffer(buf, NULL);\n> +\t\t\t\tFlushBuffer(buf, NULL, io_context, IOOBJECT_RELATION);\n> \t\t\t\tLWLockRelease(BufferDescriptorGetContentLock(buf));\n> \t\t\t\tScheduleBufferTagForWriteback(&BackendWritebackContext,\n\n\n\n> +\tif (oldFlags & BM_VALID)\n> +\t{\n> +\t\t/*\n> +\t\t* When a BufferAccessStrategy is in use, evictions adding a\n> +\t\t* shared buffer to the strategy ring are counted in the\n> +\t\t* corresponding strategy's context.\n\nPerhaps \"adding a shared buffer to the ring are counted in the corresponding\ncontext\"? \"strategy's context\" sounds off to me.\n\n\n> This includes the evictions\n> +\t\t* done to add buffers to the ring initially as well as those\n> +\t\t* done to add a new shared buffer to the ring when current\n> +\t\t* buffer is pinned or otherwise in use.\n\nI think this sentence could use a few commas, but not sure.\n\ns/current/the current/?\n\n\n\n> +\t\t* We wait until this point to count reuses and evictions in order to\n> +\t\t* avoid incorrectly counting a buffer as reused or evicted when it was\n> +\t\t* released because it was concurrently pinned or in use or counting it\n> +\t\t* as reused when it was rejected or when we errored out.\n> +\t\t*/\n\nI can't quite parse this sentence.\n\n\n> +\t\tIOOp io_op = from_ring ? 
IOOP_REUSE : IOOP_EVICT;\n> +\n> +\t\tpgstat_count_io_op(io_op, IOOBJECT_RELATION, io_context);\n> +\t}\n\nI'd just inline the variable, but ...\n\n\n> @@ -196,6 +197,7 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> \t\t\t\tLocalRefCount[b]++;\n> \t\t\t\tResourceOwnerRememberBuffer(CurrentResourceOwner,\n> \t\t\t\t\t\t\t\t\t\t\tBufferDescriptorGetBuffer(bufHdr));\n> +\n> \t\t\t\tbreak;\n> \t\t\t}\n> \t\t}\n\nSpurious change.\n\n\n> \tpg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n>\n> \t*foundPtr = false;\n> +\n> \treturn bufHdr;\n> }\n\nDito.\n\n\n\n> +/*\n> +* IO Operation statistics are not collected for all BackendTypes.\n> +*\n> +* The following BackendTypes do not participate in the cumulative stats\n> +* subsystem or do not do IO operations worth reporting statistics on:\n\ns/worth reporting/we currently report/?\n\n\n> +\t/*\n> +\t * In core Postgres, only regular backends and WAL Sender processes\n> +\t * executing queries will use local buffers and operate on temporary\n> +\t * relations. 
Parallel workers will not use local buffers (see\n> +\t * InitLocalBuffers()); however, extensions leveraging background workers\n> +\t * have no such limitation, so track IO Operations on\n> +\t * IOOBJECT_TEMP_RELATION for BackendType B_BG_WORKER.\n> +\t */\n> +\tno_temp_rel = bktype == B_AUTOVAC_LAUNCHER || bktype == B_BG_WRITER || bktype\n> +\t\t== B_CHECKPOINTER || bktype == B_AUTOVAC_WORKER || bktype ==\n> +\t\tB_STANDALONE_BACKEND || bktype == B_STARTUP;\n> +\n> +\tif (no_temp_rel && io_context == IOCONTEXT_BUFFER_POOL && io_object ==\n> +\t\t\tIOOBJECT_TEMP_RELATION)\n> +\t\treturn false;\n\nPersonally I don't like line breaks on the == and would rather break earlier\non the && or ||.\n\n\n\n> +\tfor (int io_context = 0; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> +\t{\n> +\t\tPgStatShared_IOObjectOps *shared_objs = &type_shstats->data[io_context];\n> +\t\tPgStat_IOObjectOps *pending_objs = &pending_IOOpStats.data[io_context];\n> +\n> +\t\tfor (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n> +\t\t{\n\nIs there any compiler that'd complain if you used IOContext/IOObject/IOOp as the\ntype in the for loop? I don't think so? 
Then you'd not need the casts in other\nplaces, which I think would make the code easier to read.\n\n\n> +\t\t\tPgStat_IOOpCounters *sharedent = &shared_objs->data[io_object];\n> +\t\t\tPgStat_IOOpCounters *pendingent = &pending_objs->data[io_object];\n> +\n> +\t\t\tif (!expect_backend_stats ||\n> +\t\t\t\t!pgstat_bktype_io_context_io_object_valid(MyBackendType,\n> +\t\t\t\t\t(IOContext) io_context, (IOObject) io_object))\n> +\t\t\t{\n> +\t\t\t\tpgstat_io_context_ops_assert_zero(sharedent);\n> +\t\t\t\tpgstat_io_context_ops_assert_zero(pendingent);\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tfor (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> +\t\t\t{\n> +\t\t\t\tif (!(pgstat_io_op_valid(MyBackendType, (IOContext) io_context,\n> +\t\t\t\t\t\t\t\t(IOObject) io_object, (IOOp) io_op)))\n\nSuperfluous parens after the !, I think?\n\n\n> void\n> pgstat_report_vacuum(Oid tableoid, bool shared,\n> @@ -257,10 +257,18 @@ pgstat_report_vacuum(Oid tableoid, bool shared,\n> \t}\n>\n> \tpgstat_unlock_entry(entry_ref);\n> +\n> +\t/*\n> +\t * Flush IO Operations statistics now. pgstat_report_stat() will flush IO\n> +\t * Operation stats, however this will not be called after an entire\n\nMissing \"until\"?\n\n> +static inline void\n> +pgstat_io_op_assert_zero(PgStat_IOOpCounters *counters, IOOp io_op)\n> +{\n\nDoes this need to be in pgstat.h? Perhaps pgstat_internal.h would suffice,\nafaict it's not used outside of pgstat code?\n\n> +\n> +/*\n> + * Assert that stats have not been counted for any combination of IOContext,\n> + * IOObject, and IOOp which is not valid for the passed-in BackendType. The\n> + * passed-in array of PgStat_IOOpCounters must contain stats from the\n> + * BackendType specified by the second parameter. 
Caller is responsible for\n> + * locking of the passed-in PgStatShared_IOContextOps, if needed.\n> + */\n> +static inline void\n> +pgstat_backend_io_stats_assert_well_formed(PgStatShared_IOContextOps *backend_io_context_ops,\n> +\t\tBackendType bktype)\n> +{\n\nThis doesn't look like it should be an inline function - it's quite long.\n\nI think it's also too complicated for the compiler to optimize out if\nassertions are disabled. So you'd need to handle this with an explicit #ifdef\nUSE_ASSERT_CHECKING.\n\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>io_context</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + The context or location of an IO operation.\n> + </para>\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <varname>io_context</varname> <literal>buffer pool</literal> refers to\n> + IO operations on data in both the shared buffer pool and process-local\n> + buffer pools used for temporary relation data.\n> + </para>\n> + <para>\n\nThe indentation in the sgml part of the patch seems to be a bit wonky.\n\n\n> + <para>\n> + These last three <varname>io_context</varname>s are counted separately\n> + because the autovacuum daemon, explicit <command>VACUUM</command>,\n> + explicit <command>ANALYZE</command>, many bulk reads, and many bulk\n> + writes use a fixed amount of memory, acquiring the equivalent number of\n\ns/memory/buffers/? 
The amount of memory isn't really fixed.\n\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>read</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Reads by this <varname>backend_type</varname> into buffers in this\n> + <varname>io_context</varname>.\n> + <varname>read</varname> plus <varname>extended</varname> for\n> + <varname>backend_type</varname>s\n> +\n> + <itemizedlist>\n> +\n> + <listitem>\n> + <para>\n> + <literal>autovacuum launcher</literal>\n> + </para>\n> + </listitem>\n\nHm. ISTM that we should not document the set of valid backend types as part of\nthis view. Couldn't we share it with pg_stat_activity.backend_type?\n\n\n> + The difference is that reads done as part of <command>CREATE\n> + DATABASE</command> are not counted in\n> + <structname>pg_statio_all_tables</structname> and\n> + <structname>pg_stat_database</structname>\n> + </para>\n\nHm, this seems a bit far into the weeds?\n\n\n\n\n> +Datum\n> +pg_stat_get_io(PG_FUNCTION_ARGS)\n> +{\n> +\tPgStat_BackendIOContextOps *backends_io_stats;\n> +\tReturnSetInfo *rsinfo;\n> +\tDatum\t\treset_time;\n> +\n> +\tInitMaterializedSRF(fcinfo, 0);\n> +\trsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> +\n> +\tbackends_io_stats = pgstat_fetch_backend_io_context_ops();\n> +\n> +\treset_time = TimestampTzGetDatum(backends_io_stats->stat_reset_timestamp);\n> +\n> +\tfor (int bktype = 0; bktype < BACKEND_NUM_TYPES; bktype++)\n> +\t{\n> +\t\tDatum\t\tbktype_desc = CStringGetTextDatum(GetBackendTypeDesc((BackendType) bktype));\n> +\t\tbool\t\texpect_backend_stats = true;\n> +\t\tPgStat_IOContextOps *io_context_ops = &backends_io_stats->stats[bktype];\n> +\n> +\t\t/*\n> +\t\t * For those BackendTypes without IO Operation stats, skip\n> +\t\t * representing them in the view altogether.\n> +\t\t */\n> +\t\texpect_backend_stats = pgstat_io_op_stats_collected((BackendType)\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tbktype);\n> +\n> +\t\tfor (int io_context 
= 0; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> +\t\t{\n> +\t\t\tconst char *io_context_str = pgstat_io_context_desc(io_context);\n> +\t\t\tPgStat_IOObjectOps *io_objs = &io_context_ops->data[io_context];\n> +\n> +\t\t\tfor (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n> +\t\t\t{\n> +\t\t\t\tPgStat_IOOpCounters *counters = &io_objs->data[io_object];\n> +\t\t\t\tconst char *io_obj_str = pgstat_io_object_desc(io_object);\n> +\n> +\t\t\t\tDatum\t\tvalues[IO_NUM_COLUMNS] = {0};\n> +\t\t\t\tbool\t\tnulls[IO_NUM_COLUMNS] = {0};\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t* Some combinations of IOContext, IOObject, and BackendType are\n> +\t\t\t\t* not valid for any type of IOOp. In such cases, omit the\n> +\t\t\t\t* entire row from the view.\n> +\t\t\t\t*/\n> +\t\t\t\tif (!expect_backend_stats ||\n> +\t\t\t\t\t!pgstat_bktype_io_context_io_object_valid((BackendType) bktype,\n> +\t\t\t\t\t\t(IOContext) io_context, (IOObject) io_object))\n> +\t\t\t\t{\n> +\t\t\t\t\tpgstat_io_context_ops_assert_zero(counters);\n> +\t\t\t\t\tcontinue;\n> +\t\t\t\t}\n\nPerhaps mention in a comment two loops up that we don't skip the nested loops\ndespite !expect_backend_stats because we want to assert here?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Nov 2022 16:38:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Note that 001 fails to compile without 002:

../src/backend/storage/buffer/bufmgr.c:1257:43: error: ‘from_ring’ undeclared (first use in this function)
 1257 | StrategyRejectBuffer(strategy, buf, from_ring))

My \"warnings\" script informed me about these gripes from MSVC:

[03:42:30.607] c:\\cirrus>call sh -c 'if grep \": warning \" build.txt; then exit 1; fi; exit 0' 
[03:42:30.749] c:\\cirrus\\src\\backend\\storage\\buffer\\freelist.c(699) : warning C4715: 'IOContextForStrategy': not all control paths return a value
[03:42:30.749] c:\\cirrus\\src\\backend\\utils\\activity\\pgstat_io_ops.c(190) : warning C4715: 'pgstat_io_context_desc': not all control paths return a value
[03:42:30.749] c:\\cirrus\\src\\backend\\utils\\activity\\pgstat_io_ops.c(204) : warning C4715: 'pgstat_io_object_desc': not all control paths return a value
[03:42:30.749] c:\\cirrus\\src\\backend\\utils\\activity\\pgstat_io_ops.c(226) : warning C4715: 'pgstat_io_op_desc': not all control paths return a value
[03:42:30.749] c:\\cirrus\\src\\backend\\utils\\adt\\pgstatfuncs.c(1816) : warning C4715: 'pgstat_io_op_get_index': not all control paths return a value

In the docs table, you say things like:
| io_context vacuum refers to the IO operations incurred while vacuuming and analyzing.

..but it's a bit unclear (maybe due to the way the docs are rendered).
I think it may be more clear to say \"when <io_context> is
<vacuum>, ...\"

| acquiring the equivalent number of shared buffers

I don't think \"equivalent\" fits here, since it's actually acquiring a
different number of buffers.

There's a missing period before \" The difference is\"

The sentence beginning \"read plus extended for backend_types\" is difficult to
parse due to having a bulleted list in its middle.

There aren't many references to \"IOOps\", which is good, because I
started to read it as \"I oops\".

+ * Flush IO Operations statistics now. 
pgstat_report_stat() will flush IO
+ * Operation stats, however this will not be called after an entire

=> I think that's intended to say *until* after ?

+ * Functions to assert that invalid IO Operation counters are zero.

=> There's a missing newline above this comment.

+ Assert(counters->evictions == 0 && counters->extends == 0 &&
+ counters->fsyncs == 0 && counters->reads == 0 && counters->reuses
+ == 0 && counters->writes == 0);

=> It'd be more readable and also maybe help debugging if these were separate
assertions. I wondered in the past if that should be a general policy
for all assertions.

+pgstat_io_op_stats_collected(BackendType bktype)
+{
+ return bktype != B_INVALID && bktype != B_ARCHIVER && bktype != B_LOGGER &&
+ bktype != B_WAL_RECEIVER && bktype != B_WAL_WRITER;

Similar: I'd prefer to see this as 5 \"ifs\" or a \"switch\" to return
false, else return true. But YMMV.

+ * CREATE TEMPORRARY TABLE AS ...

=> typo: temporary

+ if (strategy_io_context && io_op == IOOP_FSYNC)

=> Extra space.

pgstat_count_io_op() has a superfluous newline before \"}\".

I think there may be a problem/deficiency with hint bits:

|postgres=# DROP TABLE u2; CREATE TABLE u2 AS SELECT generate_series(1,999999)a; SELECT pg_stat_reset_shared('io'); explain (analyze,buffers) SELECT * FROM u2;
|...
| Seq Scan on u2 (cost=0.00..15708.75 rows=1128375 width=4) (actual time=0.111..458.239 rows=999999 loops=1)
| Buffers: shared hit=2048 read=2377 dirtied=2377 written=2345

|postgres=# SELECT COUNT(1), relname, COUNT(1) FILTER(WHERE isdirty) FROM pg_buffercache b LEFT JOIN pg_class c ON pg_relation_filenode(c.oid)=b.relfilenode GROUP BY 2 ORDER BY 1 DESC LIMIT 11;
| count | relname | count
|-------+---------------------------------+-------
| 13619 | | 0
| 2080 | u2 | 2080
| 104 | pg_attribute | 4
| 71 | pg_statistic | 1
| 51 | pg_class | 1

It says that SELECT caused 2377 buffers to be dirtied, of which 2080 are
associated 
with the new table in pg_buffercache.\n\n|postgres=# SELECT * FROM pg_stat_io WHERE backend_type!~'autovac|archiver|logger|standalone|startup|^wal|background worker' or true ORDER BY 2;\n| backend_type | io_context | io_object | read | written | extended | op_bytes | evicted | reused | files_synced | stats_reset\n|...\n| client backend | bulkread | relation | 2377 | 2345 | | 8192 | 0 | 2345 | | 2022-11-22 22:32:33.044552-06\n\nI think it's a known behavior that hint bits do not use the strategy\nring buffer. For BAS_BULKREAD, ring_size = 256kB (32, 8kB pages), but\nthere's 2080 dirty pages in the buffercache (~16MB).\n\nBut the IO view says that 2345 of the pages were \"reused\", which seems\nmisleading to me. Maybe that just follows from the behavior and the view is\nfine. If the view is fine, maybe this case should still be specifically\nmentioned in the docs.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 22 Nov 2022 23:43:29 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-22 23:43:29 -0600, Justin Pryzby wrote:\n> I think there may be a problem/deficiency with hint bits:\n> \n> |postgres=# DROP TABLE u2; CREATE TABLE u2 AS SELECT generate_series(1,999999)a; SELECT pg_stat_reset_shared('io'); explain (analyze,buffers) SELECT * FROM u2;\n> |...\n> | Seq Scan on u2 (cost=0.00..15708.75 rows=1128375 width=4) (actual time=0.111..458.239 rows=999999 loops=1)\n> | Buffers: shared hit=2048 read=2377 dirtied=2377 written=2345\n> \n> |postgres=# SELECT COUNT(1), relname, COUNT(1) FILTER(WHERE isdirty) FROM pg_buffercache b LEFT JOIN pg_class c ON pg_relation_filenode(c.oid)=b.relfilenode GROUP BY 2 ORDER BY 1 DESC LIMIT 11;\n> | count | relname | count\n> |-------+---------------------------------+-------\n> | 13619 | | 0\n> | 2080 | u2 | 2080\n> | 104 | pg_attribute | 4\n> | 71 | pg_statistic | 1\n> | 51 | pg_class | 1\n> \n> It says that SELECT caused 2377 buffers to be dirtied, of which 2080 are\n> associated with the new table in pg_buffercache.\n\nNote that there's 2048 dirty buffers for u2 in shared_buffers before the\nSELECT, despite the relation being 4425 blocks long, due to the CTAS using\nBAS_BULKWRITE.\n\n\n> |postgres=# SELECT * FROM pg_stat_io WHERE backend_type!~'autovac|archiver|logger|standalone|startup|^wal|background worker' or true ORDER BY 2;\n> | backend_type | io_context | io_object | read | written | extended | op_bytes | evicted | reused | files_synced | stats_reset\n> |...\n> | client backend | bulkread | relation | 2377 | 2345 | | 8192 | 0 | 2345 | | 2022-11-22 22:32:33.044552-06\n> \n> I think it's a known behavior that hint bits do not use the strategy\n> ring buffer. For BAS_BULKREAD, ring_size = 256kB (32, 8kB pages), but\n> there's 2080 dirty pages in the buffercache (~16MB).\n\nI don't think there's any \"circumvention\" of the ringbuffer here. There's 2048\nbuffers for u2 in s_b before, all dirty, there's 2080 after, also all\ndirty. 
So the ringbuffer restricted the increase in shared buffers used for u2\nto 2080-2048=32 additional buffers.\n\nThe reason hint bits don't prevent pages from being written out here is that a\nBAS_BULKREAD strategy doesn't cause all buffer writes to be rejected, it just\ncauses buffer writes to be rejected when the page LSN would require a WAL\nflush. And that's not typically the case when you just set a hint bit, unless\nyou use wal_log_hint_bits = true.\n\nIf I turn on wal_log_hints=true and add a CHECKPOINT after the CTAS I see 0\nreuses (and 4425 dirty buffers), which is what I'd expect.\n\n\n> But the IO view says that 2345 of the pages were \"reused\", which seems\n> misleading to me. Maybe that just follows from the behavior and the view is\n> fine. If the view is fine, maybe this case should still be specifically\n> mentioned in the docs.\n\nI think that's just confusing due to the reset. 2048 + 2345 = 4393, but we\nonly have 2080 buffers for u2 in s_b.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Nov 2022 14:46:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v38 attached.\n\nOn Sun, Nov 20, 2022 at 7:38 PM Andres Freund <andres@anarazel.de> wrote:\n> One good follow up patch will be to rip out the accounting for\n> pg_stat_bgwriter's buffers_backend, buffers_backend_fsync and perhaps\n> buffers_alloc and replace it with a subselect getting the equivalent data from\n> pg_stat_io. It might not be quite worth doing for buffers_alloc because of\n> the way that's tied into bgwriter pacing.\n\nI don't see how it will make sense to have buffers_backend and\nbuffers_backend_fsync respond to a different reset target than the rest\nof the fields in pg_stat_bgwriter.\n\n> On 2022-11-03 13:00:24 -0400, Melanie Plageman wrote:\n> > @@ -833,6 +836,22 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> >\n> > isExtend = (blockNum == P_NEW);\n> >\n> > + if (isLocalBuf)\n> > + {\n> > + /*\n> > + * Though a strategy object may be passed in, no strategy is employed\n> > + * when using local buffers. This could happen when doing, for example,\n> > + * CREATE TEMPORRARY TABLE AS ...\n> > + */\n> > + io_context = IOCONTEXT_BUFFER_POOL;\n> > + io_object = IOOBJECT_TEMP_RELATION;\n> > + }\n> > + else\n> > + {\n> > + io_context = IOContextForStrategy(strategy);\n> > + io_object = IOOBJECT_RELATION;\n> > + }\n>\n> I think given how frequently ReadBuffer_common() is called in some workloads,\n> it'd be good to make IOContextForStrategy inlinable. But I guess that's not\n> easily doable, because struct BufferAccessStrategyData is only defined in\n> freelist.c.\n\nCorrect\n\n> Could we defer this until later, given that we don't currently need this in\n> case of buffer hits afaict?\n\nYes, you are right. In ReadBuffer_common(), we can easily move the\nIOContextForStrategy() call to directly before using io_context. 
I've
done that in the attached version.

> > @@ -1121,6 +1144,8 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
> > BufferAccessStrategy strategy,
> > bool *foundPtr)
> > {
> > + bool from_ring;
> > + IOContext io_context;
> > BufferTag newTag; /* identity of requested block */
> > uint32 newHash; /* hash value for newTag */
> > LWLock *newPartitionLock; /* buffer partition lock for it */
> > @@ -1187,9 +1212,12 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
> > */
> > LWLockRelease(newPartitionLock);
> >
> > + io_context = IOContextForStrategy(strategy);
>
> Hm - doesn't this mean we do IOContextForStrategy() twice? Once in
> ReadBuffer_common() and then again here?

Yes. So, there are a few options for addressing this.

- if the goal is to call IOContextForStrategy() exactly once in a
  given codepath, BufferAlloc() can set IOContext
  (passed by reference as an output parameter). I don't like this much
  because it doesn't make sense to me that BufferAlloc() would set the
  \"io_context\" parameter -- especially given that strategy is already
  passed as a parameter and is obviously available to the caller.
  I also don't see a good way of waiting until BufferAlloc() returns to count
  the IO operations counted in FlushBuffer() and BufferAlloc() itself.

- if the goal is to avoid calling IOContextForStrategy() in more common
  codepaths or to call it as close to its use as possible, then we can
  push down its call in BufferAlloc() to the two locations where it is
  used -- when a dirty buffer must be flushed and when a block was
  evicted or reused. This will avoid calling it when we are not evicting
  a block from a valid buffer.

  However, if we do that, I don't know how to avoid calling it twice in
  that codepath. 
Even though we can assume io_context was set in the\n first location by the time we get to the second location, we would\n need to initialize the variable with something if we only plan to set\n it in some branches and there is no \"invalid\" or \"default\" value of\n the IOContext enum.\n\n Given the above, I've left the call in BufferAlloc() as is in the\n attached version.\n\n>\n>\n> > /* Loop here in case we have to try another victim buffer */\n> > for (;;)\n> > {\n> > +\n> > /*\n> > * Ensure, while the spinlock's not yet held, that there's a free\n> > * refcount entry.\n> > @@ -1200,7 +1228,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> > * Select a victim buffer. The buffer is returned with its header\n> > * spinlock still held!\n> > */\n> > - buf = StrategyGetBuffer(strategy, &buf_state);\n> > + buf = StrategyGetBuffer(strategy, &buf_state, &from_ring);\n> >\n> > Assert(BUF_STATE_GET_REFCOUNT(buf_state) == 0);\n> >\n>\n> I think patch 0001 relies on this change already having been made, If I am not misunderstanding?\n\nFixed.\n\n>\n>\n> > @@ -1263,13 +1291,34 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> > }\n> > }\n> >\n> > + /*\n> > + * When a strategy is in use, only flushes of dirty buffers\n> > + * already in the strategy ring are counted as strategy writes\n> > + * (IOCONTEXT [BULKREAD|BULKWRITE|VACUUM] IOOP_WRITE) for the\n> > + * purpose of IO operation statistics tracking.\n> > + *\n> > + * If a shared buffer initially added to the ring must be\n> > + * flushed before being used, this is counted as an\n> > + * IOCONTEXT_BUFFER_POOL IOOP_WRITE.\n> > + *\n> > + * If a shared buffer added to the ring later because the\n>\n> Missing word?\n\nFixed.\n\n>\n>\n> > + * current strategy buffer is pinned or in use or because all\n> > + * strategy buffers were dirty and rejected (for BAS_BULKREAD\n> > + * operations only) requires flushing, this is counted as an\n> > + * IOCONTEXT_BUFFER_POOL 
IOOP_WRITE (from_ring will be false).\n>\n> I think this makes sense for now, but it'd be good if somebody else could\n> chime in on this...\n>\n> > + *\n> > + * When a strategy is not in use, the write can only be a\n> > + * \"regular\" write of a dirty shared buffer (IOCONTEXT_BUFFER_POOL\n> > + * IOOP_WRITE).\n> > + */\n> > +\n> > /* OK, do the I/O */\n> > TRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_START(forkNum, blockNum,\n> > smgr->smgr_rlocator.locator.spcOid,\n> > smgr->smgr_rlocator.locator.dbOid,\n> > smgr->smgr_rlocator.locator.relNumber);\n> >\n> > - FlushBuffer(buf, NULL);\n> > + FlushBuffer(buf, NULL, io_context, IOOBJECT_RELATION);\n> > LWLockRelease(BufferDescriptorGetContentLock(buf));\n> > ScheduleBufferTagForWriteback(&BackendWritebackContext,\n>\n>\n>\n> > + if (oldFlags & BM_VALID)\n> > + {\n> > + /*\n> > + * When a BufferAccessStrategy is in use, evictions adding a\n> > + * shared buffer to the strategy ring are counted in the\n> > + * corresponding strategy's context.\n>\n> Perhaps \"adding a shared buffer to the ring are counted in the corresponding\n> context\"? \"strategy's context\" sounds off to me.\n\nFixed.\n\n> > This includes the evictions\n> > + * done to add buffers to the ring initially as well as those\n> > + * done to add a new shared buffer to the ring when current\n> > + * buffer is pinned or otherwise in use.\n>\n> I think this sentence could use a few commas, but not sure.\n>\n> s/current/the current/?\n\nReworded.\n\n>\n> > + * We wait until this point to count reuses and evictions in order to\n> > + * avoid incorrectly counting a buffer as reused or evicted when it was\n> > + * released because it was concurrently pinned or in use or counting it\n> > + * as reused when it was rejected or when we errored out.\n> > + */\n>\n> I can't quite parse this sentence.\n\nI've reworded the whole comment.\nI think it is clearer now.\n\n>\n> > + IOOp io_op = from_ring ? 
IOOP_REUSE : IOOP_EVICT;\n> > +\n> > + pgstat_count_io_op(io_op, IOOBJECT_RELATION, io_context);\n> > + }\n>\n> I'd just inline the variable, but ...\n\nDone.\n\n> > @@ -196,6 +197,7 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> > LocalRefCount[b]++;\n> > ResourceOwnerRememberBuffer(CurrentResourceOwner,\n> > BufferDescriptorGetBuffer(bufHdr));\n> > +\n> > break;\n> > }\n> > }\n>\n> Spurious change.\n\nRemoved.\n\n> > pg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n> >\n> > *foundPtr = false;\n> > +\n> > return bufHdr;\n> > }\n>\n> Dito.\n\nRemoved.\n\n> > +/*\n> > +* IO Operation statistics are not collected for all BackendTypes.\n> > +*\n> > +* The following BackendTypes do not participate in the cumulative stats\n> > +* subsystem or do not do IO operations worth reporting statistics on:\n>\n> s/worth reporting/we currently report/?\n\nUpdated\n\n> > + /*\n> > + * In core Postgres, only regular backends and WAL Sender processes\n> > + * executing queries will use local buffers and operate on temporary\n> > + * relations. 
Parallel workers will not use local buffers (see\n> > + * InitLocalBuffers()); however, extensions leveraging background workers\n> > + * have no such limitation, so track IO Operations on\n> > + * IOOBJECT_TEMP_RELATION for BackendType B_BG_WORKER.\n> > + */\n> > + no_temp_rel = bktype == B_AUTOVAC_LAUNCHER || bktype == B_BG_WRITER || bktype\n> > + == B_CHECKPOINTER || bktype == B_AUTOVAC_WORKER || bktype ==\n> > + B_STANDALONE_BACKEND || bktype == B_STARTUP;\n> > +\n> > + if (no_temp_rel && io_context == IOCONTEXT_BUFFER_POOL && io_object ==\n> > + IOOBJECT_TEMP_RELATION)\n> > + return false;\n>\n> Personally I don't like line breaks on the == and would rather break earlier\n> on the && or ||.\n\nI've gone through and fixed all of these that I could find.\n\n> > + for (int io_context = 0; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> > + {\n> > + PgStatShared_IOObjectOps *shared_objs = &type_shstats->data[io_context];\n> > + PgStat_IOObjectOps *pending_objs = &pending_IOOpStats.data[io_context];\n> > +\n> > + for (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n> > + {\n>\n> Is there any compiler that'd complain if you used IOContext/IOObject/IOOp as the\n> type in the for loop? I don't think so? 
Then you'd not need the casts in other\n> places, which I think would make the code easier to read.\n\nI changed the type and currently get no compiler warnings, however, on\na previous CI run,\nwith the type changed to an enum I got the following warning:\n\n/tmp/cirrus-ci-build/src/include/utils/pgstat_internal.h:605:48:\nerror: no ‘operator++(int)’ declared for postfix ‘++’ [-fpermissive]\n 605 | io_context < IOCONTEXT_NUM_TYPES; io_context++)\n\nI'm not sure why I am no longer getting it.\n\n> > + PgStat_IOOpCounters *sharedent = &shared_objs->data[io_object];\n> > + PgStat_IOOpCounters *pendingent = &pending_objs->data[io_object];\n> > +\n> > + if (!expect_backend_stats ||\n> > + !pgstat_bktype_io_context_io_object_valid(MyBackendType,\n> > + (IOContext) io_context, (IOObject) io_object))\n> > + {\n> > + pgstat_io_context_ops_assert_zero(sharedent);\n> > + pgstat_io_context_ops_assert_zero(pendingent);\n> > + continue;\n> > + }\n> > +\n> > + for (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> > + {\n> > + if (!(pgstat_io_op_valid(MyBackendType, (IOContext) io_context,\n> > + (IOObject) io_object, (IOOp) io_op)))\n>\n> Superfluous parens after the !, I think?\n\nThanks! I've looked for other occurrences as well and fixed them.\n\n> > void\n> > pgstat_report_vacuum(Oid tableoid, bool shared,\n> > @@ -257,10 +257,18 @@ pgstat_report_vacuum(Oid tableoid, bool shared,\n> > }\n> >\n> > pgstat_unlock_entry(entry_ref);\n> > +\n> > + /*\n> > + * Flush IO Operations statistics now. pgstat_report_stat() will flush IO\n> > + * Operation stats, however this will not be called after an entire\n>\n> Missing \"until\"?\n\nFixed.\n\n> > +static inline void\n> > +pgstat_io_op_assert_zero(PgStat_IOOpCounters *counters, IOOp io_op)\n> > +{\n>\n> Does this need to be in pgstat.h? 
Perhaps pgstat_internal.h would suffice,\n> afaict it's not used outside of pgstat code?\n\nIt is used in pgstatfuncs.c during the view creation.\n\n> > +\n> > +/*\n> > + * Assert that stats have not been counted for any combination of IOContext,\n> > + * IOObject, and IOOp which is not valid for the passed-in BackendType. The\n> > + * passed-in array of PgStat_IOOpCounters must contain stats from the\n> > + * BackendType specified by the second parameter. Caller is responsible for\n> > + * locking of the passed-in PgStatShared_IOContextOps, if needed.\n> > + */\n> > +static inline void\n> > +pgstat_backend_io_stats_assert_well_formed(PgStatShared_IOContextOps *backend_io_context_ops,\n> > + BackendType bktype)\n> > +{\n>\n> This doesn't look like it should be an inline function - it's quite long.\n>\n> I think it's also too complicated for the compiler to optimize out if\n> assertions are disabled. So you'd need to handle this with an explicit #ifdef\n> USE_ASSERT_CHECKING.\n\nI've made it a static helper function in pgstat.c.\n\n>\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>io_context</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + The context or location of an IO operation.\n> > + </para>\n> > + <itemizedlist>\n> > + <listitem>\n> > + <para>\n> > + <varname>io_context</varname> <literal>buffer pool</literal> refers to\n> > + IO operations on data in both the shared buffer pool and process-local\n> > + buffer pools used for temporary relation data.\n> > + </para>\n> > + <para>\n>\n> The indentation in the sgml part of the patch seems to be a bit wonky.\n\nI'll address this and the other docs feedback in a separate patchset and email.\n\n> > +Datum\n> > +pg_stat_get_io(PG_FUNCTION_ARGS)\n> > +{\n> > + PgStat_BackendIOContextOps *backends_io_stats;\n> > + ReturnSetInfo *rsinfo;\n> > + Datum reset_time;\n> > +\n> > + InitMaterializedSRF(fcinfo, 0);\n> > + rsinfo = (ReturnSetInfo *) 
fcinfo->resultinfo;\n> > +\n> > + backends_io_stats = pgstat_fetch_backend_io_context_ops();\n> > +\n> > + reset_time = TimestampTzGetDatum(backends_io_stats->stat_reset_timestamp);\n> > +\n> > + for (int bktype = 0; bktype < BACKEND_NUM_TYPES; bktype++)\n> > + {\n> > + Datum bktype_desc = CStringGetTextDatum(GetBackendTypeDesc((BackendType) bktype));\n> > + bool expect_backend_stats = true;\n> > + PgStat_IOContextOps *io_context_ops = &backends_io_stats->stats[bktype];\n> > +\n> > + /*\n> > + * For those BackendTypes without IO Operation stats, skip\n> > + * representing them in the view altogether.\n> > + */\n> > + expect_backend_stats = pgstat_io_op_stats_collected((BackendType)\n> > + bktype);\n> > +\n> > + for (int io_context = 0; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> > + {\n> > + const char *io_context_str = pgstat_io_context_desc(io_context);\n> > + PgStat_IOObjectOps *io_objs = &io_context_ops->data[io_context];\n> > +\n> > + for (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n> > + {\n> > + PgStat_IOOpCounters *counters = &io_objs->data[io_object];\n> > + const char *io_obj_str = pgstat_io_object_desc(io_object);\n> > +\n> > + Datum values[IO_NUM_COLUMNS] = {0};\n> > + bool nulls[IO_NUM_COLUMNS] = {0};\n> > +\n> > + /*\n> > + * Some combinations of IOContext, IOObject, and BackendType are\n> > + * not valid for any type of IOOp. 
In such cases, omit the\n> + * entire row from the view.\n> + */\n> + if (!expect_backend_stats ||\n> + !pgstat_bktype_io_context_io_object_valid((BackendType) bktype,\n> + (IOContext) io_context, (IOObject) io_object))\n> + {\n> + pgstat_io_context_ops_assert_zero(counters);\n> + continue;\n> + }\n>\n> Perhaps mention in a comment two loops up that we don't skip the nested loops\n> despite !expect_backend_stats because we want to assert here?\n\nDone.\n\nI've also removed the test for bulkread reads from regress because\nCREATE DATABASE is expensive and added it to the verify_heapam test\nsince it is one of the only users that unconditionally uses a BULKREAD\nstrategy.\n\nThanks,\nMelanie",
"msg_date": "Mon, 28 Nov 2022 21:05:33 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 12:43 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Note that 001 fails to compile without 002:\n>\n> ../src/backend/storage/buffer/bufmgr.c:1257:43: error: ‘from_ring’ undeclared (first use in this function)\n> 1257 | StrategyRejectBuffer(strategy, buf, from_ring))\n\nThanks!\nI fixed this in version 38 attached in response to Andres upthread [1].\n\n> My \"warnings\" script informed me about these gripes from MSVC:\n>\n> [03:42:30.607] c:\\cirrus>call sh -c 'if grep \": warning \" build.txt; then exit 1; fi; exit 0'\n> [03:42:30.749] c:\\cirrus\\src\\backend\\storage\\buffer\\freelist.c(699) : warning C4715: 'IOContextForStrategy': not all control paths return a value\n> [03:42:30.749] c:\\cirrus\\src\\backend\\utils\\activity\\pgstat_io_ops.c(190) : warning C4715: 'pgstat_io_context_desc': not all control paths return a value\n> [03:42:30.749] c:\\cirrus\\src\\backend\\utils\\activity\\pgstat_io_ops.c(204) : warning C4715: 'pgstat_io_object_desc': not all control paths return a value\n> [03:42:30.749] c:\\cirrus\\src\\backend\\utils\\activity\\pgstat_io_ops.c(226) : warning C4715: 'pgstat_io_op_desc': not all control paths return a value\n> [03:42:30.749] c:\\cirrus\\src\\backend\\utils\\adt\\pgstatfuncs.c(1816) : warning C4715: 'pgstat_io_op_get_index': not all control paths return a value\n\nThanks, I forgot to look at those warnings in CI.\nI added pg_unreachable() and think it silenced the warnings.\n\n> In the docs table, you say things like:\n> | io_context vacuum refers to the IO operations incurred while vacuuming and analyzing.\n>\n> ..but it's a bit unclear (maybe due to the way the docs are rendered).\n> I think it may be more clear to say \"when <io_context> is\n> <vacuum>, ...\"\n\nSo, because I use this language [column name] [column value] so often in\nthe docs, I would prefer a pattern that is as concise as possible. I\nagree it may be hard to see due to the rendering. 
Currently, I am using\n<varname> tags for the column name and <literal> tags for the column\nvalue. Is there another tag type I could use to perhaps make this more\nclear without adding additional words?\n\nThis is what the code looks like for the above docs text:\n<varname>io_context</varname> <literal>vacuum</literal> refers to the IO\n\n> | acquiring the equivalent number of shared buffers\n>\n> I don't think \"equivelent\" fits here, since it's actually acquiring a\n> different number of buffers.\n\nI'm planning to do docs changes in a separate patchset after addressing\ncode feedback. I plan to change \"equivalent\" to \"corresponding\" here.\n\n> There's a missing period before \" The difference is\"\n>\n> The sentence beginning \"read plus extended for backend_types\" is difficult to\n> parse due to having a bulleted list in its middle.\n\nWill address in future version.\n\n> There aren't many references to \"IOOps\", which is good, because I\n> started to read it as \"I oops\".\n\nGrep'ing for this in the code, I only use the word IOOp(s) in the code\nwhen I very clearly want to use the type name -- and never in the docs.\nBut, yes, it does look like \"I oops\" :)\n\n>\n> + * Flush IO Operations statistics now. 
pgstat_report_stat() will flush IO\n> + * Operation stats, however this will not be called after an entire\n>\n> => I think that's intended to say *until* after ?\n\nFixed in v38.\n\n> + * Functions to assert that invalid IO Operation counters are zero.\n>\n> => There's a missing newline above this comment.\n\nFixed in v38.\n\n> + Assert(counters->evictions == 0 && counters->extends == 0 &&\n> + counters->fsyncs == 0 && counters->reads == 0 && counters->reuses\n> + == 0 && counters->writes == 0);\n>\n> => It'd be more readable and also maybe help debugging if these were separate\n> assertions.\n\nI have made this change.\n\n> +pgstat_io_op_stats_collected(BackendType bktype)\n> +{\n> + return bktype != B_INVALID && bktype != B_ARCHIVER && bktype != B_LOGGER &&\n> + bktype != B_WAL_RECEIVER && bktype != B_WAL_WRITER;\n>\n> Similar: I'd prefer to see this as 5 \"ifs\" or a \"switch\" to return\n> false, else return true. But YMMV.\n\nI don't know that separating it into multiple if statements or a switch\nwould make it more clear to me or help me with debugging here.\n\nSeparately, since this is used in non-assert builds, I would like to\nensure it is efficient. Do you know if a switch or if statements will\nbe compiled to the exact same thing as this at useful optimization\nlevels?\n\n>\n> + * CREATE TEMPORRARY TABLE AS ...\n>\n> => typo: temporary\n\nFixed in v38.\n\n>\n> + if (strategy_io_context && io_op == IOOP_FSYNC)\n>\n> => Extra space.\n\nFixed.\n\n>\n> pgstat_count_io_op() has a superflous newline before \"}\".\n\nI couldn't find the one you are referencing.\nDo you mind pasting in the code?\n\nThanks,\nMelanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_Zvaj_yFA_eiSRrLZsjhT0J8cJ044QhZfKuXq6WN5bu5g%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 21:08:36 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Thanks for the review, Maciek!\n\nI've attached a new version 39 of the patch which addresses your docs\nfeedback from this email as well as docs feedback from Andres in [1] and\nJustin in [2].\n\nI've made some additional code changes addressing a few of their other\npoints as well, and I've moved the verify_heapam test to a plain sql\ntest in contrib/amcheck instead of putting it in the perl test.\n\nThis patchset also includes various cleanup, pgindenting, and addressing\nthe sgml indentation issue brought up in the thread.\n\nOn Mon, Nov 7, 2022 at 1:26 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Thu, Nov 3, 2022 at 10:00 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>\n> > > I'm reviewing the rendered docs now, and I noticed sentences like this\n> > > are a bit hard to scan: they force the reader to parse a big list of\n> > > backend types before even getting to the meat of what this is talking\n> > > about. Should we maybe reword this so that the backend list comes at\n> > > the end of the sentence? Or maybe even use a list (e.g., like in the\n> > > \"state\" column description in pg_stat_activity)?\n> >\n> > Good idea with the bullet points.\n> > For the lengthy lists, I've added bullet point lists to the docs for\n> > several of the columns. It is quite long now but, hopefully, clearer?\n> > Let me know if you think it improves the readability.\n>\n> Hmm, I should have tried this before suggesting it. I think the lists\n> break up the flow of the column description too much. What do you\n> think about the attached (on top of your patches--attaching it as a\n> .diff to hopefully not confuse cfbot)? I kept the lists for backend\n> types but inlined the others as a middle ground. 
I also added a few\n> omitted periods and reworded \"read plus extended\" to avoid starting\n> the sentence with a (lowercase) varname (I think in general it's fine\n> to do that, but the more complicated sentence structure here makes it\n> easier to follow if the sentence starts with a capital).\n>\n> Alternately, what do you think about pulling equivalencies to existing\n> views out of the main column descriptions, and adding them after the\n> main table as a sort of footnote? Most view docs don't have anything\n> like that, but pg_stat_replication does and it might be a good pattern\n> to follow.\n>\n> Thoughts?\n\nThanks for including a patch!\nIn the attached v39, I've taken your suggestion of flattening some of\nthe lists and done some rewording as well. I have also moved the note\nabout equivalence with pg_stat_statements columns to the\npg_stat_statements documentation. The result is quite a bit different\nthan what I had before, so I would be interested to hear your thoughts.\n\nMy concern with the blue \"note\" section like you mentioned is that it\nwould be harder to read the lists of backend types than it was in the\ntabular format.\n\n> > > + <varname>io_context</varname>s. When a <quote>Buffer Access\n> > > + Strategy</quote> reuses a buffer in the strategy ring, it must evict its\n> > > + contents, incrementing <varname>reused</varname>. When a <quote>Buffer\n> > > + Access Strategy</quote> adds a new shared buffer to the strategy ring\n> > > + and this shared buffer is occupied, the <quote>Buffer Access\n> > > + Strategy</quote> must evict the contents of the shared buffer,\n> > > + incrementing <varname>evicted</varname>.\n> > >\n> > > I think the parallel phrasing here makes this a little hard to follow.\n> > > Specifically, I think \"must evict its contents\" for the strategy case\n> > > sounds like a bad thing, but in fact this is a totally normal thing\n> > > that happens as part of strategy access, no? 
The idea is you probably\n> > > won't need that buffer again, so it's fine to evict it. I'm not sure\n> > > how to reword, but I think the current phrasing is misleading.\n> >\n> > I had trouble rephrasing this. I changed a few words. I see what you\n> > mean. It is worth noting that reusing strategy buffers when there are\n> > buffers on the freelist may not be the best behavior, so I wouldn't\n> > necessarily consider \"reused\" a good thing. However, I'm not sure how\n> > much the user could really do about this. I would at least like this\n> > phrasing to be clear (evicted is for shared buffers, reused is for\n> > strategy buffers), so, perhaps this section requires more work.\n>\n> Oh, I see. I think the updated wording works better. Although I think\n> we can drop the quotes around \"Buffer Access Strategy\" here. They're\n> useful when defining the term originally, but after that I think it's\n> clearer to use the term unquoted.\n\nThanks! I've fixed this.\n\n> Just to understand this better myself, though: can you clarify when\n> \"reused\" is not a normal, expected part of the strategy execution? I\n> was under the impression that a ring buffer is used because each page\n> is needed only \"once\" (i.e., for one set of operations) for the\n> command using the strategy ring buffer. Naively, in that situation, it\n> seems better to reuse a no-longer-needed buffer than to claim another\n> buffer from the freelist (where other commands may eventually make\n> better use of it).\n\nYou are right: reused is a normal, expected part of strategy\nexecution. 
And you are correct: the idea behind reusing existing\nstrategy buffers instead of taking buffers off the freelist is to leave\nthose buffers for blocks that we might expect to be accessed more than\nonce.\n\nIn practice, however, if you happen to not be using many shared buffers,\nand then do a large COPY, for example, you will end up doing a bunch of\nwrites (in order to reuse the strategy buffers) that you perhaps didn't\nneed to do at that time had you leveraged the freelist. I think the\ndecision about which tradeoff to make is quite contentious, though.\n\n> Some more notes on the docs patch:\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>io_context</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + The context or location of an IO operation.\n> + </para>\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <varname>io_context</varname> <literal>buffer pool</literal> refers to\n> + IO operations on data in both the shared buffer pool and process-local\n> + buffer pools used for temporary relation data.\n> + </para>\n> + <para>\n> + Operations on temporary relations are tracked in\n> + <varname>io_context</varname> <literal>buffer pool</literal> and\n> + <varname>io_object</varname> <literal>temp relation</literal>.\n> + </para>\n> + <para>\n> + Operations on permanent relations are tracked in\n> + <varname>io_context</varname> <literal>buffer pool</literal> and\n> + <varname>io_object</varname> <literal>relation</literal>.\n> + </para>\n> + </listitem>\n>\n> For this column, you repeat \"io_context\" in the list describing the\n> possible values of the column. Enum-style columns in other tables\n> don't do that (e.g., the pg_stat_activty \"state\" column). 
I think it\n> might read better to omit \"io_context\" from the list.\n\nI changed this.\n\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>io_object</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Object operated on in a given <varname>io_context</varname> by a given\n> + <varname>backend_type</varname>.\n> + </para>\n>\n> Is this a fixed set of objects we should list, like for io_context?\n\nI've added this.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20221121003815.qnwlnz2lhkow2e5w%40awork3.anarazel.de\n[2] https://www.postgresql.org/message-id/20221123054329.GG11463%40telsasoft.com",
"msg_date": "Tue, 29 Nov 2022 20:12:47 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 09:08:36PM -0500, Melanie Plageman wrote:\n> > +pgstat_io_op_stats_collected(BackendType bktype)\n> > +{\n> > + return bktype != B_INVALID && bktype != B_ARCHIVER && bktype != B_LOGGER &&\n> > + bktype != B_WAL_RECEIVER && bktype != B_WAL_WRITER;\n> >\n> > Similar: I'd prefer to see this as 5 \"ifs\" or a \"switch\" to return\n> > false, else return true. But YMMV.\n> \n> I don't know that separating it into multiple if statements or a switch\n> would make it more clear to me or help me with debugging here.\n> \n> Separately, since this is used in non-assert builds, I would like to\n> ensure it is efficient. Do you know if a switch or if statements will\n> be compiled to the exact same thing as this at useful optimization\n> levels?\n\nThis doesn't seem like a detail worth much bother, but I did a test.\n\nWith -O2 (but not -O1 nor -Og) the assembly (gcc 9.4) is the same when\nwritten like:\n\n+ if (bktype == B_INVALID)\n+ return false;\n+ if (bktype == B_ARCHIVER)\n+ return false;\n+ if (bktype == B_LOGGER)\n+ return false;\n+ if (bktype == B_WAL_RECEIVER)\n+ return false;\n+ if (bktype == B_WAL_WRITER)\n+ return false;\n+\n+ return true;\n\nobjdump --disassemble=pgstat_io_op_stats_collected src/backend/postgres_lib.a.p/utils_activity_pgstat_io_ops.c.o\n\n0000000000000110 <pgstat_io_op_stats_collected>:\n 110: f3 0f 1e fa endbr64 \n 114: b8 01 00 00 00 mov $0x1,%eax\n 119: 83 ff 0d cmp $0xd,%edi\n 11c: 77 10 ja 12e <pgstat_io_op_stats_collected+0x1e>\n 11e: b8 03 29 00 00 mov $0x2903,%eax\n 123: 89 f9 mov %edi,%ecx\n 125: 48 d3 e8 shr %cl,%rax\n 128: 48 f7 d0 not %rax\n 12b: 83 e0 01 and $0x1,%eax\n 12e: c3 retq \n\nI was surprised, but the assembly is *not* the same when I used a switch{}.\n\nI think it's fine to write however you want.\n\n> > pgstat_count_io_op() has a superflous newline before \"}\".\n> \n> I couldn't find the one you are referencing.\n> Do you mind pasting in the code?\n\n+ case IOOP_WRITE:\n+ 
pending_counters->writes++;\n+ break;\n+ }\n+ --> here <--\n+}\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 29 Nov 2022 20:51:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 5:13 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Thanks for the review, Maciek!\n>\n> I've attached a new version 39 of the patch which addresses your docs\n> feedback from this email as well as docs feedback from Andres in [1] and\n> Justin in [2].\n\nThis looks great! Just a couple of minor comments.\n\n> You are right: reused is a normal, expected part of strategy\n> execution. And you are correct: the idea behind reusing existing\n> strategy buffers instead of taking buffers off the freelist is to leave\n> those buffers for blocks that we might expect to be accessed more than\n> once.\n>\n> In practice, however, if you happen to not be using many shared buffers,\n> and then do a large COPY, for example, you will end up doing a bunch of\n> writes (in order to reuse the strategy buffers) that you perhaps didn't\n> need to do at that time had you leveraged the freelist. I think the\n> decision about which tradeoff to make is quite contentious, though.\n\nThanks for the explanation--that makes sense.\n\n> On Mon, Nov 7, 2022 at 1:26 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> > Alternately, what do you think about pulling equivalencies to existing\n> > views out of the main column descriptions, and adding them after the\n> > main table as a sort of footnote? Most view docs don't have anything\n> > like that, but pg_stat_replication does and it might be a good pattern\n> > to follow.\n> >\n> > Thoughts?\n>\n> Thanks for including a patch!\n> In the attached v39, I've taken your suggestion of flattening some of\n> the lists and done some rewording as well. I have also moved the note\n> about equivalence with pg_stat_statements columns to the\n> pg_stat_statements documentation. 
The result is quite a bit different\n> than what I had before, so I would be interested to hear your thoughts.\n>\n> My concern with the blue \"note\" section like you mentioned is that it\n> would be harder to read the lists of backend types than it was in the\n> tabular format.\n\nOh, I wasn't thinking of doing a separate \"note\": just additional\nparagraphs of text after the table (like what pg_stat_replication has\nbefore its \"note\", or the brief comment after the pg_stat_archiver\ntable). But I think the updated docs work also.\n\n+ <para>\n+ The context or location of an IO operation.\n+ </para>\n\nmaybe \"...of an IO operation:\" (colon) instead?\n\n+ default. Future values could include those derived from\n+ <symbol>XLOG_BLCKSZ</symbol>, once WAL IO is tracked in this view, and\n+ constant multipliers once non-block-oriented IO (e.g. temporary file IO)\n+ is tracked here.\n\nI know Lukas had commented that we should communicate that the goal is\nto eventually provide relatively comprehensive I/O stats in this view\n(you do that in the view description and I think that works), and this\nis sort of along those lines, but I think speculative documentation\nlike this is not all that helpful. I'd drop this last sentence. Just\nmy two cents.\n\n+ <para>\n+ <varname>evicted</varname> in <varname>io_context</varname>\n+ <literal>buffer pool</literal> and <varname>io_object</varname>\n+ <literal>temp relation</literal> counts the number of times a block of\n+ data from an existing local buffer was evicted in order to replace it\n+ with another block, also in local buffers.\n+ </para>\n\nDoesn't this follow from the first sentence of the column description?\nI think we could drop this, no?\n\nOtherwise, the docs look good to me.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Sun, 4 Dec 2022 14:48:43 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\n- I think it might be worth renaming IOCONTEXT_BUFFER_POOL to\n IOCONTEXT_{NORMAL, PLAIN, DEFAULT}. I'd like at some point to track WAL IO,\n temporary file IO etc, and it doesn't seem useful to define a version of\n BUFFER_POOL for each of them. And it'd make it less confusing, because all\n the other existing contexts are also in the buffer pool (for now, can't wait\n for \"bypass\" or whatever to be tracked as well).\n\n- given that IOContextForStrategy() is defined in freelist.c, and that\n declaring it in pgstat.h requires including buf.h, I think it's probably\n better to move IOContextForStrategy()'s declaration to freelist.h (doesn't\n exist, but whatever the right one is)\n\n- pgstat_backend_io_stats_assert_well_formed() doesn't seem to belong in\n pgstat.c. Why not pgstat_io_ops.c?\n\n- Do pgstat_io_context_ops_assert_zero(), pgstat_io_op_assert_zero() have to\n be in pgstat.h?\n\n\nI think the only non-trivial thing is the first point, the rest is stuff that I\nalso evolve during commit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Dec 2022 11:32:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Attached is v40.\n\nI have addressed the feedback from Justin [1] and Maciek [2] as well.\nI took all of the suggestions regarding the docs that Maciek made,\nincluding the following:\n\n> + default. Future values could include those derived from\n> + <symbol>XLOG_BLCKSZ</symbol>, once WAL IO is tracked in this view, and\n> + constant multipliers once non-block-oriented IO (e.g. temporary file IO)\n> + is tracked here.\n>\n>\n> I know Lukas had commented that we should communicate that the goal is\n> to eventually provide relatively comprehensive I/O stats in this view\n> (you do that in the view description and I think that works), and this\n> is sort of along those lines, but I think speculative documentation\n> like this is not all that helpful. I'd drop this last sentence. Just\n> my two cents.\n\nI have removed this and added the relevant part of this as a comment to\nthe view generating function pg_stat_get_io().\n\nOn Mon, Dec 5, 2022 at 2:32 PM Andres Freund <andres@anarazel.de> wrote:\n> - I think it might be worth to rename IOCONTEXT_BUFFER_POOL to\n> IOCONTEXT_{NORMAL, PLAIN, DEFAULT}. I'd like at some point to track WAL IO ,\n> temporary file IO etc, and it doesn't seem useful to define a version of\n> BUFFER_POOL for each of them. And it'd make it less confusing, because all\n> the other existing contexts are also in the buffer pool (for now, can't wait\n> for \"bypass\" or whatever to be tracked as well).\n\nIn attached v40, I've renamed IOCONTEXT_BUFFER_POOL to IOCONTEXT_NORMAL.\n\n> - given that IOContextForStrategy() is defined in freelist.c, and that\n> declaring it in pgstat.h requires including buf.h, I think it's probably\n> better to move IOContextForStrategy()'s declaration to freelist.h (doesn't\n> exist, but whatever the right one is)\n\nI have moved it to buf_internals.h.\n\n> - pgstat_backend_io_stats_assert_well_formed() doesn't seem to belong in\n> pgstat.c. 
Why not pgstat_io_ops.c?\n\nI put it in pgstat.c because it is only used there -- so I made it\nstatic. I've moved it to pg_stat_io_ops.c and declared it in\npgstat_internal.h\n\n> - Do pgstat_io_context_ops_assert_zero(), pgstat_io_op_assert_zero() have to\n> be in pgstat.h?\n\nThey are used in pgstatfuncs.c, which I presume should not include\npgstat_internal.h. Or did you mean that I should not put them in a\nheader file at all?\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20221130025113.GD24131%40telsasoft.com\n[2] https://www.postgresql.org/message-id/CAOtHd0BfFdMqO7-zDOk%3DiJTatzSDgVcgYcaR1_wk0GS4NN%2BRUQ%40mail.gmail.com",
"msg_date": "Mon, 5 Dec 2022 20:49:20 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "In the pg_stat_statements docs, there are several column descriptions like\n\n Total number of ... by the statement\n\nYou added an additional sentence to some describing the equivalent\npg_stat_io values, but you only added a period to the previous\nsentence for shared_blks_read (for other columns, the additional\ndescription just follows directly). These should be consistent.\n\nOtherwise, the docs look good to me.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 21:09:18 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 13:42:09 -0400, Melanie Plageman wrote:\n> > Additionally, some minor notes:\n> >\n> > - Since the stats are counting blocks, it would make sense to prefix the view columns with \"blks_\", and word them in the past tense (to match current style), i.e. \"blks_written\", \"blks_read\", \"blks_extended\", \"blks_fsynced\" (realistically one would combine this new view with other data e.g. from pg_stat_database or pg_stat_statements, which all use the \"blks_\" prefix, and stop using pg_stat_bgwriter for this which does not use such a prefix)\n>\n> I have changed the column names to be in the past tense.\n\nFor a while I was convinced by the consistency argument (after Melanie\npointing it out to me). But the more I look, the less convinced I am. The\nexisting IO related stats in pg_stat_database, pg_stat_bgwriter aren't past\ntense, just the ones in pg_stat_statements. pg_stat_database uses past tense\nfor tup_*, but not xact_*, deadlocks, checksum_failures etc.\n\nAnd even pg_stat_statements isn't consistent about it - otherwise it'd be\n'planned' instead of 'plans', 'called' instead of 'calls' etc.\n\nI started to look at the naming \"tense\" issue again, after I got \"confused\"\nabout \"extended\", because that somehow makes me think about more detailed\nstats or such, rather than files getting extended.\n\nISTM that 'evictions', 'extends', 'fsyncs', 'reads', 'reuses', 'writes' are\nclearer than the past tense versions, and about as consistent with existing\ncolumns.\n\n\nFWIW, I've been hacking on this code a bunch, mostly around renaming things\nand changing the 'stacking' of the patches. My current state is at\nhttps://github.com/anarazel/postgres/tree/pg_stat_io\nA bit more to do before posting the edited version...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Dec 2022 15:56:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 6:56 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> FWIW, I've been hacking on this code a bunch, mostly around renaming things\n> and changing the 'stacking' of the patches. My current state is at\n> https://github.com/anarazel/postgres/tree/pg_stat_io\n> A bit more to do before posting the edited version...\n\nHere is the bit more done.\nI've attached a new version 42 which incorporates all of Andres' changes\non his branch (which I am considering version 41).\nI have fixed various issues with counting fsyncs and added more comments\nand done cosmetic cleanup.\n\nThe docs have substantial changes but still require more work:\n\n- The comparisons between columns in pg_stat_io and pg_stat_statements\n have been removed, since the granularity and lifetime are so\n different, comparing them isn't quite correct.\n\n- The lists of backend types still take up a lot of visual space in the\n definitions, which doesn't look great. I'm not sure what to do about\n that.\n\n- Andres has pointed out that it is difficult to read the definitions of\n the columns because of the added clutter of the interpretations and\n the comparisons to other stats views. I'm not sure if I should cut\n these. He and I tried adding that information as a note and in other\n various table types, however none of the alternatives were an\n improvement.\n\nBesides docs, there is one large change to the code which I am currently\nworking on, which is to change PgStat_IOOpCounters into an array of\nPgStatCounters instead of having individual members for each IOOp type.\nI hadn't done this previously because the additional level of nesting\nseemed confusing. However, it seems it would simplify the code quite a\nbit and is probably worth doing.\n\n- Melanie",
"msg_date": "Mon, 2 Jan 2023 17:46:22 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 5:46 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Besides docs, there is one large change to the code which I am currently\n> working on, which is to change PgStat_IOOpCounters into an array of\n> PgStatCounters instead of having individual members for each IOOp type.\n> I hadn't done this previously because the additional level of nesting\n> seemed confusing. However, it seems it would simplify the code quite a\n> bit and is probably worth doing.\n\nAs described above, attached v43 uses an array for the PgStatCounters of\nIOOps instead of struct members.",
"msg_date": "Mon, 2 Jan 2023 20:15:54 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 8:15 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Mon, Jan 2, 2023 at 5:46 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > Besides docs, there is one large change to the code which I am currently\n> > working on, which is to change PgStat_IOOpCounters into an array of\n> > PgStatCounters instead of having individual members for each IOOp type.\n> > I hadn't done this previously because the additional level of nesting\n> > seemed confusing. However, it seems it would simplify the code quite a\n> > bit and is probably worth doing.\n>\n> As described above, attached v43 uses an array for the PgStatCounters of\n> IOOps instead of struct members.\n\nThis wasn't quite a multi-dimensional array. Attached is v44, in which I\nhave removed all of the granular struct types -- PgStat_IOOps,\nPgStat_IOContext, and PgStat_IOObject by collapsing them into a single\narray of PgStat_Counters in a new struct PgStat_BackendIO. I needed to\nkeep this in addition to PgStat_IO to have a data type for backends to\ntrack their stats in locally.\n\nI've also done another round of cleanup.\n\n- Melanie",
"msg_date": "Wed, 4 Jan 2023 17:56:07 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Attached is v45 of the patchset. I've done some additional code cleanup\nand changes. The most significant change, however, is the docs. I've\nseparated the docs into its own patch for ease of review.\n\nThe docs patch here was edited and co-authored by Samay Sharma.\nI'm not sure if the order of pg_stat_io in the docs is correct.\n\nThe significant changes are removal of all \"correspondence\" or\n\"equivalence\"-related sections (those explaining how other IO stats were\nthe same or different from pg_stat_io columns).\n\nI've tried to remove references to \"strategies\" and \"Buffer Access\nStrategy\" as much as possible.\n\nI've moved the advice and interpretation section to the bottom --\noutside of the table of definitions. Since this page is primarily a\nreference page, I agree with Samay that incorporating interpretation\ninto the column definitions adds clutter and confusion.\n\nI think the best course would be to have an \"Interpreting Statistics\"\nsection.\n\nI suggest a structure like the following for this section:\n - Statistics Collection Configuration\n - Viewing Statistics\n - Statistics Views Reference\n - Statistics Functions Reference\n - Interpreting Statistics\n\nAs an aside, this section of the docs has some other structural issues\nas well.\n\nFor example, I'm not sure it makes sense to have the dynamic statistics\nviews as sub-sections under 28.2, which is titled \"The Cumulative\nStatistics System.\"\n\nIn fact the docs say this under Section 28.2\nhttps://www.postgresql.org/docs/current/monitoring-stats.html\n\n\"PostgreSQL also supports reporting dynamic information about exactly\nwhat is going on in the system right now, such as the exact command\ncurrently being executed by other server processes, and which other\nconnections exist in the system. 
This facility is independent of the\ncumulative statistics system.\"\n\nSo, it is a bit weird that they are defined under the section titled\n\"The Cumulative Statistics System\".\n\nIn this version of the patchset, I have not attempted a new structure\nbut instead moved the advice/interpretation for pg_stat_io to below the\ntable containing the column definitions.\n\n- Melanie",
"msg_date": "Mon, 9 Jan 2023 16:10:47 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 02:41, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Attached is v45 of the patchset. I've done some additional code cleanup\n> and changes. The most significant change, however, is the docs. I've\n> separated the docs into its own patch for ease of review.\n>\n> The docs patch here was edited and co-authored by Samay Sharma.\n> I'm not sure if the order of pg_stat_io in the docs is correct.\n>\n> The significant changes are removal of all \"correspondence\" or\n> \"equivalence\"-related sections (those explaining how other IO stats were\n> the same or different from pg_stat_io columns).\n>\n> I've tried to remove references to \"strategies\" and \"Buffer Access\n> Strategy\" as much as possible.\n>\n> I've moved the advice and interpretation section to the bottom --\n> outside of the table of definitions. Since this page is primarily a\n> reference page, I agree with Samay that incorporating interpretation\n> into the column definitions adds clutter and confusion.\n>\n> I think the best course would be to have an \"Interpreting Statistics\"\n> section.\n>\n> I suggest a structure like the following for this section:\n> - Statistics Collection Configuration\n> - Viewing Statistics\n> - Statistics Views Reference\n> - Statistics Functions Reference\n> - Interpreting Statistics\n>\n> As an aside, this section of the docs has some other structural issues\n> as well.\n>\n> For example, I'm not sure it makes sense to have the dynamic statistics\n> views as sub-sections under 28.2, which is titled \"The Cumulative\n> Statistics System.\"\n>\n> In fact the docs say this under Section 28.2\n> https://www.postgresql.org/docs/current/monitoring-stats.html\n>\n> \"PostgreSQL also supports reporting dynamic information about exactly\n> what is going on in the system right now, such as the exact command\n> currently being executed by other server processes, and which other\n> connections exist in the system. 
This facility is independent of the\n> cumulative statistics system.\"\n>\n> So, it is a bit weird that they are defined under the section titled\n> \"The Cumulative Statistics System\".\n>\n> In this version of the patchset, I have not attempted a new structure\n> but instead moved the advice/interpretation for pg_stat_io to below the\n> table containing the column definitions.\n\nFor some reason cfbot is not able to apply this patch as in [1],\nplease have a look and post an updated patch if required:\n=== Applying patches on top of PostgreSQL commit ID\n3c6fc58209f24b959ee18f5d19ef96403d08f15c ===\n=== applying patch\n./v45-0001-pgindent-and-some-manual-cleanup-in-pgstat-relat.patch\npatching file src/backend/storage/buffer/bufmgr.c\npatching file src/backend/storage/buffer/localbuf.c\npatching file src/backend/utils/activity/pgstat.c\npatching file src/backend/utils/activity/pgstat_relation.c\npatching file src/backend/utils/adt/pgstatfuncs.c\npatching file src/include/pgstat.h\npatching file src/include/utils/pgstat_internal.h\n=== applying patch ./v45-0002-pgstat-Infrastructure-to-track-IO-operations.patch\ngpatch: **** Only garbage was found in the patch input.\n\n[1] - http://cfbot.cputube.org/patch_41_3272.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:02:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "> Subject: [PATCH v45 4/5] Add system view tracking IO ops per backend type\n\nThe patch can/will fail with:\n\nCREATE TABLESPACE test_io_shared_stats_tblspc LOCATION '';\n+WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n\nCREATE TABLESPACE test_stats LOCATION '';\n+WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n\n(I already sent patches to address the omission in cirrus.yml)\n\n1760 : errhint(\"Target must be \\\"archiver\\\", \\\"io\\\", \\\"bgwriter\\\", \\\"recovery_prefetch\\\", or \\\"wal\\\".\")));\n=> Do you want to put these in order?\n\npgstat_get_io_op_name() isn't currently being hit by tests; actually,\nit's completely unused.\n\nFlushRelationBuffers() isn't being hit for local buffers.\n\n> + <entry><structname>pg_stat_io</structname><indexterm><primary>pg_stat_io</primary></indexterm></entry>\n> + <entry>\n> + One row per backend type, context, target object combination showing\n> + cluster-wide I/O statistics.\n\nI suggest: \"One row for each combination of of ..\"\n\n> + The <structname>pg_stat_io</structname> and\n> + <structname>pg_statio_</structname> set of views are especially useful for\n> + determining the effectiveness of the buffer cache. 
When the number of actual\n> + disk reads is much smaller than the number of buffer hits, then the cache is\n> + satisfying most read requests without invoking a kernel call.\n\nI would change this say \"Postgres' own buffer cache is satisfying ...\"\n\n> However, these\n> + statistics do not give the entire story: due to the way in which\n> + <productname>PostgreSQL</productname> handles disk I/O, data that is not in\n> + the <productname>PostgreSQL</productname> buffer cache might still reside in\n> + the kernel's I/O cache, and might therefore still be fetched without\n\nI suggest to refer to \"the kernel's page cache\"\n\n> + The <structname>pg_stat_io</structname> view will contain one row for each\n> + backend type, I/O context, and target I/O object combination showing\n> + cluster-wide I/O statistics. Combinations which do not make sense are\n> + omitted.\n\n\"..for each combination of ..\"\n\n> + <varname>io_context</varname> for a type of I/O operation. For\n\n\"for I/O operations\"\n\n> + <literal>vacuum</literal>: I/O operations done outside of shared\n> + buffers incurred while vacuuming and analyzing permanent relations.\n\ns/incurred/performed/\n\n> + <literal>bulkread</literal>: Qualifying large read I/O operations\n> + done outside of shared buffers, for example, a sequential scan of a\n> + large table.\n\nI don't think it's correct to say that it's \"outside of\" shared-buffers.\ns/Qualifying/Certain/\n\n> + <literal>bulkwrite</literal>: Qualifying large write I/O operations\n> + done outside of shared buffers, such as <command>COPY</command>.\n\nSame\n\n> + Target object of an I/O operation. 
Possible values are:\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <literal>relation</literal>: This includes permanent relations.\n\nIt says \"includes permanent\" but what seems to mean is that it\n\"exclusive of temporary relations\".\n\n> + <row>\n> + <entry role=\"catalog_table_entry\">\n> + <para role=\"column_definition\">\n> + <structfield>read</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of read operations in units of <varname>op_bytes</varname>.\n\nThis looks too much like it means \"bytes\".\nShould say: \"in number of blocks of size >op_bytes<\"\n\nBut wait - is it the number of read operations \"in units of op_bytes\"\n(which would means this already multiplied by op_bytes, and is in units\nof bytes).\n\nOr the \"number of read operations\" *of* op_bytes chunks ? Which would\nmean this is a \"pure\" number, and could be multipled by op_bytes to\nobtain a size in bytes.\n\n> + Number of write operations in units of <varname>op_bytes</varname>.\n\n> + Number of relation extend operations in units of\n> + <varname>op_bytes</varname>.\n\nsame\n\n> + In <varname>io_context</varname> <literal>normal</literal>, this counts\n> + the number of times a block was evicted from a buffer and replaced with\n> + another block. In <varname>io_context</varname>s\n> + <literal>bulkwrite</literal>, <literal>bulkread</literal>, and\n> + <literal>vacuum</literal>, this counts the number of times a block was\n> + evicted from shared buffers in order to add the shared buffer to a\n> + separate size-limited ring buffer.\n\nThis never defines what \"evicted\" means. 
Does it mea that a dirty\nbuffer was written out ?\n\n> + The number of times an existing buffer in a size-limited ring buffer\n> + outside of shared buffers was reused as part of an I/O operation in the\n> + <literal>bulkread</literal>, <literal>bulkwrite</literal>, or\n> + <literal>vacuum</literal> <varname>io_context</varname>s.\n\nMaybe say \"as part of a bulk I/O operation (bulkread, bulkwrite, or\nvacuum).\"\n\n> + <para>\n> + <structname>pg_stat_io</structname> can be used to inform database tuning.\n\n> + For example:\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + A high <varname>evicted</varname> count can indicate that shared buffers\n> + should be increased.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + Client backends rely on the checkpointer to ensure data is persisted to\n> + permanent storage. Large numbers of <varname>files_synced</varname> by\n> + <literal>client backend</literal>s could indicate a misconfiguration of\n> + shared buffers or of checkpointer. More information on checkpointer\n\nof *the* checkpointer\n\n> + Normally, client backends should be able to rely on auxiliary processes\n> + like the checkpointer and background writer to write out dirty data as\n\n*the* bg writer\n\n> + much as possible. Large numbers of writes by client backends could\n> + indicate a misconfiguration of shared buffers or of checkpointer. More\n\n*the* ckpointer\n\nShould this link to various docs for checkpointer/bgwriter ?\n\nMaybe the docs for ALTER/COPY/VACUUM/CREATE/etc should be updated to\nrefer to some central description of ring buffers. Maybe something\nshould be included to the appendix.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:58:51 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Attached is v46.\n\nOn Wed, Dec 28, 2022 at 6:56 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-10-06 13:42:09 -0400, Melanie Plageman wrote:\n> > > Additionally, some minor notes:\n> > >\n> > > - Since the stats are counting blocks, it would make sense to prefix the view columns with \"blks_\", and word them in the past tense (to match current style), i.e. \"blks_written\", \"blks_read\", \"blks_extended\", \"blks_fsynced\" (realistically one would combine this new view with other data e.g. from pg_stat_database or pg_stat_statements, which all use the \"blks_\" prefix, and stop using pg_stat_bgwriter for this which does not use such a prefix)\n> >\n> > I have changed the column names to be in the past tense.\n>\n> For a while I was convinced by the consistency argument (after Melanie\n> pointing it out to me). But the more I look, the less convinced I am. The\n> existing IO related stats in pg_stat_database, pg_stat_bgwriter aren't past\n> tense, just the ones in pg_stat_statements. 
pg_stat_database uses past tense\n> for tup_*, but not xact_*, deadlocks, checksum_failures etc.\n>\n> And even pg_stat_statements isn't consistent about it - otherwise it'd be\n> 'planned' instead of 'plans', 'called' instead of 'calls' etc.\n>\n> I started to look at the naming \"tense\" issue again, after I got \"confused\"\n> about \"extended\", because that somehow makes me think about more detailed\n> stats or such, rather than files getting extended.\n>\n> ISTM that 'evictions', 'extends', 'fsyncs', 'reads', 'reuses', 'writes' are\n> clearer than the past tense versions, and about as consistent with existing\n> columns.\n\nI have updated the column names to the above recommendation.\n\nOn Wed, Jan 11, 2023 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> For some reason cfbot is not able to apply this patch as in [1],\n> please have a look and post an updated patch if required:\n> === Applying patches on top of PostgreSQL commit ID\n> 3c6fc58209f24b959ee18f5d19ef96403d08f15c ===\n> === applying patch\n> ./v45-0001-pgindent-and-some-manual-cleanup-in-pgstat-relat.patch\n> patching file src/backend/storage/buffer/bufmgr.c\n> patching file src/backend/storage/buffer/localbuf.c\n> patching file src/backend/utils/activity/pgstat.c\n> patching file src/backend/utils/activity/pgstat_relation.c\n> patching file src/backend/utils/adt/pgstatfuncs.c\n> patching file src/include/pgstat.h\n> patching file src/include/utils/pgstat_internal.h\n> === applying patch ./v45-0002-pgstat-Infrastructure-to-track-IO-operations.patch\n> gpatch: **** Only garbage was found in the patch input.\n>\n> [1] - http://cfbot.cputube.org/patch_41_3272.log\n>\n\nThis was an issue with cfbot that Thomas has now fixed as he describes\nin [1].\n\nOn Wed, Jan 11, 2023 at 4:58 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > Subject: [PATCH v45 4/5] Add system view tracking IO ops per backend type\n>\n> The patch can/will fail with:\n>\n> CREATE TABLESPACE test_io_shared_stats_tblspc 
LOCATION '';\n> +WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n>\n> CREATE TABLESPACE test_stats LOCATION '';\n> +WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n>\n> (I already sent patches to address the omission in cirrus.yml)\n\nThanks. I've fixed this\nI make a tablespace in amcheck -- are there recommendations for naming\ntablespaces in contrib also?\n\n>\n> 1760 : errhint(\"Target must be \\\"archiver\\\", \\\"io\\\", \\\"bgwriter\\\", \\\"recovery_prefetch\\\", or \\\"wal\\\".\")));\n> => Do you want to put these in order?\n\nThanks. Fixed.\n\n> pgstat_get_io_op_name() isn't currently being hit by tests; actually,\n> it's completely unused.\n\nDeleted it.\n\n> FlushRelationBuffers() isn't being hit for local buffers.\n\nI added a test.\n\n> > + <entry><structname>pg_stat_io</structname><indexterm><primary>pg_stat_io</primary></indexterm></entry>\n> > + <entry>\n> > + One row per backend type, context, target object combination showing\n> > + cluster-wide I/O statistics.\n>\n> I suggest: \"One row for each combination of of ..\"\n\nI have made this change.\n\n> > + The <structname>pg_stat_io</structname> and\n> > + <structname>pg_statio_</structname> set of views are especially useful for\n> > + determining the effectiveness of the buffer cache. 
When the number of actual\n> > + disk reads is much smaller than the number of buffer hits, then the cache is\n> > + satisfying most read requests without invoking a kernel call.\n>\n> I would change this say \"Postgres' own buffer cache is satisfying ...\"\n\nSo, this is existing copy to which I added the pg_stat_io view name and\nre-flowed the indentation.\nHowever, I think your suggestions are a good idea, so I've taken them\nand just rewritten this paragraph altogether.\n\n>\n> > However, these\n> > + statistics do not give the entire story: due to the way in which\n> > + <productname>PostgreSQL</productname> handles disk I/O, data that is not in\n> > + the <productname>PostgreSQL</productname> buffer cache might still reside in\n> > + the kernel's I/O cache, and might therefore still be fetched without\n>\n> I suggest to refer to \"the kernel's page cache\"\n\nsame applies here.\n\n>\n> > + The <structname>pg_stat_io</structname> view will contain one row for each\n> > + backend type, I/O context, and target I/O object combination showing\n> > + cluster-wide I/O statistics. Combinations which do not make sense are\n> > + omitted.\n>\n> \"..for each combination of ..\"\n\nI have changed this.\n\n>\n> > + <varname>io_context</varname> for a type of I/O operation. For\n>\n> \"for I/O operations\"\n\nSo I actually mean for a type of I/O operation -- that is, relation data\nis normally written to a shared buffer but sometimes we bypass shared\nbuffers and just call write and sometimes we use a buffer access\nstrategy and write it to a special ring buffer (made up of buffers\nstolen from shared buffers, but still). 
So I don't want to say \"for I/O\noperations\" because I think that would imply that writes of relation\ndata will always be in the same IO Context.\n\n>\n> > + <literal>vacuum</literal>: I/O operations done outside of shared\n> > + buffers incurred while vacuuming and analyzing permanent relations.\n>\n> s/incurred/performed/\n\nI changed this.\n\n>\n> > + <literal>bulkread</literal>: Qualifying large read I/O operations\n> > + done outside of shared buffers, for example, a sequential scan of a\n> > + large table.\n>\n> I don't think it's correct to say that it's \"outside of\" shared-buffers.\n\nI suppose \"outside of\" gives the wrong idea. But I need to make clear\nthat this I/O is to and from buffers which are not a part of shared\nbuffers right now -- they may still be accessible from the same data\nstructures which access shared buffers but they are currently being used\nin a different way.\n\n> s/Qualifying/Certain/\n\nI feel like qualifying is more specific than certain, but I would be open\nto changing it if there was a specific reason you don't like it.\n\n>\n> > + <literal>bulkwrite</literal>: Qualifying large write I/O operations\n> > + done outside of shared buffers, such as <command>COPY</command>.\n>\n> Same\n>\n> > + Target object of an I/O operation. 
Possible values are:\n> > + <itemizedlist>\n> > + <listitem>\n> > + <para>\n> > + <literal>relation</literal>: This includes permanent relations.\n>\n> It says \"includes permanent\" but what seems to mean is that it\n> \"exclusive of temporary relations\".\n\nI've changed this.\n\n>\n> > + <row>\n> > + <entry role=\"catalog_table_entry\">\n> > + <para role=\"column_definition\">\n> > + <structfield>read</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of read operations in units of <varname>op_bytes</varname>.\n>\n> This looks too much like it means \"bytes\".\n> Should say: \"in number of blocks of size >op_bytes<\"\n>\n> But wait - is it the number of read operations \"in units of op_bytes\"\n> (which would means this already multiplied by op_bytes, and is in units\n> of bytes).\n>\n> Or the \"number of read operations\" *of* op_bytes chunks ? Which would\n> mean this is a \"pure\" number, and could be multipled by op_bytes to\n> obtain a size in bytes.\n\nIt is the number of read operations of op_bytes size -- thanks so much\nfor pointing this out. The wording was really unclear.\nThe idea is that you can do something like:\nSELECT pg_size_pretty(reads * op_bytes) FROM pg_stat_io;\nand get it in bytes.\n\nThe view will contain other types of IO that are not in BLCKSZ chunks,\nwhich is where this column will be handy.\n\n>\n> > + Number of write operations in units of <varname>op_bytes</varname>.\n>\n> > + Number of relation extend operations in units of\n> > + <varname>op_bytes</varname>.\n>\n> same\n>\n> > + In <varname>io_context</varname> <literal>normal</literal>, this counts\n> > + the number of times a block was evicted from a buffer and replaced with\n> > + another block. 
In <varname>io_context</varname>s\n> > + <literal>bulkwrite</literal>, <literal>bulkread</literal>, and\n> > + <literal>vacuum</literal>, this counts the number of times a block was\n> > + evicted from shared buffers in order to add the shared buffer to a\n> > + separate size-limited ring buffer.\n>\n> This never defines what \"evicted\" means. Does it mea that a dirty\n> buffer was written out ?\n\nThanks. I've updated this.\n\n>\n> > + The number of times an existing buffer in a size-limited ring buffer\n> > + outside of shared buffers was reused as part of an I/O operation in the\n> > + <literal>bulkread</literal>, <literal>bulkwrite</literal>, or\n> > + <literal>vacuum</literal> <varname>io_context</varname>s.\n>\n> Maybe say \"as part of a bulk I/O operation (bulkread, bulkwrite, or\n> vacuum).\"\n\nI've changed this.\n\n>\n> > + <para>\n> > + <structname>pg_stat_io</structname> can be used to inform database tuning.\n>\n> > + For example:\n> > + <itemizedlist>\n> > + <listitem>\n> > + <para>\n> > + A high <varname>evicted</varname> count can indicate that shared buffers\n> > + should be increased.\n> > + </para>\n> > + </listitem>\n> > + <listitem>\n> > + <para>\n> > + Client backends rely on the checkpointer to ensure data is persisted to\n> > + permanent storage. Large numbers of <varname>files_synced</varname> by\n> > + <literal>client backend</literal>s could indicate a misconfiguration of\n> > + shared buffers or of checkpointer. More information on checkpointer\n>\n> of *the* checkpointer\n>\n> > + Normally, client backends should be able to rely on auxiliary processes\n> > + like the checkpointer and background writer to write out dirty data as\n>\n> *the* bg writer\n>\n> > + much as possible. Large numbers of writes by client backends could\n> > + indicate a misconfiguration of shared buffers or of checkpointer. 
More\n>\n> *the* ckpointer\n\nI've made most of these changes.\n\n> Should this link to various docs for checkpointer/bgwriter ?\n\nI couldn't find docs related to tuning checkpointer outside of the WAL\nconfiguration docs. There is the docs page for the CHECKPOINT command --\nbut I don't think that is very relevant here.\n\n> Maybe the docs for ALTER/COPY/VACUUM/CREATE/etc should be updated to\n> refer to some central description of ring buffers. Maybe something\n> should be included to the appendix.\n\nI agree it would be nice to explain Buffer Access Strategies in the docs.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGLiY1e%2B1%3DpB7hXJOyGj1dJOfgde%2BHmiSnv3gDKayUFJMA%40mail.gmail.com",
"msg_date": "Thu, 12 Jan 2023 21:19:36 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 09:19:36PM -0500, Melanie Plageman wrote:\n> On Wed, Jan 11, 2023 at 4:58 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > > Subject: [PATCH v45 4/5] Add system view tracking IO ops per backend type\n> >\n> > The patch can/will fail with:\n> >\n> > CREATE TABLESPACE test_io_shared_stats_tblspc LOCATION '';\n> > +WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n> >\n> > CREATE TABLESPACE test_stats LOCATION '';\n> > +WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n> >\n> > (I already sent patches to address the omission in cirrus.yml)\n> \n> Thanks. I've fixed this\n> I make a tablespace in amcheck -- are there recommendations for naming\n> tablespaces in contrib also?\n\nThat's the test_stats one I mentioned.\n\nCheck with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\n> > > + <literal>bulkread</literal>: Qualifying large read I/O operations\n> > > + done outside of shared buffers, for example, a sequential scan of a\n> > > + large table.\n> >\n> > I don't think it's correct to say that it's \"outside of\" shared-buffers.\n> \n> I suppose \"outside of\" gives the wrong idea. 
But I need to make clear\n> that this I/O is to and from buffers which are not a part of shared\n> buffers right now -- they may still be accessible from the same data\n> structures which access shared buffers but they are currently being used\n> in a different way.\n\nThis would be a good place to link to a description of the ringbuffer,\nif we had one.\n\n> > s/Qualifying/Certain/\n> \n> I feel like qualifying is more specific than certain, but I would be open\n> to changing it if there was a specific reason you don't like it.\n\nI suggested to change it because at first I started to interpret it as\n\"The act of qualifying large I/O ops ..\" rather than \"Large I/O ops that\nqualify..\".\n\n+ Number of read operations of <varname>op_bytes</varname> size. \n\nThis is still a bit too easy to misinterpret as being in units of bytes.\nI suggest: Number of read operations (which are each of the size\nspecified in >op_bytes<).\n\n+ in order to add the shared buffer to a separate size-limited ring buffer\n\nseparate comma\n\n+ More information on configuring checkpointer can be found in Section 30.5. \n\n*the* checkpointer (as in the following paragraph)\n\n+ <varname>backend_type</varname> <literal>checkpointer</literal> and \n+ <varname>io_object</varname> <literal>temp relation</literal>. 
\n+ </para> \n\nI still think it's a bit hard to understand the <varname>s adjacent to\n<literal>s.\n\n+ Some backend_types\n+ in some io_contexts\n+ on some io_objects\n+ in certain io_contexts\n+ on certain io_objects\n\nMaybe these should not use underscores: Some backend types never\nperform I/O operations in some I/O contexts and/or on some i/o objects.\n\n+ for (BackendType bktype = B_INVALID; bktype < BACKEND_NUM_TYPES; bktype++)\n+ for (IOContext io_context = IOCONTEXT_BULKREAD; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n+ for (IOObject io_obj = IOOBJECT_RELATION; io_obj < IOOBJECT_NUM_TYPES; io_obj++)\n+ for (IOOp io_op = IOOP_EVICT; io_op < IOOP_NUM_TYPES; io_op++)\n\nThese look a bit fragile due to starting at some hardcoded \"first\"\nvalue. In other places you use symbols \"FIRST\" symbols:\n\n+ for (IOContext io_context = IOCONTEXT_FIRST; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n+ for (IOObject io_object = IOOBJECT_FIRST; io_object < IOOBJECT_NUM_TYPES; io_object++)\n+ for (IOOp io_op = IOOP_FIRST; io_op < IOOP_NUM_TYPES; io_op++)\n\nI think that's marginally better, but I think having to define both\nFIRST and NUM is excessive and doesn't make it less fragile. Not sure\nwhat anyone else will say, but I'd prefer if it started at \"0\".\n\nThanks for working on this - I'm looking forward to updating my rrdtool\nscript for this soon. It'll be nice to finally distinguish huge number\nof \"backend ringbuffer writes during ALTER\" from other backend writes.\nCurrently, that makes it look like something is terribly wrong.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 12 Jan 2023 23:23:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Attached is v47.\n\nOn Fri, Jan 13, 2023 at 12:23 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Jan 12, 2023 at 09:19:36PM -0500, Melanie Plageman wrote:\n> > On Wed, Jan 11, 2023 at 4:58 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > > Subject: [PATCH v45 4/5] Add system view tracking IO ops per backend type\n> > >\n> > > The patch can/will fail with:\n> > >\n> > > CREATE TABLESPACE test_io_shared_stats_tblspc LOCATION '';\n> > > +WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n> > >\n> > > CREATE TABLESPACE test_stats LOCATION '';\n> > > +WARNING: tablespaces created by regression test cases should have names starting with \"regress_\"\n> > >\n> > > (I already sent patches to address the omission in cirrus.yml)\n> >\n> > Thanks. I've fixed this\n> > I make a tablespace in amcheck -- are there recommendations for naming\n> > tablespaces in contrib also?\n>\n> That's the test_stats one I mentioned.\n>\n> Check with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\nThanks. I have now changed both tablespace names and checked using that\nmacro.\n\n> > > > + <literal>bulkread</literal>: Qualifying large read I/O operations\n> > > > + done outside of shared buffers, for example, a sequential scan of a\n> > > > + large table.\n> > >\n> > > I don't think it's correct to say that it's \"outside of\" shared-buffers.\n> >\n> > I suppose \"outside of\" gives the wrong idea. 
But I need to make clear\n> > that this I/O is to and from buffers which are not a part of shared\n> > buffers right now -- they may still be accessible from the same data\n> > structures which access shared buffers but they are currently being used\n> > in a different way.\n>\n> This would be a good place to link to a description of the ringbuffer,\n> if we had one.\n\nIndeed.\n\n> > > s/Qualifying/Certain/\n> >\n> > I feel like qualifying is more specific than certain, but I would be open\n> > to changing it if there was a specific reason you don't like it.\n>\n> I suggested to change it because at first I started to interpret it as\n> \"The act of qualifying large I/O ops ..\" rather than \"Large I/O ops that\n> qualify..\".\n\nI have changed it to \"certain\".\n\n> + Number of read operations of <varname>op_bytes</varname> size.\n>\n> This is still a bit too easy to misinterpret as being in units of bytes.\n> I suggest: Number of read operations (which are each of the size\n> specified in >op_bytes<).\n\nI have changed this.\n\n> + in order to add the shared buffer to a separate size-limited ring buffer\n>\n> separate comma\n>\n> + More information on configuring checkpointer can be found in Section 30.5.\n>\n> *the* checkpointer (as in the following paragraph)\n\nabove items changed.\n\n> + <varname>backend_type</varname> <literal>checkpointer</literal> and\n> + <varname>io_object</varname> <literal>temp relation</literal>.\n> + </para>\n>\n> I still think it's a bit hard to understand the <varname>s adjacent to\n> <literal>s.\n\nI agree it isn't great -- is there a different XML tag you suggest\ninstead of literal?\n\n> + Some backend_types\n> + in some io_contexts\n> + on some io_objects\n> + in certain io_contexts\n> + on certain io_objects\n>\n> Maybe these should not use underscores: Some backend types never\n> perform I/O operations in some I/O contexts and/or on some i/o objects.\n\nI've changed this.\n\nAlso, taking another look, I forgot to update the 
docs' column name\ntenses in the last version. That is now done.\n\n> + for (BackendType bktype = B_INVALID; bktype < BACKEND_NUM_TYPES; bktype++)\n> + for (IOContext io_context = IOCONTEXT_BULKREAD; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> + for (IOObject io_obj = IOOBJECT_RELATION; io_obj < IOOBJECT_NUM_TYPES; io_obj++)\n> + for (IOOp io_op = IOOP_EVICT; io_op < IOOP_NUM_TYPES; io_op++)\n>\n> These look a bit fragile due to starting at some hardcoded \"first\"\n> value. In other places you use symbols \"FIRST\" symbols:\n>\n> + for (IOContext io_context = IOCONTEXT_FIRST; io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> + for (IOObject io_object = IOOBJECT_FIRST; io_object < IOOBJECT_NUM_TYPES; io_object++)\n> + for (IOOp io_op = IOOP_FIRST; io_op < IOOP_NUM_TYPES; io_op++)\n>\n> I think that's marginally better, but I think having to define both\n> FIRST and NUM is excessive and doesn't make it less fragile. Not sure\n> what anyone else will say, but I'd prefer if it started at \"0\".\n\nThanks for catching the discrepancy in pg_stat_get_io(). I have changed\nthose instances to use _FIRST.\n\nI think that having the loop start from the first enum value (except\nwhen that value is something special like _INVALID like with\nBackendType) is confusing. I agree that having multiple macros to allow\niteration through all enum values introduces some fragility. I'm not\nsure about using the number 0 with the enum as the loop variable\ndata type. Is that a common pattern?\n\nIn this version, I have updated the loops in pg_stat_get_io() to use\n_FIRST.\n\n> Thanks for working on this - I'm looking forward to updating my rrdtool\n> script for this soon. It'll be nice to finally distinguish huge number\n> of \"backend ringbuffer writes during ALTER\" from other backend writes.\n> Currently, that makes it look like something is terribly wrong.\n\nCool! I'm glad to know you will use it.\n\n- Melanie",
"msg_date": "Fri, 13 Jan 2023 13:38:15 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-13 13:38:15 -0500, Melanie Plageman wrote:\n> > I think that's marginally better, but I think having to define both\n> > FIRST and NUM is excessive and doesn't make it less fragile. Not sure\n> > what anyone else will say, but I'd prefer if it started at \"0\".\n\nThe reason for using FIRST is to be able to define the loop variable as the\nenum type, without assigning numeric values to an enum var. I prefer it\nslightly.\n\n\n> From f8c9077631169a778c893fd16b7a973ad5725f2a Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Fri, 9 Dec 2022 18:23:19 -0800\n> Subject: [PATCH v47 1/5] pgindent and some manual cleanup in pgstat related\n\nApplied.\n\n\n> Subject: [PATCH v47 2/5] pgstat: Infrastructure to track IO operations\n\n\n> diff --git a/src/backend/utils/activity/pgstat.c b/src/backend/utils/activity/pgstat.c\n> index 0fa5370bcd..608c3b59da 100644\n> --- a/src/backend/utils/activity/pgstat.c\n> +++ b/src/backend/utils/activity/pgstat.c\n\nReminder to self: Need to bump PGSTAT_FILE_FORMAT_ID before commit.\n\nPerhaps you could add a note about that to the commit message?\n\n\n\n> @@ -359,6 +360,15 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> \t\t.snapshot_cb = pgstat_checkpointer_snapshot_cb,\n> \t},\n>\n> +\t[PGSTAT_KIND_IO] = {\n> +\t\t.name = \"io_ops\",\n\nThat should be \"io\" now I think?\n\n\n\n> +/*\n> + * Check that stats have not been counted for any combination of IOContext,\n> + * IOObject, and IOOp which are not tracked for the passed-in BackendType. The\n> + * passed-in PgStat_BackendIO must contain stats from the BackendType specified\n> + * by the second parameter. Caller is responsible for locking the passed-in\n> + * PgStat_BackendIO, if needed.\n> + */\n\nOther PgStat_Backend* structs are just for pending data. Perhaps we could\nrename it slightly to make that clearer? PgStat_BktypeIO?\nPgStat_IOForBackendType? 
or a similar variation?\n\n\n> +bool\n> +pgstat_bktype_io_stats_valid(PgStat_BackendIO *backend_io,\n> +\t\t\t\t\t\t\t BackendType bktype)\n> +{\n> +\tbool\t\tbktype_tracked = pgstat_tracks_io_bktype(bktype);\n> +\n> +\tfor (IOContext io_context = IOCONTEXT_FIRST;\n> +\t\t io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> +\t{\n> +\t\tfor (IOObject io_object = IOOBJECT_FIRST;\n> +\t\t\t io_object < IOOBJECT_NUM_TYPES; io_object++)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * Don't bother trying to skip to the next loop iteration if\n> +\t\t\t * pgstat_tracks_io_object() would return false here. We still\n> +\t\t\t * need to validate that each counter is zero anyway.\n> +\t\t\t */\n> +\t\t\tfor (IOOp io_op = IOOP_FIRST; io_op < IOOP_NUM_TYPES; io_op++)\n> +\t\t\t{\n> +\t\t\t\tif ((!bktype_tracked || !pgstat_tracks_io_op(bktype, io_context, io_object, io_op)) &&\n> +\t\t\t\t\tbackend_io->data[io_context][io_object][io_op] != 0)\n> +\t\t\t\t\treturn false;\n\nHm, perhaps this could be broken up into multiple lines? Something like\n\n /* no stats, so nothing to validate */\n if (backend_io->data[io_context][io_object][io_op] == 0)\n continue;\n\n /* something went wrong if have stats for something not tracked */\n if (!bktype_tracked ||\n !pgstat_tracks_io_op(bktype, io_context, io_object, io_op))\n return false;\n\n\n> +typedef struct PgStat_BackendIO\n> +{\n> +\tPgStat_Counter data[IOCONTEXT_NUM_TYPES][IOOBJECT_NUM_TYPES][IOOP_NUM_TYPES];\n> +} PgStat_BackendIO;\n\nWould it bother you if we swapped the order of iocontext and iobject here and\nrelated places? It makes more sense to me semantically, and should now be\npretty easy, code wise.\n\n\n> +/* shared version of PgStat_IO */\n> +typedef struct PgStatShared_IO\n> +{\n\nMaybe /* PgStat_IO in shared memory */?\n\n\n\n> Subject: [PATCH v47 3/5] pgstat: Count IO for relations\n\nNearly happy with this now. 
See one minor nit below.\n\nI don't love the counting in register_dirty_segment() and mdsyncfiletag(), but\nI don't have a better idea, and it doesn't seem too horrible.\n\n\n> @@ -1441,6 +1474,28 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n>\n> \tUnlockBufHdr(buf, buf_state);\n>\n> +\tif (oldFlags & BM_VALID)\n> +\t{\n> +\t\t/*\n> +\t\t * When a BufferAccessStrategy is in use, blocks evicted from shared\n> +\t\t * buffers are counted as IOOP_EVICT in the corresponding context\n> +\t\t * (e.g. IOCONTEXT_BULKWRITE). Shared buffers are evicted by a\n> +\t\t * strategy in two cases: 1) while initially claiming buffers for the\n> +\t\t * strategy ring 2) to replace an existing strategy ring buffer\n> +\t\t * because it is pinned or in use and cannot be reused.\n> +\t\t *\n> +\t\t * Blocks evicted from buffers already in the strategy ring are\n> +\t\t * counted as IOOP_REUSE in the corresponding strategy context.\n> +\t\t *\n> +\t\t * At this point, we can accurately count evictions and reuses,\n> +\t\t * because we have successfully claimed the valid buffer. Previously,\n> +\t\t * we may have been forced to release the buffer due to concurrent\n> +\t\t * pinners or erroring out.\n> +\t\t */\n> +\t\tpgstat_count_io_op(from_ring ? IOOP_REUSE : IOOP_EVICT,\n> +\t\t\t\t\t\t IOOBJECT_RELATION, *io_context);\n> +\t}\n> +\n> \tif (oldPartitionLock != NULL)\n> \t{\n> \t\tBufTableDelete(&oldTag, oldHash);\n\nThere's no reason to do this while we still hold the buffer partition lock,\nright? That's a highly contended lock, and we can just move the counting a few\nlines down.\n\n\n> @@ -1410,6 +1432,9 @@ mdsyncfiletag(const FileTag *ftag, char *path)\n> \tif (need_to_close)\n> \t\tFileClose(file);\n>\n> +\tif (result >= 0)\n> +\t\tpgstat_count_io_op(IOOP_FSYNC, IOOBJECT_RELATION, IOCONTEXT_NORMAL);\n> +\n\nI'd lean towards doing this unconditionally, it's still an fsync if it\nfailed... 
Not that it matters.\n\n\n\n> Subject: [PATCH v47 4/5] Add system view tracking IO ops per backend type\n\nNote to self + commit message: Remember the need to do a catversion bump.\n\n\n> +-- pg_stat_io test:\n> +-- verify_heapam always uses a BAS_BULKREAD BufferAccessStrategy.\n\nMaybe add that \"whereas a sequential scan does not, see ...\"?\n\n\n> This allows\n> +-- us to reliably test that pg_stat_io BULKREAD reads are being captured\n> +-- without relying on the size of shared buffers or on an expensive operation\n> +-- like CREATE DATABASE.\n\nCREATE / DROP TABLESPACE is also pretty expensive, but I don't have a better\nidea.\n\n\n> +-- Create an alternative tablespace and move the heaptest table to it, causing\n> +-- it to be rewritten.\n\nIIRC the point of that is that it reliably evicts all the buffers from s_b,\ncorrect? If so, mention that?\n\n\n\n\n> +Datum\n> +pg_stat_get_io(PG_FUNCTION_ARGS)\n> +{\n> +\tReturnSetInfo *rsinfo;\n> +\tPgStat_IO *backends_io_stats;\n> +\tDatum\t\treset_time;\n> +\n> +\tInitMaterializedSRF(fcinfo, 0);\n> +\trsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> +\n> +\tbackends_io_stats = pgstat_fetch_stat_io();\n> +\n> +\treset_time = TimestampTzGetDatum(backends_io_stats->stat_reset_timestamp);\n> +\n> +\tfor (BackendType bktype = B_INVALID; bktype < BACKEND_NUM_TYPES; bktype++)\n> +\t{\n> +\t\tbool\t\tbktype_tracked;\n> +\t\tDatum\t\tbktype_desc = CStringGetTextDatum(GetBackendTypeDesc(bktype));\n> +\t\tPgStat_BackendIO *bktype_stats = &backends_io_stats->stats[bktype];\n> +\n> +\t\t/*\n> +\t\t * For those BackendTypes without IO Operation stats, skip\n> +\t\t * representing them in the view altogether. We still loop through\n> +\t\t * their counters so that we can assert that all values are zero.\n> +\t\t */\n> +\t\tbktype_tracked = pgstat_tracks_io_bktype(bktype);\n\nHow about instead just doing Assert(pgstat_bktype_io_stats_valid(...))? 
That\ndeduplicates the logic for the asserts, and avoids doing the full loop when\nassertions aren't enabled anyway?\n\nOtherwise, see also the suggestion about formatting the assertions as I\nsuggested for 0002.\n\n\n> +-- After a checkpoint, there should be some additional IOCONTEXT_NORMAL writes\n> +-- and fsyncs.\n> +-- The second checkpoint ensures that stats from the first checkpoint have been\n> +-- reported and protects against any potential races amongst the table\n> +-- creation, a possible timing-triggered checkpoint, and the explicit\n> +-- checkpoint in the test.\n\nThere's a comment about the subsequent checkpoints earlier in the file, and I\nthink the comment is slightly more precise. Maybe just reference the earlier comment?\n\n\n> +-- Change the tablespace so that the table is rewritten directly, then SELECT\n> +-- from it to cause it to be read back into shared buffers.\n> +SET allow_in_place_tablespaces = true;\n> +CREATE TABLESPACE regress_io_stats_tblspc LOCATION '';\n\nPerhaps worth doing this in tablespace.sql, to avoid the additional\ncheckpoints done as part of CREATE/DROP TABLESPACE?\n\nOr, at least combine this with the CHECKPOINTs above?\n\n> +-- Drop the table so we can drop the tablespace later.\n> +DROP TABLE test_io_shared;\n> +-- Test that the follow IOCONTEXT_LOCAL IOOps are tracked in pg_stat_io:\n> +-- - eviction of local buffers in order to reuse them\n> +-- - reads of temporary table blocks into local buffers\n> +-- - writes of local buffers to permanent storage\n> +-- - extends of temporary tables\n> +-- Set temp_buffers to a low value so that we can trigger writes with fewer\n> +-- inserted tuples. Do so in a new session in case temporary tables have been\n> +-- accessed by previous tests in this session.\n> +\\c\n> +SET temp_buffers TO '1MB';\n\nI'd set it to the actual minimum '100' (in pages). 
Perhaps that'd allow to\nmake test_io_local a bit smaller?\n\n\n> +CREATE TEMPORARY TABLE test_io_local(a int, b TEXT);\n> +SELECT sum(extends) AS io_sum_local_extends_before\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> +SELECT sum(evictions) AS io_sum_local_evictions_before\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> +SELECT sum(writes) AS io_sum_local_writes_before\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> +-- Insert tuples into the temporary table, generating extends in the stats.\n> +-- Insert enough values that we need to reuse and write out dirty local\n> +-- buffers, generating evictions and writes.\n> +INSERT INTO test_io_local SELECT generate_series(1, 8000) as id, repeat('a', 100);\n> +SELECT sum(reads) AS io_sum_local_reads_before\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n\nMaybe add something like\n\nSELECT pg_relation_size('test_io_local') / current_setting('block_size')::int8 > 100;\n\nBetter toast compression or such could easily make test_io_local smaller than\nit is today. 
Seeing that it's too small would make it easier to understand the\nfailure.\n\n\n> +SELECT sum(evictions) AS io_sum_local_evictions_after\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> +SELECT sum(reads) AS io_sum_local_reads_after\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> +SELECT sum(writes) AS io_sum_local_writes_after\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> +SELECT sum(extends) AS io_sum_local_extends_after\n> + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n\nThis could just be one select with multiple columns?\n\nI think if you use something like \\gset io_sum_local_after_ you can also avoid\nthe need to repeat \"io_sum_local_\" so many times.\n\n\n> +SELECT :io_sum_local_evictions_after > :io_sum_local_evictions_before;\n> + ?column?\n> +----------\n> + t\n> +(1 row)\n> +\n> +SELECT :io_sum_local_reads_after > :io_sum_local_reads_before;\n> + ?column?\n> +----------\n> + t\n> +(1 row)\n> +\n> +SELECT :io_sum_local_writes_after > :io_sum_local_writes_before;\n> + ?column?\n> +----------\n> + t\n> +(1 row)\n> +\n> +SELECT :io_sum_local_extends_after > :io_sum_local_extends_before;\n> + ?column?\n> +----------\n> + t\n> +(1 row)\n\nSimilar.\n\n\n> +SELECT sum(reuses) AS io_sum_vac_strategy_reuses_before FROM pg_stat_io WHERE io_context = 'vacuum' \\gset\n> +SELECT sum(reads) AS io_sum_vac_strategy_reads_before FROM pg_stat_io WHERE io_context = 'vacuum' \\gset\n\nThere's quite a few more instances of this, so I'll now omit further mentions.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Jan 2023 15:36:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 10:38 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Attached is v47.\n\nI missed a couple of versions, but I think the docs are clearer now.\nI'm torn on losing some of the detail, but overall I do think it's a\ngood trade-off. Moving some details out to after the table does keep\nthe bulk of the view documentation more readable, and the \"inform\ndatabase tuning\" part is great. I really like the idea of a separate\nInterpreting Statistics section, but for now this works.\n\n>+ <literal>vacuum</literal>: I/O operations performed outside of shared\n>+ buffers while vacuuming and analyzing permanent relations.\n\nWhy only permanent relations? Are temporary relations treated\ndifferently? I imagine if someone has a temp-table-heavy workload that\nrequires regularly vacuuming and analyzing those relations, this point\nmay be confusing without some additional explanation.\n\nOther than that, this looks great.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Mon, 16 Jan 2023 13:41:49 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v48 attached.\n\nOn Fri, Jan 13, 2023 at 6:36 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-13 13:38:15 -0500, Melanie Plageman wrote:\n> > From f8c9077631169a778c893fd16b7a973ad5725f2a Mon Sep 17 00:00:00 2001\n> > From: Andres Freund <andres@anarazel.de>\n> > Date: Fri, 9 Dec 2022 18:23:19 -0800\n> > Subject: [PATCH v47 2/5] pgstat: Infrastructure to track IO operations\n> > diff --git a/src/backend/utils/activity/pgstat.c b/src/backend/utils/activity/pgstat.c\n> > index 0fa5370bcd..608c3b59da 100644\n> > --- a/src/backend/utils/activity/pgstat.c\n> > +++ b/src/backend/utils/activity/pgstat.c\n>\n> Reminder to self: Need to bump PGSTAT_FILE_FORMAT_ID before commit.\n>\n> Perhaps you could add a note about that to the commit message?\n>\n\ndone\n\n>\n>\n> > @@ -359,6 +360,15 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> > .snapshot_cb = pgstat_checkpointer_snapshot_cb,\n> > },\n> >\n> > + [PGSTAT_KIND_IO] = {\n> > + .name = \"io_ops\",\n>\n> That should be \"io\" now I think?\n>\n\nOh no! I didn't notice this was broken. I've added pg_stat_have_stats()\nto the IO stats tests now.\n\nIt would be nice if pgstat_get_kind_from_str() could be used in\npg_stat_reset_shared() to avoid having to remember to change both. 
It\ndoesn't really work because we want to be able to throw the error\nmessage in pg_stat_reset_shared() when the user input is wrong -- not\nthe one in pgstat_get_kind_from_str().\nAlso:\n- Since recovery_prefetch doesn't have a statistic kind, it doesn't fit\n well into this paradigm\n- Only a subset of the statistics kinds are reset through this function\n- bgwriter and checkpointer share a reset target\nI added a comment -- perhaps that's all I can do?\n\nOn a separate note, should we be setting have_[io/slru/etc]stats to\nfalse in the reset all functions?\n\n>\n> > +/*\n> > + * Check that stats have not been counted for any combination of IOContext,\n> > + * IOObject, and IOOp which are not tracked for the passed-in BackendType. The\n> > + * passed-in PgStat_BackendIO must contain stats from the BackendType specified\n> > + * by the second parameter. Caller is responsible for locking the passed-in\n> > + * PgStat_BackendIO, if needed.\n> > + */\n>\n> Other PgStat_Backend* structs are just for pending data. Perhaps we could\n> rename it slightly to make that clearer? PgStat_BktypeIO?\n> PgStat_IOForBackendType? or a similar variation?\n\nI've done this.\n\n>\n> > +bool\n> > +pgstat_bktype_io_stats_valid(PgStat_BackendIO *backend_io,\n> > + BackendType bktype)\n> > +{\n> > + bool bktype_tracked = pgstat_tracks_io_bktype(bktype);\n> > +\n> > + for (IOContext io_context = IOCONTEXT_FIRST;\n> > + io_context < IOCONTEXT_NUM_TYPES; io_context++)\n> > + {\n> > + for (IOObject io_object = IOOBJECT_FIRST;\n> > + io_object < IOOBJECT_NUM_TYPES; io_object++)\n> > + {\n> > + /*\n> > + * Don't bother trying to skip to the next loop iteration if\n> > + * pgstat_tracks_io_object() would return false here. 
We still\n> > + * need to validate that each counter is zero anyway.\n> > + */\n> > + for (IOOp io_op = IOOP_FIRST; io_op < IOOP_NUM_TYPES; io_op++)\n> > + {\n> > + if ((!bktype_tracked || !pgstat_tracks_io_op(bktype, io_context, io_object, io_op)) &&\n> > + backend_io->data[io_context][io_object][io_op] != 0)\n> > + return false;\n>\n> Hm, perhaps this could be broken up into multiple lines? Something like\n>\n> /* no stats, so nothing to validate */\n> if (backend_io->data[io_context][io_object][io_op] == 0)\n> continue;\n>\n> /* something went wrong if have stats for something not tracked */\n> if (!bktype_tracked ||\n> !pgstat_tracks_io_op(bktype, io_context, io_object, io_op))\n> return false;\n\nI've done this.\n\n> > +typedef struct PgStat_BackendIO\n> > +{\n> > + PgStat_Counter data[IOCONTEXT_NUM_TYPES][IOOBJECT_NUM_TYPES][IOOP_NUM_TYPES];\n> > +} PgStat_BackendIO;\n>\n> Would it bother you if we swapped the order of iocontext and iobject here and\n> related places? It makes more sense to me semantically, and should now be\n> pretty easy, code wise.\n\nSo, thinking about this I started noticing inconsistencies in other\nareas around this order:\nFor example: ordering of objects mentioned in commit messages and comments,\nordering of parameters (like in pgstat_count_io_op() [currently in\nreverse order]).\n\nI think we should make a final decision about this ordering and then\nmake everywhere consistent (including ordering in the view).\n\nCurrently the order is:\nBackendType\n IOContext\n IOObject\n IOOp\n\nYou are suggesting this order:\nBackendType\n IOObject\n IOContext\n IOOp\n\nCould you explain what you find more natural about this ordering (as I\nfind the other more natural)?\n\nThis is one possible natural sentence with these objects:\n\nDuring COPY, a client backend may read in data from a permanent\nrelation.\nThis order is:\nIOContext\n BackendType\n IOOp\n IOObject\n\nI think English sentences are often structured subject, verb, object --\nbut 
in our case, we have an extra thing that doesn't fit neatly\n(IOContext). Also, IOOp in a sentence would be in the middle (as the\nverb). I made it last because a) it feels like the smallest unit b) it\nwould make the code a lot more annoying if it wasn't last.\n\nWRT IOObject and IOContext, is there a future case for which having\nIOObject first will be better or lead to fewer mistakes?\n\nI actually see loads of places where this needs to be made consistent.\n\n>\n> > +/* shared version of PgStat_IO */\n> > +typedef struct PgStatShared_IO\n> > +{\n>\n> Maybe /* PgStat_IO in shared memory */?\n>\n\nupdated.\n\n>\n> > Subject: [PATCH v47 3/5] pgstat: Count IO for relations\n>\n> Nearly happy with this now. See one minor nit below.\n>\n> I don't love the counting in register_dirty_segment() and mdsyncfiletag(), but\n> I don't have a better idea, and it doesn't seem too horrible.\n\nYou don't like it because such things shouldn't be in md.c -- since we\nwent to the trouble of having function pointers and making it general?\n\n>\n> > @@ -1441,6 +1474,28 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> >\n> > UnlockBufHdr(buf, buf_state);\n> >\n> > + if (oldFlags & BM_VALID)\n> > + {\n> > + /*\n> > + * When a BufferAccessStrategy is in use, blocks evicted from shared\n> > + * buffers are counted as IOOP_EVICT in the corresponding context\n> > + * (e.g. IOCONTEXT_BULKWRITE). Shared buffers are evicted by a\n> > + * strategy in two cases: 1) while initially claiming buffers for the\n> > + * strategy ring 2) to replace an existing strategy ring buffer\n> > + * because it is pinned or in use and cannot be reused.\n> > + *\n> > + * Blocks evicted from buffers already in the strategy ring are\n> > + * counted as IOOP_REUSE in the corresponding strategy context.\n> > + *\n> > + * At this point, we can accurately count evictions and reuses,\n> > + * because we have successfully claimed the valid buffer. 
Previously,\n> > + * we may have been forced to release the buffer due to concurrent\n> > + * pinners or erroring out.\n> > + */\n> > + pgstat_count_io_op(from_ring ? IOOP_REUSE : IOOP_EVICT,\n> > + IOOBJECT_RELATION, *io_context);\n> > + }\n> > +\n> > if (oldPartitionLock != NULL)\n> > {\n> > BufTableDelete(&oldTag, oldHash);\n>\n> There's no reason to do this while we still hold the buffer partition lock,\n> right? That's a highly contended lock, and we can just move the counting a few\n> lines down.\n\nThanks, I've done this.\n\n>\n> > @@ -1410,6 +1432,9 @@ mdsyncfiletag(const FileTag *ftag, char *path)\n> > if (need_to_close)\n> > FileClose(file);\n> >\n> > + if (result >= 0)\n> > + pgstat_count_io_op(IOOP_FSYNC, IOOBJECT_RELATION, IOCONTEXT_NORMAL);\n> > +\n>\n> I'd lean towards doing this unconditionally, it's still an fsync if it\n> failed... Not that it matters.\n\nGood point. We still incurred the costs if not benefited from the\neffects. I've updated this.\n\n>\n> > Subject: [PATCH v47 4/5] Add system view tracking IO ops per backend type\n>\n> Note to self + commit message: Remember the need to do a catversion bump.\n\nNoted.\n\n>\n> > +-- pg_stat_io test:\n> > +-- verify_heapam always uses a BAS_BULKREAD BufferAccessStrategy.\n>\n> Maybe add that \"whereas a sequential scan does not, see ...\"?\n\nUpdated.\n\n>\n> > This allows\n> > +-- us to reliably test that pg_stat_io BULKREAD reads are being captured\n> > +-- without relying on the size of shared buffers or on an expensive operation\n> > +-- like CREATE DATABASE.\n>\n> CREATE / DROP TABLESPACE is also pretty expensive, but I don't have a better\n> idea.\n\nI've added a comment.\n\n>\n> > +-- Create an alternative tablespace and move the heaptest table to it, causing\n> > +-- it to be rewritten.\n>\n> IIRC the point of that is that it reliably evicts all the buffers from s_b,\n> correct? 
If so, mention that?\n\nDone.\n\n>\n> > +Datum\n> > +pg_stat_get_io(PG_FUNCTION_ARGS)\n> > +{\n> > + ReturnSetInfo *rsinfo;\n> > + PgStat_IO *backends_io_stats;\n> > + Datum reset_time;\n> > +\n> > + InitMaterializedSRF(fcinfo, 0);\n> > + rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > +\n> > + backends_io_stats = pgstat_fetch_stat_io();\n> > +\n> > + reset_time = TimestampTzGetDatum(backends_io_stats->stat_reset_timestamp);\n> > +\n> > + for (BackendType bktype = B_INVALID; bktype < BACKEND_NUM_TYPES; bktype++)\n> > + {\n> > + bool bktype_tracked;\n> > + Datum bktype_desc = CStringGetTextDatum(GetBackendTypeDesc(bktype));\n> > + PgStat_BackendIO *bktype_stats = &backends_io_stats->stats[bktype];\n> > +\n> > + /*\n> > + * For those BackendTypes without IO Operation stats, skip\n> > + * representing them in the view altogether. We still loop through\n> > + * their counters so that we can assert that all values are zero.\n> > + */\n> > + bktype_tracked = pgstat_tracks_io_bktype(bktype);\n>\n> How about instead just doing Assert(pgstat_bktype_io_stats_valid(...))? That\n> deduplicates the logic for the asserts, and avoids doing the full loop when\n> assertions aren't enabled anyway?\n>\n\nI've done this and added a comment.\n\n>\n>\n> > +-- After a checkpoint, there should be some additional IOCONTEXT_NORMAL writes\n> > +-- and fsyncs.\n> > +-- The second checkpoint ensures that stats from the first checkpoint have been\n> > +-- reported and protects against any potential races amongst the table\n> > +-- creation, a possible timing-triggered checkpoint, and the explicit\n> > +-- checkpoint in the test.\n>\n> There's a comment about the subsequent checkpoints earlier in the file, and I\n> think the comment is slightly more precise. 
Mybe just reference the earlier comment?\n>\n>\n> > +-- Change the tablespace so that the table is rewritten directly, then SELECT\n> > +-- from it to cause it to be read back into shared buffers.\n> > +SET allow_in_place_tablespaces = true;\n> > +CREATE TABLESPACE regress_io_stats_tblspc LOCATION '';\n>\n> Perhaps worth doing this in tablespace.sql, to avoid the additional\n> checkpoints done as part of CREATE/DROP TABLESPACE?\n>\n> Or, at least combine this with the CHECKPOINTs above?\n\nI see a checkpoint is requested when dropping the tablespace if not all\nthe files in it are deleted. It seems like if the DROP TABLE for the\npermanent table is before the explicit checkpoints in the test, then the\nDROP TABLESPACE will not cause an additional checkpoint. Is this what\nyou are suggesting? Dropping the temporary table should not have an\neffect on this.\n\n>\n> > +-- Drop the table so we can drop the tablespace later.\n> > +DROP TABLE test_io_shared;\n> > +-- Test that the follow IOCONTEXT_LOCAL IOOps are tracked in pg_stat_io:\n> > +-- - eviction of local buffers in order to reuse them\n> > +-- - reads of temporary table blocks into local buffers\n> > +-- - writes of local buffers to permanent storage\n> > +-- - extends of temporary tables\n> > +-- Set temp_buffers to a low value so that we can trigger writes with fewer\n> > +-- inserted tuples. Do so in a new session in case temporary tables have been\n> > +-- accessed by previous tests in this session.\n> > +\\c\n> > +SET temp_buffers TO '1MB';\n>\n> I'd set it to the actual minimum '100' (in pages). 
Perhaps that'd allow to\n> make test_io_local a bit smaller?\n\nI've done this.\n\n>\n> > +CREATE TEMPORARY TABLE test_io_local(a int, b TEXT);\n> > +SELECT sum(extends) AS io_sum_local_extends_before\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> > +SELECT sum(evictions) AS io_sum_local_evictions_before\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> > +SELECT sum(writes) AS io_sum_local_writes_before\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> > +-- Insert tuples into the temporary table, generating extends in the stats.\n> > +-- Insert enough values that we need to reuse and write out dirty local\n> > +-- buffers, generating evictions and writes.\n> > +INSERT INTO test_io_local SELECT generate_series(1, 8000) as id, repeat('a', 100);\n> > +SELECT sum(reads) AS io_sum_local_reads_before\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n>\n> Maybe add something like\n>\n> SELECT pg_relation_size('test_io_local') / current_setting('block_size')::int8 > 100;\n>\n> Better toast compression or such could easily make test_io_local smaller than\n> it's today. Seeing that it's too small would make it easier to understand the\n> failure.\n\nGood idea. So, I used pg_table_size() because it seems like\npg_relation_size() does not include the toast relations. However, I'm\nnot sure this is a good idea, because pg_table_size() includes FSM and\nvisibility map. 
Should I write a query to get the toast relation name\nand add pg_relation_size() of that relation and the main relation?\n\n>\n> > +SELECT sum(evictions) AS io_sum_local_evictions_after\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> > +SELECT sum(reads) AS io_sum_local_reads_after\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> > +SELECT sum(writes) AS io_sum_local_writes_after\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n> > +SELECT sum(extends) AS io_sum_local_extends_after\n> > + FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'temp relation' \\gset\n>\n> This could just be one select with multiple columns?\n>\n> I think if you use something like \\gset io_sum_local_after_ you can also avoid\n> the need to repeat \"io_sum_local_\" so many times.\n\nThanks. I didn't realize. I've fixed this throughout the test file.\n\n\nOn Mon, Jan 16, 2023 at 4:42 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> I missed a couple of versions, but I think the docs are clearer now.\n> I'm torn on losing some of the detail, but overall I do think it's a\n> good trade-off. Moving some details out to after the table does keep\n> the bulk of the view documentation more readable, and the \"inform\n> database tuning\" part is great. I really like the idea of a separate\n> Interpreting Statistics section, but for now this works.\n>\n> >+ <literal>vacuum</literal>: I/O operations performed outside of shared\n> >+ buffers while vacuuming and analyzing permanent relations.\n>\n> Why only permanent relations? Are temporary relations treated\n> differently? I imagine if someone has a temp-table-heavy workload that\n> requires regularly vacuuming and analyzing those relations, this point\n> may be confusing without some additional explanation.\n\nAh, yes. This is a bit confusing. 
We don't use buffer access strategies\nwhen operating on temp relations, so vacuuming them is counted in IO\nContext normal. I've added this information to the docs but now that\ndefinition is a bit long. Perhaps it should be a note? That seems like\nit would draw too much attention to this detail, though...\n\n- Melanie",
"msg_date": "Tue, 17 Jan 2023 12:22:14 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 12:22:14 -0500, Melanie Plageman wrote:\n> > > @@ -359,6 +360,15 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> > > .snapshot_cb = pgstat_checkpointer_snapshot_cb,\n> > > },\n> > >\n> > > + [PGSTAT_KIND_IO] = {\n> > > + .name = \"io_ops\",\n> >\n> > That should be \"io\" now I think?\n> >\n> \n> Oh no! I didn't notice this was broken. I've added pg_stat_have_stats()\n> to the IO stats tests now.\n> \n> It would be nice if pgstat_get_kind_from_str() could be used in\n> pg_stat_reset_shared() to avoid having to remember to change both.\n\nIt's hard to make that work, because of the historical behaviour of that\nfunction :(\n\n\n> Also:\n> - Since recovery_prefetch doesn't have a statistic kind, it doesn't fit\n> well into this paradigm\n\nI think that needs a rework anyway - it went in at about the same time as the\nshared mem stats patch, so it doesn't quite cohere.\n\n\n> On a separate note, should we be setting have_[io/slru/etc]stats to\n> false in the reset all functions?\n\nThat'd not work reliably, because other backends won't do the same. I don't\nsee a benefit in doing it differently in the local connection than the other\nconnections.\n\n\n> > > +typedef struct PgStat_BackendIO\n> > > +{\n> > > + PgStat_Counter data[IOCONTEXT_NUM_TYPES][IOOBJECT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > +} PgStat_BackendIO;\n> >\n> > Would it bother you if we swapped the order of iocontext and iobject here and\n> > related places? 
It makes more sense to me semantically, and should now be\n> > pretty easy, code wise.\n> \n> So, thinking about this I started noticing inconsistencies in other\n> areas around this order:\n> For example: ordering of objects mentioned in commit messages and comments,\n> ordering of parameters (like in pgstat_count_io_op() [currently in\n> reverse order]).\n> \n> I think we should make a final decision about this ordering and then\n> make everywhere consistent (including ordering in the view).\n> \n> Currently the order is:\n> BackendType\n> IOContext\n> IOObject\n> IOOp\n> \n> You are suggesting this order:\n> BackendType\n> IOObject\n> IOContext\n> IOOp\n> \n> Could you explain what you find more natural about this ordering (as I\n> find the other more natural)?\n\nThe object we're performing IO on determines more things than the context. So\nit just seems like the natural hierarchical fit. The context is a sub-category\nof the object. Consider how it'll look like if we also have objects for 'wal',\n'temp files'. It'll make sense to group by just the object, but it won't make\nsense to group by just the context.\n\nIf it were trivial to do I'd use a different IOContext for each IOObject. But\nit'd make it much harder. So there'll just be a bunch of values of IOContext\nthat'll only be used for one or a subset of the IOObjects.\n\n\nThe reason to put BackendType at the top is pragmatic - one backend is of a\nsingle type, but can do IO for all kinds of objects/contexts. 
So any other\nhierarchy would make the locking etc much harder.\n\n\n> This is one possible natural sentence with these objects:\n> \n> During COPY, a client backend may read in data from a permanent\n> relation.\n> This order is:\n> IOContext\n> BackendType\n> IOOp\n> IOObject\n> \n> I think English sentences are often structured subject, verb, object --\n> but in our case, we have an extra thing that doesn't fit neatly\n> (IOContext).\n\n\"..., to avoid polluting the buffer cache it uses the bulk (read|write)\nstrategy\".\n\n\n> Also, IOOp in a sentence would be in the middle (as the\n> verb). I made it last because a) it feels like the smallest unit b) it\n> would make the code a lot more annoying if it wasn't last.\n\nYea, I think pragmatically that is the right choice.\n\n\n\n> > > Subject: [PATCH v47 3/5] pgstat: Count IO for relations\n> >\n> > Nearly happy with this now. See one minor nit below.\n> >\n> > I don't love the counting in register_dirty_segment() and mdsyncfiletag(), but\n> > I don't have a better idea, and it doesn't seem too horrible.\n> \n> You don't like it because such things shouldn't be in md.c -- since we\n> went to the trouble of having function pointers and making it general?\n\nIt's more of a gut feeling than well reasoned ;)\n\n\n\n> > > +-- Change the tablespace so that the table is rewritten directly, then SELECT\n> > > +-- from it to cause it to be read back into shared buffers.\n> > > +SET allow_in_place_tablespaces = true;\n> > > +CREATE TABLESPACE regress_io_stats_tblspc LOCATION '';\n> >\n> > Perhaps worth doing this in tablespace.sql, to avoid the additional\n> > checkpoints done as part of CREATE/DROP TABLESPACE?\n> >\n> > Or, at least combine this with the CHECKPOINTs above?\n> \n> I see a checkpoint is requested when dropping the tablespace if not all\n> the files in it are deleted. 
It seems like if the DROP TABLE for the\n> permanent table is before the explicit checkpoints in the test, then the\n> DROP TABLESPACE will not cause an additional checkpoint.\n\nUnfortunately, that's not how it works :(. See the comment above mdunlink():\n\n> * For regular relations, we don't unlink the first segment file of the rel,\n> * but just truncate it to zero length, and record a request to unlink it after\n> * the next checkpoint. Additional segments can be unlinked immediately,\n> * however. Leaving the empty file in place prevents that relfilenumber\n> * from being reused. The scenario this protects us from is:\n> ...\n\n\n> Is this what you are suggesting? Dropping the temporary table should not\n> have an effect on this.\n\nI was wondering about simply moving that portion of the test to\ntablespace.sql, where we already created a tablespace.\n\n\nAn alternative would be to propose splitting tablespace.sql into one portion\nrunning at the start of parallel_schedule, and one at the end. Historically,\nwe needed tablespace.sql to be optional due to causing problems when\nreplicating to another instance on the same machine, but now we have\nallow_in_place_tablespaces.\n\n\n> > SELECT pg_relation_size('test_io_local') / current_setting('block_size')::int8 > 100;\n> >\n> > Better toast compression or such could easily make test_io_local smaller than\n> > it's today. Seeing that it's too small would make it easier to understand the\n> > failure.\n> \n> Good idea. So, I used pg_table_size() because it seems like\n> pg_relation_size() does not include the toast relations. However, I'm\n> not sure this is a good idea, because pg_table_size() includes FSM and\n> visibility map. Should I write a query to get the toast relation name\n> and add pg_relation_size() of that relation and the main relation?\n\nI think it's the right thing to just include the relation size. Your queries\nIIRC won't use the toast table or other forks. 
So I'd leave it at just\npg_relation_size().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 11:12:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "v49 attached\n\nOn Tue, Jan 17, 2023 at 2:12 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-17 12:22:14 -0500, Melanie Plageman wrote:\n>\n> > > > +typedef struct PgStat_BackendIO\n> > > > +{\n> > > > + PgStat_Counter data[IOCONTEXT_NUM_TYPES][IOOBJECT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > > +} PgStat_BackendIO;\n> > >\n> > > Would it bother you if we swapped the order of iocontext and iobject here and\n> > > related places? It makes more sense to me semantically, and should now be\n> > > pretty easy, code wise.\n> >\n> > So, thinking about this I started noticing inconsistencies in other\n> > areas around this order:\n> > For example: ordering of objects mentioned in commit messages and comments,\n> > ordering of parameters (like in pgstat_count_io_op() [currently in\n> > reverse order]).\n> >\n> > I think we should make a final decision about this ordering and then\n> > make everywhere consistent (including ordering in the view).\n> >\n> > Currently the order is:\n> > BackendType\n> > IOContext\n> > IOObject\n> > IOOp\n> >\n> > You are suggesting this order:\n> > BackendType\n> > IOObject\n> > IOContext\n> > IOOp\n> >\n> > Could you explain what you find more natural about this ordering (as I\n> > find the other more natural)?\n>\n> The object we're performing IO on determines more things than the context. So\n> it just seems like the natural hierarchical fit. The context is a sub-category\n> of the object. Consider how it'll look like if we also have objects for 'wal',\n> 'temp files'. It'll make sense to group by just the object, but it won't make\n> sense to group by just the context.\n>\n> If it were trivial to do I'd use a different IOContext for each IOObject. But\n> it'd make it much harder. 
So there'll just be a bunch of values of IOContext\n> that'll only be used for one or a subset of the IOObjects.\n>\n>\n> The reason to put BackendType at the top is pragmatic - one backend is of a\n> single type, but can do IO for all kinds of objects/contexts. So any other\n> hierarchy would make the locking etc much harder.\n>\n>\n> > This is one possible natural sentence with these objects:\n> >\n> > During COPY, a client backend may read in data from a permanent\n> > relation.\n> > This order is:\n> > IOContext\n> > BackendType\n> > IOOp\n> > IOObject\n> >\n> > I think English sentences are often structured subject, verb, object --\n> > but in our case, we have an extra thing that doesn't fit neatly\n> > (IOContext).\n>\n> \"..., to avoid polluting the buffer cache it uses the bulk (read|write)\n> strategy\".\n>\n>\n> > Also, IOOp in a sentence would be in the middle (as the\n> > verb). I made it last because a) it feels like the smallest unit b) it\n> > would make the code a lot more annoying if it wasn't last.\n>\n> Yea, I think pragmatically that is the right choice.\n\nI have changed the order and updated all the places using\nPgStat_BktypeIO as well as in all locations in which it should be\nordered for consistency (that I could find in the pass I did) -- e.g.\nthe view definition, function signatures, comments, commit messages,\netc.\n\n> > > > +-- Change the tablespace so that the table is rewritten directly, then SELECT\n> > > > +-- from it to cause it to be read back into shared buffers.\n> > > > +SET allow_in_place_tablespaces = true;\n> > > > +CREATE TABLESPACE regress_io_stats_tblspc LOCATION '';\n> > >\n> > > Perhaps worth doing this in tablespace.sql, to avoid the additional\n> > > checkpoints done as part of CREATE/DROP TABLESPACE?\n> > >\n> > > Or, at least combine this with the CHECKPOINTs above?\n> >\n> > I see a checkpoint is requested when dropping the tablespace if not all\n> > the files in it are deleted. 
It seems like if the DROP TABLE for the\n> > permanent table is before the explicit checkpoints in the test, then the\n> > DROP TABLESPACE will not cause an additional checkpoint.\n>\n> Unfortunately, that's not how it works :(. See the comment above mdunlink():\n>\n> > * For regular relations, we don't unlink the first segment file of the rel,\n> > * but just truncate it to zero length, and record a request to unlink it after\n> > * the next checkpoint. Additional segments can be unlinked immediately,\n> > * however. Leaving the empty file in place prevents that relfilenumber\n> > * from being reused. The scenario this protects us from is:\n> > ...\n>\n>\n> > Is this what you are suggesting? Dropping the temporary table should not\n> > have an effect on this.\n>\n> I was wondering about simply moving that portion of the test to\n> tablespace.sql, where we already created a tablespace.\n>\n>\n> An alternative would be to propose splitting tablespace.sql into one portion\n> running at the start of parallel_schedule, and one at the end. Historically,\n> we needed tablespace.sql to be optional due to causing problems when\n> replicating to another instance on the same machine, but now we have\n> allow_in_place_tablespaces.\n\nIt seems like the best way would be to split up the tablespace test file\nas you suggested and drop the tablespace at the end of the regression\ntest suite. There could be other tests that could use a tablespace.\nThough what I wrote is kind of tablespace test coverage, if this\nrewriting behavior no longer happened when doing alter table set\ntablespace, we would want to come up with a new test which exercised\nthat code to count those IO stats, not simply delete it from the\ntablespace tests.\n\n> > > SELECT pg_relation_size('test_io_local') / current_setting('block_size')::int8 > 100;\n> > >\n> > > Better toast compression or such could easily make test_io_local smaller than\n> > > it's today. 
Seeing that it's too small would make it easier to understand the\n> > > failure.\n> >\n> > Good idea. So, I used pg_table_size() because it seems like\n> > pg_relation_size() does not include the toast relations. However, I'm\n> > not sure this is a good idea, because pg_table_size() includes FSM and\n> > visibility map. Should I write a query to get the toast relation name\n> > and add pg_relation_size() of that relation and the main relation?\n>\n> I think it's the right thing to just include the relation size. Your queries\n> IIRC won't use the toast table or other forks. So I'd leave it at just\n> pg_relation_size().\n\nI did notice that this test wasn't using the toast table for the\ntoastable column -- but you mentioned better toast compression affecting\nthe future test stability, so I'm confused.\n\n- Melanie",
"msg_date": "Tue, 17 Jan 2023 17:00:34 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 9:22 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Mon, Jan 16, 2023 at 4:42 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> > I missed a couple of versions, but I think the docs are clearer now.\n> > I'm torn on losing some of the detail, but overall I do think it's a\n> > good trade-off. Moving some details out to after the table does keep\n> > the bulk of the view documentation more readable, and the \"inform\n> > database tuning\" part is great. I really like the idea of a separate\n> > Interpreting Statistics section, but for now this works.\n> >\n> > >+ <literal>vacuum</literal>: I/O operations performed outside of shared\n> > >+ buffers while vacuuming and analyzing permanent relations.\n> >\n> > Why only permanent relations? Are temporary relations treated\n> > differently? I imagine if someone has a temp-table-heavy workload that\n> > requires regularly vacuuming and analyzing those relations, this point\n> > may be confusing without some additional explanation.\n>\n> Ah, yes. This is a bit confusing. We don't use buffer access strategies\n> when operating on temp relations, so vacuuming them is counted in IO\n> Context normal. I've added this information to the docs but now that\n> definition is a bit long. Perhaps it should be a note? That seems like\n> it would draw too much attention to this detail, though...\n\nThanks for clarifying. I think the updated definition still works:\nit's still shorter than the `normal` context definition.\n\n\n",
"msg_date": "Tue, 17 Jan 2023 22:10:38 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Wed, 18 Jan 2023 at 03:30, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> v49 attached\n>\n> On Tue, Jan 17, 2023 at 2:12 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-01-17 12:22:14 -0500, Melanie Plageman wrote:\n> >\n> > > > > +typedef struct PgStat_BackendIO\n> > > > > +{\n> > > > > + PgStat_Counter data[IOCONTEXT_NUM_TYPES][IOOBJECT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > > > +} PgStat_BackendIO;\n> > > >\n> > > > Would it bother you if we swapped the order of iocontext and iobject here and\n> > > > related places? It makes more sense to me semantically, and should now be\n> > > > pretty easy, code wise.\n> > >\n> > > So, thinking about this I started noticing inconsistencies in other\n> > > areas around this order:\n> > > For example: ordering of objects mentioned in commit messages and comments,\n> > > ordering of parameters (like in pgstat_count_io_op() [currently in\n> > > reverse order]).\n> > >\n> > > I think we should make a final decision about this ordering and then\n> > > make everywhere consistent (including ordering in the view).\n> > >\n> > > Currently the order is:\n> > > BackendType\n> > > IOContext\n> > > IOObject\n> > > IOOp\n> > >\n> > > You are suggesting this order:\n> > > BackendType\n> > > IOObject\n> > > IOContext\n> > > IOOp\n> > >\n> > > Could you explain what you find more natural about this ordering (as I\n> > > find the other more natural)?\n> >\n> > The object we're performing IO on determines more things than the context. So\n> > it just seems like the natural hierarchical fit. The context is a sub-category\n> > of the object. Consider how it'll look like if we also have objects for 'wal',\n> > 'temp files'. It'll make sense to group by just the object, but it won't make\n> > sense to group by just the context.\n> >\n> > If it were trivial to do I'd use a different IOContext for each IOObject. But\n> > it'd make it much harder. 
So there'll just be a bunch of values of IOContext\n> > that'll only be used for one or a subset of the IOObjects.\n> >\n> >\n> > The reason to put BackendType at the top is pragmatic - one backend is of a\n> > single type, but can do IO for all kinds of objects/contexts. So any other\n> > hierarchy would make the locking etc much harder.\n> >\n> >\n> > > This is one possible natural sentence with these objects:\n> > >\n> > > During COPY, a client backend may read in data from a permanent\n> > > relation.\n> > > This order is:\n> > > IOContext\n> > > BackendType\n> > > IOOp\n> > > IOObject\n> > >\n> > > I think English sentences are often structured subject, verb, object --\n> > > but in our case, we have an extra thing that doesn't fit neatly\n> > > (IOContext).\n> >\n> > \"..., to avoid polluting the buffer cache it uses the bulk (read|write)\n> > strategy\".\n> >\n> >\n> > > Also, IOOp in a sentence would be in the middle (as the\n> > > verb). I made it last because a) it feels like the smallest unit b) it\n> > > would make the code a lot more annoying if it wasn't last.\n> >\n> > Yea, I think pragmatically that is the right choice.\n>\n> I have changed the order and updated all the places using\n> PgStat_BktypeIO as well as in all locations in which it should be\n> ordered for consistency (that I could find in the pass I did) -- e.g.\n> the view definition, function signatures, comments, commit messages,\n> etc.\n>\n> > > > > +-- Change the tablespace so that the table is rewritten directly, then SELECT\n> > > > > +-- from it to cause it to be read back into shared buffers.\n> > > > > +SET allow_in_place_tablespaces = true;\n> > > > > +CREATE TABLESPACE regress_io_stats_tblspc LOCATION '';\n> > > >\n> > > > Perhaps worth doing this in tablespace.sql, to avoid the additional\n> > > > checkpoints done as part of CREATE/DROP TABLESPACE?\n> > > >\n> > > > Or, at least combine this with the CHECKPOINTs above?\n> > >\n> > > I see a checkpoint is requested when 
dropping the tablespace if not all\n> > > the files in it are deleted. It seems like if the DROP TABLE for the\n> > > permanent table is before the explicit checkpoints in the test, then the\n> > > DROP TABLESPACE will not cause an additional checkpoint.\n> >\n> > Unfortunately, that's not how it works :(. See the comment above mdunlink():\n> >\n> > > * For regular relations, we don't unlink the first segment file of the rel,\n> > > * but just truncate it to zero length, and record a request to unlink it after\n> > > * the next checkpoint. Additional segments can be unlinked immediately,\n> > > * however. Leaving the empty file in place prevents that relfilenumber\n> > > * from being reused. The scenario this protects us from is:\n> > > ...\n> >\n> >\n> > > Is this what you are suggesting? Dropping the temporary table should not\n> > > have an effect on this.\n> >\n> > I was wondering about simply moving that portion of the test to\n> > tablespace.sql, where we already created a tablespace.\n> >\n> >\n> > An alternative would be to propose splitting tablespace.sql into one portion\n> > running at the start of parallel_schedule, and one at the end. Historically,\n> > we needed tablespace.sql to be optional due to causing problems when\n> > replicating to another instance on the same machine, but now we have\n> > allow_in_place_tablespaces.\n>\n> It seems like the best way would be to split up the tablespace test file\n> as you suggested and drop the tablespace at the end of the regression\n> test suite. 
There could be other tests that could use a tablespace.\n> Though what I wrote is kind of tablespace test coverage, if this\n> rewriting behavior no longer happened when doing alter table set\n> tablespace, we would want to come up with a new test which exercised\n> that code to count those IO stats, not simply delete it from the\n> tablespace tests.\n>\n> > > > SELECT pg_relation_size('test_io_local') / current_setting('block_size')::int8 > 100;\n> > > >\n> > > > Better toast compression or such could easily make test_io_local smaller than\n> > > > it's today. Seeing that it's too small would make it easier to understand the\n> > > > failure.\n> > >\n> > > Good idea. So, I used pg_table_size() because it seems like\n> > > pg_relation_size() does not include the toast relations. However, I'm\n> > > not sure this is a good idea, because pg_table_size() includes FSM and\n> > > visibility map. Should I write a query to get the toast relation name\n> > > and add pg_relation_size() of that relation and the main relation?\n> >\n> > I think it's the right thing to just include the relation size. Your queries\n> > IIRC won't use the toast table or other forks. So I'd leave it at just\n> > pg_relation_size().\n>\n> I did notice that this test wasn't using the toast table for the\n> toastable column -- but you mentioned better toast compression affecting\n> the future test stability, so I'm confused.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n4f74f5641d53559ec44e74d5bf552e167fdd5d20 ===\n=== applying patch\n./v49-0003-Add-system-view-tracking-IO-ops-per-backend-type.patch\n....\npatching file src/test/regress/expected/rules.out\nHunk #1 FAILED at 1876.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/test/regress/expected/rules.out.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3272.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:47:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 6:18 AM vignesh C <vignesh21@gmail.com> wrote:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> 4f74f5641d53559ec44e74d5bf552e167fdd5d20 ===\n> === applying patch\n> ./v49-0003-Add-system-view-tracking-IO-ops-per-backend-type.patch\n> ....\n> patching file src/test/regress/expected/rules.out\n> Hunk #1 FAILED at 1876.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> src/test/regress/expected/rules.out.rej\n>\n> [1] - http://cfbot.cputube.org/patch_41_3272.log\n\nYes, it conflicted with 47bb9db75996232. rebased v50 is attached.\n\nOn Tue, Jan 17, 2023 at 5:00 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> > > > > +-- Change the tablespace so that the table is rewritten directly, then SELECT\n> > > > > +-- from it to cause it to be read back into shared buffers.\n> > > > > +SET allow_in_place_tablespaces = true;\n> > > > > +CREATE TABLESPACE regress_io_stats_tblspc LOCATION '';\n> > > >\n> > > > Perhaps worth doing this in tablespace.sql, to avoid the additional\n> > > > checkpoints done as part of CREATE/DROP TABLESPACE?\n> > > >\n> > > > Or, at least combine this with the CHECKPOINTs above?\n> > >\n> > > I see a checkpoint is requested when dropping the tablespace if not all\n> > > the files in it are deleted. It seems like if the DROP TABLE for the\n> > > permanent table is before the explicit checkpoints in the test, then the\n> > > DROP TABLESPACE will not cause an additional checkpoint.\n> >\n> > Unfortunately, that's not how it works :(. See the comment above mdunlink():\n> >\n> > > * For regular relations, we don't unlink the first segment file of the rel,\n> > > * but just truncate it to zero length, and record a request to unlink it after\n> > > * the next checkpoint. Additional segments can be unlinked immediately,\n> > > * however. 
Leaving the empty file in place prevents that relfilenumber\n> > > * from being reused. The scenario this protects us from is:\n> > > ...\n> >\n> >\n> > > Is this what you are suggesting? Dropping the temporary table should not\n> > > have an effect on this.\n> >\n> > I was wondering about simply moving that portion of the test to\n> > tablespace.sql, where we already created a tablespace.\n> >\n> >\n> > An alternative would be to propose splitting tablespace.sql into one portion\n> > running at the start of parallel_schedule, and one at the end. Historically,\n> > we needed tablespace.sql to be optional due to causing problems when\n> > replicating to another instance on the same machine, but now we have\n> > allow_in_place_tablespaces.\n>\n> It seems like the best way would be to split up the tablespace test file\n> as you suggested and drop the tablespace at the end of the regression\n> test suite. There could be other tests that could use a tablespace.\n> Though what I wrote is kind of tablespace test coverage, if this\n> rewriting behavior no longer happened when doing alter table set\n> tablespace, we would want to come up with a new test which exercised\n> that code to count those IO stats, not simply delete it from the\n> tablespace tests.\n\nI have added a patch to the set which creates the regress_tblspace\n(formerly created in tablespace.sql) in test_setup.sql. I then moved the\ntablespace test to the end of the parallel schedule so that my test (and\nothers) could use the regress_tblspace.\n\nI modified some of the tablespace.sql tests to be more specific in terms\nof the objects they are looking for so that tests using the tablespace\nare not forced to drop all of the objects they make in the tablespace.\n\nNote that I did not proactively change all tests in tablespace.sql that\nmay fail in this way -- only those that failed because of the tables I\ncreated (and did not drop) from regress_tblspace.\n\n- Melanie",
"msg_date": "Thu, 19 Jan 2023 16:28:59 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 4:28 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Jan 19, 2023 at 6:18 AM vignesh C <vignesh21@gmail.com> wrote:\n> > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> > === Applying patches on top of PostgreSQL commit ID\n> > 4f74f5641d53559ec44e74d5bf552e167fdd5d20 ===\n> > === applying patch\n> > ./v49-0003-Add-system-view-tracking-IO-ops-per-backend-type.patch\n> > ....\n> > patching file src/test/regress/expected/rules.out\n> > Hunk #1 FAILED at 1876.\n> > 1 out of 1 hunk FAILED -- saving rejects to file\n> > src/test/regress/expected/rules.out.rej\n> >\n> > [1] - http://cfbot.cputube.org/patch_41_3272.log\n>\n> Yes, it conflicted with 47bb9db75996232. rebased v50 is attached.\n\nOh dear-- an extra FlushBuffer() snuck in there somehow.\nRemoved it in attached v51.\nAlso, I fixed an issue in my tablespace.sql updates\n\n- Melanie",
"msg_date": "Thu, 19 Jan 2023 21:15:34 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hello.\n\nAt Thu, 19 Jan 2023 21:15:34 -0500, Melanie Plageman <melanieplageman@gmail.com> wrote in \n> Oh dear-- an extra FlushBuffer() snuck in there somehow.\n> Removed it in attached v51.\n> Also, I fixed an issue in my tablespace.sql updates\n\nI only looked 0002 and 0004.\n(Sorry for the random order of the comment..)\n\n0002:\n\n+\tAssert(pgstat_bktype_io_stats_valid(bktype_shstats, MyBackendType));\n\nThis is relatively complex checking. We already assert out increments\nof invalid counters. Thus this is checking if some unrelated code\nclobbered them, which we do only when consistency is critical. Is\nthere any need to do that here? I saw another occurrence of the same\nassertion.\n\n\n-/* Reset some shared cluster-wide counters */\n+/*\n+ * Reset some shared cluster-wide counters\n+ *\n+ * When adding a new reset target, ideally the name should match that in\n+ * pgstat_kind_infos, if relevant.\n+ */\n\nI'm not sure the addition is useful..\n\n\n+pgstat_count_io_op(IOObject io_object, IOContext io_context, IOOp io_op)\n+{\n+\tAssert(io_object < IOOBJECT_NUM_TYPES);\n+\tAssert(io_context < IOCONTEXT_NUM_TYPES);\n+\tAssert(io_op < IOOP_NUM_TYPES);\n+\tAssert(pgstat_tracks_io_op(MyBackendType, io_object, io_context, io_op));\n\nIs there any reason for not checking the value ranges at the\nbottom-most functions? They can lead to out-of-bounds access so I\ndon't think we need to continue execution for such invalid values.\n\n+\tno_temp_rel = bktype == B_AUTOVAC_LAUNCHER || bktype == B_BG_WRITER ||\n+\t\tbktype == B_CHECKPOINTER || bktype == B_AUTOVAC_WORKER ||\n+\t\tbktype == B_STANDALONE_BACKEND || bktype == B_STARTUP;\n\nI'm not sure I like to omit parentheses for such a long Boolean\nexpression on the right side.\n\n\n+\twrite_chunk_s(fpout, &pgStatLocal.snapshot.io);\n+\tif (!read_chunk_s(fpin, &shmem->io.stats))\n\nThe names of the functions hardly make sense alone to me. How about\nwrite_struct()/read_struct()? 
(I personally prefer to use\nwrite_chunk() directly..)\n\n\n+ PgStat_BktypeIO\n\nThis patch abbreviates \"backend\" as \"bk\" but \"be\" is used in many\nplaces. I think that naming should follow the predecessors.\n\n\n0004:\n\nsystem_views.sql:\n\n+FROM pg_stat_get_io() b;\n\nWhat does the \"b\" stand for? (Backend? then \"s\" or \"i\" seems\nstraight-forward.)\n\n\n+\t\tnulls[col_idx] = !pgstat_tracks_io_op(bktype, io_obj, io_context, io_op);\n+\n+\t\tif (nulls[col_idx])\n+\t\t\tcontinue;\n+\n+\t\tvalues[col_idx] =\n+\t\t\tInt64GetDatum(bktype_stats->data[io_obj][io_context][io_op]);\n\nThis is a bit hard to read since it requires to follow the condition\nflow. The following is simpler and I thhink close to our standard.\n\nif (pgstat_tacks_io_op())\n values[col_idx] =\n\t\t\tInt64GetDatum(bktype_stats->data[io_obj][io_context][io_op]);\nelse\n nulls[col_idx] = true;\n\n\n> + Number of read operations in units of <varname>op_bytes</varname>.\n\nI may be the only one who see the name as umbiguous between \"total\nnumber of handled bytes\" and \"bytes hadled at an operation\". Can't it\nbe op_blocksize or just block_size?\n\n+ b.io_object,\n+ b.io_context,\n\nIt's uncertain to me why only the two columns are prefixed by\n\"io\". Don't \"object_type\" and just \"context\" work instead?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Jan 2023 17:22:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Tue, 24 Jan 2023 17:22:03 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> +pgstat_count_io_op(IOObject io_object, IOContext io_context, IOOp io_op)\n> +{\n> +\tAssert(io_object < IOOBJECT_NUM_TYPES);\n> +\tAssert(io_context < IOCONTEXT_NUM_TYPES);\n> +\tAssert(io_op < IOOP_NUM_TYPES);\n> +\tAssert(pgstat_tracks_io_op(MyBackendType, io_object, io_context, io_op));\n> \n> Is there any reason for not checking the value ranges at the\n> bottom-most functions? They can lead to out-of-bounds access so I\n\nTo make sure, the \"They\" means \"out-of-range io_object/context/op\nvalues\"..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Jan 2023 17:25:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-24 17:22:03 +0900, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Thu, 19 Jan 2023 21:15:34 -0500, Melanie Plageman <melanieplageman@gmail.com> wrote in \n> > Oh dear-- an extra FlushBuffer() snuck in there somehow.\n> > Removed it in attached v51.\n> > Also, I fixed an issue in my tablespace.sql updates\n> \n> I only looked 0002 and 0004.\n> (Sorry for the random order of the comment..)\n> \n> 0002:\n> \n> +\tAssert(pgstat_bktype_io_stats_valid(bktype_shstats, MyBackendType));\n> \n> This is relatively complex checking. We already asserts-out increments\n> of invalid counters. Thus this is checking if some unrelated codes\n> clobbered them, which we do only when consistency is critical. Is\n> there any needs to do that here? I saw another occurance of the same\n> assertion.\n\nI found it useful to find problems.\n\n\n> +\tno_temp_rel = bktype == B_AUTOVAC_LAUNCHER || bktype == B_BG_WRITER ||\n> +\t\tbktype == B_CHECKPOINTER || bktype == B_AUTOVAC_WORKER ||\n> +\t\tbktype == B_STANDALONE_BACKEND || bktype == B_STARTUP;\n> \n> I'm not sure I like to omit parentheses for such a long Boolean\n> expression on the right side.\n\nWhat parens would help?\n\n\n> +\twrite_chunk_s(fpout, &pgStatLocal.snapshot.io);\n> +\tif (!read_chunk_s(fpin, &shmem->io.stats))\n> \n> The names of the functions hardly make sense alone to me. How about\n> write_struct()/read_struct()? (I personally prefer to use\n> write_chunk() directly..)\n\nThat's not related to this patch - there's several existing callers for\nit. And write_struct wouldn't be better imo, because it's not just for\nstructs.\n\n\n> + PgStat_BktypeIO\n> \n> This patch abbreviates \"backend\" as \"bk\" but \"be\" is used in many\n> places. 
I think that naming should follow the predecessors.\n\nThe precedents aren't consistent, unfortunately :)\n\n\n> > + Number of read operations in units of <varname>op_bytes</varname>.\n> \n> I may be the only one who see the name as umbiguous between \"total\n> number of handled bytes\" and \"bytes hadled at an operation\". Can't it\n> be op_blocksize or just block_size?\n> \n> + b.io_object,\n> + b.io_context,\n\nNo, block wouldn't be helpful - we'd like to use this for something that isn't\nuniform blocks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 Jan 2023 14:35:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Tue, 24 Jan 2023 14:35:12 -0800, Andres Freund <andres@anarazel.de> wrote in \n> > 0002:\n> > \n> > +\tAssert(pgstat_bktype_io_stats_valid(bktype_shstats, MyBackendType));\n> > \n> > This is relatively complex checking. We already asserts-out increments\n> > of invalid counters. Thus this is checking if some unrelated codes\n> > clobbered them, which we do only when consistency is critical. Is\n> > there any needs to do that here? I saw another occurance of the same\n> > assertion.\n> \n> I found it useful to find problems.\n\nOkay.\n\n> > +\tno_temp_rel = bktype == B_AUTOVAC_LAUNCHER || bktype == B_BG_WRITER ||\n> > +\t\tbktype == B_CHECKPOINTER || bktype == B_AUTOVAC_WORKER ||\n> > +\t\tbktype == B_STANDALONE_BACKEND || bktype == B_STARTUP;\n> > \n> > I'm not sure I like to omit parentheses for such a long Boolean\n> > expression on the right side.\n> \n> What parens would help?\n\nI thought about the following.\n\nno_temp_rel =\n (bktype == B_AUTOVAC_LAUNCHER ||\n bktype == B_BG_WRITER ||\n\tbktype == B_CHECKPOINTER ||\n\tbktype == B_AUTOVAC_WORKER ||\n\tbktype == B_STANDALONE_BACKEND ||\n\tbktype == B_STARTUP);\n\n\n> > +\twrite_chunk_s(fpout, &pgStatLocal.snapshot.io);\n> > +\tif (!read_chunk_s(fpin, &shmem->io.stats))\n> > \n> > The names of the functions hardly make sense alone to me. How about\n> > write_struct()/read_struct()? (I personally prefer to use\n> > write_chunk() directly..)\n> \n> That's not related to this patch - there's several existing callers for\n> it. And write_struct wouldn't be better imo, because it's not just for\n> structs.\n\nHmm. Then what the \"_s\" stands for?\n\n> > + PgStat_BktypeIO\n> > \n> > This patch abbreviates \"backend\" as \"bk\" but \"be\" is used in many\n> > places. I think that naming should follow the predecessors.\n> \n> The precedence aren't consistent unfortunately :)\n\nUuuummmmm. Okay, just I like \"be\" there! 
Anyway, I don't strongly\npush that.\n\n> > > + Number of read operations in units of <varname>op_bytes</varname>.\n> > \n> > I may be the only one who see the name as umbiguous between \"total\n> > number of handled bytes\" and \"bytes hadled at an operation\". Can't it\n> > be op_blocksize or just block_size?\n> > \n> > + b.io_object,\n> > + b.io_context,\n> \n> No, block wouldn't be helpful - we'd like to use this for something that isn't\n> uniform blocks.\n\nWhat does the field show in that case? The mean of operation size? Or\none row per operation size? If the former, the name looks somewhat\nwrong. If the latter, block_size seems to make sense.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Jan 2023 16:56:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nI did another read through the series. I do have some minor changes, but\nthey're minor. I think this is ready for commit. I plan to start pushing\ntomorrow.\n\nThe changes I made are:\n- the tablespace test changes didn't quite work in isolation / needed a bit of\n polishing\n- moved the tablespace changes to later in the series\n- split the tests out of the commit adding the view into its own commit\n- minor code formatting things (e.g. didn't like nested for()s without {})\n\n\n\nOn 2023-01-25 16:56:17 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 24 Jan 2023 14:35:12 -0800, Andres Freund <andres@anarazel.de> wrote in\n> > > +\twrite_chunk_s(fpout, &pgStatLocal.snapshot.io);\n> > > +\tif (!read_chunk_s(fpin, &shmem->io.stats))\n> > >\n> > > The names of the functions hardly make sense alone to me. How about\n> > > write_struct()/read_struct()? (I personally prefer to use\n> > > write_chunk() directly..)\n> >\n> > That's not related to this patch - there's several existing callers for\n> > it. And write_struct wouldn't be better imo, because it's not just for\n> > structs.\n>\n> Hmm. Then what the \"_s\" stands for?\n\nSize. It's a macro that just forwards to read_chunk()/write_chunk().\n\n\n\n> > > > + Number of read operations in units of <varname>op_bytes</varname>.\n> > >\n> > > I may be the only one who see the name as umbiguous between \"total\n> > > number of handled bytes\" and \"bytes hadled at an operation\". Can't it\n> > > be op_blocksize or just block_size?\n> > >\n> > > + b.io_object,\n> > > + b.io_context,\n> >\n> > No, block wouldn't be helpful - we'd like to use this for something that isn't\n> > uniform blocks.\n>\n> What does the field show in that case? The mean of operation size? Or\n> one row per opration size? If the former, the name looks somewhat\n> wrong. If the latter, block_size seems making sense.\n\n1, so that it's clear that the rest are in bytes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 22:38:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Tue, 7 Feb 2023 22:38:14 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> I did another read through the series. I do have some minor changes, but\n> they're minor. I think this is ready for commit. I plan to start pushing\n> tomorrow.\n> \n> The changes I made are:\n> - the tablespace test changes didn't quite work in isolation / needed a bit of\n> polishing\n> - moved the tablespace changes to later in the series\n> - split the tests out of the commit adding the view into its own commit\n> - minor code formatting things (e.g. didn't like nested for()s without {})\n\n\n> On 2023-01-25 16:56:17 +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 24 Jan 2023 14:35:12 -0800, Andres Freund <andres@anarazel.de> wrote in\n> > > > +\twrite_chunk_s(fpout, &pgStatLocal.snapshot.io);\n> > > > +\tif (!read_chunk_s(fpin, &shmem->io.stats))\n> > > >\n> > > > The names of the functions hardly make sense alone to me. How about\n> > > > write_struct()/read_struct()? (I personally prefer to use\n> > > > write_chunk() directly..)\n> > >\n> > > That's not related to this patch - there's several existing callers for\n> > > it. And write_struct wouldn't be better imo, because it's not just for\n> > > structs.\n> >\n> > Hmm. Then what the \"_s\" stands for?\n> \n> Size. It's a macro that just forwards to read_chunk()/write_chunk().\n\nI know what the macros do. But, I'm fine with the names as they are\nthere since before this patch. Sorry for the noise.\n\n> > > > > + Number of read operations in units of <varname>op_bytes</varname>.\n> > > >\n> > > > I may be the only one who see the name as umbiguous between \"total\n> > > > number of handled bytes\" and \"bytes hadled at an operation\". 
Can't it\n> > > > be op_blocksize or just block_size?\n> > > >\n> > > > + b.io_object,\n> > > > + b.io_context,\n> > >\n> > > No, block wouldn't be helpful - we'd like to use this for something that isn't\n> > > uniform blocks.\n> >\n> > What does the field show in that case? The mean of operation size? Or\n> > one row per opration size? If the former, the name looks somewhat\n> > wrong. If the latter, block_size seems making sense.\n> \n> 1, so that it's clear that the rest are in bytes.\n\nThanks. Okay, I guess the documentation will be changed as necessary.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Feb 2023 16:03:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-07 22:38:14 -0800, Andres Freund wrote:\n> I did another read through the series. I do have some minor changes, but\n> they're minor. I think this is ready for commit. I plan to start pushing\n> tomorrow.\n\nPushed the first (and biggest) commit. More tomorrow.\n\n\nAlready can't wait to see incremental improvements of this version of\npg_stat_io ;). Tracking buffer hits. Tracking Wal IO. Tracking relation IO\nbypassing shared buffers. Per connection IO statistics. Tracking IO time.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 21:03:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 21:03:19 -0800, Andres Freund wrote:\n> Pushed the first (and biggest) commit. More tomorrow.\n\nJust pushed the actual pg_stat_io view, the splitting of the tablespace test,\nand the pg_stat_io tests.\n\nYay!\n\nThanks all for patch and review!\n\n\n> Already can't wait to see incremental improvements of this version of\n> pg_stat_io ;). Tracking buffer hits. Tracking Wal IO. Tracking relation IO\n> bypassing shared buffers. Per connection IO statistics. Tracking IO time.\n\nThat's still the case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 10:24:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 10:24:37 -0800, Andres Freund wrote:\n> Just pushed the actual pg_stat_io view, the splitting of the tablespace test,\n> and the pg_stat_io tests.\n\nOne thing I started to wonder about since is whether we should remove the io_\nprefix from io_object, io_context. The prefixes make sense on the C level, but\nit's not clear to me that that's also the case on the table level.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Feb 2023 11:08:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 11:08 AM Andres Freund <andres@anarazel.de> wrote:\n> One thing I started to wonder about since is whether we should remove the io_\n> prefix from io_object, io_context. The prefixes make sense on the C level, but\n> it's not clear to me that that's also the case on the table level.\n\nYeah, +1. It's hard to argue that there would be any confusion,\nconsidering `io_` is in the name of the view.\n\n(Unless, I suppose, some other, non-I/O, \"some_object\" or\n\"some_context\" column were to be introduced to this view in the\nfuture. But that doesn't seem likely?)\n\n\n",
"msg_date": "Tue, 14 Feb 2023 22:35:01 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Tue, 14 Feb 2023 22:35:01 -0800, Maciek Sakrejda <m.sakrejda@gmail.com> wrote in \n> On Tue, Feb 14, 2023 at 11:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > One thing I started to wonder about since is whether we should remove the io_\n> > prefix from io_object, io_context. The prefixes make sense on the C level, but\n> > it's not clear to me that that's also the case on the table level.\n> \n> Yeah, +1. It's hard to argue that there would be any confusion,\n> considering `io_` is in the name of the view.\n\nWe usually add such prefixes to the columns of system views and\ncatalogs, but it seems that's not the case for the stats views. Thus\n+1 from me, too.\n\n> (Unless, I suppose, some other, non-I/O, \"some_object\" or\n> \"some_context\" column were to be introduced to this view in the\n> future. But that doesn't seem likely?)\n\nI don't think that can happen. As for corss-views ambiguity, that is\nalready present. Many columns in stats views share the same names with\nsome other views.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Feb 2023 16:40:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Sat, Feb 11, 2023 at 10:24:37AM -0800, Andres Freund wrote:\n> On 2023-02-08 21:03:19 -0800, Andres Freund wrote:\n> > Pushed the first (and biggest) commit. More tomorrow.\n> \n> Just pushed the actual pg_stat_io view, the splitting of the tablespace test,\n> and the pg_stat_io tests.\n\npg_stat_io says:\n\n * Some BackendTypes do not currently perform any IO in certain\n * IOContexts, and, while it may not be inherently incorrect for them to\n * do so, excluding those rows from the view makes the view easier to use.\n\n if (bktype == B_AUTOVAC_LAUNCHER && io_context == IOCONTEXT_VACUUM)\n return false;\n\n if ((bktype == B_AUTOVAC_WORKER || bktype == B_AUTOVAC_LAUNCHER) &&\n io_context == IOCONTEXT_BULKWRITE)\n return false;\n\nWhat about these combinations? Aren't these also \"can't happen\" ?\n\n relation | bulkread | autovacuum worker\n relation | bulkread | autovacuum launcher\n relation | vacuum | startup\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 21 Feb 2023 19:50:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 07:50:35PM -0600, Justin Pryzby wrote:\n> On Sat, Feb 11, 2023 at 10:24:37AM -0800, Andres Freund wrote:\n> > On 2023-02-08 21:03:19 -0800, Andres Freund wrote:\n> > > Pushed the first (and biggest) commit. More tomorrow.\n> > \n> > Just pushed the actual pg_stat_io view, the splitting of the tablespace test,\n> > and the pg_stat_io tests.\n> \n> pg_stat_io says:\n> \n> * Some BackendTypes do not currently perform any IO in certain\n> * IOContexts, and, while it may not be inherently incorrect for them to\n> * do so, excluding those rows from the view makes the view easier to use.\n> \n> if (bktype == B_AUTOVAC_LAUNCHER && io_context == IOCONTEXT_VACUUM)\n> return false;\n> \n> if ((bktype == B_AUTOVAC_WORKER || bktype == B_AUTOVAC_LAUNCHER) &&\n> io_context == IOCONTEXT_BULKWRITE)\n> return false;\n> \n> What about these combinations? Aren't these also \"can't happen\" ?\n> \n> relation | bulkread | autovacuum worker\n> relation | bulkread | autovacuum launcher\n> relation | vacuum | startup\n\nNevermind - at least these are possible.\n\n(gdb) p MyBackendType\n$1 = B_AUTOVAC_WORKER\n(gdb) p io_object\n$2 = IOOBJECT_RELATION\n(gdb) p io_context\n$3 = IOCONTEXT_BULKREAD\n(gdb) p io_op\n$4 = IOOP_EVICT\n(gdb) bt\n...\n#9 0x0000557b2f6097a3 in ReadBufferExtended (reln=0x7ff5ccee36b8, forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=blockNum@entry=16, mode=mode@entry=RBM_NORMAL, strategy=0x557b305fb568) at ../src/include/utils/rel.h:573\n#10 0x0000557b2f3057c0 in heapgetpage (sscan=sscan@entry=0x557b305fb158, block=block@entry=16) at ../src/backend/access/heap/heapam.c:405\n#11 0x0000557b2f305d6c in heapgettup_pagemode (scan=scan@entry=0x557b305fb158, dir=dir@entry=ForwardScanDirection, nkeys=0, key=0x0) at ../src/backend/access/heap/heapam.c:885\n#12 0x0000557b2f306956 in heap_getnext (sscan=sscan@entry=0x557b305fb158, direction=direction@entry=ForwardScanDirection) at ../src/backend/access/heap/heapam.c:1122\n#13 
0x0000557b2f59be0c in do_autovacuum () at ../src/backend/postmaster/autovacuum.c:2061\n#14 0x0000557b2f59ccf7 in AutoVacWorkerMain (argc=argc@entry=0, argv=argv@entry=0x0) at ../src/backend/postmaster/autovacuum.c:1716\n#15 0x0000557b2f59cdd8 in StartAutoVacWorker () at ../src/backend/postmaster/autovacuum.c:1494\n#16 0x0000557b2f5a561a in StartAutovacuumWorker () at ../src/backend/postmaster/postmaster.c:5481\n#17 0x0000557b2f5a5a39 in process_pm_pmsignal () at ../src/backend/postmaster/postmaster.c:5192\n#18 0x0000557b2f5a5d7e in ServerLoop () at ../src/backend/postmaster/postmaster.c:1770\n#19 0x0000557b2f5a73da in PostmasterMain (argc=9, argv=<optimized out>) at ../src/backend/postmaster/postmaster.c:1463\n#20 0x0000557b2f4dfc39 in main (argc=9, argv=0x557b30568f50) at ../src/backend/main/main.c:200\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 21 Feb 2023 22:09:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Pushed the first (and biggest) commit. More tomorrow.\n\nI hadn't run my buildfarm-compile-warning scraper for a little while,\nbut I just did, and I find that this commit is causing warnings on\nno fewer than 14 buildfarm animals. They all look like\n\n ayu | 2023-02-25 23:02:08 | pgstat_io.c:40:14: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2023-02-25 23:02:08 | pgstat_io.c:43:16: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2023-02-25 23:02:08 | pgstat_io.c:70:19: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2023-02-25 23:02:08 | pgstat_io.c:71:20: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2023-02-25 23:02:08 | pgstat_io.c:115:14: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2023-02-25 23:02:08 | pgstat_io.c:118:16: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2023-02-25 23:02:08 | pgstatfuncs.c:1329:12: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2023-02-25 23:02:08 | pgstatfuncs.c:1334:17: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true [-Wtautological-constant-out-of-range-compare]\n\nThat is, these compilers think that 
comparisons like\n\n\tio_object < IOOBJECT_NUM_TYPES\n\tio_context < IOCONTEXT_NUM_TYPES\n\nare constant-true. This seems not good; if they were to actually\nact on this observation, by removing those loop-ending tests,\nwe'd have a problem.\n\nThe issue seems to be that code like this:\n\ntypedef enum IOContext\n{\n\tIOCONTEXT_BULKREAD,\n\tIOCONTEXT_BULKWRITE,\n\tIOCONTEXT_NORMAL,\n\tIOCONTEXT_VACUUM,\n} IOContext;\n\n#define IOCONTEXT_FIRST IOCONTEXT_BULKREAD\n#define IOCONTEXT_NUM_TYPES (IOCONTEXT_VACUUM + 1)\n\nis far too cute for its own good. I'm not sure about how to fix it\neither. I thought of defining\n\n#define IOCONTEXT_LAST IOCONTEXT_VACUUM\n\nand make the loop conditions like \"io_context <= IOCONTEXT_LAST\",\nbut that doesn't actually fix the problem.\n\n(Even aside from that, I do not find this coding even a little bit\nmistake-proof: you still have to remember to update the #define\nwhen adding another enum value.)\n\nWe have similar code involving enum ForkNumber but it looks to me\nlike the loop variables are always declared as plain \"int\". That\nmight be the path of least resistance here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 13:20:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "I wrote:\n> The issue seems to be that code like this:\n> ...\n> is far too cute for its own good.\n\nOh, there's another thing here that qualifies as too-cute: loops like\n\n for (IOObject io_object = IOOBJECT_FIRST;\n io_object < IOOBJECT_NUM_TYPES; io_object++)\n\nmake it look like we could define these enums as 1-based rather\nthan 0-based, but if we did this code would fail, because it's\nconfusing \"the number of values\" with \"1 more than the last value\".\n\nAgain, we could fix that with tests like \"io_context <= IOCONTEXT_LAST\",\nbut I don't see the point of adding more macros rather than removing\nsome. We do need IOOBJECT_NUM_TYPES to declare array sizes with,\nso I think we should nuke the \"xxx_FIRST\" macros as being not worth\nthe electrons they're written on, and write these loops like\n\n for (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n\nwhich is not actually adding any assumptions that you don't already\nmake by using io_object as a C array subscript.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 13:52:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-26 13:20:00 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Pushed the first (and biggest) commit. More tomorrow.\n> \n> I hadn't run my buildfarm-compile-warning scraper for a little while,\n> but I just did, and I find that this commit is causing warnings on\n> no fewer than 14 buildfarm animals. They all look like\n> \n> ayu | 2023-02-25 23:02:08 | pgstat_io.c:40:14: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2023-02-25 23:02:08 | pgstat_io.c:43:16: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2023-02-25 23:02:08 | pgstat_io.c:70:19: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2023-02-25 23:02:08 | pgstat_io.c:71:20: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2023-02-25 23:02:08 | pgstat_io.c:115:14: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2023-02-25 23:02:08 | pgstat_io.c:118:16: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2023-02-25 23:02:08 | pgstatfuncs.c:1329:12: warning: comparison of constant 2 with expression of type 'IOObject' (aka 'enum IOObject') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2023-02-25 23:02:08 | pgstatfuncs.c:1334:17: warning: comparison of constant 4 with expression of type 'IOContext' (aka 'enum IOContext') is always true 
[-Wtautological-constant-out-of-range-compare]\n\nWhat other animals? If it had been just ayu / clang 4, I'd not be sure it's\nworth doing much here.\n\n\n> That is, these compilers think that comparisons like\n> \n> \tio_object < IOOBJECT_NUM_TYPES\n> \tio_context < IOCONTEXT_NUM_TYPES\n> \n> are constant-true. This seems not good; if they were to actually\n> act on this observation, by removing those loop-ending tests,\n> we'd have a problem.\n\nIt'd at least be obvious breakage :/\n\n\n> The issue seems to be that code like this:\n> \n> typedef enum IOContext\n> {\n> \tIOCONTEXT_BULKREAD,\n> \tIOCONTEXT_BULKWRITE,\n> \tIOCONTEXT_NORMAL,\n> \tIOCONTEXT_VACUUM,\n> } IOContext;\n> \n> #define IOCONTEXT_FIRST IOCONTEXT_BULKREAD\n> #define IOCONTEXT_NUM_TYPES (IOCONTEXT_VACUUM + 1)\n> \n> is far too cute for its own good. I'm not sure about how to fix it\n> either. I thought of defining\n> \n> #define IOCONTEXT_LAST IOCONTEXT_VACUUM\n> \n> and make the loop conditions like \"io_context <= IOCONTEXT_LAST\",\n> but that doesn't actually fix the problem.\n> \n> (Even aside from that, I do not find this coding even a little bit\n> mistake-proof: you still have to remember to update the #define\n> when adding another enum value.)\n\nBut the alternative is going around and updating N places, or having a LAST\nmember in the enum, which then means either adding pointless case\nstatements or adding default: cases, which prevents the compiler from warning\nwhen a new case is added.\n\nI haven't dug up an old enough compiler yet; what happens if\nIOCONTEXT_NUM_TYPES is redefined to ((int)IOOBJECT_TEMP_RELATION + 1)?\n\n\n> We have similar code involving enum ForkNumber but it looks to me\n> like the loop variables are always declared as plain \"int\". That\n> might be the path of least resistance here.\n\nIIRC that caused some even longer lines due to casting the integer to the enum\nin some other lines. 
Perhaps we should just cast for the < comparison?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 11:33:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-26 13:20:00 -0500, Tom Lane wrote:\n>> I hadn't run my buildfarm-compile-warning scraper for a little while,\n>> but I just did, and I find that this commit is causing warnings on\n>> no fewer than 14 buildfarm animals. They all look like\n\n> What other animals? If it had been just ayu / clang 4, I'd not be sure it's\n> worth doing much here.\n\nayu\nbatfish\ndemoiselle\ndesmoxytes\ndragonet\nidiacanthus\nmantid\npetalura\nphycodurus\npogona\nwobbegong\n\nSome of those are yours ;-)\n\nActually there are only 11, because I miscounted before, but\nthere are new compilers in that group not only old ones.\ndesmoxytes is gcc 10, for instance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 14:40:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-26 14:40:00 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-26 13:20:00 -0500, Tom Lane wrote:\n> >> I hadn't run my buildfarm-compile-warning scraper for a little while,\n> >> but I just did, and I find that this commit is causing warnings on\n> >> no fewer than 14 buildfarm animals. They all look like\n> \n> > What other animals? If it had been just ayu / clang 4, I'd not be sure it's\n> > worth doing much here.\n> \n> ayu\n> batfish\n> demoiselle\n> desmoxytes\n> dragonet\n> idiacanthus\n> mantid\n> petalura\n> phycodurus\n> pogona\n> wobbegong\n> \n> Some of those are yours ;-)\n> \n> Actually there are only 11, because I miscounted before, but\n> there are new compilers in that group not only old ones.\n> desmoxytes is gcc 10, for instance.\n\nI think on mine the warnings come from the clang to generate bitcode, rather\nthan gcc. The parallel make output makes that a bit hard to see though, as\ncommands and warnings are interspersed.\n\nThey're all animals for testing older LLVM versions. They're using\npretty old clang versions. phycodurus and dragonet are clang 3.9, petalura and\ndesmoxytes is clang 4, idiacanthus and pogona are clang 5.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 11:53:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> They're all animals for testing older LLVM versions. They're using\n> pretty old clang versions. phycodurus and dragonet are clang 3.9, petalura and\n> desmoxytes is clang 4, idiacanthus and pogona are clang 5.\n\n[ shrug ... ] If I thought this was actually good code, I might\nagree with ignoring these warnings; but I think what it mostly is\nis misleading overcomplication.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 15:08:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On 2023-02-26 15:08:33 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > They're all animals for testing older LLVM versions. They're using\n> > pretty old clang versions. phycodurus and dragonet are clang 3.9, petalura and\n> > desmoxytes is clang 4, idiacanthus and pogona are clang 5.\n> \n> [ shrug ... ] If I thought this was actually good code, I might\n> agree with ignoring these warnings; but I think what it mostly is\n> is misleading overcomplication.\n\nI don't mind removing *_FIRST et al by using 0. None of the proposals for\ngetting rid of *_NUM_* seemed a cure actually better than the disease.\n\nAdding a cast to int of the loop iteration variable seems to work and only\nnoticeably, not untollerably, ugly.\n\nOne thing that's odd is that the warnings don't appear reliably. The\n\"io_op < IOOP_NUM_TYPES\" comparison in pgstatfuncs.c doesn't trigger any\nwith clang-4.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 12:33:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 12:33:03PM -0800, Andres Freund wrote:\n> On 2023-02-26 15:08:33 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > They're all animals for testing older LLVM versions. They're using\n> > > pretty old clang versions. phycodurus and dragonet are clang 3.9, petalura and\n> > > desmoxytes is clang 4, idiacanthus and pogona are clang 5.\n> >\n> > [ shrug ... ] If I thought this was actually good code, I might\n> > agree with ignoring these warnings; but I think what it mostly is\n> > is misleading overcomplication.\n>\n> I don't mind removing *_FIRST et al by using 0. None of the proposals for\n> getting rid of *_NUM_* seemed a cure actually better than the disease.\n\nI am also fine with removing *_FIRST and allowing those electrons to\nmove on to bigger and better things :)\n\n>\n> Adding a cast to int of the loop iteration variable seems to work and only\n> noticeably, not untollerably, ugly.\n>\n> One thing that's odd is that the warnings don't appear reliably. The\n> \"io_op < IOOP_NUM_TYPES\" comparison in pgstatfuncs.c doesn't trigger any\n> with clang-4.\n\nUsing an int and casting all over the place certainly doesn't make the\ncode more attractive, but I am fine with this if it seems like the least\nbad solution.\n\nI didn't want to write a patch with this (ints instead of enums as loop\ncontrol variable) without being able to reproduce the warnings myself\nand confirm the patch silences them. However, I wasn't able to reproduce\nthe warnings myself. I tried to do so with a minimal repro on godbolt,\nand even with\n-Wtautological-constant-out-of-range-compare -Wall -Wextra -Weverything -Werror\nI couldn't get clang 4 or 5 (or a number of other compilers I randomly\npicked from the dropdown) to produce the warnings.\n\n- Melanie\n\n\n",
"msg_date": "Sun, 26 Feb 2023 16:11:45 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 04:11:45PM -0500, Melanie Plageman wrote:\n> On Sun, Feb 26, 2023 at 12:33:03PM -0800, Andres Freund wrote:\n> > On 2023-02-26 15:08:33 -0500, Tom Lane wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > They're all animals for testing older LLVM versions. They're using\n> > > > pretty old clang versions. phycodurus and dragonet are clang 3.9, petalura and\n> > > > desmoxytes is clang 4, idiacanthus and pogona are clang 5.\n> > >\n> > > [ shrug ... ] If I thought this was actually good code, I might\n> > > agree with ignoring these warnings; but I think what it mostly is\n> > > is misleading overcomplication.\n> >\n> > I don't mind removing *_FIRST et al by using 0. None of the proposals for\n> > getting rid of *_NUM_* seemed a cure actually better than the disease.\n> \n> I am also fine with removing *_FIRST and allowing those electrons to\n> move on to bigger and better things :)\n> \n> >\n> > Adding a cast to int of the loop iteration variable seems to work and only\n> > noticeably, not untollerably, ugly.\n> >\n> > One thing that's odd is that the warnings don't appear reliably. The\n> > \"io_op < IOOP_NUM_TYPES\" comparison in pgstatfuncs.c doesn't trigger any\n> > with clang-4.\n> \n> Using an int and casting all over the place certainly doesn't make the\n> code more attractive, but I am fine with this if it seems like the least\n> bad solution.\n> \n> I didn't want to write a patch with this (ints instead of enums as loop\n> control variable) without being able to reproduce the warnings myself\n> and confirm the patch silences them. However, I wasn't able to reproduce\n> the warnings myself. 
I tried to do so with a minimal repro on godbolt,\n> and even with\n> -Wtautological-constant-out-of-range-compare -Wall -Wextra -Weverything -Werror\n> I couldn't get clang 4 or 5 (or a number of other compilers I randomly\n> picked from the dropdown) to produce the warnings.\n\nJust kidding: it reproduces if the defined enum has two or fewer values.\nInteresting...\n\nAfter discovering this, I tried out various solutions including one Andres\nsuggested:\n\n\tfor (IOOp io_op = 0; (int) io_op < IOOP_NUM_TYPES; io_op++)\n\nand it does silence the warning. What do you think?\n\n- Melanie\n\n\n",
"msg_date": "Sun, 26 Feb 2023 16:24:40 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > The issue seems to be that code like this:\n> > ...\n> > is far too cute for its own good.\n>\n> Oh, there's another thing here that qualifies as too-cute: loops like\n>\n> for (IOObject io_object = IOOBJECT_FIRST;\n> io_object < IOOBJECT_NUM_TYPES; io_object++)\n>\n> make it look like we could define these enums as 1-based rather\n> than 0-based, but if we did this code would fail, because it's\n> confusing \"the number of values\" with \"1 more than the last value\".\n>\n> Again, we could fix that with tests like \"io_context <= IOCONTEXT_LAST\",\n> but I don't see the point of adding more macros rather than removing\n> some. We do need IOOBJECT_NUM_TYPES to declare array sizes with,\n> so I think we should nuke the \"xxx_FIRST\" macros as being not worth\n> the electrons they're written on, and write these loops like\n>\n> for (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n>\n> which is not actually adding any assumptions that you don't already\n> make by using io_object as a C array subscript.\n\nAttached is a patch to remove the *_FIRST macros.\nI was going to add in code to change\n\n for (IOObject io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n to\n for (IOObject io_object = 0; (int) io_object < IOOBJECT_NUM_TYPES;\nio_object++)\n\nbut then I couldn't remember why we didn't just do\n\n for (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n\nI recall that when passing that loop variable into a function I was\ngetting a compiler warning that required me to cast the value back to an\nenum to silence it:\n\n pgstat_tracks_io_op(bktype, (IOObject) io_object,\nio_context, io_op))\n\nHowever, I am now unable to reproduce that warning.\nMoreover, I see in cases like table_block_relation_size() with\nForkNumber, the variable i is passed with no cast to smgrnblocks().\n\n- Melanie",
"msg_date": "Mon, 27 Feb 2023 09:24:01 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> Attached is a patch to remove the *_FIRST macros.\n> I was going to add in code to change\n\n> for (IOObject io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n> to\n> for (IOObject io_object = 0; (int) io_object < IOOBJECT_NUM_TYPES; io_object++)\n\nI don't really like that proposal. ISTM it's just silencing the\nmessenger rather than addressing the underlying problem, namely that\nthere's no guarantee that an IOObject variable can hold the value\nIOOBJECT_NUM_TYPES, which it had better do if you want the loop to\nterminate. Admittedly it's quite unlikely that these three enums would\ngrow to the point that that becomes an actual hazard for them --- but\nIMO it's still bad practice and a bad precedent for future code.\n\n> but then I couldn't remember why we didn't just do\n\n> for (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n\n> I recall that when passing that loop variable into a function I was\n> getting a compiler warning that required me to cast the value back to an\n> enum to silence it:\n\n> pgstat_tracks_io_op(bktype, (IOObject) io_object,\n> io_context, io_op))\n\n> However, I am now unable to reproduce that warning.\n> Moreover, I see in cases like table_block_relation_size() with\n> ForkNumber, the variable i is passed with no cast to smgrnblocks().\n\nYeah, my druthers would be to just do it the way we do comparable\nthings with ForkNumber. I don't feel like we need to invent a\nbetter way here.\n\nThe risk of needing to cast when using the \"int\" loop variable\nas an enum is obviously the downside of that approach, but we have\nnot seen any indication that any compilers actually do warn.\nIt's interesting that you did see such a warning ... I wonder which\ncompiler you were using at the time?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Feb 2023 10:30:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 10:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Melanie Plageman <melanieplageman@gmail.com> writes:\n> > Attached is a patch to remove the *_FIRST macros.\n> > I was going to add in code to change\n>\n> > for (IOObject io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n> > to\n> > for (IOObject io_object = 0; (int) io_object < IOOBJECT_NUM_TYPES; io_object++)\n>\n> I don't really like that proposal. ISTM it's just silencing the\n> messenger rather than addressing the underlying problem, namely that\n> there's no guarantee that an IOObject variable can hold the value\n> IOOBJECT_NUM_TYPES, which it had better do if you want the loop to\n> terminate. Admittedly it's quite unlikely that these three enums would\n> grow to the point that that becomes an actual hazard for them --- but\n> IMO it's still bad practice and a bad precedent for future code.\n\nThat's fair. Patch attached.\n\n> > but then I couldn't remember why we didn't just do\n>\n> > for (int io_object = 0; io_object < IOOBJECT_NUM_TYPES; io_object++)\n>\n> > I recall that when passing that loop variable into a function I was\n> > getting a compiler warning that required me to cast the value back to an\n> > enum to silence it:\n>\n> > pgstat_tracks_io_op(bktype, (IOObject) io_object,\n> > io_context, io_op))\n>\n> > However, I am now unable to reproduce that warning.\n> > Moreover, I see in cases like table_block_relation_size() with\n> > ForkNumber, the variable i is passed with no cast to smgrnblocks().\n>\n> Yeah, my druthers would be to just do it the way we do comparable\n> things with ForkNumber. I don't feel like we need to invent a\n> better way here.\n>\n> The risk of needing to cast when using the \"int\" loop variable\n> as an enum is obviously the downside of that approach, but we have\n> not seen any indication that any compilers actually do warn.\n> It's interesting that you did see such a warning ... 
I wonder which\n> compiler you were using at the time?\n\nso, pretty much any version of clang I tried with\n-Wsign-conversion produces a warning.\n\n<source>:35:32: warning: implicit conversion changes signedness: 'int'\nto 'IOOp' (aka 'enum IOOp') [-Wsign-conversion]\n\nI didn't do the casts in the attached patch since they aren't done elsewhere.\n\n- Melanie",
"msg_date": "Mon, 27 Feb 2023 14:03:16 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> On Mon, Feb 27, 2023 at 10:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The risk of needing to cast when using the \"int\" loop variable\n>> as an enum is obviously the downside of that approach, but we have\n>> not seen any indication that any compilers actually do warn.\n>> It's interesting that you did see such a warning ... I wonder which\n>> compiler you were using at the time?\n\n> so, pretty much any version of clang I tried with\n> -Wsign-conversion produces a warning.\n\n> <source>:35:32: warning: implicit conversion changes signedness: 'int'\n> to 'IOOp' (aka 'enum IOOp') [-Wsign-conversion]\n\nOh, interesting --- so it's not about the implicit conversion to enum\nbut just about signedness. I bet we could silence that by making the\nloop variables be \"unsigned int\". I doubt it's worth any extra keystrokes\nthough, because we are not at all clean about sign-conversion warnings.\nI tried enabling -Wsign-conversion on Apple's clang 14.0.0 just now,\nand counted 13462 such warnings just in the core build :-(. I don't\nforesee anybody trying to clean that up.\n\n> I didn't do the casts in the attached patch since they aren't done elsewhere.\n\nAgreed. I'll push this along with the earlier patch if there are\nnot objections.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Feb 2023 14:58:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On 2023-02-27 14:58:30 -0500, Tom Lane wrote:\n> Agreed. I'll push this along with the earlier patch if there are\n> not objections.\n\nNone here.\n\n\n",
"msg_date": "Mon, 27 Feb 2023 15:18:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Just pushed the actual pg_stat_io view, the splitting of the tablespace test,\n> and the pg_stat_io tests.\n\nOne of the test cases is flapping a bit:\n\ndiff -U3 /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/results/stats.out\n--- /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out\t2023-03-04 21:30:05.891579466 +0100\n+++ /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/results/stats.out\t2023-03-04 21:34:26.745552661 +0100\n@@ -1201,7 +1201,7 @@\n SELECT :io_sum_shared_after_reads > :io_sum_shared_before_reads;\n ?column? \n ----------\n- t\n+ f\n (1 row)\n \n DROP TABLE test_io_shared;\n\nThere are two instances of this today [1][2], and I've seen it before\nbut failed to note down where.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2023-03-04%2021%3A19%3A39\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mule&dt=2023-03-04%2020%3A30%3A05\n\n\n",
"msg_date": "Sat, 04 Mar 2023 18:21:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Sat, 04 Mar 2023 18:21:09 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Andres Freund <andres@anarazel.de> writes:\n> > Just pushed the actual pg_stat_io view, the splitting of the tablespace test,\n> > and the pg_stat_io tests.\n> \n> One of the test cases is flapping a bit:\n> \n> diff -U3 /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/results/stats.out\n> --- /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out\t2023-03-04 21:30:05.891579466 +0100\n> +++ /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/results/stats.out\t2023-03-04 21:34:26.745552661 +0100\n> @@ -1201,7 +1201,7 @@\n> SELECT :io_sum_shared_after_reads > :io_sum_shared_before_reads;\n> ?column? \n> ----------\n> - t\n> + f\n> (1 row)\n> \n> DROP TABLE test_io_shared;\n> \n> There are two instances of this today [1][2], and I've seen it before\n> but failed to note down where.\n\nThe concurrent autoanalyze below is logged as performing at least one\npage read from the table. 
It is unclear, however, how that analyze\noperation resulted in 19 hits and 2 reads on the (I think) single-page\nrelation.\n\nIn any case, I think we need to avoid such concurrent autovacuum/analyze.\n\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2023-03-04%2021%3A19%3A39\n\n2023-03-04 22:36:27.781 CET [4073:106] pg_regress/stats LOG: statement: ALTER TABLE test_io_shared SET TABLESPACE regress_tblspace;\n2023-03-04 22:36:27.838 CET [4073:107] pg_regress/stats LOG: statement: SELECT COUNT(*) FROM test_io_shared;\n2023-03-04 22:36:27.864 CET [4255:5] LOG: automatic analyze of table \"regression.public.test_io_shared\"\n\tavg read rate: 5.208 MB/s, avg write rate: 5.208 MB/s\n\tbuffer usage: 17 hits, 2 misses, 2 dirtied\n2023-03-04 22:36:28.024 CET [4073:108] pg_regress/stats LOG: statement: SELECT pg_stat_force_next_flush();\n2023-03-04 22:36:28.024 CET [4073:108] pg_regress/stats LOG: statement: SELECT pg_stat_force_next_flush();\n2023-03-04 22:36:28.027 CET [4073:109] pg_regress/stats LOG: statement: SELECT sum(reads) AS io_sum_shared_after_reads\n\t FROM pg_stat_io WHERE io_context = 'normal' AND io_object = 'relation' \n\n\n\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2023-03-04%2021%3A19%3A39\n> [2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mule&dt=2023-03-04%2020%3A30%3A05\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 06 Mar 2023 15:24:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Mon, 06 Mar 2023 15:24:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> In any case, I think we need to avoid such concurrent autovacuum/analyze.\n\nIf it is correct, I believe the attached fix works.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 06 Mar 2023 15:48:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 1:48 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 06 Mar 2023 15:24:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > In any case, I think we need to avoid such concurrent autovacuum/analyze.\n>\n> If it is correct, I believe the attached fix works.\n\nThanks for investigating this!\n\nYes, this fix looks correct and makes sense to me.\n\nOn Mon, Mar 6, 2023 at 1:24 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 04 Mar 2023 18:21:09 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Andres Freund <andres@anarazel.de> writes:\n> > > Just pushed the actual pg_stat_io view, the splitting of the tablespace test,\n> > > and the pg_stat_io tests.\n> >\n> > One of the test cases is flapping a bit:\n> >\n> > diff -U3 /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/results/stats.out\n> > --- /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out 2023-03-04 21:30:05.891579466 +0100\n> > +++ /home/pg/build-farm-15/buildroot/HEAD/pgsql.build/src/test/regress/results/stats.out 2023-03-04 21:34:26.745552661 +0100\n> > @@ -1201,7 +1201,7 @@\n> > SELECT :io_sum_shared_after_reads > :io_sum_shared_before_reads;\n> > ?column?\n> > ----------\n> > - t\n> > + f\n> > (1 row)\n> >\n> > DROP TABLE test_io_shared;\n> >\n> > There are two instances of this today [1][2], and I've seen it before\n> > but failed to note down where.\n>\n> The concurrent autoanalyze below is logged as performing at least one\n> page read from the table. 
It is unclear, however, how that analyze\n> operation resulted in 19 hits and 2 reads on the (I think) single-page\n> relation.\n\nYes, it is a single page.\nI think there could be a few different reasons why it is 2 misses/2\ndirtied, but the one that seems most likely is that I/O of other\nrelations done during this autovac/analyze of this relation is counted\nin the same global variables (like catalog tables).\n\n- Melanie\n\n\n",
"msg_date": "Mon, 6 Mar 2023 10:09:24 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-06 10:09:24 -0500, Melanie Plageman wrote:\n> On Mon, Mar 6, 2023 at 1:48 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 06 Mar 2023 15:24:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > In any case, I think we need to avoid such concurrent autovacuum/analyze.\n> >\n> > If it is correct, I believe the attached fix works.\n> \n> Thanks for investigating this!\n> \n> Yes, this fix looks correct and makes sense to me.\n\nWouldn't it be better to just perform the section from the ALTER TABLE till\nthe DROP TABLE in a transaction? Then there couldn't be any other accesses in\njust that section. I'm not convinced it's good to disallow all concurrent\nactivity in other parts of the test.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Mar 2023 11:09:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 11:09:19AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-06 10:09:24 -0500, Melanie Plageman wrote:\n> > On Mon, Mar 6, 2023 at 1:48 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Mon, 06 Mar 2023 15:24:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > > In any case, I think we need to avoid such concurrent autovacuum/analyze.\n> > >\n> > > If it is correct, I believe the attached fix works.\n> > \n> > Thanks for investigating this!\n> > \n> > Yes, this fix looks correct and makes sense to me.\n> \n> Wouldn't it be better to just perform the section from the ALTER TABLE till\n> the DROP TABLE in a transaction? Then there couldn't be any other accesses in\n> just that section. I'm not convinced it's good to disallow all concurrent\n> activity in other parts of the test.\n\nYou mean for test coverage reasons? Because the table in question only\nexists for a few operations in this test file.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 6 Mar 2023 14:24:09 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-06 14:24:09 -0500, Melanie Plageman wrote:\n> On Mon, Mar 06, 2023 at 11:09:19AM -0800, Andres Freund wrote:\n> > On 2023-03-06 10:09:24 -0500, Melanie Plageman wrote:\n> > > On Mon, Mar 6, 2023 at 1:48 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Mon, 06 Mar 2023 15:24:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > > > In any case, I think we need to avoid such concurrent autovacuum/analyze.\n> > > >\n> > > > If it is correct, I believe the attached fix works.\n> > > \n> > > Thanks for investigating this!\n> > > \n> > > Yes, this fix looks correct and makes sense to me.\n> > \n> > Wouldn't it be better to just perform the section from the ALTER TABLE till\n> > the DROP TABLE in a transaction? Then there couldn't be any other accesses in\n> > just that section. I'm not convinced it's good to disallow all concurrent\n> > activity in other parts of the test.\n> \n> You mean for test coverage reasons? Because the table in question only\n> exists for a few operations in this test file.\n\nThat, but also because it's simply more reliable. autovacuum=off doesn't\nprotect against an anti-wraparound vacuum or such. Or a concurrent test somehow\ntriggering a read. Or ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Mar 2023 11:34:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-03-06 14:24:09 -0500, Melanie Plageman wrote:\n> > On Mon, Mar 06, 2023 at 11:09:19AM -0800, Andres Freund wrote:\n> > > On 2023-03-06 10:09:24 -0500, Melanie Plageman wrote:\n> > > > On Mon, Mar 6, 2023 at 1:48 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > >\n> > > > > At Mon, 06 Mar 2023 15:24:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > > > > In any case, I think we need to avoid such concurrent autovacuum/analyze.\n> > > > >\n> > > > > If it is correct, I believe the attached fix works.\n> > > >\n> > > > Thanks for investigating this!\n> > > >\n> > > > Yes, this fix looks correct and makes sense to me.\n> > >\n> > > Wouldn't it be better to just perform the section from the ALTER TABLE till\n> > > the DROP TABLE in a transaction? Then there couldn't be any other accesses in\n> > > just that section. I'm not convinced it's good to disallow all concurrent\n> > > activity in other parts of the test.\n> >\n> > You mean for test coverage reasons? Because the table in question only\n> > exists for a few operations in this test file.\n>\n> That, but also because it's simply more reliable. autovacuum=off doesn't\n> protect against a anti-wraparound vacuum or such. Or a concurrent test somehow\n> triggering a read. Or ...\n\nGood point. Attached is what you suggested. I committed the transaction\nbefore the drop table so that the statistics would be visible when we\nqueried pg_stat_io.\n\n- Melanie",
"msg_date": "Mon, 6 Mar 2023 15:21:14 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "At Mon, 6 Mar 2023 15:21:14 -0500, Melanie Plageman <melanieplageman@gmail.com> wrote in \r\n> On Mon, Mar 6, 2023 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\r\n> > That, but also because it's simply more reliable. autovacuum=off doesn't\r\n> > protect against a anti-wraparound vacuum or such. Or a concurrent test somehow\r\n> > triggering a read. Or ...\r\n>\r\n> Good point. Attached is what you suggested. I committed the transaction\r\n> before the drop table so that the statistics would be visible when we\r\n> queried pg_stat_io.\r\n\r\nWhile I don't believe anti-wraparound vacuum can occur during testing,\r\nMelanie's solution (moving the commit by a few lines) seems to work\r\n(per manual testing).\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 07 Mar 2023 11:55:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-06 15:21:14 -0500, Melanie Plageman wrote:\n> Good point. Attached is what you suggested. I committed the transaction\n> before the drop table so that the statistics would be visible when we\n> queried pg_stat_io.\n\nPushed, thanks for the report, analysis and fix, Tom, Horiguchi-san, Melanie.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Mar 2023 10:18:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 10:18:44AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-06 15:21:14 -0500, Melanie Plageman wrote:\n> > Good point. Attached is what you suggested. I committed the transaction\n> > before the drop table so that the statistics would be visible when we\n> > queried pg_stat_io.\n> \n> Pushed, thanks for the report, analysis and fix, Tom, Horiguchi-san, Melanie.\n\nThere's a 2nd portion of the test that's still flapping, at least on\ncirrusci.\n\nThe issue that Tom mentioned is at:\n SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n\nBut what I've seen on cirrusci is at:\n SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6701069548388352/log/src/test/recovery/tmp_check/regression.diffs\nhttps://api.cirrus-ci.com/v1/artifact/task/5355168397524992/log/src/test/recovery/tmp_check/regression.diffs\nhttps://api.cirrus-ci.com/v1/artifact/task/6142435751886848/testrun/build/testrun/recovery/027_stream_regress/log/regress_log_027_stream_regress\n\nIt'd be neat if cfbot could show a histogram of test failures, although\nI'm not entirely sure what granularity would be most useful: the test\nthat failed (027_regress) or the way it failed (:after_write >\n:before_writes). Maybe it's enough to show the test, with links to its\nrecent failures.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 9 Mar 2023 06:51:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-09 06:51:31 -0600, Justin Pryzby wrote:\n> On Tue, Mar 07, 2023 at 10:18:44AM -0800, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-03-06 15:21:14 -0500, Melanie Plageman wrote:\n> > > Good point. Attached is what you suggested. I committed the transaction\n> > > before the drop table so that the statistics would be visible when we\n> > > queried pg_stat_io.\n> > \n> > Pushed, thanks for the report, analysis and fix, Tom, Horiguchi-san, Melanie.\n> \n> There's a 2nd portion of the test that's still flapping, at least on\n> cirrusci.\n> \n> The issue that Tom mentioned is at:\n> SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n> \n> But what I've seen on cirrusci is at:\n> SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n\nSeems you meant to copy a different line for Tom's (s/writes/reads/)?\n\n\n> https://api.cirrus-ci.com/v1/artifact/task/6701069548388352/log/src/test/recovery/tmp_check/regression.diffs\n\nHm. I guess the explanation here is that the buffers were already all written\nout by another backend. Which is made more likely by your patch.\n\n\nI found a few more occurrences and chatted with Melanie. Melanie will come up\nwith a fix I think.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Mar 2023 11:43:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 2:43 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-09 06:51:31 -0600, Justin Pryzby wrote:\n> > On Tue, Mar 07, 2023 at 10:18:44AM -0800, Andres Freund wrote:\n> > There's a 2nd portion of the test that's still flapping, at least on\n> > cirrusci.\n> >\n> > The issue that Tom mentioned is at:\n> > SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n> >\n> > But what I've seen on cirrusci is at:\n> > SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n>\n> Seems you meant to copy a different line for Tom's (s/writes/redas/)?\n>\n>\n> > https://api.cirrus-ci.com/v1/artifact/task/6701069548388352/log/src/test/recovery/tmp_check/regression.diffs\n>\n> Hm. I guess the explanation here is that the buffers were already all written\n> out by another backend. Which is made more likely by your patch.\n>\n>\n> I found a few more occurances and chatted with Melanie. Melanie will come up\n> with a fix I think.\n\nSo, what this test is relying on is that either the checkpointer or\nanother backend will flush the pages of test_io_shared which we dirtied\nabove in the test. The test specifically checks for IOCONTEXT_NORMAL\nwrites. It could fail if some other backend is doing a bulkread or\nbulkwrite and flushes these buffers first in a strategy context.\nThis will happen more often when shared buffers is small.\n\nI tried to come up with a reliable test which was limited to\nIOCONTEXT_NORMAL. I thought if we could guarantee a dirty buffer would\nbe pinned using a cursor, that we could then issue a checkpoint and\nguarantee a flush that way. However, I don't see a way to guarantee that\nno one flushes the buffer between dirtying it and pinning it with the\ncursor.\n\nSo, I think our best bet is to just change the test to pass if there are\nany writes in any contexts. 
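The shape I have in mind is roughly the following (a sketch only -- the actual attached patch may differ in details such as how the table is populated and the exact pg_stat_io filter):

```sql
-- Take the baseline BEFORE dirtying anything, with no io_context filter,
-- so that a write by any backend in any context makes the test pass.
SELECT sum(writes) AS io_sum_shared_before_writes
  FROM pg_stat_io
 WHERE io_object = 'relation' \gset

INSERT INTO test_io_shared SELECT generate_series(1, 100);
CHECKPOINT;
SELECT pg_stat_force_next_flush();

SELECT sum(writes) AS io_sum_shared_after_writes
  FROM pg_stat_io
 WHERE io_object = 'relation' \gset

-- Passes as long as something, somewhere, wrote the dirtied buffers out.
SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;
```
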
By moving the sum(writes) before the INSERT\nand keeping the checkpoint, we can guarantee that someway or another,\nsome buffers will be flushed. This essentially covers the same code anyway.\n\nPatch attached.\n\n- Melanie",
"msg_date": "Fri, 10 Mar 2023 14:51:13 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 11:43:01AM -0800, Andres Freund wrote:\n> On 2023-03-09 06:51:31 -0600, Justin Pryzby wrote:\n> > On Tue, Mar 07, 2023 at 10:18:44AM -0800, Andres Freund wrote:\n> > > Hi,\n> > > \n> > > On 2023-03-06 15:21:14 -0500, Melanie Plageman wrote:\n> > > > Good point. Attached is what you suggested. I committed the transaction\n> > > > before the drop table so that the statistics would be visible when we\n> > > > queried pg_stat_io.\n> > > \n> > > Pushed, thanks for the report, analysis and fix, Tom, Horiguchi-san, Melanie.\n> > \n> > There's a 2nd portion of the test that's still flapping, at least on\n> > cirrusci.\n> > \n> > The issue that Tom mentioned is at:\n> > SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n> > \n> > But what I've seen on cirrusci is at:\n> > SELECT :io_sum_shared_after_writes > :io_sum_shared_before_writes;\n> \n> Seems you meant to copy a different line for Tom's (s/writes/redas/)?\n\nSeems so\n\n> > https://api.cirrus-ci.com/v1/artifact/task/6701069548388352/log/src/test/recovery/tmp_check/regression.diffs\n> \n> Hm. I guess the explanation here is that the buffers were already all written\n> out by another backend. Which is made more likely by your patch.\n\nFYI: that patch would've made it more likely for each backend to write\nout its *own* dirty pages of TOAST ... but the two other failures that I\nmentioned were for patches which wouldn't have affected this at all.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 10 Mar 2023 14:19:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 3:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 09, 2023 at 11:43:01AM -0800, Andres Freund wrote:\n> > > https://api.cirrus-ci.com/v1/artifact/task/6701069548388352/log/src/test/recovery/tmp_check/regression.diffs\n> >\n> > Hm. I guess the explanation here is that the buffers were already all written\n> > out by another backend. Which is made more likely by your patch.\n>\n> FYI: that patch would've made it more likely for each backend to write\n> out its *own* dirty pages of TOAST ... but the two other failures that I\n> mentioned were for patches which wouldn't have affected this at all.\n\nI think your patch made it more likely that a backend needing to flush a\nbuffer in order to fit its own data would be doing so in a buffer access\nstrategy IO context.\n\nYour patch makes it so those toast table writes are using a\nBAS_BULKWRITE (see GetBulkInsertState()) and when they are looking for\nbuffers to put their data in, they have to evict other data (theirs and\nothers) but all of this is tracked in io_context = 'bulkwrite' -- and\nthe test only counted writes done in io_context 'normal'. But it is good\nthat your patch did that! It helped us to see that this test is not\nreliable.\n\nThe other times this test failed in cfbot were for a patch that had many\nfailures and might have something wrong with its code, IIRC.\n\nThanks again for the report!\n\n- Melanie\n\n\n",
"msg_date": "Fri, 10 Mar 2023 15:33:44 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "Hello,\n\nI found that the 'standalone backend' backend type is not documented \nright now.\nAdding something like (from commit message) would be helpful:\n\nBoth the bootstrap backend and single user mode backends will have \nbackend_type STANDALONE_BACKEND.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Mon, 3 Apr 2023 07:13:26 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 12:13 AM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n>\n> Hello,\n>\n> I found that the 'standalone backend' backend type is not documented\n> right now.\n> Adding something like (from commit message) would be helpful:\n>\n> Both the bootstrap backend and single user mode backends will have\n> backend_type STANDALONE_BACKEND.\n\nThanks for the report.\n\nAttached is a tiny patch to add standalone backend type to\npg_stat_activity documentation (referenced by pg_stat_io).\n\nI mentioned both the bootstrap process and single user mode process in\nthe docs, though I can't imagine that the bootstrap process is relevant\nfor pg_stat_activity.\n\nI also noticed that the pg_stat_activity docs call background workers\n\"parallel workers\" (though it also mentions that extensions could have\nother background workers registered), but this seems a bit weird because\npg_stat_activity uses GetBackendTypeDesc() and this prints \"background\nworker\" for type B_BG_WORKER. Background workers doing parallelism tasks\nis what users will most often see in pg_stat_activity, but I feel like\nit is confusing to have it documented as something different than what\nwould appear in the view. Unless I am misunderstanding something...\n\n- Melanie",
"msg_date": "Mon, 3 Apr 2023 16:50:43 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On 03.04.2023 23:50, Melanie Plageman wrote:\n> Attached is a tiny patch to add standalone backend type to\n> pg_stat_activity documentation (referenced by pg_stat_io).\n>\n> I mentioned both the bootstrap process and single user mode process in\n> the docs, though I can't imagine that the bootstrap process is relevant\n> for pg_stat_activity.\n\nAfter a little thought... I'm not sure about the term 'bootstrap \nprocess'. I can't find this term in the documentation.\nDo I understand correctly that this is a postmaster? If so, then the \npostmaster process is not shown in pg_stat_activity.\n\nPerhaps it may be worth adding a description of the standalone backend \nto pg_stat_io, not to pg_stat_activity.\nSomething like: backend_type is all types from pg_stat_activity plus \n'standalone backend',\nwhich is used for the postmaster process and in a single user mode.\n\n> I also noticed that the pg_stat_activity docs call background workers\n> \"parallel workers\" (though it also mentions that extensions could have\n> other background workers registered), but this seems a bit weird because\n> pg_stat_activity uses GetBackendTypeDesc() and this prints \"background\n> worker\" for type B_BG_WORKER. Background workers doing parallelism tasks\n> is what users will most often see in pg_stat_activity, but I feel like\n> it is confusing to have it documented as something different than what\n> would appear in the view. Unless I am misunderstanding something...\n\n'parallel worker' appears in the pg_stat_activity for parallel queries. \nI think it's right here.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Tue, 4 Apr 2023 23:35:13 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Tue, Apr 4, 2023 at 4:35 PM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n>\n> On 03.04.2023 23:50, Melanie Plageman wrote:\n> > Attached is a tiny patch to add standalone backend type to\n> > pg_stat_activity documentation (referenced by pg_stat_io).\n> >\n> > I mentioned both the bootstrap process and single user mode process in\n> > the docs, though I can't imagine that the bootstrap process is relevant\n> > for pg_stat_activity.\n>\n> After a little thought... I'm not sure about the term 'bootstrap\n> process'. I can't find this term in the documentation.\n\nThere are various mentions of \"bootstrap\" peppered throughout the docs\nbut no concise summary of what it is. For example, initdb docs mention\nthe \"bootstrap backend\" [1].\n\nInterestingly, 910cab820d0 added \"Bootstrap superuser\" in November. This\ndoesn't really cover what bootstrapping is itself, but I wonder if that\nis useful? If so, you could propose a glossary entry for it?\n(preferably in a new thread)\n\n> Do I understand correctly that this is a postmaster? If so, then the\n> postmaster process is not shown in pg_stat_activity.\n\nNo, bootstrap process is for initializing the template database. You\nwill not be able to see pg_stat_activity when it is running.\n\n> Perhaps it may be worth adding a description of the standalone backend\n> to pg_stat_io, not to pg_stat_activity.\n> Something like: backend_type is all types from pg_stat_activity plus\n> 'standalone backend',\n> which is used for the postmaster process and in a single user mode.\n\nYou can query pg_stat_activity from single user mode, so it is relevant\nto pg_stat_activity also. 
I take your point that bootstrap mode isn't\nrelevant for pg_stat_activity, but I am hesitant to add that distinction\nto the pg_stat_io docs since the reason you won't see it in\npg_stat_activity is that it is ephemeral and runs before a user can\naccess the database, not that stats are not tracked for it.\n\nCan you think of a way to convey this?\n\n> > I also noticed that the pg_stat_activity docs call background workers\n> > \"parallel workers\" (though it also mentions that extensions could have\n> > other background workers registered), but this seems a bit weird because\n> > pg_stat_activity uses GetBackendTypeDesc() and this prints \"background\n> > worker\" for type B_BG_WORKER. Background workers doing parallelism tasks\n> > is what users will most often see in pg_stat_activity, but I feel like\n> > it is confusing to have it documented as something different than what\n> > would appear in the view. Unless I am misunderstanding something...\n>\n> 'parallel worker' appears in the pg_stat_activity for parallel queries.\n> I think it's right here.\n\nAh, I didn't read the code closely enough in pg_stat_get_activity().\nEven though there is no BackendType which GetBackendTypeDesc() returns\ncalled \"parallel worker\", we go out of our way to be specific using\nGetBackgroundWorkerTypeByPid():\n\n /* Add backend type */\n if (beentry->st_backendType == B_BG_WORKER)\n {\n const char *bgw_type;\n\n bgw_type = GetBackgroundWorkerTypeByPid(beentry->st_procpid);\n if (bgw_type)\n values[17] = CStringGetTextDatum(bgw_type);\n else\n nulls[17] = true;\n }\n else\n values[17] =\n CStringGetTextDatum(GetBackendTypeDesc(beentry->st_backendType));\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/current/app-initdb.html\n\n\n",
"msg_date": "Tue, 4 Apr 2023 20:41:09 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On 05.04.2023 03:41, Melanie Plageman wrote:\n> On Tue, Apr 4, 2023 at 4:35 PM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n>\n>> After a little thought... I'm not sure about the term 'bootstrap\n>> process'. I can't find this term in the documentation.\n> There are various mentions of \"bootstrap\" peppered throughout the docs\n> but no concise summary of what it is. For example, initdb docs mention\n> the \"bootstrap backend\" [1].\n>\n> Interestingly, 910cab820d0 added \"Bootstrap superuser\" in November. This\n> doesn't really cover what bootstrapping is itself, but I wonder if that\n> is useful? If so, you could propose a glossary entry for it?\n> (preferably in a new thread)\n\nI'm not sure if this is the reason for adding a new entry in the glossary.\n\n>> Do I understand correctly that this is a postmaster? If so, then the\n>> postmaster process is not shown in pg_stat_activity.\n> No, bootstrap process is for initializing the template database. You\n> will not be able to see pg_stat_activity when it is running.\n\nOh, it's clear to me now. Thank you for the explanation.\n\n> You can query pg_stat_activity from single user mode, so it is relevant\n> to pg_stat_activity also. I take your point that bootstrap mode isn't\n> relevant for pg_stat_activity, but I am hesitant to add that distinction\n> to the pg_stat_io docs since the reason you won't see it in\n> pg_stat_activity is because it is ephemeral and before a user can access\n> the database and not because stats are not tracked for it.\n>\n> Can you think of a way to convey this?\n\nSee my attempt attached.\nI'm not sure about the wording. But I think we can avoid the term \n'bootstrap process'\nby replacing it with \"database cluster initialization\", which should be \nclear to everyone.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Mon, 10 Apr 2023 10:41:38 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 3:41 AM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n>\n> On 05.04.2023 03:41, Melanie Plageman wrote:\n> > On Tue, Apr 4, 2023 at 4:35 PM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n> >\n> >> After a little thought... I'm not sure about the term 'bootstrap\n> >> process'. I can't find this term in the documentation.\n> > There are various mentions of \"bootstrap\" peppered throughout the docs\n> > but no concise summary of what it is. For example, initdb docs mention\n> > the \"bootstrap backend\" [1].\n> >\n> > Interestingly, 910cab820d0 added \"Bootstrap superuser\" in November. This\n> > doesn't really cover what bootstrapping is itself, but I wonder if that\n> > is useful? If so, you could propose a glossary entry for it?\n> > (preferably in a new thread)\n>\n> I'm not sure if this is the reason for adding a new entry in the glossary.\n>\n> >> Do I understand correctly that this is a postmaster? If so, then the\n> >> postmaster process is not shown in pg_stat_activity.\n> > No, bootstrap process is for initializing the template database. You\n> > will not be able to see pg_stat_activity when it is running.\n>\n> Oh, it's clear to me now. Thank you for the explanation.\n>\n> > You can query pg_stat_activity from single user mode, so it is relevant\n> > to pg_stat_activity also. I take your point that bootstrap mode isn't\n> > relevant for pg_stat_activity, but I am hesitant to add that distinction\n> > to the pg_stat_io docs since the reason you won't see it in\n> > pg_stat_activity is because it is ephemeral and before a user can access\n> > the database and not because stats are not tracked for it.\n> >\n> > Can you think of a way to convey this?\n>\n> See my attempt attached.\n> I'm not sure about the wording. 
But I think we can avoid the term\n> 'bootstrap process'\n> by replacing it with \"database cluster initialization\", which should be\n> clear to everyone.\n\nI like that idea.\n\ndiff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\nindex 3f33a1c56c..45e20efbfb 100644\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -991,6 +991,9 @@ postgres 27093 0.0 0.0 30096 2752 ?\nSs 11:34 0:00 postgres: ser\n <literal>archiver</literal>,\n <literal>startup</literal>, <literal>walreceiver</literal>,\n <literal>walsender</literal> and <literal>walwriter</literal>.\n+ The special type <literal>standalone backend</literal> is used\n\nI think referring to it as a \"special type\" is a bit confusing. I think\nyou can just start the sentence with \"standalone backend\". You could\neven include it in the main list of backend_types since it is possible\nto see it in pg_stat_activity when in single user mode.\n\n+ when initializing a database cluster by <xref linkend=\"app-initdb\"/>\n+ and when running in the <xref linkend=\"app-postgres-single-user\"/>.\n In addition, background workers registered by extensions may have\n additional types.\n </para></entry>\n\nI like the rest of this.\n\nI copied the committer who most recently touched pg_stat_io (Michael\nPaquier) to see if we could get someone interested in committing this\ndocs update.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 24 Apr 2023 16:53:25 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
},
{
"msg_contents": "On 24.04.2023 23:53, Melanie Plageman wrote:\n> I copied the committer who most recently touched pg_stat_io (Michael\n> Paquier) to see if we could get someone interested in committing this\n> docs update.\n\nLet me explain my motivation for suggesting this update.\n\npg_stat_io is a very impressive feature, so I decided to try it.\nI see 4 rows for some 'standalone backend' out of 30 total rows of the\nview.\n\nAn attempt to find a description of 'standalone backend' in the docs\nturned up nothing. The pg_stat_io page references pg_stat_activity\nfor backend types, but the pg_stat_activity page doesn't say anything\nabout 'standalone backend'.\n\nI think this question will come up often unless it is clarified in the docs.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Tue, 25 Apr 2023 10:16:05 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and\n more?)"
}
] |
[
{
"msg_contents": "Hi,\nThere are 3 tiny improvements to the xlog.c code:\n\n1. At function StartupXLOG (line 6370), the second test if\n(ArchiveRecoveryRequested) is redundant and can safely be removed.\n2. At function StartupXLOG (line 7254), the var switchedTLI has already\nbeen tested before, so the second test can safely be removed.\n3. At function KeepLogSeg (line 9357), in the test if (slotSegNo <= 0),\nthe var slotSegNo is uint64 and cannot be < 0.\n\nAs it is a critical file, I believe these improvements, even though\nsmall, are welcome, because they improve readability.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 24 Jan 2020 17:48:59 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] /src/backend/access/transam/xlog.c, tiny improvements"
},
{
"msg_contents": "\n\n> On Jan 24, 2020, at 12:48 PM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> 3. At function KeepLogSeg (line 9357) the test if (slotSegNo <= 0), the var slotSegNo is uint64 and not can be < 0.\n\nThere is something unusual about comparing a XLogSegNo variable in this way, but it seems to go back to 2014 when the replication slots were introduced in commit 858ec11858a914d4c380971985709b6d6b7dd6fc, and XLogSegNo was unsigned then, too. Depending on how you look at it, this could be a thinko, or it could be defensive programming against future changes to the XLogSegNo typedef. I’m betting it was defensive programming, given the context. As such, I don’t think it would be appropriate to remove this defense in your patch.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sun, 26 Jan 2020 18:47:57 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] /src/backend/access/transam/xlog.c, tiny improvements"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 06:47:57PM -0800, Mark Dilger wrote:\n> There is something unusual about comparing a XLogSegNo variable in\n> this way, but it seems to go back to 2014 when the replication slots\n> were introduced in commit 858ec11858a914d4c380971985709b6d6b7dd6fc,\n> and XLogSegNo was unsigned then, too. Depending on how you look at\n> it, this could be a thinko, or it could be defensive programming\n> against future changes to the XLogSegNo typedef. I’m betting it was\n> defensive programming, given the context. As such, I don’t think it\n> would be appropriate to remove this defense in your patch. \n\nYeah. To be honest, I am not actually sure if it is worth bothering\nabout any of those three places.\n--\nMichael",
"msg_date": "Mon, 27 Jan 2020 16:55:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] /src/backend/access/transam/xlog.c, tiny improvements"
},
{
"msg_contents": "At Mon, 27 Jan 2020 16:55:56 +0900, Michael Paquier <michael@paquier.xyz> wrote in \r\n> On Sun, Jan 26, 2020 at 06:47:57PM -0800, Mark Dilger wrote:\r\n> > There is something unusual about comparing a XLogSegNo variable in\r\n> > this way, but it seems to go back to 2014 when the replication slots\r\n> > were introduced in commit 858ec11858a914d4c380971985709b6d6b7dd6fc,\r\n> > and XLogSegNo was unsigned then, too. Depending on how you look at\r\n> > it, this could be a thinko, or it could be defensive programming\r\n> > against future changes to the XLogSegNo typedef. I’m betting it was\r\n> > defensive programming, given the context. As such, I don’t think it\r\n> > would be appropriate to remove this defense in your patch. \r\n> \r\n> Yeah. To be honest, I am not actually sure if it is worth bothering\r\n> about any of those three places.\r\n\r\n+1.\r\n\r\nFWIW, I have reasons for being against the first and the last items.\r\n\r\nFor the first item, the duplicate if blocks seem to work as an enclosure\r\nof a meaningful set of code. It's annoying that OwnLatch follows a\r\nbunch of \"else if() ereport\" lines in a block.\r\n\r\nFor the last item, using '==' in the context of size comparison makes\r\nme a bit uneasy. I prefer '< 1' there but I don't bother doing\r\nthat. They are logically the same.\r\n\r\nFor the second item, I don't object to doing that, but I'm also not\r\nwilling to support it.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 27 Jan 2020 19:54:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] /src/backend/access/transam/xlog.c, tiny improvements"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 11:48 PM Mark Dilger <\nmark.dilger@enterprisedb.com> wrote:\n\n> > On Jan 24, 2020, at 12:48 PM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > 3. At function KeepLogSeg (line 9357) the test if (slotSegNo <= 0), the\n> var slotSegNo is uint64 and not can be < 0.\n>\n> There is something unusual about comparing a XLogSegNo variable in this\n> way, but it seems to go back to 2014 when the replication slots were\n> introduced in commit 858ec11858a914d4c380971985709b6d6b7dd6fc, and\n> XLogSegNo was unsigned then, too. Depending on how you look at it, this\n> could be a thinko, or it could be defensive programming against future\n> changes to the XLogSegNo typedef. I’m betting it was defensive\n> programming, given the context. As such, I don’t think it would be\n> appropriate to remove this defense in your patch.\n>\nWhile in general terms I am in favor of defensive programming, it is not\nneeded everywhere.\nI am surprised that the final code contains a lot of code that is not\nactually executed.\nIt should be an interesting exercise to debug postgres, which I haven't\ndone yet.\nSee variables being declared, functions called, memory being touched,\nnone of which occurs in production.\n\nIn the case of XLogSegNo, it is almost impossible to change it to signed,\nas this would limit the number of segments that could be used.\nEven so, the test could be removed and replaced with a comment warning\nthat, if the type is ever changed to signed in the future, a\nless-than-zero test would be needed again.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 27 Jan 2020 10:48:18 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] /src/backend/access/transam/xlog.c, tiny improvements"
}
] |
[
{
"msg_contents": "On 1/24/20, 1:32 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n>>> I chose to disallow disabling both *_RELATION_CLEANUP options\r\n>>> together, as this would essentially cause the VACUUM command to take\r\n>>> no action.\r\n>>\r\n>> My first reaction is why? Agreed that it is a bit crazy to combine\r\n>> both options, but if you add the argument related to more relation\r\n>> types like toast..\r\n>\r\n> Yes, I suppose we have the same problem if you disable\r\n> MAIN_RELATION_CLEANUP and the relation has no TOAST table. In any\r\n> case, allowing both options to be disabled shouldn't hurt anything.\r\n\r\nI've been thinking further in this area, and I'm wondering if it also\r\nmakes sense to remove the restriction on ANALYZE with\r\nMAIN_RELATION_CLEANUP disabled. A command like\r\n\r\n VACUUM (ANALYZE, MAIN_RELATION_CLEANUP FALSE) test;\r\n\r\ncould be interpreted as meaning we should vacuum the TOAST table and\r\nanalyze the main relation. Since the word \"cleanup\" is present in the\r\noption name, this might not be too confusing.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 24 Jan 2020 22:13:44 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: Add MAIN_RELATION_CLEANUP and\n SECONDARY_RELATION_CLEANUP options to VACUUM"
}
] |
[
{
"msg_contents": "Hi,\n\nSometimes I set RELSEG_SIZE to 1, as a way to get the various in-tree\ntests to give the relation segment code a good workout. That's\noutside the range that configure --with-segsize would allow and\ntherefore not really a supported size (it's set in GB), but it's very\nuseful for giving the relation segment code a good workout on small\ndatabases like the check-world ones. At some point I think that\nworked, but now it says:\n\nt/010_pg_basebackup.pl ... 100/106\n# Failed test 'pg_basebackup does not report more than 5 checksum\nmismatches stderr\n/(?^s:^WARNING.*further.*failures.*will.not.be.reported)/'\n# at t/010_pg_basebackup.pl line 526.\n# 'WARNING: checksum verification failed in file\n\"./base/13759/16396\", block 0: calculated 49B8 but expected A91E\n# WARNING: could not verify checksum in file \"./base/13759/16396\",\nblock 4: read buffer size 8225 and page size 8192 differ\n# pg_basebackup: error: checksum error occurred\n# '\n# doesn't match '(?^s:^WARNING.*further.*failures.*will.not.be.reported)'\n\nI haven't quite figured out why it does that yet (I don't see any\nfiles of size other than 0 or 8192, as expected), but it'd be nice to\ndo that. I was even thinking of running a bf animal that way.\n\n\n",
"msg_date": "Sun, 26 Jan 2020 13:45:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "t/010_pg_basebackup.pl checksum verify fails with RELSEG_SIZE 1"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Sometimes I set RELSEG_SIZE to 1, as a way to get the various in-tree\n> tests to give the relation segment code a good workout. That's\n> outside the range that configure --with-segsize would allow and\n> therefore not really a supported size (it's set in GB), but it's very\n> useful for giving the relation segment code a good workout on small\n> databases like the check-world ones. At some point I think that\n> worked, but now it says:\n\n> t/010_pg_basebackup.pl ... 100/106\n> # Failed test 'pg_basebackup does not report more than 5 checksum\n> mismatches stderr\n\nSo ... presumably, the problem is that this supposes that whatever\ndamage it did is spread across less than 5 relation segments, and\nwith a sufficiently small segment size, that assumption is wrong.\n\nI'd say this is a poorly designed test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Jan 2020 19:49:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: t/010_pg_basebackup.pl checksum verify fails with RELSEG_SIZE 1"
}
] |
[
{
"msg_contents": "I cannot ever think of a time when I don't want to know if I'm in a\ntransaction or not (and what its state is). Every new setup I do, I add\n%x to the psql prompt.\n\nI think it should be part of the default prompt. Patch attached.\n-- \nVik Fearing",
"msg_date": "Sun, 26 Jan 2020 13:29:16 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "Re: Vik Fearing 2020-01-26 <09502c40-cfe1-bb29-10f9-4b3fa7b2bbb2@2ndquadrant.com>\n> I cannot ever think of a time when I don't want to know if I'm in a\n> transaction or not (and what its state is). Every new setup I do, I add\n> %x to the psql prompt.\n> \n> I think it should be part of the default prompt. Path attached.\n\n+1, same here.\n\nChristoph\n\n\n",
"msg_date": "Sun, 26 Jan 2020 13:40:41 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
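Until a release with the changed default, the same effect is available per user from ~/.psqlrc. A sketch, assuming the stock defaults of '%/%R%# ' for both prompts (check your own settings with \set), that simply inserts %x as the patch proposes:

```
\set PROMPT1 '%/%R%x%# '
\set PROMPT2 '%/%R%x%# '
```

With %x, the prompt is unchanged outside a transaction block, shows * inside one, and ! inside a failed one.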
{
"msg_contents": "\nHello Vik,\n\n> I cannot ever think of a time when I don't want to know if I'm in a\n> transaction or not (and what its state is). Every new setup I do, I add\n> %x to the psql prompt.\n>\n> I think it should be part of the default prompt. Path attached.\n\nIsn't there examples in the documentation which use the default prompts?\n\nIf so, should they be updated accordingly?\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 26 Jan 2020 19:48:04 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On 26/01/2020 19:48, Fabien COELHO wrote:\n> \n> Hello Vik,\n> \n>> I cannot ever think of a time when I don't want to know if I'm in a\n>> transaction or not (and what its state is). Every new setup I do, I add\n>> %x to the psql prompt.\n>>\n>> I think it should be part of the default prompt. Path attached.\n> \n> Isn't there examples in the documentation which use the default prompts?\n> \n> If so, should they be updated accordingly?\n\nGood catch!\nI thought about the documentation but not the examples therein.\n\nUpdated patch attached.\n-- \nVik Fearing",
"msg_date": "Sun, 26 Jan 2020 20:04:41 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "Hello Vik,\n\n>> Isn't there examples in the documentation which use the default prompts?\n>>\n>> If so, should they be updated accordingly?\n>\n> Good catch!\n> I thought about the documentation but not the examples therein.\n>\n> Updated patch attached.\n\nOk.\n\nOnly one transaction prompt example in the whole documentation:-(\nNo tests is troubled by the change:-(\nSigh…\n\nPatch applies and compiles cleanly, global and psql make check ok.\n\nDoc build ok.\n\nWorks for me.\n\nI'd be in favor of adding a non trivial session example in psql \ndocumentation at the end of the prompt stuff section, something like:\n\nBEGIN;\nCREATE TABLE\n Word(data TEXT PRIMARY KEY);\nCOPY Word(data) FROM STDIN;\nhello\n\\.\nSELECT 2+;\nROLLBACK;\n\nbut this is not necessary for this patch.\n\n-- \nFabien.",
"msg_date": "Wed, 29 Jan 2020 08:25:08 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On 29/01/2020 08:25, Fabien COELHO wrote:\n> \n> Hello Vik,\n> \n>>> Isn't there examples in the documentation which use the default prompts?\n>>>\n>>> If so, should they be updated accordingly?\n>>\n>> Good catch!\n>> I thought about the documentation but not the examples therein.\n>>\n>> Updated patch attached.\n> \n> Ok.\n> \n> Only one transaction prompt example in the whole documentation:-(\n> No tests is troubled by the change:-(\n> Sigh…\n> \n> Patch applies and compiles cleanly, global and psql make check ok.\n> \n> Doc build ok.\n> \n> Works for me.\n\nThanks for the review!\n\nWould you mind changing the status in the commitfest app?\nhttps://commitfest.postgresql.org/27/2427/\n-- \nVik Fearing\n\n\n",
"msg_date": "Wed, 29 Jan 2020 23:51:10 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 11:51:10PM +0100, Vik Fearing wrote:\n> Thanks for the review!\n> \n> Would you mind changing the status in the commitfest app?\n> https://commitfest.postgresql.org/27/2427/\n\nFWIW, I am not really in favor of changing a default old enough that\nit could vote (a45195a).\n--\nMichael",
"msg_date": "Mon, 3 Feb 2020 16:08:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "> On 3 Feb 2020, at 08:08, Michael Paquier <michael@paquier.xyz> wrote:\n\n> FWIW, I am not really in favor of changing a default old enough that\n> it could vote (a45195a).\n\nThat by itself doesn't seem a good reason to not change things.\n\nMy concern would be that users who have never ever considered that the prompt\ncan be changed, all of sudden wonder why the prompt is showing characters it\nnormally isn't, thus causing confusion. That being said, I agree that this is\na better default long-term.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 3 Feb 2020 13:40:48 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On 2020-Feb-03, Daniel Gustafsson wrote:\n\n> > On 3 Feb 2020, at 08:08, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > FWIW, I am not really in favor of changing a default old enough that\n> > it could vote (a45195a).\n> \n> That by itself doesn't seem a good reason to not change things.\n\nYeah.\n\n> My concern would be that users who have never ever considered that the prompt\n> can be changed, all of sudden wonder why the prompt is showing characters it\n> normally isn't, thus causing confusion. That being said, I agree that this is\n> a better default long-term.\n\nI think this is the good kind of surprise, not the bad kind.\n\nI think the only kind of user that would be negatively affected would be\nthose that have scripted psql under expect(1) and would fail to read the\nnew prompt correctly. But I would say that that kind of user is the one\nmost likely to be able to fix things as needed. Everybody else is just\nlooking at an extra char in the screen, and they don't really care.\n\nI'm +1 for the default change.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 3 Feb 2020 11:34:56 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 3 Feb 2020, at 08:08, Michael Paquier <michael@paquier.xyz> wrote:\n>> FWIW, I am not really in favor of changing a default old enough that\n>> it could vote (a45195a).\n\n> That by itself doesn't seem a good reason to not change things.\n\n> My concern would be that users who have never ever considered that the prompt\n> can be changed, all of sudden wonder why the prompt is showing characters it\n> normally isn't, thus causing confusion. That being said, I agree that this is\n> a better default long-term.\n\nI've got the same misgivings as Michael. In a green field this'd likely\nbe a good idea, but after so many years I'm afraid it will make fewer\npeople happy than unhappy.\n\nNow on the other hand, we did change the server's default log_line_prefix\nnot so long ago (7d3235ba4), and the feared storm of complaints was pretty\nmuch undetectable. So maybe this'd go down the same way. Worth noting\nalso is that this shouldn't be able to break any applications, since the\nprompt is an interactive-only behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Feb 2020 09:40:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On 2020/02/03 23:40, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 3 Feb 2020, at 08:08, Michael Paquier <michael@paquier.xyz> wrote:\n>>> FWIW, I am not really in favor of changing a default old enough that\n>>> it could vote (a45195a).\n> \n>> That by itself doesn't seem a good reason to not change things.\n> \n>> My concern would be that users who have never ever considered that the prompt\n>> can be changed, all of sudden wonder why the prompt is showing characters it\n>> normally isn't, thus causing confusion. That being said, I agree that this is\n>> a better default long-term.\n> \n> I've got the same misgivings as Michael. In a green field this'd likely\n> be a good idea, but after so many years I'm afraid it will make fewer\n> people happy than unhappy.\n> \n> Now on the other hand, we did change the server's default log_line_prefix\n> not so long ago (7d323to se5ba4), and the feared storm of complaints was pretty\n> much undetectable. So maybe this'd go down the same way. Worth noting\n> also is that this shouldn't be able to break any applications, since the\n> prompt is an interactive-only behavior.\n\nThe last change I recall affecting default psql behaviour was the addition\nof COMP_KEYWORD_CASE in 9.2 (db84ba65), which personally I (and no doubt others)\nfound annoying, but the world still turns.\n\n+1 one for this change, it's something I also add to every .psqlrc I setup.\n\nMoreover, some of my work involves logging in at short notice to other people's\nsystems where I don't have control over the .psqlrc setup - while I can of\ncourse set the prompt manually it's an extra step, and it would be really\nnice to see the transaction status by default when working on critical\nsystems in a time-critical situation. 
(Obviously it'll take a few years\nfor this change to filter into production...).\n\n\nRegards\n\n\nIan Barwick\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 4 Feb 2020 10:31:43 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Tue, Feb 04, 2020 at 10:31:43AM +0900, Ian Barwick wrote:\n> The last change I recall affecting default psql behaviour was the addition\n> of COMP_KEYWORD_CASE in 9.2 (db84ba65), which personally I (and no doubt others)\n> found annoying, but the world still turns.\n> \n> +1 one for this change, it's something I also add to every .psqlrc I setup.\n\nSo.. We have:\n+1: Vik, Ian, Daniel, Alvaro, Christoph\n+-0: Tom (?), Fabien (?)\n-1: Michael P.\n\nSo there is a clear majority in favor of changing the default. Any\nextra opinions from others?\n--\nMichael",
"msg_date": "Tue, 4 Feb 2020 17:20:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Tue, Feb 4, 2020 at 3:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n> So.. We have:\n> +1: Vik, Ian, Daniel, Alvaro, Christoph\n> +-0: Tom (?), Fabien (?)\n> -1: Michael P.\n\nI'm not really against this change but, given how long it's been the\nway that it is, I think we shouldn't make it without more plus votes.\nIf we've actually got a broad consensus on it, sure, but I don't think\n4 votes is a broad consensus.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 09:30:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm not really against this change but, given how long it's been the\n> way that it is, I think we shouldn't make it without more plus votes.\n> If we've actually got a broad consensus on it, sure, but I don't think\n> 4 votes is a broad consensus.\n\nFair point. I'm still abstaining, but maybe this should be proposed\non pgsql-general to try to get more opinions?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Feb 2020 10:21:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 3:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Feb 4, 2020 at 3:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > So.. We have:\n> > +1: Vik, Ian, Daniel, Alvaro, Christoph\n> > +-0: Tom (?), Fabien (?)\n> > -1: Michael P.\n>\n> I'm not really against this change but, given how long it's been the\n> way that it is, I think we shouldn't make it without more plus votes.\n> If we've actually got a broad consensus on it, sure, but I don't think\n> 4 votes is a broad consensus.\n\nHere's another +1 for making the change.\n\n//Magnus\n\n\n",
"msg_date": "Wed, 5 Feb 2020 17:16:14 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "\n>> +1 one for this change, it's something I also add to every .psqlrc I setup.\n>\n> So.. We have:\n> +1: Vik, Ian, Daniel, Alvaro, Christoph\n> +-0: Tom (?), Fabien (?)\n\nI did not know I had a vote. I'm \"+1\" on this change, if that matters.\n\nJust this morning I had a case where I wished I had the current \ntransaction status under the eye under psql.\n\n> -1: Michael P.\n>\n> So there is a clear majority in favor of changing the default. Any\n> extra opinions from others?\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 5 Feb 2020 17:31:38 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Wed, Feb 05, 2020 at 10:21:11AM -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I'm not really against this change but, given how long it's been the\n>> way that it is, I think we shouldn't make it without more plus votes.\n>> If we've actually got a broad consensus on it, sure, but I don't think\n>> 4 votes is a broad consensus.\n> \n> Fair point. I'm still abstaining, but maybe this should be proposed\n> on pgsql-general to try to get more opinions?\n\nIndeed, still I am not sure what kind of number is enough to define a\nlarge consensus. Vik, could you start a new thread on -general?\n\nFWIW, 24 hours later I am counting two more people in favor (Magnus\nand Fabien), for a total of 7, with one abstention and one not in\nfavor.\n--\nMichael",
"msg_date": "Thu, 6 Feb 2020 11:38:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On 06/02/2020 03:38, Michael Paquier wrote:\n> On Wed, Feb 05, 2020 at 10:21:11AM -0500, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> I'm not really against this change but, given how long it's been the\n>>> way that it is, I think we shouldn't make it without more plus votes.\n>>> If we've actually got a broad consensus on it, sure, but I don't think\n>>> 4 votes is a broad consensus.\n>>\n>> Fair point. I'm still abstaining, but maybe this should be proposed\n>> on pgsql-general to try to get more opinions?\n> \n> Indeed, still I am not sure what kind of number is enough to define a\n> large consensus. Vik, could you start a new thread on -general?\n\nhttps://www.postgresql.org/message-id/flat/3d8e809b-fc26-87c5-55ac-616a98d2b0be%40postgresfriends.org\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 6 Feb 2020 03:56:54 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On 06/02/2020 03:56, Vik Fearing wrote:\n> On 06/02/2020 03:38, Michael Paquier wrote:\n>> On Wed, Feb 05, 2020 at 10:21:11AM -0500, Tom Lane wrote:\n>>> Robert Haas <robertmhaas@gmail.com> writes:\n>>>> I'm not really against this change but, given how long it's been the\n>>>> way that it is, I think we shouldn't make it without more plus votes.\n>>>> If we've actually got a broad consensus on it, sure, but I don't think\n>>>> 4 votes is a broad consensus.\n>>>\n>>> Fair point. I'm still abstaining, but maybe this should be proposed\n>>> on pgsql-general to try to get more opinions?\n>>\n>> Indeed, still I am not sure what kind of number is enough to define a\n>> large consensus. Vik, could you start a new thread on -general?\n> \n> https://www.postgresql.org/message-id/flat/3d8e809b-fc26-87c5-55ac-616a98d2b0be%40postgresfriends.org\n\nThe vote here was 6 yeas and 1 nay (85.7%) with 2 abstentions.\n\nThe poll on pgsql-general has gone stale with 19 yeas and 2 nays (90.5%).\n\nMy poll on Twitter has ended with 33 yeas and 4 nays (89.2%).\nhttps://twitter.com/pg_xocolatl/status/1225258876527874048\n\nThere is a little bit of overlap within those three groups but among the\nminuscule percentage of our users that responded, the result is\noverwhelmingly in favor of this change.\n-- \nVik Fearing\n\n\n",
"msg_date": "Mon, 10 Feb 2020 00:16:44 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Mon, Feb 10, 2020 at 12:16:44AM +0100, Vik Fearing wrote:\n> There is a little bit of overlap within those three groups but among the\n> minuscule percentage of our users that responded, the result is\n> overwhelmingly in favor of this change.\n\nThanks Vik for handling that. So, it seems to me that we have a\nconclusion here. Any last words?\n--\nMichael",
"msg_date": "Mon, 10 Feb 2020 09:45:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Sun, Feb 9, 2020 at 7:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Feb 10, 2020 at 12:16:44AM +0100, Vik Fearing wrote:\n> > There is a little bit of overlap within those three groups but among the\n> > minuscule percentage of our users that responded, the result is\n> > overwhelmingly in favor of this change.\n>\n> Thanks Vik for handling that. So, it seems to me that we have a\n> conclusion here. Any last words?\n\nNo objections here. I'm glad that we put in the effort to get more\nopinions, but I agree that an overall vote of ~58 to ~8 is a pretty\nstrong consensus.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Feb 2020 10:05:25 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On Tue, Feb 11, 2020 at 10:05:25AM -0500, Robert Haas wrote:\n> No objections here. I'm glad that we put in the effort to get more\n> opinions, but I agree that an overall vote of ~58 to ~8 is a pretty\n> strong consensus.\n\nClearly, so done as dcdbb5a.\n--\nMichael",
"msg_date": "Wed, 12 Feb 2020 13:35:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
},
{
"msg_contents": "On 12/02/2020 05:35, Michael Paquier wrote:\n> On Tue, Feb 11, 2020 at 10:05:25AM -0500, Robert Haas wrote:\n>> No objections here. I'm glad that we put in the effort to get more\n>> opinions, but I agree that an overall vote of ~58 to ~8 is a pretty\n>> strong consensus.\n> \n> Clearly, so done as dcdbb5a.\n\nThanks!\n-- \nVik Fearing\n\n\n",
"msg_date": "Wed, 12 Feb 2020 09:33:03 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Add %x to PROMPT1 and PROMPT2"
}
] |
[
{
"msg_contents": "I'm forking this thread since it's separate topic, and since keeping in a\nsingle branch hasn't made maintaining the patches any easier.\nhttps://www.postgresql.org/message-id/CAMkU%3D1xAyWnwnLGORBOD%3Dpyv%3DccEkDi%3DwKeyhwF%3DgtB7QxLBwQ%40mail.gmail.com\nOn Sun, Dec 29, 2019 at 01:15:24PM -0500, Jeff Janes wrote:\n> Also, I'd appreciate a report on how many hint-bits were set, and how many\n> pages were marked all-visible and/or frozen. When I do a manual vacuum, it\n> is more often for those purposes than it is for removing removable rows\n> (which autovac generally does a good enough job of).\n\nThe first patch seems simple enough but the 2nd could use critical review.",
"msg_date": "Sun, 26 Jan 2020 08:13:28 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "vacuum verbose: show pages marked allvisible/frozen/hintbits"
},
{
"msg_contents": "On Sun, 26 Jan 2020 at 23:13, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> I'm forking this thread since it's separate topic, and since keeping in a\n> single branch hasn't made maintaining the patches any easier.\n> https://www.postgresql.org/message-id/CAMkU%3D1xAyWnwnLGORBOD%3Dpyv%3DccEkDi%3DwKeyhwF%3DgtB7QxLBwQ%40mail.gmail.com\n> On Sun, Dec 29, 2019 at 01:15:24PM -0500, Jeff Janes wrote:\n> > Also, I'd appreciate a report on how many hint-bits were set, and how many\n> > pages were marked all-visible and/or frozen. When I do a manual vacuum, it\n> > is more often for those purposes than it is for removing removable rows\n> > (which autovac generally does a good enough job of).\n>\n> The first patch seems simple enough but the 2nd could use critical review.\n\nHere is comments on 0001 patch from a quick review:\n\n- BlockNumber pages_removed;\n+ BlockNumber pages_removed; /* Due to truncation */\n+ BlockNumber pages_allvisible;\n+ BlockNumber pages_frozen;\n\nOther codes in vacuumlazy.c uses ‘all_frozen', so how about\npages_allfrozen instead of pages_frozen?\n\n---\n@@ -1549,8 +1558,12 @@ lazy_scan_heap(Relation onerel, VacuumParams\n*params, LVRelStats *vacrelstats,\n {\n uint8 flags = VISIBILITYMAP_ALL_VISIBLE;\n\n- if (all_frozen)\n+ if (all_frozen) {\n flags |= VISIBILITYMAP_ALL_FROZEN;\n+ vacrelstats->pages_frozen++;\n+ }\n\n@@ -1979,10 +2000,14 @@ lazy_vacuum_page(Relation onerel, BlockNumber\nblkno, Buffer buffer,\n uint8 flags = 0;\n\n /* Set the VM all-frozen bit to flag, if needed */\n- if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)\n+ if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0) {\n flags |= VISIBILITYMAP_ALL_VISIBLE;\n- if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)\n+ vacrelstats->pages_allvisible++;\n+ }\n+ if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen) {\n flags |= VISIBILITYMAP_ALL_FROZEN;\n+ vacrelstats->pages_frozen++;\n+ }\n\nThe above changes need to follow PostgreSQL code 
format (a newline is\nrequired after if statement).\n\n---\n /*\n * If the all-visible page is all-frozen but not marked as such yet,\n * mark it as all-frozen. Note that all_frozen is only valid if\n * all_visible is true, so we must check both.\n */\n else if (all_visible_according_to_vm && all_visible && all_frozen &&\n !VM_ALL_FROZEN(onerel, blkno, &vmbuffer))\n {\n /*\n * We can pass InvalidTransactionId as the cutoff XID here,\n * because setting the all-frozen bit doesn't cause recovery\n * conflicts.\n */\n visibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n vmbuffer, InvalidTransactionId,\n VISIBILITYMAP_ALL_FROZEN);\n }\n\nWe should also count up vacrelstats->pages_frozen here.\n\nFor 0002 patch, how users will be able to make any meaning out of how\nmany hint bits were updated by vacuumu?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 15 Jun 2020 13:30:58 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose: show pages marked allvisible/frozen/hintbits"
},
{
"msg_contents": "On Mon, Jun 15, 2020 at 01:30:58PM +0900, Masahiko Sawada wrote:\n> For 0002 patch, how users will be able to make any meaning out of how\n> many hint bits were updated by vacuum?\n\nThe patch has not been updated for the last three months, though it\nlooks kind of interesting to have more stats for frozen and\nall-visible pages around here.\n\nPlease note that the patch uses C99-style comments, which is not a\nformat allowed. The format of some of the lines coded is incorrect as\nwell. I have marked the patch as returned with feedback for now.\n--\nMichael",
"msg_date": "Mon, 7 Sep 2020 12:05:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose: show pages marked allvisible/frozen/hintbits"
}
] |
[
{
"msg_contents": "I believe that the design intention for EXPLAIN's non-text output\nformats is that a given field should appear, or not, depending solely\non the plan shape, EXPLAIN options, and possibly GUC settings.\nIt's not okay to suppress a field just because it's empty or zero or\notherwise uninteresting, because that makes life harder for automated\ntools that now have to cope with expected fields maybe not being there.\nSee for instance what I wrote in commit 8ebb69f85:\n\n [Partial Mode] did not appear at all for a non-parallelized Agg plan node,\n which is contrary to expectation in non-text formats. We're notionally\n producing objects that conform to a schema, so the set of fields for a\n given node type and EXPLAIN mode should be well-defined. I set it up to\n fill in \"Simple\" in such cases.\n\n Other fields that were added for parallel query, namely \"Parallel Aware\"\n and Gather's \"Single Copy\", had not gotten the word on that point either.\n Make them appear always in non-text output.\n\n(This is intentionally different from the policy for TEXT-format output,\nwhich is meant to be human-readable so suppressing boring data is\nsensible.)\n\nBut I noticed while poking at the EXPLAIN code yesterday that recent\npatches haven't adhered to this policy too well.\n\nFor one, EXPLAIN (SETTINGS) suppresses the \"Settings\" subnode if\nthere's nothing to report. 
This is just wrong, but I think all we\nhave to do is delete the over-eager early exit:\n\n\t/* also bail out of there are no options */\n\tif (!num)\n\t\treturn;\n\nThe other offender is the JIT stuff: it prints if COSTS is on and\nthere's some JIT activity to report, and otherwise you get nothing.\nThis is OK for text mode but it's bogus for the other formats.\nSince we just rearranged EXPLAIN's JIT output anyway, now seems like\na good time to fix it.\n\nI think we might as well go a little further and invent an explicit\nJIT option for EXPLAIN, filling in the feature that Andres didn't\nbother with originally. What's not entirely clear to me is whether\nto try to preserve the current behavior by making it track COSTS\nif not explicitly specified. I'd rather decouple that and say\n\"you must write EXPLAIN (JIT [ON]) if you want JIT info\"; but maybe\npeople will argue that it's already too late to change this?\n\nAnother debatable question is whether to print anything in non-JIT\nbuilds. We could, with a little bit of pain, print a lot of zeroes\nand \"falses\". If we stick with the current behavior of omitting\nthe JIT fields entirely, then that's extending the existing policy\nto say that configuration options are also allowed to affect the\nset of fields that are printed. Given that we allow GUCs to affect\nthat set (cf track_io_timing), maybe this is okay; but it does seem\nlike it's weakening the promise of a well-defined data schema for\nEXPLAIN output.\n\nAny thoughts? I'm happy to go make this happen if there's not a\nlot of argument over what it should look like.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Jan 2020 15:13:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "EXPLAIN's handling of output-a-field-or-not decisions"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-26 15:13:49 -0500, Tom Lane wrote:\n> The other offender is the JIT stuff: it prints if COSTS is on and\n> there's some JIT activity to report, and otherwise you get nothing.\n> This is OK for text mode but it's bogus for the other formats.\n> Since we just rearranged EXPLAIN's JIT output anyway, now seems like\n> a good time to fix it.\n\nNo objection. I think the current choice really is just about hiding JIT\ninformation in the cases where we display explain output in the\ntests. That output can't change depending on the build environment and\nsettings (it's e.g. hugely useful to force all queries to be JITed for\ncoverage).\n\nWe did discuss adding a JIT option in 11, but it wasn't clear it'd be\nuseful at that time (hence the \"Might want to separate that out from\nCOSTS at a later stage.\" comment ExplainOnePlan).\n\n\nI'm not really bothered by entire sections that a re optional in the\nother formats, tbh. Especially when they're configurable, more recent\nand/or dependent on build options - tools are going to have to deal with\nbeing absent anyway, and that's usually just about no code.\n\n\n> I think we might as well go a little further and invent an explicit\n> JIT option for EXPLAIN, filling in the feature that Andres didn't\n> bother with originally.\n\nYea, I've introduced that in slightly different form in the thread I\nreferenced yesterday too. See 0004 in\nhttps://www.postgresql.org/message-id/20191029000229.fkjmuld3g7f2jq7i%40alap3.anarazel.de\nalthough it was about adding additional details, rather than showing the\n\"basic\" information.\n\nI'd probably want to make JIT a tristate (off, on, details), instead of\na boolean, but that's details. 
And can be changed later.\n\n\n> What's not entirely clear to me is whether to try to preserve the\n> current behavior by making it track COSTS if not explicitly\n> specified.\n\n> I'd rather decouple that and say \"you must write EXPLAIN (JIT [ON]) if\n> you want JIT info\"; but maybe people will argue that it's already too\n> late to change this?\n\nI don't think \"too late\" is a concern, I don't think there's much of a\n\"reliance\" interest here. Especially not after whacking things around\nalready.\n\nOne concern I do have is that I think we need the overall time for JIT\nto be displayed regardless of the JIT option. Otherwise it's going to be\nmuch harder to diagnose cases where some person reports an issue with\nexecutor startup being slow, because of JIT overhead. I think we\nshouldn't require users to guess that that's where they should\nlook. That need, combined with wanting the regression test output to\nkeep stable, is really what lead to tying the jit information to COSTS.\n\nI guess we could make it so that JIT is inferred based on COSTS, unless\nexplicitly present. A bit automagical, but ...\n\n\n> Another debatable question is whether to print anything in non-JIT\n> builds. We could, with a little bit of pain, print a lot of zeroes\n> and \"falses\". If we stick with the current behavior of omitting\n> the JIT fields entirely, then that's extending the existing policy\n> to say that configuration options are also allowed to affect the\n> set of fields that are printed. Given that we allow GUCs to affect\n> that set (cf track_io_timing), maybe this is okay; but it does seem\n> like it's weakening the promise of a well-defined data schema for\n> EXPLAIN output.\n\nHm. I don't think I have an opinion on this. I can see an argument\neither way.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Jan 2020 13:24:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN's handling of output-a-field-or-not decisions"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-01-26 15:13:49 -0500, Tom Lane wrote:\n>> The other offender is the JIT stuff: it prints if COSTS is on and\n>> there's some JIT activity to report, and otherwise you get nothing.\n>> This is OK for text mode but it's bogus for the other formats.\n>> Since we just rearranged EXPLAIN's JIT output anyway, now seems like\n>> a good time to fix it.\n\n> No objection. I think the current choice really is just about hiding JIT\n> information in the cases where we display explain output in the\n> tests. That output can't change depending on the build environment and\n> settings (it's e.g. hugely useful to force all queries to be JITed for\n> coverage).\n\nRight, but then ...\n\n> One concern I do have is that I think we need the overall time for JIT\n> to be displayed regardless of the JIT option.\n\n... how are you going to square that desire with not breaking the\nregression tests?\n\n>> Another debatable question is whether to print anything in non-JIT\n>> builds.\n\n> Hm. I don't think I have an opinion on this. I can see an argument\n> either way.\n\nAfter a bit more thought I'm leaning to \"print nothing\", since as you say\ntools would have to cope with that anyway if they want to work with old\nreleases. Also, while it's not that hard to print dummy data for JIT\neven in a non-JIT build, I can imagine some future feature where it\n*would* be hard. So setting a precedent that we must provide dummy\noutput for unimplemented features could come back to bite us.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Jan 2020 16:54:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN's handling of output-a-field-or-not decisions"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-26 16:54:58 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-01-26 15:13:49 -0500, Tom Lane wrote:\n> >> The other offender is the JIT stuff: it prints if COSTS is on and\n> >> there's some JIT activity to report, and otherwise you get nothing.\n> >> This is OK for text mode but it's bogus for the other formats.\n> >> Since we just rearranged EXPLAIN's JIT output anyway, now seems like\n> >> a good time to fix it.\n> \n> > No objection. I think the current choice really is just about hiding JIT\n> > information in the cases where we display explain output in the\n> > tests. That output can't change depending on the build environment and\n> > settings (it's e.g. hugely useful to force all queries to be JITed for\n> > coverage).\n> \n> Right, but then ...\n> \n> > One concern I do have is that I think we need the overall time for JIT\n> > to be displayed regardless of the JIT option.\n> \n> ... how are you going to square that desire with not breaking the\n> regression tests?\n\nWell, that's how we arrived at turning off JIT information when COSTS\nOFF, because that's already something all the EXPLAINs in the regression\ntests have to do. I do not want to regress from the current state, with\nregard to both regression tests, and seeing at least a top-line time in\nthe normal EXPLAIN ANALYZE cases.\n\nI've previously wondered about adding a REGRESS option to EXPLAIN would\nnot actually be a good one, so we can move the magic into that, rather\nthan options that are also otherwise relevant.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Jan 2020 14:42:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN's handling of output-a-field-or-not decisions"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-01-26 16:54:58 -0500, Tom Lane wrote:\n>> ... how are you going to square that desire with not breaking the\n>> regression tests?\n\n> Well, that's how we arrived at turning off JIT information when COSTS\n> OFF, because that's already something all the EXPLAINs in the regression\n> tests have to do. I do not want to regress from the current state, with\n> regard to both regression tests, and seeing at least a top-line time in\n> the normal EXPLAIN ANALYZE cases.\n\nRight, but that's still just a hack.\n\n> I've previously wondered about adding a REGRESS option to EXPLAIN would\n> not actually be a good one, so we can move the magic into that, rather\n> than options that are also otherwise relevant.\n\nI'd be inclined to think about a GUC actually.\nforce_parallel_mode = regress is sort of precedent for that,\nand we do already have the infrastructure needed to force a\ndatabase-level GUC setting for regression databases.\n\nI can see some advantages to making it an explicit EXPLAIN option\ninstead --- but unless we wanted to back-patch it, it'd be a real\npain in the rear for back-patching regression test cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Jan 2020 17:53:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN's handling of output-a-field-or-not decisions"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-26 17:53:09 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I've previously wondered about adding a REGRESS option to EXPLAIN would\n> > not actually be a good one, so we can move the magic into that, rather\n> > than options that are also otherwise relevant.\n> \n> I'd be inclined to think about a GUC actually.\n> force_parallel_mode = regress is sort of precedent for that,\n> and we do already have the infrastructure needed to force a\n> database-level GUC setting for regression databases.\n\nYea, a GUC could work too. What would it do exactly? Hide COSTS, TIMING,\nJIT, unless explicitly turned on in the EXPLAIN? And perhaps also\n\"redact\" a few things that we currently manually have to filter out?\nAnd then we'd leave the implicit JIT to on, to allow users to see where\ntime is spent?\n\nOr were you thinking of something different entirely?\n\n\n> I can see some advantages to making it an explicit EXPLAIN option\n> instead --- but unless we wanted to back-patch it, it'd be a real\n> pain in the rear for back-patching regression test cases.\n\nHm. Would it really be harder? I'd expect that we'd end up writing tests\nin master that need additional options to be usable in the back\nbranches. Seems we'd definitely need to backpatch the GUC?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Jan 2020 17:28:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN's handling of output-a-field-or-not decisions"
}
]
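The output-a-field-or-not policy the thread above converges on (structured formats such as JSON/XML/YAML always emit a field so consumers see a stable schema, while TEXT emits it only when it carries information, subject to COSTS) can be sketched as a small decision function. This is an illustrative Python sketch, not PostgreSQL's actual C code; the function name and parameters are invented for the example.

```python
# Sketch of the field-emission rule discussed above: structured EXPLAIN
# formats always emit a field (machine readers want a fixed schema), while
# TEXT emits it only when it carries information. Names are illustrative,
# not PostgreSQL's real API.

def should_emit_jit(fmt: str, costs: bool, jit_used: bool) -> bool:
    """Decide whether EXPLAIN output should include the JIT section."""
    if fmt == "text":
        # Human-readable output: show JIT only if COSTS is on and JIT ran.
        return costs and jit_used
    # JSON/XML/YAML: always emit the field, so tools see a stable schema.
    return True

# TEXT hides JIT under COSTS OFF (the regression-test case) ...
assert should_emit_jit("text", costs=False, jit_used=True) is False
# ... but structured formats keep the field regardless.
assert should_emit_jit("json", costs=False, jit_used=False) is True
```

The open question in the thread, whether this magic should hang off COSTS, a dedicated REGRESS option, or a GUC, only changes where the `costs` input comes from, not the shape of the decision.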
[
{
"msg_contents": "Andres and I discussed bottlenecks in the nbtree code during the\nrecent PgDay SF community event. Andres observed that the call to\nBTreeTupleGetNAtts() in _bt_compare() can become somewhat of a\nbottleneck. I came up with the very simple attached POC-quality\npatches, which Andres tested and profiled with his original complaint\nin mind yesterday. He reported increased throughput on a memory\nbound simple pgbench SELECT-only workload.\n\nHe reported that the problem went away with the patches applied. The\nfollowing pgbench SELECT-only result was sent to me privately:\n\nbefore:\nsingle: tps = 30300.202363 (excluding connections establishing)\nall cores: tps = 1026853.447047 (excluding connections establishing)\n\nafter:\nsingle: tps = 31120.452919 (excluding connections establishing)\nall cores: tps = 1032376.361006 (excluding connections establishing)\n\n(Actually, he tested something slightly different -- he inlined\n_bt_compare() in his own way instead of using my 0002-*, and didn't\nuse the 0003-* optimization at all.)\n\nApparently this was a large multi-socket machine. Those are hard to\ncome by.\n\nThe main idea here is to make _bt_compare() delay\ncalling BTreeTupleGetNAtts() until the point after the first attribute\nturns out to be equal to the _bt_compare() caller's insertion scankey.\nMany or most calls to _bt_compare() won't even need to call\nBTreeTupleGetNAtts().\n\nThis relies on the assumption that any tuple must have at least one\nuntruncated suffix column in the _bt_compare() loop. It doesn't matter\nwhether it's a pivot or non-pivot tuple -- the representation of the\nfirst column will be exactly the same.\n\nThe negative infinity item on an internal page always has zero\nattributes, which might seem like a snag. 
However, we already treat\nthat as a special case within _bt_compare(), for historical reasons\n(pg_upgrade'd indexes won't have the number of attributes explicitly\nset to zero in some cases).\n\nAnother separate though related idea in 0003-* is to remove the\nnegative infinity check. It goes from _bt_compare() to _bt_binsrch(),\nsince it isn't something that we need to consider more than once per\npage -- and only on internal pages. That way, _bt_compare() doesn't\nhave to look at the page special area to check if it's a leaf page or\nan internal page at all. I haven't really profiled this just yet. This is\none of those patches where 95%+ of the work is profiling and\nbenchmarking.\n\nAndres and I both agree that there is a lot more work to be done in\nthis area, but that will be a major undertaking. I am quite keen on\nthe idea of repurposing the redundant-to-nbtree ItemId.lp_len field to\nstore an abbreviated key. Making that work well is a considerable\nundertaking, since you need to use prefix compression to get a high\nentropy abbreviated key. It would probably take me the best part of a\nwhole release cycle to write such a patch. The attached patches get\nus a relatively easy win in the short term, though.\n\nThoughts?\n--\nPeter Geoghegan",
"msg_date": "Sun, 26 Jan 2020 14:49:06 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
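The delayed-BTreeTupleGetNAtts() idea described above can be modeled in miniature. The following is an illustrative Python sketch (the real _bt_compare() is C and operates on insertion scankeys and IndexTuples): tuples are plain lists whose truncated suffix columns are simply absent, and the tuple's attribute count is consulted only once the first attribute has compared equal.

```python
# Illustrative model of the 0001-* idea: compare the first attribute before
# looking at the tuple's attribute count, since an unequal first column
# decides the comparison on its own. Relies on the assumption stated above
# that every compared tuple has at least one untruncated column.

def bt_compare(scankey, tuple_attrs):
    """Return -1/0/+1 comparing an insertion scankey against an index tuple."""
    assert tuple_attrs, "every compared tuple has >= 1 untruncated column"
    for i, key in enumerate(scankey):
        if i > 0 and i >= len(tuple_attrs):
            break  # only now do we need the tuple's attribute count
        if key != tuple_attrs[i]:
            return -1 if key < tuple_attrs[i] else 1
    # All available attributes equal; truncated suffix columns compare as
    # minus infinity, so a truncated tuple sorts before the scankey.
    return 0 if len(tuple_attrs) >= len(scankey) else 1

assert bt_compare([5, 1], [7, 2]) == -1   # decided by first column alone
assert bt_compare([5, 3], [5, 2]) == 1
assert bt_compare([5, 3], [5]) == 1       # truncated pivot: key sorts after
assert bt_compare([5, 2], [5, 2]) == 0
```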
{
"msg_contents": "\r\n> He reported that the problem went away with the patches applied. The\r\n> following pgbench SELECT-only result was sent to me privately:\r\n> \r\n> before:\r\n> single: tps = 30300.202363 (excluding connections establishing)\r\n> all cores: tps = 1026853.447047 (excluding connections establishing)\r\n> \r\n> after:\r\n> single: tps = 31120.452919 (excluding connections establishing)\r\n> all cores: tps = 1032376.361006 (excluding connections establishing)\r\n> \r\n> (Actually, he tested something slightly different -- he inlined\r\n> _bt_compare() in his own way instead of using my 0002-*, and didn't use the\r\n> 0003-* optimization at all.)\r\n> \r\n> Apparently this was a large multi-socket machine. Those are hard to come by.\r\n> \r\n\r\nI could do some tests with the patch on some larger machines. What exact tests do you propose? Are there some specific postgresql.conf settings and pgbench initialization you recommend for this? And was the test above just running 'pgbench -S' select-only with specific -T, -j and -c parameters?\r\n\r\n-Floris\r\n\r\n",
"msg_date": "Mon, 27 Jan 2020 15:42:06 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": false,
"msg_subject": "RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-27 15:42:06 +0000, Floris Van Nee wrote:\n> \n> > He reported that the problem went away with the patches applied. The\n> > following pgbench SELECT-only result was sent to me privately:\n> > \n> > before:\n> > single: tps = 30300.202363 (excluding connections establishing)\n> > all cores: tps = 1026853.447047 (excluding connections establishing)\n> > \n> > after:\n> > single: tps = 31120.452919 (excluding connections establishing)\n> > all cores: tps = 1032376.361006 (excluding connections establishing)\n> > \n> > (Actually, he tested something slightly different -- he inlined\n> > _bt_compare() in his own way instead of using my 0002-*, and didn't use the\n> > 0003-* optimization at all.)\n> > \n> > Apparently this was a large multi-socket machine. Those are hard to\n> > come by.\n\nI'd not say \"large multi socket\", 2 x XeonGold 5215, 192GB RAM.\n\n\n> I could do some tests with the patch on some larger machines. What\n> exact tests do you propose? Are there some specific postgresql.conf\n> settings and pgbench initialization you recommend for this? And was\n> the test above just running 'pgbench -S' select-only with specific -T,\n> -j and -c parameters?\n\nThe above test was IIRC:\n\nPGOPTIONS='-c vacuum_freeze_min_age=0' pgbench -i -q -s 300\nwith a restart here, and a\nSELECT SUM(pg_prewarm(oid, 'buffer')) FROM pg_class WHERE relkind IN ('r', 'i', 't');\nafter starting, and then\nPGOPTIONS='-c default_transaction_isolation=repeatable\\ read' pgbench -n -M prepared -P1 -c100 -j72 -T1000 -S\n\nThe freeze, restart & prewarm are to have fairer comparisons between\ntests, without needing to recreate the database from scratch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Jan 2020 08:12:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-26 14:49:06 -0800, Peter Geoghegan wrote:\n> Andres and I discussed bottlenecks in the nbtree code during the\n> recent PgDay SF community event. Andres observed that the call to\n> BTreeTupleGetNAtts() in _bt_compare() can become somewhat of a\n> bottleneck. I came up with the very simple attached POC-quality\n> patches, which Andres tested and profiled with his original complaint\n> in mind yesterday. He reported increased throughput on a memory\n> bound simple pgbench SELECT-only workload.\n\nYea - it shows up as a pipeline stall, because the loop depends on\nhaving loaded the tuple. Which basically requires two\nunlikely-to-be-cached memory loads to complete. Whereas before/after the\npatcha good bit of that latency could be hidden by out-of-order\nexecution, as e.g. the tupledesc and scankey accesses are not dependent\non the memory load for the tuple having finished.\n\n\n\n> The main idea here is to make _bt_compare() delay\n> calling BTreeTupleGetNAtts() until the point after the first attribute\n> turns out to be equal to the _bt_compare() caller's insertion scankey.\n> Many or most calls to _bt_compare() won't even need to call\n> BTreeTupleGetNAtts().\n\nFWIW, I still think it might be better to *continue* loading ntupatts\nwhere it's currently done, but keep the the change to the loop\ntermination the way you have it. That way we don't add a branch to check\nfor ntupatts, and because we don't depend on the result to enter the\nloop, we don't have a stall. I'd even keep the Min() bit. I'm a bit\nafraid that delaying the load will add one (smaller) stall after the key\ncomparison, and that the additional branches will be noticable too.\n\n\n\n> Andres and I both agree that there is a lot more work to be done in\n> this area, but that will be a major undertaking. I am quite keen on\n> the idea of repurposing the redundant-to-nbtree ItemId.lp_len field to\n> store an abbreviated key. 
Making that work well is a considerable\n> undertaking, since you need to use prefix compression to get a high\n> entropy abbreviated key. It would probably take me the best part of a\n> whole release cycle to write such a patch. The attached patches get\n> us a relatively easy win in the short term, though.\n\nMy intuition is that a binary search optimized layout (next para) will\nbe a bigger win, and probably easier. There are pretty clear profiling\nindicators that even the access to the ItemId array in the binary search\nis most of the time a cache miss and causes a stall - and it makes sense\ntoo.\n\nI.e. instead of a plain sorted order, store the n ItemIds on a page\nin the order of\n[1/2 n, 1/4 n, 3/4 n, 1/8 n, 3/8 n, 5/8 n, 7/8 n, ...]\nas binary search looks first at 1/2, then at either 1/4 or 3/4, then\neither (1/8 or 3/8) or (5/8, 7/8), and so on, this layout means that\ncommonly the first four levels of the ItemId array are on a *single*\ncacheline. Whereas in contrast, using the normal layout, that *never* is\nthe case for a page with more than a few entries. And even beyond the\nfirst few levels, the \"sub\" trees the binary search descends into are\nconcentrated onto fewer cachelines. It's not just the reduced number of\ncachelines touched; the layout is also very prefetchable,\nbecause the tree levels are basically laid out sequentially left to\nright, which many cpu prefetchers can recognize.\n\nI think this works particularly well for inner btree pages, because we\ndon't rewrite their itemid lists all that often, so the somewhat higher\ncost of that doesn't matter much, and similarly, the higher cost of\nsequentially iterating, isn't significant either.\n\nNow that's only the ItemId array - whereas a larger amount of the cache\nmisses comes from the index tuple accesses. 
The nice bit there is that\nwe can just optimize the order of the index tuples on the page without\nany format changes (and even the read access code won't change). I.e. we\ncan just lay out the tuples in an *approximately* binary search\noptimized order, without needing to change anything but the layout\n\"writing\" code, as the ItemId.lp_off indirection will hide that.\n\n\nI do completely agree that having a small high-entropy abbreviated key\ninside the ItemId would be an awesome improvement, as it can entirely\nremove many of the hard to predict accesses. My gut feeling is just that\na) it's a pretty darn hard project.\nb) it'll be a smaller win as long as there's an unpredictable access to\n the abbreviated key\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Jan 2020 09:14:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
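The [1/2 n, 1/4 n, 3/4 n, ...] ItemId ordering described in the message above is a breadth-first binary-search layout (often called an Eytzinger layout). A minimal sketch of generating that physical order, in illustrative Python rather than the C that would actually lay out the page:

```python
# Sketch of the cache-friendly ItemId ordering described above: lay out n
# sorted entries in binary-search visitation order, so the first few levels
# of the search share cachelines and prefetch well.

def eytzinger_order(n):
    """Midpoints in breadth-first binary-search order over indexes 0..n-1."""
    order, queue = [], [(0, n - 1)]
    while queue:
        lo, hi = queue.pop(0)
        if lo > hi:
            continue
        mid = (lo + hi) // 2
        order.append(mid)          # this slot is visited at this tree level
        queue.append((lo, mid - 1))
        queue.append((mid + 1, hi))
    return order

# For 7 entries the physical order is [3, 1, 5, 0, 2, 4, 6]: the root,
# then both second-level midpoints, then the four leaves.
assert eytzinger_order(7) == [3, 1, 5, 0, 2, 4, 6]
assert sorted(eytzinger_order(100)) == list(range(100))
```

As the message notes, the same trick can be applied to the tuples themselves purely through `ItemId.lp_off`, since readers never see the physical order directly.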
{
"msg_contents": "On Mon, Jan 27, 2020 at 9:14 AM Andres Freund <andres@anarazel.de> wrote:\n> > The main idea here is to make _bt_compare() delay\n> > calling BTreeTupleGetNAtts() until the point after the first attribute\n> > turns out to be equal to the _bt_compare() caller's insertion scankey.\n> > Many or most calls to _bt_compare() won't even need to call\n> > BTreeTupleGetNAtts().\n>\n> FWIW, I still think it might be better to *continue* loading ntupatts\n> where it's currently done, but keep the the change to the loop\n> termination the way you have it. That way we don't add a branch to check\n> for ntupatts, and because we don't depend on the result to enter the\n> loop, we don't have a stall. I'd even keep the Min() bit. I'm a bit\n> afraid that delaying the load will add one (smaller) stall after the key\n> comparison, and that the additional branches will be noticable too.\n\nI can do it that way. I am not attached to the current approach in\n0001-* at all.\n\n> My intuition is that a binary search optimized layout (next para) will\n> be a bigger win, and probably easier. There are pretty clear profiling\n> indicators that even the access to the ItemId array in the binary search\n> is most of the time a cache miss and causes a stall - and it makes sense\n> too.\n>\n> I.e. instead of a plain sorted order, store the n ItemIds on a page\n> in the order of\n> [1/2 n, 1/4 n, 3/4 n, 1/8 n, 3/8 n, 5/8 n, 7/8 n, ...]\n> as binary search looks first at 1/2, then at either 1/4 or 3/4, then\n> either (1/8 or 3/8) or (5/8, 7/8), and so on, this layout means that the\n> commonly the first few four levels of the ItemId array are on a *single*\n> cacheline.\n\nYou don't really have to convince me of anything here. I see these as\nessentially the same project already. 
I am only really emphasizing the\nabbreviated keys thing because it's obviously unbeatable with the\nright workload.\n\nWorking on B-Tree stuff for v12 really convinced me of the value of an\nintegrated approach, at least in this area. Everything affects\neverything else, so expanding the scope of a project can actually be\nreally helpful. It's very normal for these optimizations to be worth a\nlot more when combined than they are worth individually. I know that\nyou have had similar experiences in other areas of the code.\n\n> I think this works particularly well for inner btree pages, because we\n> don't rewrite their itemid lists all that often, so the somewhat higher\n> cost of that doesn't matter much, and similarly, the higher cost of\n> sequentially iterating, isn't significant either.\n\nObviously all of these techniques are only practical because of the\nasymmetry between leaf pages and internal pages. Internal pages are\nwhere the large majority of comparisons are performed in most OLTP\nworkloads, and yet their tuples are often only about one third of one\npercent of the total number of tuples in the B-Tree. That is the\nspecific ratio within the pgbench indexes, IIRC. Having more than one\npercent of all tuples come from internal pages is fairly exceptional\n-- you only really see it in indexes that are on very long text\nstrings.\n\n> I do completely agree that having a small high-entropy abbreviated key\n> inside the ItemId would be an awesome improvement, as it can entirely\n> remove many of the hard to predict accesses. My gut feeling is just that\n> a) it's a pretty darn hard project.\n> b) it'll be a smaller win as long as there's an unpredictable access to\n> the abbreviated key\n\nIt will be relatively straightforward to come up with a basic\nabbreviated keys prototype that targets one particular data\ndistribution and index type, though. For example, I can focus on your\npgbench SELECT workload. 
That way, I won't have to do any of the hard\nwork of making abbreviated keys work with a variety of workloads,\nwhile still getting a good idea of the benefits in one specific case.\nFor this prototype, I can either not do prefix compression to get a\nhigh entropy abbreviated key, or do the prefix compression in a way\nthat is totally inflexible, but still works well enough for this\ninitial test workload.\n\nMy estimate is that it would take me 4 - 6 weeks to write a prototype\nalong those lines. That isn't so bad.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Jan 2020 15:01:51 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
}
]
[
{
"msg_contents": "Hi, I noticed that since PostgreSQL 12, Tid scan increments value of\npg_stat_all_tables.seq_scan. (but not seq_tup_read)\n\nThe following is an example.\n\nCREATE TABLE t (c int);\nINSERT INTO t SELECT 1;\nSET enable_seqscan to off;\n\n(v12 -)\n=# EXPLAIN ANALYZE SELECT * FROM t WHERE ctid = '(0,1)';\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Tid Scan on t (cost=0.00..4.01 rows=1 width=4) (actual\ntime=0.034..0.035 rows=1 loops=1)\n TID Cond: (ctid = '(0,1)'::tid)\n Planning Time: 0.341 ms\n Execution Time: 0.059 ms\n(4 rows)\n\n=# SELECT seq_scan, seq_tup_read FROM pg_stat_user_tables WHERE relname = 't';\n seq_scan | seq_tup_read\n----------+--------------\n 1 | 0\n(1 row)\n\n(- v11)\n=# EXPLAIN ANALYZE SELECT * FROM t WHERE ctid = '(0,1)';\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Tid Scan on t (cost=0.00..4.01 rows=1 width=4) (actual\ntime=0.026..0.027 rows=1 loops=1)\n TID Cond: (ctid = '(0,1)'::tid)\n Planning Time: 1.003 ms\n Execution Time: 0.068 ms\n(4 rows)\n\npostgres=# SELECT seq_scan, seq_tup_read FROM pg_stat_user_tables\nWHERE relname = 't';\n seq_scan | seq_tup_read\n----------+--------------\n 0 | 0\n(1 row)\n\n\nExactly, this change occurred from following commit.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=147e3722f7e531f15ba389a4d518efe8cd0bd736)\nI think, from this commit, TidListEval() came to call\ntable_beginscan() , so this increments would be happen.\n\nI'm not sure this change whether intention or not, it can confuse some users.\n\nBest regards,\n--\nNTT Open Source Software Center\nTatsuhito Kasahara\n\n\n",
"msg_date": "Mon, 27 Jan 2020 14:35:03 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tid scan increments value of pg_stat_all_tables.seq_scan. (but not\n seq_tup_read)"
},
{
"msg_contents": "Hi.\n\nAttached patch solve this problem.\n\nThis patch adds table_beginscan_tid() and call it in TidListEval()\ninstead of table_beginscan().\ntable_beginscan_tid() is the same as table_beginscan() but do not set\nSO_TYPE_SEQSCAN to flags.\n\nAlthough I'm not sure this behavior is really problem or not,\nit seems to me that previous behavior is more prefer.\n\nIs it worth to apply to HEAD and v12 branch ?\n\nBest regards,\n\n2020年1月27日(月) 14:35 Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>:\n>\n> Hi, I noticed that since PostgreSQL 12, Tid scan increments value of\n> pg_stat_all_tables.seq_scan. (but not seq_tup_read)\n>\n> The following is an example.\n>\n> CREATE TABLE t (c int);\n> INSERT INTO t SELECT 1;\n> SET enable_seqscan to off;\n>\n> (v12 -)\n> =# EXPLAIN ANALYZE SELECT * FROM t WHERE ctid = '(0,1)';\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------\n> Tid Scan on t (cost=0.00..4.01 rows=1 width=4) (actual\n> time=0.034..0.035 rows=1 loops=1)\n> TID Cond: (ctid = '(0,1)'::tid)\n> Planning Time: 0.341 ms\n> Execution Time: 0.059 ms\n> (4 rows)\n>\n> =# SELECT seq_scan, seq_tup_read FROM pg_stat_user_tables WHERE relname = 't';\n> seq_scan | seq_tup_read\n> ----------+--------------\n> 1 | 0\n> (1 row)\n>\n> (- v11)\n> =# EXPLAIN ANALYZE SELECT * FROM t WHERE ctid = '(0,1)';\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------\n> Tid Scan on t (cost=0.00..4.01 rows=1 width=4) (actual\n> time=0.026..0.027 rows=1 loops=1)\n> TID Cond: (ctid = '(0,1)'::tid)\n> Planning Time: 1.003 ms\n> Execution Time: 0.068 ms\n> (4 rows)\n>\n> postgres=# SELECT seq_scan, seq_tup_read FROM pg_stat_user_tables\n> WHERE relname = 't';\n> seq_scan | seq_tup_read\n> ----------+--------------\n> 0 | 0\n> (1 row)\n>\n>\n> Exactly, this change occurred from following commit.\n> 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=147e3722f7e531f15ba389a4d518efe8cd0bd736\n> I think that, from this commit, TidListEval() came to call\n> table_beginscan(), so this increment happens.\n>\n> I'm not sure whether this change was intentional or not, but it can confuse some users.\n>\n> Best regards,\n> --\n> NTT Open Source Software Center\n> Tatsuhito Kasahara\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Wed, 29 Jan 2020 20:06:06 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "\n\nOn 2020/01/29 20:06, Kasahara Tatsuhito wrote:\n> Hi.\n> \n> Attached patch solve this problem.\n> \n> This patch adds table_beginscan_tid() and call it in TidListEval()\n> instead of table_beginscan().\n> table_beginscan_tid() is the same as table_beginscan() but do not set\n> SO_TYPE_SEQSCAN to flags.\n> \n> Although I'm not sure this behavior is really problem or not,\n> it seems to me that previous behavior is more prefer.\n> \n> Is it worth to apply to HEAD and v12 branch ?\n\nI've not read the patch yet, but I agree that updating only seq_scan\nbut not seq_tup_read in Tid Scan sounds strange. IMO at least\nboth should be update together or neither should be updated.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 29 Jan 2020 23:24:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "Hello.\n\nAt Wed, 29 Jan 2020 23:24:09 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2020/01/29 20:06, Kasahara Tatsuhito wrote:\n> > Hi.\n> > Attached patch solve this problem.\n> > This patch adds table_beginscan_tid() and call it in TidListEval()\n> > instead of table_beginscan().\n> > table_beginscan_tid() is the same as table_beginscan() but do not set\n> > SO_TYPE_SEQSCAN to flags.\n> > Although I'm not sure this behavior is really problem or not,\n> > it seems to me that previous behavior is more prefer.\n> > Is it worth to apply to HEAD and v12 branch ?\n> \n> I've not read the patch yet, but I agree that updating only seq_scan\n> but not seq_tup_read in Tid Scan sounds strange. IMO at least\n> both should be update together or neither should be updated.\n\nBasically agreed, but sample scan doesn't increment seq_scan but\nincrements seq_tup_read.\n\nAside from that fact, before 147e3722f7 TidScan didn't need a scan\ndescriptor so didn't call table_beginscan. table_beginscan didn't\nincrement the counter for bitmapscan and samplescan. The commit\nchanges TidScan to call beginscan but didn't change table_beginscan\nnot to increment the counter for tidscans.\n\n From the view of the view pg_stat_*_tables, the counters moves as follows.\n\n increments\nscan type table_beginscan?, per scan, per tuple , SO_TYPE flags\n=============================================================================\nseq scan : yes , seq_scan, seq_tup_read , SO_TYPE_SEQSCAN\nindex scan : no , idx_scan, idx_tup_fetch , <none>\nbitmap scan: yes , idx_scan, idx_tup_fetch , SO_TYPE_BITMAPSCAN\nsample scan: yes , <none> , seq_tup_read , SO_TYPE_SAMPLESCAN\nTID scan : yes , seq_scan, <none> , <none>\n\nbitmap scan and sample scan are historically excluded by corresponding\nflags is_bitmapscan and is_samplescan and the commit c3b23ae457 moved\nthe work to SO_TYPE_* flags. 
After 147e3722f7, TID scan has the same\ncharacteristics, that is, it calls table_beginscan but doesn't\nincrement seq_scan. But it doesn't have a corresponding flag value.\n\nI'd rather think that whatever calls table_beginscan should have a\ncorresponding SO_TYPE_* flag. (Note: index scan doesn't call it.)\n\n\nWhat we should do for the sample scan case would be a separate issue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 30 Jan 2020 10:54:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jan 30, 2020 at 10:55 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 29 Jan 2020 23:24:09 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> > On 2020/01/29 20:06, Kasahara Tatsuhito wrote:\n> > > Although I'm not sure this behavior is really problem or not,\n> > > it seems to me that previous behavior is more prefer.\n> > > Is it worth to apply to HEAD and v12 branch ?\n> >\n> > I've not read the patch yet, but I agree that updating only seq_scan\n> > but not seq_tup_read in Tid Scan sounds strange. IMO at least\n> > both should be update together or neither should be updated.\n>\n> Basically agreed, but sample scan doesn't increment seq_scan but\n> increments seq_tup_read.\nYeah, sample scan's behavior is also bit annoying.\n\n\n> From the view of the view pg_stat_*_tables, the counters moves as follows.\nThanks for your clarification.\n\n> TID scan : yes , seq_scan, <none> , <none>\nHere is wrong, because TID scan came to have SO_TYPE_SEQSCAN flags\nfrom commit 147e3722f7.\n\nSo, currently( v12 and HEAD) TID scan status as following\n\n increments\nscan type table_beginscan?, per scan, per tuple , SO_TYPE flags\n=============================================================================\nTID scan : yes , seq_scan, <none> , SO_TYPE_SEQSCAN\n\nAnd my patch change the status to following (same as -v11)\n\n increments\nscan type table_beginscan?, per scan, per tuple , SO_TYPE flags\n=============================================================================\nTID scan : yes , <none>, <none> , <none>\n\n\n> I'd rather think that whatever calls table_beginscan should have\n> corresponding SO_TYPE_* flags. 
(Note: index scan doesn't call it.)\nAgreed.\nIt may be better to add a new flag such as SO_TYPE_TIDSCAN,\nand handle statistics updates and other things there.\nBut that may be a bit of overkill, since I only want to revert to the previous\nbehavior this time.\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Thu, 30 Jan 2020 13:30:56 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "At Thu, 30 Jan 2020 13:30:56 +0900, Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com> wrote in \n> > TID scan : yes , seq_scan, <none> , <none>\n> Here is wrong, because TID scan came to have SO_TYPE_SEQSCAN flags\n> from commit 147e3722f7.\n\nIt is reflectings the discussion below, which means TID scan doesn't\nhave corresponding SO_TYPE_ value. Currently it is setting\nSO_TYPE_SEQSCAN by accedent.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 30 Jan 2020 13:48:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 1:49 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 30 Jan 2020 13:30:56 +0900, Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com> wrote in\n> > > TID scan : yes , seq_scan, <none> , <none>\n> > Here is wrong, because TID scan came to have SO_TYPE_SEQSCAN flags\n> > from commit 147e3722f7.\n>\n> It is reflectings the discussion below, which means TID scan doesn't\n> have corresponding SO_TYPE_ value. Currently it is setting\n> SO_TYPE_SEQSCAN by accedent.\nAh, sorry I misunderstood..\n\nUpon further investigation, the SO_TYPE_SEQSCAN flag was also used at\nheap_beginscan() to determine whether a predicate lock was taken on\nthe entire relation.\n\n if (scan->rs_base.rs_flags & (SO_TYPE_SEQSCAN | SO_TYPE_SAMPLESCAN))\n {\n /*\n * Ensure a missing snapshot is noticed reliably, even if the\n * isolation mode means predicate locking isn't performed (and\n * therefore the snapshot isn't used here).\n */\n Assert(snapshot);\n PredicateLockRelation(relation, snapshot);\n }\n\nTherefore, it can not simply remove the SO_TYPE_SEQSCAN flag from a TID scan.\nTo keep the old behavior, I think it would be better to add a new\nSO_TYPE_TIDSCAN flag and take a predicate lock on the entire relation.\n\nAttach the v2 patch which change the status to following. (same as\n-v11 but have new SO_TYPE_TIDSCAN flag)\n\n increments\nscan type table_beginscan?, per scan, per tuple , SO_TYPE flags\n=============================================================================\nTID scan : yes , <none>, <none> , SO_TYPE_TIDSCAN\n\nIs it acceptable change for HEAD and v12?\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Sat, 1 Feb 2020 16:05:04 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "\n\nOn 2020/02/01 16:05, Kasahara Tatsuhito wrote:\n> On Thu, Jan 30, 2020 at 1:49 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> At Thu, 30 Jan 2020 13:30:56 +0900, Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com> wrote in\n>>>> TID scan : yes , seq_scan, <none> , <none>\n>>> Here is wrong, because TID scan came to have SO_TYPE_SEQSCAN flags\n>>> from commit 147e3722f7.\n>>\n>> It is reflectings the discussion below, which means TID scan doesn't\n>> have corresponding SO_TYPE_ value. Currently it is setting\n>> SO_TYPE_SEQSCAN by accedent.\n> Ah, sorry I misunderstood..\n> \n> Upon further investigation, the SO_TYPE_SEQSCAN flag was also used at\n> heap_beginscan() to determine whether a predicate lock was taken on\n> the entire relation.\n> \n> if (scan->rs_base.rs_flags & (SO_TYPE_SEQSCAN | SO_TYPE_SAMPLESCAN))\n> {\n> /*\n> * Ensure a missing snapshot is noticed reliably, even if the\n> * isolation mode means predicate locking isn't performed (and\n> * therefore the snapshot isn't used here).\n> */\n> Assert(snapshot);\n> PredicateLockRelation(relation, snapshot);\n> }\n> \n> Therefore, it can not simply remove the SO_TYPE_SEQSCAN flag from a TID scan.\n> To keep the old behavior, I think it would be better to add a new\n> SO_TYPE_TIDSCAN flag and take a predicate lock on the entire relation.\n\nBut in the old behavior, PredicateLockRelation() was not called in TidScan case\nbecause its flag was not SO_TYPE_SEQSCAN. No?\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 3 Feb 2020 16:22:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 4:22 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/01 16:05, Kasahara Tatsuhito wrote:\n> > if (scan->rs_base.rs_flags & (SO_TYPE_SEQSCAN | SO_TYPE_SAMPLESCAN))\n> > {\n> > /*\n> > * Ensure a missing snapshot is noticed reliably, even if the\n> > * isolation mode means predicate locking isn't performed (and\n> > * therefore the snapshot isn't used here).\n> > */\n> > Assert(snapshot);\n> > PredicateLockRelation(relation, snapshot);\n> > }\n> >\n> > Therefore, it can not simply remove the SO_TYPE_SEQSCAN flag from a TID scan.\n> > To keep the old behavior, I think it would be better to add a new\n> > SO_TYPE_TIDSCAN flag and take a predicate lock on the entire relation.\n>\n> But in the old behavior, PredicateLockRelation() was not called in TidScan case\n> because its flag was not SO_TYPE_SEQSCAN. No?\nNo. Tid scan called PredicateLockRelation() both previous and current.\n\nIn the current (v12 and HEAD), Tid scan has SO_TYPE_SEQSCAN flag so\nthat PredicateLockRelation()is called in Tid scan.\nIn the Previous (- v11), in heap_beginscan_internal(), checks\nis_bitmapscan flags.\nIf is_bitmapscan is set to false, calls PredicateLockRelation().\n\n(- v11)\n if (!is_bitmapscan)\n PredicateLockRelation(relation, snapshot);\n\nAnd in the Tid scan, is_bitmapscan is set to false, so that\nPredicateLockRelation()is called in Tid scan.\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Mon, 3 Feb 2020 16:39:51 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "\n\nOn 2020/02/03 16:39, Kasahara Tatsuhito wrote:\n> On Mon, Feb 3, 2020 at 4:22 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/01 16:05, Kasahara Tatsuhito wrote:\n>>> if (scan->rs_base.rs_flags & (SO_TYPE_SEQSCAN | SO_TYPE_SAMPLESCAN))\n>>> {\n>>> /*\n>>> * Ensure a missing snapshot is noticed reliably, even if the\n>>> * isolation mode means predicate locking isn't performed (and\n>>> * therefore the snapshot isn't used here).\n>>> */\n>>> Assert(snapshot);\n>>> PredicateLockRelation(relation, snapshot);\n>>> }\n>>>\n>>> Therefore, it can not simply remove the SO_TYPE_SEQSCAN flag from a TID scan.\n>>> To keep the old behavior, I think it would be better to add a new\n>>> SO_TYPE_TIDSCAN flag and take a predicate lock on the entire relation.\n>>\n>> But in the old behavior, PredicateLockRelation() was not called in TidScan case\n>> because its flag was not SO_TYPE_SEQSCAN. No?\n> No. Tid scan called PredicateLockRelation() both previous and current.\n> \n> In the current (v12 and HEAD), Tid scan has SO_TYPE_SEQSCAN flag so\n> that PredicateLockRelation()is called in Tid scan.\n> In the Previous (- v11), in heap_beginscan_internal(), checks\n> is_bitmapscan flags.\n> If is_bitmapscan is set to false, calls PredicateLockRelation().\n> \n> (- v11)\n> if (!is_bitmapscan)\n> PredicateLockRelation(relation, snapshot);\n> \n> And in the Tid scan, is_bitmapscan is set to false, so that\n> PredicateLockRelation()is called in Tid scan.\n\nThanks for explaining that! But heap_beginscan_internal() was really\ncalled in TidScan case?\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 3 Feb 2020 17:32:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 5:33 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Thanks for explaining that! But heap_beginscan_internal() was really\n> called in TidScan case?\nOh, you are right.\nTid Scan started calling table_beginscan from v12 (commit 147e3722f7).\nSo previously(-v11), Tid Scan might never calls heap_beginscan_internal().\n\nTherefore, from v12, Tid scan not only increases the value of\nseq_scan, but also acquires a predicate lock.\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Mon, 3 Feb 2020 18:20:49 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 6:20 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n> Therefore, from v12, Tid scan not only increases the value of\n> seq_scan, but also acquires a predicate lock.\n\nBased on further investigation and Fujii's advice, I've summarized\nthis issue as follows.\n\n From commit 147e3722f7, Tid Scan came to\n(A) increments num of seq_scan on pg_stat_*_tables\nand\n(B) take a predicate lock on the entire relation.\n\n(A) may be confusing to users, so I think it is better to fix it.\nFor (B), an unexpected serialization error has occurred as follows, so\nI think it should be fix.\n\n=========================================================================\n[Preparation]\nCREATE TABLE tid_test (c1 int, c2 int);\nINSERT INTO tid_test SELECT generate_series(1,1000), 0;\n\n\n[Session-1:]\nBEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE ;\n [Session-2:]\n BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE ;\n[Session-1:]\nSELECT * FROM tid_test WHERE ctid = '(0,1)';\n [Session-2:]\n SELECT * FROM tid_test WHERE ctid = '(1,1)';\n[Session-1:]\nINSERT INTO tid_test SELECT 1001, 10;\n [Session-2:]\n INSERT INTO tid_test SELECT 1001, 10;\n[Session-1:]\nCOMMIT;\n [Session-2:]\n COMMIT;\n\nResult:\n(-v11): Both session could commit.\n(v12-): Session-2 raised error as following because of taking a\npredicate lock on the entire table...\n--------\nERROR: could not serialize access due to read/write dependencies\namong transactions\nDETAIL: Reason code: Canceled on identification as a pivot, during\ncommit attempt.\nHINT: The transaction might succeed if retried.\n--------\n=========================================================================\n\nAttached patch fix both (A) and (B), so that the behavior of Tid Scan\nback to the same as before v11.\n(As a result, this patch is the same as the one that first attached.)\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Wed, 5 Feb 2020 16:25:25 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-05 16:25:25 +0900, Kasahara Tatsuhito wrote:\n> On Mon, Feb 3, 2020 at 6:20 PM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n> > Therefore, from v12, Tid scan not only increases the value of\n> > seq_scan, but also acquires a predicate lock.\n> \n> Based on further investigation and Fujii's advice, I've summarized\n> this issue as follows.\n> \n> From commit 147e3722f7, Tid Scan came to\n> (A) increments num of seq_scan on pg_stat_*_tables\n> and\n> (B) take a predicate lock on the entire relation.\n> \n> (A) may be confusing to users, so I think it is better to fix it.\n> For (B), an unexpected serialization error has occurred as follows, so\n> I think it should be fix.\n\nI think it'd be good if we could guard against b) via an isolation\ntest. It's more painful to do that for a), due to the unreliability of\nstats at the moment (we have some tests, but they take a long time).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Feb 2020 18:48:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 6, 2020 at 11:48 AM Andres Freund <andres@anarazel.de> wrote:\n> I think it'd be good if we could guard against b) via an isolation\n> test. It's more painful to do that for a), due to the unreliability of\n> stats at the moment (we have some tests, but they take a long time).\nThanks for your advise, and agreed.\n\nI added a new (but minimal) isolation test for the case of tid scan.\n(v12 and HEAD will be failed this test. v11 and HEAD with my patch\nwill be passed)\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Thu, 6 Feb 2020 15:04:58 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "\n\nOn 2020/02/06 15:04, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Thu, Feb 6, 2020 at 11:48 AM Andres Freund <andres@anarazel.de> wrote:\n>> I think it'd be good if we could guard against b) via an isolation\n>> test. It's more painful to do that for a), due to the unreliability of\n>> stats at the moment (we have some tests, but they take a long time).\n> Thanks for your advise, and agreed.\n> \n> I added a new (but minimal) isolation test for the case of tid scan.\n> (v12 and HEAD will be failed this test. v11 and HEAD with my patch\n> will be passed)\n\nIsn't this test scenario a bit overkill? We can simply test that\nas follows, instead.\n\nCREATE TABLE test_tidscan AS SELECT 1 AS id;\nBEGIN ISOLATION LEVEL SERIALIZABLE;\nSELECT * FROM test_tidscan WHERE ctid = '(0,1)';\nSELECT locktype, mode FROM pg_locks WHERE pid = pg_backend_pid() AND mode = 'SIReadLock';\nCOMMIT;\n\nIn the expected file, the result of query looking at pg_locks\nshould be matched with the following.\n\n locktype | mode\n----------+------------\n tuple | SIReadLock\n\nBTW, in master branch, locktype in that query result is \"relation\"\nbecause of the issue.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 6 Feb 2020 15:24:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "HI,\n\nOn Thu, Feb 6, 2020 at 3:24 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > I added a new (but minimal) isolation test for the case of tid scan.\n> > (v12 and HEAD will be failed this test. v11 and HEAD with my patch\n> > will be passed)\n>\n> Isn't this test scenario a bit overkill? We can simply test that\n> as follows, instead.\n> CREATE TABLE test_tidscan AS SELECT 1 AS id;\n> BEGIN ISOLATION LEVEL SERIALIZABLE;\n> SELECT * FROM test_tidscan WHERE ctid = '(0,1)';\n> SELECT locktype, mode FROM pg_locks WHERE pid = pg_backend_pid() AND mode = 'SIReadLock';\n> COMMIT;\n>\n> In the expected file, the result of query looking at pg_locks\n> should be matched with the following.\n>\n> locktype | mode\n> ----------+------------\n> tuple | SIReadLock\nThanks for your reply.\nHmm, it's an simple and might be the better way than adding isolation test.\n\nI added above test case to regress/sql/tidscan.sql.\nAttach the patch.\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Thu, 6 Feb 2020 19:11:46 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "On Thu, 6 Feb 2020 at 19:12, Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> HI,\n>\n> On Thu, Feb 6, 2020 at 3:24 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > I added a new (but minimal) isolation test for the case of tid scan.\n> > > (v12 and HEAD will be failed this test. v11 and HEAD with my patch\n> > > will be passed)\n> >\n> > Isn't this test scenario a bit overkill? We can simply test that\n> > as follows, instead.\n> > CREATE TABLE test_tidscan AS SELECT 1 AS id;\n> > BEGIN ISOLATION LEVEL SERIALIZABLE;\n> > SELECT * FROM test_tidscan WHERE ctid = '(0,1)';\n> > SELECT locktype, mode FROM pg_locks WHERE pid = pg_backend_pid() AND mode = 'SIReadLock';\n> > COMMIT;\n> >\n> > In the expected file, the result of query looking at pg_locks\n> > should be matched with the following.\n> >\n> > locktype | mode\n> > ----------+------------\n> > tuple | SIReadLock\n> Thanks for your reply.\n> Hmm, it's an simple and might be the better way than adding isolation test.\n>\n> I added above test case to regress/sql/tidscan.sql.\n> Attach the patch.\n>\n\nI've tested predicate locking including promotion cases with v3 patch\nand it works fine.\n\n+table_beginscan_tid(Relation rel, Snapshot snapshot,\n+ int nkeys, struct ScanKeyData *key)\n+{\n+ uint32 flags = SO_ALLOW_STRAT | SO_ALLOW_SYNC | SO_ALLOW_PAGEMODE;\n\nIIUC setting SO_ALLOW_STRAT, SO_ALLOW_SYNC and SO_ALLOW_PAGEMODE has\nno meaning during tid scan. I think v11 also should be the same.\n\nWhy did you remove SO_TYPE_TIDSCAN from the previous version patch? I\nthink it's better to add it and then we can set only SO_TYPE_TIDSCAN\nto the scan option of tid scan.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 6 Feb 2020 23:01:14 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 6, 2020 at 11:01 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 6 Feb 2020 at 19:12, Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n> I've tested predicate locking including promotion cases with v3 patch\n> and it works fine.\n>\n> +table_beginscan_tid(Relation rel, Snapshot snapshot,\n> + int nkeys, struct ScanKeyData *key)\n> +{\n> + uint32 flags = SO_ALLOW_STRAT | SO_ALLOW_SYNC | SO_ALLOW_PAGEMODE;\n>\n> IIUC setting SO_ALLOW_STRAT, SO_ALLOW_SYNC and SO_ALLOW_PAGEMODE has\n> no meaning during tid scan. I think v11 also should be the same.\nThanks for your check, and useful advise.\nI was wondering if I should keep these flags, but I confirmed that I\ncan remove these from TidScan's flags.\n\n> Why did you remove SO_TYPE_TIDSCAN from the previous version patch? I\n> think it's better to add it and then we can set only SO_TYPE_TIDSCAN\n> to the scan option of tid scan.\nBecause, currently SO_TYPE_TIDSCAN is not used anywhere.\nSo I thought it might be better to avoid adding a new flag.\nHowever, as you said, this flag may be useful for the future tid scan\nfeature (like [1])\n\nAttach v4 patch. I re-added SO_TYPE_TIDSCAN to ScanOptions and\ntable_beginscan_tid.\nAnd I remove unnecessary SO_ALLOW_* flags.\n\nBest regards,\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKJS1f-%2BJJpm6B_NThUWzFv4007zAjObBXX1CBHE_bH9nOAvSw%40mail.gmail.com#1ae648acdc2df930b19218b6026135d3\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Fri, 7 Feb 2020 12:27:26 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "At Fri, 7 Feb 2020 12:27:26 +0900, Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com> wrote in \n> > IIUC setting SO_ALLOW_STRAT, SO_ALLOW_SYNC and SO_ALLOW_PAGEMODE has\n> > no meaning during tid scan. I think v11 also should be the same.\n> Thanks for your check, and useful advise.\n> I was wondering if I should keep these flags, but I confirmed that I\n> can remove these from TidScan's flags.\n> \n> > Why did you remove SO_TYPE_TIDSCAN from the previous version patch? I\n> > think it's better to add it and then we can set only SO_TYPE_TIDSCAN\n> > to the scan option of tid scan.\n> Because, currently SO_TYPE_TIDSCAN is not used anywhere.\n> So I thought it might be better to avoid adding a new flag.\n> However, as you said, this flag may be useful for the future tid scan\n> feature (like [1])\n> \n> Attach v4 patch. I re-added SO_TYPE_TIDSCAN to ScanOptions and\n> table_beginscan_tid.\n> And I remove unnecessary SO_ALLOW_* flags.\n\n+table_beginscan_tid(Relation rel, Snapshot snapshot,\n+\t\t\t\tint nkeys, struct ScanKeyData *key)\n+{\n+\tuint32\t\tflags = SO_TYPE_TIDSCAN;\n+\n+\treturn rel->rd_tableam->scan_begin(rel, snapshot, nkeys, key, NULL, flags);\n\nIt seems that nkeys and key are useless. Since every table_beginscan_*\nfunctions have distinct parameter sets, don't we remove them from\ntable_beginscan_tid?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 07 Feb 2020 13:26:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 7, 2020 at 1:27 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> It seems that nkeys and key are useless. Since every table_beginscan_*\n> functions have distinct parameter sets, don't we remove them from\n> table_beginscan_tid?\nYeah, actually, when calling table_beginscan_tid(), nkeys is set to 0\nand * key is set to NULL,\nso these are not used at the moment.\n\nI removed unnecessary arguments from table_beginscan_tid().\n\nAttache the v5 patch.\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Fri, 7 Feb 2020 15:07:51 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "\n\nOn 2020/02/07 15:07, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Fri, Feb 7, 2020 at 1:27 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> It seems that nkeys and key are useless. Since every table_beginscan_*\n>> functions have distinct parameter sets, don't we remove them from\n>> table_beginscan_tid?\n> Yeah, actually, when calling table_beginscan_tid(), nkeys is set to 0\n> and * key is set to NULL,\n> so these are not used at the moment.\n> \n> I removed unnecessary arguments from table_beginscan_tid().\n> \n> Attache the v5 patch.\n\nThanks for updating the patch! The patch looks good to me.\nSo barring any objection, I will push it and back-patch to v12 *soon*\nso that the upcoming minor version can contain it.\n\nBTW, commit 147e3722f7 causing the issue changed currtid_byreloid()\nand currtid_byrelname() so that they also call table_beginscan().\nI'm not sure what those functions are, but probably we should fix them\nso that table_beginscan_tid() is called instead. Thought?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 7 Feb 2020 17:01:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 7, 2020 at 5:02 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> BTW, commit 147e3722f7 causing the issue changed currtid_byreloid()\n> and currtid_byrelname() so that they also call table_beginscan().\n> I'm not sure what those functions are, but probably we should fix them\n> so that table_beginscan_tid() is called instead. Thought?\n+1, sorry, I overlooked it.\n\nBoth functions are used to check whether a valid tid or not with a\nrelation name (or oid),\nand both perform a tid scan internally.\nSo, these functions should call table_beginscan_tid().\n\nPerhaps unnecessary, I will attach a patch.\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com",
"msg_date": "Fri, 7 Feb 2020 17:28:43 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "At Fri, 7 Feb 2020 17:01:59 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/02/07 15:07, Kasahara Tatsuhito wrote:\n> > Hi,\n> > On Fri, Feb 7, 2020 at 1:27 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> It seems that nkeys and key are useless. Since every table_beginscan_*\n> >> functions have distinct parameter sets, don't we remove them from\n> >> table_beginscan_tid?\n> > Yeah, actually, when calling table_beginscan_tid(), nkeys is set to 0\n> > and * key is set to NULL,\n> > so these are not used at the moment.\n> > I removed unnecessary arguments from table_beginscan_tid().\n> > Attache the v5 patch.\n> \n> Thanks for updating the patch! The patch looks good to me.\n> So barring any objection, I will push it and back-patch to v12 *soon*\n> so that the upcoming minor version can contain it.\n> \n> BTW, commit 147e3722f7 causing the issue changed currtid_byreloid()\n> and currtid_byrelname() so that they also call table_beginscan().\n> I'm not sure what those functions are, but probably we should fix them\n> so that table_beginscan_tid() is called instead. Thought?\n\nAt least they don't seem to need table_beginscan(), to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 07 Feb 2020 17:34:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "\n\nOn 2020/02/07 17:28, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Fri, Feb 7, 2020 at 5:02 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> BTW, commit 147e3722f7 causing the issue changed currtid_byreloid()\n>> and currtid_byrelname() so that they also call table_beginscan().\n>> I'm not sure what those functions are, but probably we should fix them\n>> so that table_beginscan_tid() is called instead. Thought?\n> +1, sorry, I overlooked it.\n> \n> Both functions are used to check whether a valid tid or not with a\n> relation name (or oid),\n> and both perform a tid scan internally.\n> So, these functions should call table_beginscan_tid().\n> \n> Perhaps unnecessary, I will attach a patch.\n\nPushed! Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 7 Feb 2020 22:09:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
},
{
"msg_contents": "On Fri, Feb 7, 2020 at 10:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Pushed! Thanks!\nThanks Fujii.\n\n\n--\nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n",
"msg_date": "Fri, 7 Feb 2020 23:25:46 +0900",
"msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tid scan increments value of pg_stat_all_tables.seq_scan. (but\n not seq_tup_read)"
}
] |
[
{
"msg_contents": "Hi,\n\nRecent developments on the \"backup manifest\" thread and others have\ncaused me to take an interest in making more code that has\nhistorically been backend-only accessible in frontend environments\nalso. I'm pretty sure I'm not alone in having often wished for more\nbackend-only facilities to be available in front-end code, because\nmany of them are very convenient, but up until now I haven't been\nmotivated enough to do much about it.\n\nProbably the thorniest problem is the backend's widespread dependence\non ereport() and elog(). Now, it would not be enormously difficult to\ncode up something that will sigsetjmp() and siglongjmp() in front-end\ncode just as we do in backend code, but I think it would be largely\nmissing the point. Just jumping out of a function some place and back\nto the top level is not very safe, because you will tend to leak\nresources and leave data structures in broken states. In the backend,\nwe've made this safe via transaction cleanup, which arranges to\nrelease resources, including memory, at the proper times. Getting this\nto work everywhere and for all the kinds of resources that matter has\nbeen a pretty significant undertaking; while frontend environments\ntend to be simpler, I think it's likely to take a good deal of work\nand thought to figure it all out. There's another problem, too: when\nereport() is used in the backend, it reports not only an error message\nbut a bunch of other things like an error code, an optional error\ndetail, and an error context. It might be good to introduce some of\nthese concepts on the frontend side, but it will be confusing if\nfrontend errors start getting reported just like backend errors, so\nhere again I feel we need to think carefully.\n\nThat being said, not all uses of ereport() and elog() are created\nequal. 
Sometimes, they are just used to report warnings, which\ntypically don't contain much more than a primary message, and\nsometimes they are used to report can't-happen conditions, which ERROR\nin backend code and could probably just print a message and exit() in\nfrontend code. Providing a general way to do this kind of thing seems\na lot easier than solving the whole problem, and it would allow us to\navoid continuing to copy stuff like this:\n\n#ifndef FRONTEND\n#define pg_log_warning(...) elog(WARNING, __VA_ARGS__)\n#else\n#include \"common/logging.h\"\n#endif\n\nWe have two copies of that already, and I don't think we should\ncontinue to add more. So, what I'd like to propose is a pair of\nmacros, one of which arranges for a warning of an appropriate sort for\neither a backend or frontend environment, and the other of which is\nused for a can't happen condition that should either ERROR or just\nexit. Taking a page from my Perl programming background, I propose to\ncall these pg_carp() and pg_croak(), although I'm not in love with\nthose names so let the bikeshedding commence. Something like:\n\n#ifdef FRONTEND\n#define pg_croak(...) do { pg_log_fatal(__VA_ARGS__); exit(1); } while (0)\n#define pg_carp(...) pg_log_warning(__VA_ARGS__)\n#else\n#define pg_croak(...) elog(ERROR, __VA_ARGS__)\n#define pg_carp(...) elog(WARNING, __VA_ARGS__)\n#endif\n\nIt is of course somewhat questionable to consider a \"croak\" as\neffectively *fatal* on the frontend side but only ERROR rather than\nFATAL on the backend side, but for the kinds of things I'm looking at\nit seems like the most useful definition. If the JSON parser goes\nhorribly wrong due to some logic bug, it is neither useful nor\nfriendly to terminate the session, but a frontend tool is probably\nfine for it to just exit. 
The user perception will be, in each case,\nthat the last command they attempted (either from the command-line or\nfrom psql, as the case may be) failed but that they are free to enter\nmore commands without needing to start a new session (either of psql\nor bash or whatever).\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 09:29:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_croak, or something like it?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Probably the thorniest problem is the backend's widespread dependence\n> on ereport() and elog(). Now, it would not be enormously difficult to\n> code up something that will sigsetjmp() and siglongjmp() in front-end\n> code just as we do in backend code, but I think it would be largely\n> missing the point.\n\nAgreed that we don't want to introduce anything like transaction\nabort recovery in our frontend tools.\n\n> That being said, not all uses of ereport() and elog() are created\n> equal. Sometimes, they are just used to report warnings, which\n> typically don't contain much more than a primary message, and\n> sometimes they are used to report can't-happen conditions, which ERROR\n> in backend code and could probably just print a message and exit() in\n> frontend code. Providing a general way to do this kind of thing seems\n> a lot easier than solving the whole problem,\n\nI think the $64 question is whether handling those two cases is sufficient\nfor *all* elog/ereport usages in the code we'd like to move to src/common.\nIf it is, great; but if it isn't, we still have a problem to solve, and\nit's not clear that solving that problem won't yield a better answer for\nthese cases too.\n\n> Taking a page from my Perl programming background, I propose to\n> call these pg_carp() and pg_croak(), although I'm not in love with\n> those names so let the bikeshedding commence.\n\nYeah, I'm not in love with those names either, partly because I think\nthey carry some baggage about when to use them and what the reports\nare likely to look like. But sans caffeine, I don't have a better idea\neither.\n\n> Something like:\n> #ifdef FRONTEND\n> #define pg_croak(...) do { pg_log_fatal(__VA_ARGS__); exit(1) } while (0)\n> #define pg_carp(...) pg_log_warning(__VA_ARGS__);\n> #else\n> #define pg_croak(...) elog(ERROR, __VA_ARGS__)\n> #define pg_carp(...) 
elog(WARNING, __VA_ARGS__)\n> #endif\n\nHm, the thing that jumps out at me about those is the lack of attention\nto translatability. Sure, for really \"can't happen\" cases we probably\njust want to use bare elog with a non-translated message. But warnings\nare user-facing, and there will be an enormous temptation to use the\ncroak mechanism for user-facing errors too, and those should be\ntranslated. There's also a problem that there's noplace to add an\nERRCODE; that's flat out not acceptable for backend code that's\nreporting anything but absolutely-cannot-happen cases.\n\n> It is of course somewhat questionable to consider a \"croak\" as\n> effectively *fatal* on the frontend side but only ERROR rather than\n> FATAL on the backend side, but for the kinds of things I'm looking at\n> it seems like the most useful definition.\n\nAgreed there.\n\nWhat I keep thinking is that we should stick with ereport() as the\nreporting notation, and invent a frontend-side implementation of it\nthat covers the cases you mention (ie WARNING and ERROR ... and maybe\nDEBUG?), ignoring any components of the ereport that aren't helpful for\nthe purpose. That'd eliminate the temptation to shave the quality of\nthe backend-side error reports, and we still end up with about the same\nbasic functionality on frontend side.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Jan 2020 10:08:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_croak, or something like it?"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 10:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Something like:\n> > #ifdef FRONTEND\n> > #define pg_croak(...) do { pg_log_fatal(__VA_ARGS__); exit(1) } while (0)\n> > #define pg_carp(...) pg_log_warning(__VA_ARGS__);\n> > #else\n> > #define pg_croak(...) elog(ERROR, __VA_ARGS__)\n> > #define pg_carp(...) elog(WARNING, __VA_ARGS__)\n> > #endif\n>\n> Hm, the thing that jumps out at me about those is the lack of attention\n> to translatability. Sure, for really \"can't happen\" cases we probably\n> just want to use bare elog with a non-translated message. But warnings\n> are user-facing, and there will be an enormous temptation to use the\n> croak mechanism for user-facing errors too, and those should be\n> translated. There's also a problem that there's noplace to add an\n> ERRCODE; that's flat out not acceptable for backend code that's\n> reporting anything but absolutely-cannot-happen cases.\n\nI sorta meant to mention the translatability issues, but it went out\nof my head; not enough caffeine here either, I guess.\npg_log_generic_v() does fmt = _(fmt) and\nFRONTEND_COMMON_GETTEXT_TRIGGERS includes pg_log_error and friends, so\nI guess the idea is that everything that goes through these routines\ngets transalated. But then how does one justify this code that I\nquoted before?\n\n#ifndef FRONTEND\n#define pg_log_warning(...) elog(WARNING, __VA_ARGS__)\n#else\n#include \"common/logging.h\"\n#endif\n\nThese messages are, I guess, user-facing on the frontend, but can't\nhappen on the backend? Uggh.\n\n> What I keep thinking is that we should stick with ereport() as the\n> reporting notation, and invent a frontend-side implementation of it\n> that covers the cases you mention (ie WARNING and ERROR ... and maybe\n> DEBUG?), ignoring any components of the ereport that aren't helpful for\n> the purpose. 
That'd eliminate the temptation to shave the quality of\n> the backend-side error reports, and we still end up with about the same\n> basic functionality on frontend side.\n\nWell, the cases that I'm concerned about use elog(), not ereport(), so\nI don't think this would help me very much. They actually are\ncan't-happen messages. If I were to do what you propose here, I'd then\nhave to run around and convert a bunch of elog() calls to ereport() to\nmake it work. That seems like it's going in the direction of\nincreasing complexity rather than reducing it, and I don't much care\nfor the idea that we should run around changing things like:\n\nelog(ERROR, \"unexpected json parse state: %d\", ctx);\n\nto say:\n\nereport(ERROR, (errmsg_internal(\"unexpected json parse state: %d\", ctx)));\n\nThat's not a step forward in my book, especially because it'll be\n*necessary* for code in src/common and optional in other places. Also,\nusing ereport() -- or elog() -- in frontend code seems like a large\nstep down the slippery slope of leading people to believe that error\nrecovery in the frontend can, does, or should work like error recovery\nin the backend, and I think if we do even the slightest thing to feed\nthat impression we will regret it bitterly.\n\nThat being said, I do agree that there's a danger of people thinking\nthat they can use my proposed pg_croak() for user-facing messages.\nNow, comments would help. (I note in passing that the comments in\ncommon/logging.h make no mention of translatability, which IMHO they\nprobably should.) But we could also try to make it clear via the names\nthemselves, like call the macros cant_happen_error() and\ncant_happen_warning(). I actually thought about that option at one\npoint while I was fiddling around with this, but I am psychologically\nincapable of coping with spelling \"can't\" without an apostrophe.\nHowever, maybe some other spelling would dodge that problem.\npg_cannot_happen_error() and pg_cannot_happen_warning()? 
I don't know.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 10:37:29 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_croak, or something like it?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jan 27, 2020 at 10:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I keep thinking is that we should stick with ereport() as the\n>> reporting notation, and invent a frontend-side implementation of it\n>> that covers the cases you mention (ie WARNING and ERROR ... and maybe\n>> DEBUG?), ignoring any components of the ereport that aren't helpful for\n>> the purpose. That'd eliminate the temptation to shave the quality of\n>> the backend-side error reports, and we still end up with about the same\n>> basic functionality on frontend side.\n\n> Well, the cases that I'm concerned about use elog(), not ereport(), so\n> I don't think this would help me very much.\n\nSo? elog() is just a specific degenerate case of ereport(). If we have\na way to implement ereport() on frontend side then we can surely do\nelog() too.\n\nWhat it sounds to me like you want to do is implement (some equivalent of)\nelog() but not ereport() for this environment. I'm going to resist that\npretty strongly, because I think it will lead directly to abuse of elog()\nfor user-facing errors, with a consequent degradation of the user\nexperience when that code executes on backend side. I do not believe\nthat there are no user-facing error cases in the JSON parser, for\nexample; much less that we'll never introduce any in future.\n\n> That's not a step forward in my book, especially because it'll be\n> *necessary* for code in src/common and optional in other places. Also,\n> using ereport() -- or elog() -- in frontend code seems like a large\n> step down the slippery slope of leading people to believe that error\n> recovery in the frontend can, does, or should work like error recovery\n> in the backend, and I think if we do even the slightest thing to feed\n> that impression we will regret it bitterly.\n\nI'm not exactly buying that. 
For subsystems like the JSON parser,\nat least, either elog or ereport with ERROR just means \"I'm throwing\nup my hands, what happens next does not concern me\". I don't see\nany problem with interpreting that as leading to exit(1) on frontend\nside. I'm also not seeing why using some other, randomly different\nnotation for what are fundamentally going to be the same semantics\nwould really improve the situation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Jan 2020 10:50:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_croak, or something like it?"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 10:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So? elog() is just a specific degenerate case of ereport(). If we have\n> a way to implement ereport() on frontend side then we can surely do\n> elog() too.\n\nI suppose that's true.\n\n> What it sounds to me like you want to do is implement (some equivalent of)\n> elog() but not ereport() for this environment. I'm going to resist that\n> pretty strongly, because I think it will lead directly to abuse of elog()\n> for user-facing errors, with a consequent degradation of the user\n> experience when that code executes on backend side. I do not believe\n> that there are no user-facing error cases in the JSON parser, for\n> example; much less that we'll never introduce any in future.\n\nYou clearly haven't read the thread on this topic, or at least not\nvery carefully.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 10:56:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_croak, or something like it?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jan 27, 2020 at 10:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What it sounds to me like you want to do is implement (some equivalent of)\n>> elog() but not ereport() for this environment. I'm going to resist that\n>> pretty strongly, because I think it will lead directly to abuse of elog()\n>> for user-facing errors, with a consequent degradation of the user\n>> experience when that code executes on backend side. I do not believe\n>> that there are no user-facing error cases in the JSON parser, for\n>> example; much less that we'll never introduce any in future.\n\n> You clearly haven't read the thread on this topic, or at least not\n> very carefully.\n\nI have not, but I'm still going to stand by that point. It is not\ncredible that the code we will want to share between frontend and\nbackend will never contain any user-facing error reports. Designing\na reporting mechanism that assumes that is just going to lead to\ndegraded reporting of things that are indeed user-facing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Jan 2020 11:07:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_croak, or something like it?"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I have not, but I'm still going to stand by that point. It is not\n> credible that the code we will want to share between frontend and\n> backend will never contain any user-facing error reports.\n\nIt's hard to refute a statement this general; it can only devolve into\nan argument about what we will want to do in the future. And, if I may\nbe permitted to appeal to a highier authority on that topic, I think\nthis quote is highly relevant: \"Difficult to see. Always in motion is\nthe future.\"\n\nWhat I was hoping to do in this thread was to focus on the problem of\nwhich I have several instances immediately at hand, which is handling\ncan't-happen conditions, rather than blowing open the issue in its\nfull generality. I think there is a meaningful amount of code in the\nbackend where that is, or can be made to be, the only issue that needs\nto be solved, and I therefore think that it is a reasonable special\ncase to tackle first. Of course, I can't force you to have that\nconversation, but I don't think refusing to have it is going to make\nthe problem go away. Realistically, people aren't going to stop moving\ncode to src/common; what they're going to do is put special-case hacks\nin each file, like we already have in a couple of places, instead of\nusing some more general solution upon which we might try to agree.\n\nIn other words, somebody who comes across a chunk of code that uses\nereport() extensively might conceivably give up on moving it to\nsrc/common, but somebody who finds one elog(ERROR, \"oops this is\nbroken\") is not going to for that reason give up. They are going to do\nsomething to get around it. 
Better for them all to do the same thing,\nand something that's had some general thought given to it, than for\neach person who runs into such a problem to hand-roll their own way of\nhandling it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 11:45:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_croak, or something like it?"
}
]
[
{
"msg_contents": "I see that buildfarm member caiman is generating a warning [1]:\n\njsonb_plpython.c: In function \\xe2\\x80\\x98PLyObject_ToJsonbValue\\xe2\\x80\\x99:\ncc1: warning: function may return address of local variable [-Wreturn-local-addr]\njsonb_plpython.c:413:13: note: declared here\n 413 | JsonbValue buf;\n | ^~~\n\nIt wasn't doing that a week or two ago when I last trawled the buildfarm\nfor warnings ... but this is unsurprising considering that the compiler\nit's using is hot off the presses:\n\nconfigure: using compiler=gcc (GCC) 10.0.1 20200121 (Red Hat 10.0.1-0.4)\n\nThe warning is from code like this:\n\n{\n JsonbValue buf;\n JsonbValue *out;\n ...\n if (*jsonb_state)\n out = &buf;\n else\n out = palloc(sizeof(JsonbValue));\n ...\n return (*jsonb_state ?\n pushJsonbValue(jsonb_state, is_elem ? WJB_ELEM : WJB_VALUE, out) :\n out);\n}\n\nso I can't say I blame gcc for being unhappy. This code is safe as long\nas *jsonb_state doesn't change in between, and as long as pushJsonbValue\ndoesn't expect its last argument to point at non-transient storage. But\ngcc doesn't want to assume that, and I don't really like the assumption\neither.\n\nI am thinking of trying to silence the warning by changing the return\nto be like\n\n return (out == &buf ?\n pushJsonbValue(jsonb_state, is_elem ? WJB_ELEM : WJB_VALUE, out) :\n out);\n\nIf that doesn't work, or if anyone thinks it's too ugly, I think we\nshould just drop the optimization of avoiding a palloc, and make\nthis function do a palloc always. It seems unlikely that anyone\nwould notice a performance difference, and the code would surely\nbe less rickety.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2020-01-25%2015%3A00%3A52\n\n\n",
"msg_date": "Mon, 27 Jan 2020 19:12:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "New compiler warning in jsonb_plpython.c"
}
]
[
{
"msg_contents": "Hello,\n\nI noticed MemoryContextIsValid() called by various kinds of memory context\nroutines checks its node-tag as follows:\n\n#define MemoryContextIsValid(context) \\\n ((context) != NULL && \\\n (IsA((context), AllocSetContext) || \\\n IsA((context), SlabContext) || \\\n IsA((context), GenerationContext)))\n\nIt allows only \"known\" memory context methods, even though the memory context\nmechanism enables to implement custom memory allocator by extensions.\nHere is a node tag nobody used: T_MemoryContext.\nIt looks to me T_MemoryContext is a neutral naming for custom memory context,\nand here is no reason why memory context functions prevents custom methods.\n\n\nhttps://github.com/heterodb/pg-strom/blob/master/src/shmbuf.c#L1243\nI recently implemented a custom memory context for shared memory allocation\nwith portable pointers. It shall be used for cache of pre-built gpu\nbinary code and\nmetadata cache of apache arrow files.\nHowever, the assertion check above requires extension to set a fake node-tag\nto avoid backend crash. Right now, it is harmless to set T_AllocSetContext, but\nfeel a bit bad.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Tue, 28 Jan 2020 10:55:11 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 10:55:11AM +0900, Kohei KaiGai wrote:\n>Hello,\n>\n>I noticed MemoryContextIsValid() called by various kinds of memory context\n>routines checks its node-tag as follows:\n>\n>#define MemoryContextIsValid(context) \\\n> ((context) != NULL && \\\n> (IsA((context), AllocSetContext) || \\\n> IsA((context), SlabContext) || \\\n> IsA((context), GenerationContext)))\n>\n>It allows only \"known\" memory context methods, even though the memory context\n>mechanism enables to implement custom memory allocator by extensions.\n>Here is a node tag nobody used: T_MemoryContext.\n>It looks to me T_MemoryContext is a neutral naming for custom memory context,\n>and here is no reason why memory context functions prevents custom methods.\n>\n\nGood question. I don't think there's an explicit reason not to allow\nextensions to define custom memory contexts, and using T_MemoryContext\nseems like a possible solution. It's a bit weird though, because all the\nactual contexts are kinda \"subclasses\" of MemoryContext. So maybe adding\nT_CustomMemoryContext would be a better choice, but that only works in\nmaster, of course.\n\nAlso, it won't work if we need to add memory contexts to equalfuncs.c\netc. but maybe won't need that - it's more a theoretical issue.\n\n>\n>https://github.com/heterodb/pg-strom/blob/master/src/shmbuf.c#L1243\n>I recently implemented a custom memory context for shared memory allocation\n>with portable pointers. It shall be used for cache of pre-built gpu\n>binary code and\n>metadata cache of apache arrow files.\n>However, the assertion check above requires extension to set a fake node-tag\n>to avoid backend crash. Right now, it is harmless to set T_AllocSetContext, but\n>feel a bit bad.\n>\n\nInteresting. Does that mean the hared memory contexts are part of the\nsame hierarchy as \"normal\" contexts? 
That would be a bit confusing, I\nthink.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 28 Jan 2020 15:09:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "2020年1月28日(火) 23:09 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n>\n> On Tue, Jan 28, 2020 at 10:55:11AM +0900, Kohei KaiGai wrote:\n> >Hello,\n> >\n> >I noticed MemoryContextIsValid() called by various kinds of memory context\n> >routines checks its node-tag as follows:\n> >\n> >#define MemoryContextIsValid(context) \\\n> > ((context) != NULL && \\\n> > (IsA((context), AllocSetContext) || \\\n> > IsA((context), SlabContext) || \\\n> > IsA((context), GenerationContext)))\n> >\n> >It allows only \"known\" memory context methods, even though the memory context\n> >mechanism enables to implement custom memory allocator by extensions.\n> >Here is a node tag nobody used: T_MemoryContext.\n> >It looks to me T_MemoryContext is a neutral naming for custom memory context,\n> >and here is no reason why memory context functions prevents custom methods.\n> >\n>\n> Good question. I don't think there's an explicit reason not to allow\n> extensions to define custom memory contexts, and using T_MemoryContext\n> seems like a possible solution. It's a bit weird though, because all the\n> actual contexts are kinda \"subclasses\" of MemoryContext. So maybe adding\n> T_CustomMemoryContext would be a better choice, but that only works in\n> master, of course.\n>\n From the standpoint of extension author, reuse of T_MemoryContext and\nback patching the change on MemoryContextIsValid() makes us happy. :)\nHowever, even if we add a new node-tag here, the custom memory-context\ncan leave to use fake node-tag a few years later. It's better than nothing.\n\n> Also, it won't work if we need to add memory contexts to equalfuncs.c\n> etc. but maybe won't need that - it's more a theoretical issue.\n>\nRight now, none of nodes/XXXfuncs.c support these class of nodes related to\nMemoryContext. 
It should not be a problem.\n\n> >https://github.com/heterodb/pg-strom/blob/master/src/shmbuf.c#L1243\n> >I recently implemented a custom memory context for shared memory allocation\n> >with portable pointers. It shall be used for cache of pre-built gpu\n> >binary code and\n> >metadata cache of apache arrow files.\n> >However, the assertion check above requires extension to set a fake node-tag\n> >to avoid backend crash. Right now, it is harmless to set T_AllocSetContext, but\n> >feel a bit bad.\n> >\n>\n> Interesting. Does that mean the hared memory contexts are part of the\n> same hierarchy as \"normal\" contexts? That would be a bit confusing, I\n> think.\n>\nIf this shared memory context is a child of a normal context, it will likely be\nreset or deleted as usual. However, if this shared memory context acts\nas a parent of a normal context, it causes a problem when a different process tries\nto delete this context, because its child (the normal context) exists only in the creator\nprocess. So, it needs to be used carefully.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Tue, 28 Jan 2020 23:32:49 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Jan 28, 2020 at 10:55:11AM +0900, Kohei KaiGai wrote:\n>> I noticed MemoryContextIsValid() called by various kinds of memory context\n>> routines checks its node-tag as follows:\n>> #define MemoryContextIsValid(context) \\\n>> ((context) != NULL && \\\n>> (IsA((context), AllocSetContext) || \\\n>> IsA((context), SlabContext) || \\\n>> IsA((context), GenerationContext)))\n\n> Good question. I don't think there's an explicit reason not to allow\n> extensions to define custom memory contexts, and using T_MemoryContext\n> seems like a possible solution. It's a bit weird though, because all the\n> actual contexts are kinda \"subclasses\" of MemoryContext. So maybe adding\n> T_CustomMemoryContext would be a better choice, but that only works in\n> master, of course.\n\nI think the original reason for having distinct node types was to let\nindividual mcxt functions verify that they were passed the right type\nof node ... but I don't see any of them actually doing so, and the\ncontext-methods API makes it a bit hard to credit that we need to.\nSo there's certainly a case for replacing all three node typecodes\nwith just MemoryContext. On the other hand, it'd be ugly and it would\ncomplicate debugging: you could not be totally sure which struct type\nto cast a pointer-to-MemoryContext to when trying to inspect the\ncontents. We have a roughly comparable situation with respect to\nValue --- the node type codes we use with it, such as T_String, don't\nhave anything to do with the struct typedef name. I've always found\nthat very confusing when debugging. When gdb tells me that\n\"*(Node*) address\" is T_String, the first thing I want to do is write\n\"p *(String*) address\" and of course that doesn't work.\n\nI don't actually believe that private context types in extensions is\na very likely use-case, so on the whole I'd just as soon leave this\nalone. 
If we did want to do something, I'd vote for one NodeTag\ncode T_MemoryContext and then a secondary ID field that's an enum\nover specific memory context types. But that doesn't really improve\nmatters for debugging extension contexts, because they still don't\nhave a way to add elements to the secondary enum.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 10:35:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 11:32:49PM +0900, Kohei KaiGai wrote:\n>2020年1月28日(火) 23:09 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n>>\n>> On Tue, Jan 28, 2020 at 10:55:11AM +0900, Kohei KaiGai wrote:\n>> >Hello,\n>> >\n>> >I noticed MemoryContextIsValid() called by various kinds of memory context\n>> >routines checks its node-tag as follows:\n>> >\n>> >#define MemoryContextIsValid(context) \\\n>> > ((context) != NULL && \\\n>> > (IsA((context), AllocSetContext) || \\\n>> > IsA((context), SlabContext) || \\\n>> > IsA((context), GenerationContext)))\n>> >\n>> >It allows only \"known\" memory context methods, even though the memory context\n>> >mechanism enables to implement custom memory allocator by extensions.\n>> >Here is a node tag nobody used: T_MemoryContext.\n>> >It looks to me T_MemoryContext is a neutral naming for custom memory context,\n>> >and here is no reason why memory context functions prevents custom methods.\n>> >\n>>\n>> Good question. I don't think there's an explicit reason not to allow\n>> extensions to define custom memory contexts, and using T_MemoryContext\n>> seems like a possible solution. It's a bit weird though, because all the\n>> actual contexts are kinda \"subclasses\" of MemoryContext. So maybe adding\n>> T_CustomMemoryContext would be a better choice, but that only works in\n>> master, of course.\n>>\n>From the standpoint of extension author, reuse of T_MemoryContext and\n>back patching the change on MemoryContextIsValid() makes us happy. :)\n>However, even if we add a new node-tag here, the custom memory-context\n>can leave to use fake node-tag a few years later. It's better than nothing.\n>\n\nOh, right. I forgot we still need to backpatch this bit. 
But that seems\nlike a fairly small amount of code, so it should work.\n\nI think we can't backpatch the addition of T_CustomMemoryContext anyway\nas it essentially breaks ABI, as it changes the values assigned to T_\nconstants.\n\n>> Also, it won't work if we need to add memory contexts to equalfuncs.c\n>> etc. but maybe won't need that - it's more a theoretical issue.\n>>\n>Right now, none of nodes/XXXfuncs.c support these class of nodes related to\n>MemoryContext. It shall not be a matter.\n>\n\nYes. I did not really mean it as argument against the patch, it was\nmeant more like \"This could be an issue, but it actually is not.\" Sorry\nif that wasn't clear.\n\n>> >https://github.com/heterodb/pg-strom/blob/master/src/shmbuf.c#L1243\n>> >I recently implemented a custom memory context for shared memory allocation\n>> >with portable pointers. It shall be used for cache of pre-built gpu\n>> >binary code and\n>> >metadata cache of apache arrow files.\n>> >However, the assertion check above requires extension to set a fake node-tag\n>> >to avoid backend crash. Right now, it is harmless to set T_AllocSetContext, but\n>> >feel a bit bad.\n>> >\n>>\n>> Interesting. Does that mean the hared memory contexts are part of the\n>> same hierarchy as \"normal\" contexts? That would be a bit confusing, I\n>> think.\n>>\n>If this shared memory context is a child of normal context, likely, it should be\n>reset or deleted as usual. However, if this shared memory context performs\n>as a parent of normal context, it makes a problem when different process tries\n>to delete this context, because its child (normal context) exists at the creator\n>process only. So, it needs to be used carefully.\n>\n\nYeah, handling life cycle of a mix of those contexts may be quite tricky.\n\nBut my concern was a bit more general - is it a good idea to hide the\nnature of the memory context behind the same API. 
If you call palloc()\nshouldn't you really know whether you're allocating the stuff in regular\nor shared memory context?\n\nMaybe it makes perfect sense, but my initial impression is that those\nseem rather different, so maybe we should keep them separate (in\nseparate hierarchies or something). But I admit I don't have much\nexperience with use cases that would require such shmem contexts.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 28 Jan 2020 16:56:28 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't actually believe that private context types in extensions is\n> a very likely use-case, so on the whole I'd just as soon leave this\n> alone. If we did want to do something, I'd vote for one NodeTag\n> code T_MemoryContext and then a secondary ID field that's an enum\n> over specific memory context types.\n\nI generally like this idea, but I'd like to propose that we instead\nreplace the NodeTag with a 4-byte magic number. I was studying how\nfeasible it would be to make memory contexts available in frontend\ncode, and it doesn't look all that bad, but one of the downsides is\nthat nodes/memnodes.h includes nodes/nodes.h, and I think it would not\nbe a good idea to make frontend code depend on nodes/nodes.h, which\nseems like it's really a backend-only piece of infrastructure. Using a\nmagic number would allow us to avoid that, and it doesn't really cost\nanything, because the memory context nodes really don't participate in\nany of that infrastructure anyway.\n\nAlong with that, I think we could also change MemoryContextIsValid()\nto just check the magic number and not validate the type field. I\nthink the spirit of the code is just to check that we've got some kind\nof memory context rather than random garbage in memory, and checking\nthe magic number is good enough for that. It's possibly a little\nbetter than what we have now, since a node tag is a small integer,\nwhich might be more likely to occur at a random pointer address. At\nany rate I think it's no worse.\n\nProposed patch attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Jan 2020 11:10:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't actually believe that private context types in extensions is\n>> a very likely use-case, so on the whole I'd just as soon leave this\n>> alone. If we did want to do something, I'd vote for one NodeTag\n>> code T_MemoryContext and then a secondary ID field that's an enum\n>> over specific memory context types.\n\n> I generally like this idea, but I'd like to propose that we instead\n> replace the NodeTag with a 4-byte magic number.\n\nYeah, there's something to be said for that. It's unlikely that it'd\never make sense for us to have copy/equal/write/read/etc support for\nmemory context headers, so having them be part of the Node taxonomy\ndoesn't seem very necessary.\n\n> Along with that, I think we could also change MemoryContextIsValid()\n> to just check the magic number and not validate the type field.\n\nRight, that's isomorphic to what I was imagining: there'd be just\none check not N.\n\n> Proposed patch attached.\n\nI strongly object to having the subtype field be just \"char\".\nI want it to be declared \"MemoryContextType\", so that gdb will\nstill be telling me explicitly what struct type this really is.\nI realize that you probably did that for alignment reasons, but\nmaybe we could shave the magic number down to 2 bytes, and then\nrearrange the field order? Or just not sweat so much about\nwasting a longword here. Having those bools up at the front\nis pretty ugly anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:24:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 11:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I strongly object to having the subtype field be just \"char\".\n> I want it to be declared \"MemoryContextType\", so that gdb will\n> still be telling me explicitly what struct type this really is.\n> I realize that you probably did that for alignment reasons, but\n> maybe we could shave the magic number down to 2 bytes, and then\n> rearrange the field order? Or just not sweat so much about\n> wasting a longword here. Having those bools up at the front\n> is pretty ugly anyway.\n\nI kind of dislike having the magic number not be the first thing in\nthe struct on aesthetic grounds, and possibly on the grounds that\nsomebody might be examining the initial bytes manually to try to\nfigure out what they've got, and having the magic number there makes\nit easier. Regarding space consumption, I guess this structure is\nalready over 64 bytes and not close to 128 bytes, so adding another 8\nbytes probably isn't very meaningful, but I don't love it. One thing\nthat concerns me a bit is that if somebody adds their own type of\nmemory context, then MemoryContextType will have a value that is not\nactually in the enum. If compilers are optimizing the code on the\nassumption that this cannot occur, do we need to worry about undefined\nbehavior?\n\nActually, I have what I think is a better idea. I notice that the\n\"type\" field is actually completely unused. We initialize it, and then\nnothing in the code ever checks it or relies on it for any purpose.\nSo, we could have a bug in the code that initializes that field with\nthe wrong value, for a new context type or in general, and only a\ndeveloper with a debugger would ever notice. That's because the thing\nthat really matters is the 'methods' array. So what I propose is that\nwe nuke the type field from orbit. 
If a developer wants to figure out\nwhat type of context they've got, they should print\ncontext->methods[0]; gdb will tell you the function names stored\nthere, and then you'll know *for real* what type of context you've\ngot.\n\nHere's a v2 that approaches the problem that way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Jan 2020 12:18:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 28, 2020 at 11:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I strongly object to having the subtype field be just \"char\".\n>> I want it to be declared \"MemoryContextType\", so that gdb will\n>> still be telling me explicitly what struct type this really is.\n>> I realize that you probably did that for alignment reasons, but\n>> maybe we could shave the magic number down to 2 bytes, and then\n>> rearrange the field order? Or just not sweat so much about\n>> wasting a longword here. Having those bools up at the front\n>> is pretty ugly anyway.\n\n> I kind of dislike having the magic number not be the first thing in\n> the struct on aesthetic grounds,\n\nHuh? I didn't propose that. I was thinking either\n\n uint16 magic;\n bool isReset;\n bool allowInCritSection;\n enum type;\n ... 64-bit fields follow ...\n\nor\n\n uint32 magic;\n enum type;\n bool isReset;\n bool allowInCritSection;\n ... 64-bit fields follow ...\n\nwhere the latter wastes space unless the compiler chooses to fit the\nenum into 16 bits, but it's not really our fault if it doesn't. Besides,\nwhat's the reason to think we'll never add any more bools here? I don't\nthink we need to be that excited about the padding.\n\n> So, we could have a bug in the code that initializes that field with\n> the wrong value, for a new context type or in general, and only a\n> developer with a debugger would ever notice.\n\nRight, but that is a pretty important use-case.\n\n> That's because the thing\n> that really matters is the 'methods' array. So what I propose is that\n> we nuke the type field from orbit. If a developer wants to figure out\n> what type of context they've got, they should print\n> context->methods[0]; gdb will tell you the function names stored\n> there, and then you'll know *for real* what type of context you've\n> got.\n\nNo. No. Just NO. 
(A) that's overly complex for developers to use,\nand (B) you have far too much faith in the debugger producing something\nuseful. (My experience is that it'll fail to render function pointers\nlegibly on an awful lot of platforms.) Plus, you won't actually save\nany space by removing both of those fields.\n\nIf we were going to conclude that we don't really need a magic number,\nI'd opt for replacing the NodeTag with an enum MemoryContextType field\nthat's decoupled from NodeTag. But I don't feel tremendously happy\nabout not having a magic number. That'd make it noticeably harder\nto recognize cases where you're referencing an invalid context pointer.\n\nIn the end, trying to shave a couple of bytes from context headers\nseems pretty penny-wise and pound-foolish. There are lots of other\nstructs with significantly higher usage where we've not stopped to\nworry about alignment padding, so why here? Personally I'd just\nput the bools back at the end where they used to be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 13:08:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 1:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > That's because the thing\n> > that really matters is the 'methods' array. So what I propose is that\n> > we nuke the type field from orbit. If a developer wants to figure out\n> > what type of context they've got, they should print\n> > context->methods[0]; gdb will tell you the function names stored\n> > there, and then you'll know *for real* what type of context you've\n> > got.\n>\n> No. No. Just NO. (A) that's overly complex for developers to use,\n> and (B) you have far too much faith in the debugger producing something\n> useful. (My experience is that it'll fail to render function pointers\n> legibly on an awful lot of platforms.) Plus, you won't actually save\n> any space by removing both of those fields.\n\nI half-expected this reaction, but I think it's unjustified. Two\nsources of truth are not better than one, and I don't think that any\nother place where we use a vtable-type approach includes a redundant\ntype field just for decoration. Can you think of a counterexample?\n\nAfter scrounging around the source tree a bit, the most direct\nparallel I can find is probably TupleTableSlot, which contains a\npointer to a TupleTableSlotOps, but not a separate field identifying\nthe slot type. I don't remember you or anybody objecting that\nTupleTableSlot should contain both \"const TupleTableSlotOps *const\ntts_ops\" and also \"enum TupleTableSlotType\", and I don't think that\nsuch a suggestion would have been looked upon very favorably, not only\nbecause it would have made the struct bigger and served no necessary\npurpose, but also because having a centralized list of all\nTupleTableSlot types flies in the face of the essential goal of the\ntable AM interface, which is to allow adding new table type (and a\nslot type for each) without having to modify core code. 
That exact\nconsideration is also relevant here: KaiGai wants to be able to add\nhis own memory context type in third-party code without having to\nmodify core code. I've wanted to do that in the past, too. Having to\nlist all the context types in an enum means that you really can't do\nthat, which sucks, unless you're willing to lie about the context type\nand hope that nobody adds code that cares about it. Is there an\nalternate solution that you can propose that does not prevent that?\n\nYou might be entirely correct that there are some debuggers that can't\nprint function pointers correctly. I have not run across one, if\never, in a really long time, but I also work mostly on MacOS and Linux\nthese days, and those are pretty mainstream platforms where such\nproblems are less likely. However, I suspect that there are very few\ndevelopers who do the bulk of their work on obscure platforms with\npoorly-functioning debuggers. The only time it's likely to come up is\nif a buggy commit makes things crash on some random buildfarm critter\nand we need to debug from a core dump or whatever. But, if that does\nhappen, we're not dead. The only possible values of the \"methods\"\npointer -- if only core code is at issue -- are &AllocSetMethods,\n&SlabMethods, and &GenerationMethods, so somebody can just print out\nthose values and compare them to the pointer they find. That is a lot\nless convenient than being able to just print context->methods[0] and\nsee everything, but if it only comes up when debugging irreproducible\ncrashes on obscure platforms, it seems OK to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Jan 2020 13:57:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 2:55 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> I noticed MemoryContextIsValid() called by various kinds of memory context\n> routines checks its node-tag as follows:\n>\n> #define MemoryContextIsValid(context) \\\n> ((context) != NULL && \\\n> (IsA((context), AllocSetContext) || \\\n> IsA((context), SlabContext) || \\\n> IsA((context), GenerationContext)))\n>\n> It allows only \"known\" memory context methods, even though the memory context\n> mechanism enables to implement custom memory allocator by extensions.\n> Here is a node tag nobody used: T_MemoryContext.\n> It looks to me T_MemoryContext is a neutral naming for custom memory context,\n> and here is no reason why memory context functions prevents custom methods.\n>\n>\n> https://github.com/heterodb/pg-strom/blob/master/src/shmbuf.c#L1243\n> I recently implemented a custom memory context for shared memory allocation\n> with portable pointers. It shall be used for cache of pre-built gpu\n> binary code and\n> metadata cache of apache arrow files.\n> However, the assertion check above requires extension to set a fake node-tag\n> to avoid backend crash. Right now, it is harmless to set T_AllocSetContext, but\n> feel a bit bad.\n\nFWIW the code in https://commitfest.postgresql.org/26/2325/ ran into\nexactly the same problem while making nearly exactly the same kind of\nthing (namely, a MemoryContext backed by space in the main shm area,\nin this case reusing the dsa.c allocator).\n\n\n",
"msg_date": "Wed, 29 Jan 2020 08:27:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "2020年1月29日(水) 0:56 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n>\n> On Tue, Jan 28, 2020 at 11:32:49PM +0900, Kohei KaiGai wrote:\n> >2020年1月28日(火) 23:09 Tomas Vondra <tomas.vondra@2ndquadrant.com>:\n> >>\n> >> On Tue, Jan 28, 2020 at 10:55:11AM +0900, Kohei KaiGai wrote:\n> >> >Hello,\n> >> >\n> >> >I noticed MemoryContextIsValid() called by various kinds of memory context\n> >> >routines checks its node-tag as follows:\n> >> >\n> >> >#define MemoryContextIsValid(context) \\\n> >> > ((context) != NULL && \\\n> >> > (IsA((context), AllocSetContext) || \\\n> >> > IsA((context), SlabContext) || \\\n> >> > IsA((context), GenerationContext)))\n> >> >\n> >> >It allows only \"known\" memory context methods, even though the memory context\n> >> >mechanism enables to implement custom memory allocator by extensions.\n> >> >Here is a node tag nobody used: T_MemoryContext.\n> >> >It looks to me T_MemoryContext is a neutral naming for custom memory context,\n> >> >and here is no reason why memory context functions prevents custom methods.\n> >> >\n> >>\n> >> Good question. I don't think there's an explicit reason not to allow\n> >> extensions to define custom memory contexts, and using T_MemoryContext\n> >> seems like a possible solution. It's a bit weird though, because all the\n> >> actual contexts are kinda \"subclasses\" of MemoryContext. So maybe adding\n> >> T_CustomMemoryContext would be a better choice, but that only works in\n> >> master, of course.\n> >>\n> >From the standpoint of extension author, reuse of T_MemoryContext and\n> >back patching the change on MemoryContextIsValid() makes us happy. :)\n> >However, even if we add a new node-tag here, the custom memory-context\n> >can leave to use fake node-tag a few years later. It's better than nothing.\n> >\n>\n> Oh, right. I forgot we still need to backpatch this bit. 
But that seems\n> like a fairly small amount of code, so it should work.\n>\n> I think we can't backpatch the addition of T_CustomMemoryContext anyway\n> as it essentially breaks ABI, as it changes the values assigned to T_\n> constants.\n>\n> >> Also, it won't work if we need to add memory contexts to equalfuncs.c\n> >> etc. but maybe won't need that - it's more a theoretical issue.\n> >>\n> >Right now, none of nodes/XXXfuncs.c support these class of nodes related to\n> >MemoryContext. It shall not be a matter.\n> >\n>\n> Yes. I did not really mean it as argument against the patch, it was\n> meant more like \"This could be an issue, but it actually is not.\" Sorry\n> if that wasn't clear.\n>\n> >> >https://github.com/heterodb/pg-strom/blob/master/src/shmbuf.c#L1243\n> >> >I recently implemented a custom memory context for shared memory allocation\n> >> >with portable pointers. It shall be used for cache of pre-built gpu\n> >> >binary code and\n> >> >metadata cache of apache arrow files.\n> >> >However, the assertion check above requires extension to set a fake node-tag\n> >> >to avoid backend crash. Right now, it is harmless to set T_AllocSetContext, but\n> >> >feel a bit bad.\n> >> >\n> >>\n> >> Interesting. Does that mean the shared memory contexts are part of the\n> >> same hierarchy as \"normal\" contexts? That would be a bit confusing, I\n> >> think.\n> >>\n> >If this shared memory context is a child of normal context, likely, it should be\n> >reset or deleted as usual. However, if this shared memory context performs\n> >as a parent of normal context, it makes a problem when different process tries\n> >to delete this context, because its child (normal context) exists at the creator\n> >process only. So, it needs to be used carefully.\n> >\n>\n> Yeah, handling life cycle of a mix of those contexts may be quite tricky.\n>\n> But my concern was a bit more general - is it a good idea to hide the\n
> nature of the memory context behind the same API? If you call palloc()\n> shouldn't you really know whether you're allocating the stuff in regular\n> or shared memory context?\n>\n> Maybe it makes perfect sense, but my initial impression is that those\n> seem rather different, so maybe we should keep them separate (in\n> separate hierarchies or something). But I admit I don't have much\n> experience with use cases that would require such shmem contexts.\n>\nYeah, as you mentioned, we have no way to distinguish whether a particular\nmemory chunk is private memory or shared memory, right now.\nIt is the responsibility of software developers, and I assume the\nshared-memory chunks are used only in limited cases where there is a\ncertain reason why the data should be shared.\nOn the other hand, the same situation applies to private memory.\nWe should pay attention to which memory context a chunk is allocated from.\nFor example, a state object that must remain valid during query execution\nmust be allocated from estate->es_query_cxt. If someone allocates it from\nCurrentMemoryContext and it is then implicitly released, we should\nconsider it a bug.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 29 Jan 2020 09:59:57 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "2020年1月29日(水) 2:18 Robert Haas <robertmhaas@gmail.com>:\n>\n> On Tue, Jan 28, 2020 at 11:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I strongly object to having the subtype field be just \"char\".\n> > I want it to be declared \"MemoryContextType\", so that gdb will\n> > still be telling me explicitly what struct type this really is.\n> > I realize that you probably did that for alignment reasons, but\n> > maybe we could shave the magic number down to 2 bytes, and then\n> > rearrange the field order? Or just not sweat so much about\n> > wasting a longword here. Having those bools up at the front\n> > is pretty ugly anyway.\n>\n> I kind of dislike having the magic number not be the first thing in\n> the struct on aesthetic grounds, and possibly on the grounds that\n> somebody might be examining the initial bytes manually to try to\n> figure out what they've got, and having the magic number there makes\n> it easier. Regarding space consumption, I guess this structure is\n> already over 64 bytes and not close to 128 bytes, so adding another 8\n> bytes probably isn't very meaningful, but I don't love it. One thing\n> that concerns me a bit is that if somebody adds their own type of\n> memory context, then MemoryContextType will have a value that is not\n> actually in the enum. If compilers are optimizing the code on the\n> assumption that this cannot occur, do we need to worry about undefined\n> behavior?\n>\n> Actually, I have what I think is a better idea. I notice that the\n> \"type\" field is actually completely unused. We initialize it, and then\n> nothing in the code ever checks it or relies on it for any purpose.\n> So, we could have a bug in the code that initializes that field with\n> the wrong value, for a new context type or in general, and only a\n> developer with a debugger would ever notice. That's because the thing\n> that really matters is the 'methods' array. So what I propose is that\n> we nuke the type field from orbit. 
If a developer wants to figure out\n> what type of context they've got, they should print\n> context->methods[0]; gdb will tell you the function names stored\n> there, and then you'll know *for real* what type of context you've\n> got.\n>\n> Here's a v2 that approaches the problem that way.\n>\nHow about having \"const char *name\" in MemoryContextMethods?\nIt is more human-readable for debugging than raw function pointers.\nWe already have a similar field to identify the methods at CustomScanMethods.\n(It is also used for EXPLAIN, not only for debugging...)\n\nI love the idea of identifying the memory-context type with a single\nidentifier rather than two. If we had a sub-field Id and the\nmemory-context methods separately, we would probably need an Assert() to\ncheck their consistency, wouldn't we?\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:23:04 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "2020年1月29日(水) 4:27 Thomas Munro <thomas.munro@gmail.com>:\n>\n> On Tue, Jan 28, 2020 at 2:55 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > I noticed MemoryContextIsValid() called by various kinds of memory context\n> > routines checks its node-tag as follows:\n> >\n> > #define MemoryContextIsValid(context) \\\n> > ((context) != NULL && \\\n> > (IsA((context), AllocSetContext) || \\\n> > IsA((context), SlabContext) || \\\n> > IsA((context), GenerationContext)))\n> >\n> > It allows only \"known\" memory context methods, even though the memory context\n> > mechanism enables to implement custom memory allocator by extensions.\n> > Here is a node tag nobody used: T_MemoryContext.\n> > It looks to me T_MemoryContext is a neutral naming for custom memory context,\n> > and here is no reason why memory context functions prevents custom methods.\n> >\n> >\n> > https://github.com/heterodb/pg-strom/blob/master/src/shmbuf.c#L1243\n> > I recently implemented a custom memory context for shared memory allocation\n> > with portable pointers. It shall be used for cache of pre-built gpu\n> > binary code and\n> > metadata cache of apache arrow files.\n> > However, the assertion check above requires extension to set a fake node-tag\n> > to avoid backend crash. Right now, it is harmless to set T_AllocSetContext, but\n> > feel a bit bad.\n>\n> FWIW the code in https://commitfest.postgresql.org/26/2325/ ran into\n> exactly the same problem while making nearly exactly the same kind of\n> thing (namely, a MemoryContext backed by space in the main shm area,\n> in this case reusing the dsa.c allocator).\n>\nThanks, I had looked at \"Shared Memory Context\" title on pgsql-hackers a few\nmonths ago, however, missed the thread right now.\n\nThe main point of the differences from this approach is portability of pointers\nto shared memory chunks. 
(If I understand correctly)\nPG-Strom preserves logical address space, but no physical pages, on startup\ntime, then maps shared memory segment on the fixed address on demand.\nSo, pointers are portable to all the backend processes, thus, suitable to build\ntree structure on shared memory also.\n\nThis article below introduces how our \"shared memory context\" works.\nIt is originally written in Japanese, and Google translate may\ngenerate unnatural\nEnglish. However, its figures probably explain how it works.\nhttps://translate.google.co.jp/translate?hl=ja&sl=auto&tl=en&u=http%3A%2F%2Fkaigai.hatenablog.com%2Fentry%2F2016%2F06%2F19%2F095127\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:49:32 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 2:49 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> 2020年1月29日(水) 4:27 Thomas Munro <thomas.munro@gmail.com>:\n> > On Tue, Jan 28, 2020 at 2:55 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > FWIW the code in https://commitfest.postgresql.org/26/2325/ ran into\n> > exactly the same problem while making nearly exactly the same kind of\n> > thing (namely, a MemoryContext backed by space in the main shm area,\n> > in this case reusing the dsa.c allocator).\n> >\n> Thanks, I had looked at \"Shared Memory Context\" title on pgsql-hackers a few\n> months ago, however, missed the thread right now.\n>\n> The main point of the differences from this approach is portability of pointers\n> to shared memory chunks. (If I understand correctly)\n> PG-Strom preserves logical address space, but no physical pages, on startup\n> time, then maps shared memory segment on the fixed address on demand.\n> So, pointers are portable to all the backend processes, thus, suitable to build\n> tree structure on shared memory also.\n\nVery interesting. PostgreSQL's DSM segments could potentially have\nbeen implemented that way (whereas today they are not mapped with\nMAP_FIXED), but I suppose people were worried about portability\nproblems and ASLR. The Windows port struggles with that stuff.\n\nActually the WIP code in that patch reserves a chunk of space up front\nin the postmaster, and then puts a DSA allocator inside it. Normally,\nDSA allocators create/destroy new DSM segments on demand and deal with\nthe address portability stuff through a lot of extra work (this\nprobably makes Parallel Hash Join slightly slower than it should be),\nbut as a special case a DSA allocator can be created in preexisting\nmemory and then not allowed to grow. In exchange for accepting a\nfixed space, you get normal shareable pointers. 
This means that you\ncan use the resulting weird MemoryContext for stuff like building\nquery plans that can be used by other processes, but when you run out\nof memory, allocations begin to fail. That WIP code is experimenting\nwith caches that can tolerate running out of memory (or at least that\nis the intention), so a fixed sized space is OK for that.\n\n> This article below introduces how our \"shared memory context\" works.\n> It is originally written in Japanese, and Google translate may\n> generate unnatural\n> English. However, its figures probably explain how it works.\n> https://translate.google.co.jp/translate?hl=ja&sl=auto&tl=en&u=http%3A%2F%2Fkaigai.hatenablog.com%2Fentry%2F2016%2F06%2F19%2F095127\n\nThanks!\n\n\n",
"msg_date": "Wed, 29 Jan 2020 15:06:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No. No. Just NO. (A) that's overly complex for developers to use,\n> and (B) you have far too much faith in the debugger producing something\n> useful. (My experience is that it'll fail to render function pointers\n> legibly on an awful lot of platforms.) Plus, you won't actually save\n> any space by removing both of those fields.\n\nFWIW, I noticed that GDB becomes much better at this when you add \"set\nprint symbol on\" to your .gdbinit dot file about a year ago. In theory\nyou shouldn't need to do that to print the symbol that a function\npointer points to, I think. At least that's what the documentation\nsays. But in practice this seems to help a lot.\n\nI don't recall figuring out a reason for this. Could have been due to\nGDB being fussier about the declared type of a pointer than it needs\nto be, or something along those lines.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 28 Jan 2020 18:08:26 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Jan 28, 2020 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> No. No. Just NO. (A) that's overly complex for developers to use,\n>> and (B) you have far too much faith in the debugger producing something\n>> useful. (My experience is that it'll fail to render function pointers\n>> legibly on an awful lot of platforms.) Plus, you won't actually save\n>> any space by removing both of those fields.\n\n> FWIW, I noticed that GDB becomes much better at this when you add \"set\n> print symbol on\" to your .gdbinit dot file about a year ago.\n\nInteresting. But I bet there are platform and version dependencies\nin the mix, too. I'd still not wish to rely on this for debugging.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 21:53:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 6:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > FWIW, I noticed that GDB becomes much better at this when you add \"set\n> > print symbol on\" to your .gdbinit dot file about a year ago.\n>\n> Interesting. But I bet there are platform and version dependencies\n> in the mix, too. I'd still not wish to rely on this for debugging.\n\nI agree that there are a lot of moving pieces here. I wouldn't like to\nhave to rely on this working myself.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 28 Jan 2020 19:16:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "2020年1月29日(水) 11:06 Thomas Munro <thomas.munro@gmail.com>:\n>\n> On Wed, Jan 29, 2020 at 2:49 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > 2020年1月29日(水) 4:27 Thomas Munro <thomas.munro@gmail.com>:\n> > > On Tue, Jan 28, 2020 at 2:55 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > FWIW the code in https://commitfest.postgresql.org/26/2325/ ran into\n> > > exactly the same problem while making nearly exactly the same kind of\n> > > thing (namely, a MemoryContext backed by space in the main shm area,\n> > > in this case reusing the dsa.c allocator).\n> > >\n> > Thanks, I had looked at \"Shared Memory Context\" title on pgsql-hackers a few\n> > months ago, however, missed the thread right now.\n> >\n> > The main point of the differences from this approach is portability of pointers\n> > to shared memory chunks. (If I understand correctly)\n> > PG-Strom preserves logical address space, but no physical pages, on startup\n> > time, then maps shared memory segment on the fixed address on demand.\n> > So, pointers are portable to all the backend processes, thus, suitable to build\n> > tree structure on shared memory also.\n>\n> Very interesting. PostgreSQL's DSM segments could potentially have\n> been implemented that way (whereas today they are not mapped with\n> MAP_FIXED), but I suppose people were worried about portability\n> problems and ASLR. The Windows port struggles with that stuff.\n>\nYes. I'm not certain whether Windows can support equivalent behavior\nto mmap(..., PROT_NONE) and SIGSEGV/SIGBUS handling.\nIt is also a reason why PG-Strom (that is only for Linux) wants to have\nits own shared memory management logic, at least, right now.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 29 Jan 2020 13:16:30 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-28 11:10:47 -0500, Robert Haas wrote:\n> I generally like this idea, but I'd like to propose that we instead\n> replace the NodeTag with a 4-byte magic number. I was studying how\n> feasible it would be to make memory contexts available in frontend\n> code, and it doesn't look all that bad, but one of the downsides is\n> that nodes/memnodes.h includes nodes/nodes.h, and I think it would not\n> be a good idea to make frontend code depend on nodes/nodes.h, which\n> seems like it's really a backend-only piece of infrastructure. Using a\n> magic number would allow us to avoid that, and it doesn't really cost\n> anything, because the memory context nodes really don't participate in\n> any of that infrastructure anyway.\n\nHm. I kinda like the idea of still having one NodeTag identifying memory\ncontexts, and then some additional field identifying the actual\ntype. Being able to continue to rely on IsA() etc imo is nice. I think\nnodes.h itself only would be a problem for frontend code because we put\na lot of other stuff too. We should just separate the actually generic\nstuff out. I think it's going to be like 2 seconds once we have memory\ncontexts until we're e.g. going to want to also have pg_list.h - which\nis harder without knowing the tags.\n\nIt seems like a good idea to still have an additional identifier for\neach node type, for some cross checking. How about just frobbing the\npointer to the MemoryContextMethod slightly, and storing that in an\nadditional field? That'd be something fairly unlikely to ever be a false\npositive, and it doesn't require dereferencing any additional memory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Feb 2020 16:26:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 9:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Interesting. But I bet there are platform and version dependencies\n> in the mix, too. I'd still not wish to rely on this for debugging.\n\nHmm. What if we put a \"const char *name\" in the methods array? Then\neven if you couldn't print out the function pointers, you would at\nleast see the name.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 09:19:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 7:26 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. I kinda like the idea of still having one NodeTag identifying memory\n> contexts, and then some additional field identifying the actual\n> type. Being able to continue to rely on IsA() etc imo is nice. I think\n> nodes.h itself only would be a problem for frontend code because we put\n> a lot of other stuff too. We should just separate the actually generic\n> stuff out. I think it's going to be like 2 seconds once we have memory\n> contexts until we're e.g. going to want to also have pg_list.h - which\n> is harder without knowing the tags.\n\nThe problem with IsA() is that it assumes that you've got all the node\ntags that can ever exist in one big enum. I don't see how to make that\nwork once you extend the system to work with more than one program. I\nthink it will be really confusing if frontend code starts reusing\nrandom backend data structures. Like, fundamental things like List,\nsure, that should be exposed. But if people start creating Plan or FDW\nobjects in the frontend, it's just going to be chaos. And I don't\nthink we want new objects that people may add for frontend code to be\nvisible to backend code, either.\n\n> It seems like a good idea to still have an additional identifier for\n> each node type, for some cross checking. How about just frobbing the\n> pointer to the MemoryContextMethod slightly, and storing that in an\n> additional field? That'd be something fairly unlikely to ever be a false\n> positive, and it doesn't require dereferencing any additional memory.\n\nThat would be fine as an internal sanity check, but if Tom is unhappy\nwith the idea of having to try to make sense of a function pointer,\nhe's probably going to be even less happy about trying to make sense\nof a frobbed pointer. And I would actually agree with him on that\npoint.\n\nI think we're all pursuing slightly different goals here. 
KaiGai's\nmain goal is to make it possible for third-party code to add new kinds\nof memory contexts. My main goal is to make memory contexts not depend\non backend-only infrastructure. Tom is concerned about debuggability.\nYour concern here is about sanity checking. There's some overlap\nbetween those goals but the absolute best thing for any given one of\nthem might be really bad for one of the other ones; hopefully we can\nfind some compromise that gets everybody the things they care about\nmost.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 09:28:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Hmm. What if we put a \"const char *name\" in the methods array? Then\n> even if you couldn't print out the function pointers, you would at\n> least see the name.\n\nYeah, that idea had occurred to me too. It'd definitely be better than\nrelying on the ability to interpret function pointers, and there might\nbe other uses for it besides manual debugging (eg if we had an outfuncs\nfunction for MemoryContext, it could print that). So I'd be a bit in\nfavor of adding that independently of this discussion. I still think\nthat it'd be inconvenient for debugging, though, compared to having\nan enum field right in the context. You'll have to do an extra step to\ndiscover the context's type, and if you jump to the wrong conclusion\nand do, say,\n\tp *(AllocSetContext *) ptr_value\nwhen it's really some other context type, there won't be anything\nas obvious as \"type = T_GenerationContext\" in what is printed to\ntell you you were wrong. So I really want to also have an enum\nfield of some sort, and it does not seem to me that there's any\ncompelling reason not to have one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Feb 2020 10:09:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I really want to also have an enum\n> field of some sort, and it does not seem to me that there's any\n> compelling reason not to have one.\n\nI mean, then it's not extensible, right?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 10:13:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Feb 3, 2020 at 7:26 PM Andres Freund <andres@anarazel.de> wrote:\n>> It seems like a good idea to still have an additional identifier for\n>> each node type, for some cross checking. How about just frobbing the\n>> pointer to the MemoryContextMethod slightly, and storing that in an\n>> additional field? That'd be something fairly unlikely to ever be a false\n>> positive, and it doesn't require dereferencing any additional memory.\n\n> That would be fine as an internal sanity check, but if Tom is unhappy\n> with the idea of having to try to make sense of a function pointer,\n> he's probably going to be even less happy about trying to make sense\n> of a frobbed pointer. And I would actually agree with him on that\n> point.\n\nYeah, it seems a bit overcomplicated for what it accomplishes.\n\n> I think we're all pursuing slightly different goals here. KaiGai's\n> main goal is to make it possible for third-party code to add new kinds\n> of memory contexts. My main goal is to make memory contexts not depend\n> on backend-only infrastructure. Tom is concerned about debuggability.\n> Your concern here is about sanity checking. There's some overlap\n> between those goals but the absolute best thing for any given one of\n> them might be really bad for one of the other ones; hopefully we can\n> find some compromise that gets everybody the things they care about\n> most.\n\nGood summary. 
I think that the combination of a magic number to identify\n\"this is a memory context struct\" and an enum to identify the specific\ntype of context should meet all these goals moderately well:\n\n* Third-party context types would have to force the compiler to take\ncontext-type values that weren't among the known enum values ---\nalthough they could ask us to reserve a value by adding an otherwise-\nunreferenced-by-core-code enum entry, and I don't really see why\nwe shouldn't accept such requests.\n\n* Frontend and backend would have to share the enum, but the list\nis short enough that that shouldn't be a killer maintenance problem.\n(Also, the enum field would be pretty much write-only from the\ncode's standpoint, so even if two programs were out of sync on it,\nthere would be at worst a debugging hazard.)\n\n* The enum does what I want for debuggability, especially if we\nback-stop it with a name string in the methods struct as you suggested.\n\n* The magic value does what Andres wants for sanity checking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Feb 2020 10:20:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Good summary. I think that the combination of a magic number to identify\n> \"this is a memory context struct\" and an enum to identify the specific\n> type of context should meet all these goals moderately well:\n>\n> * Third-party context types would have to force the compiler to take\n> context-type values that weren't among the known enum values ---\n> although they could ask us to reserve a value by adding an otherwise-\n> unreferenced-by-core-code enum entry, and I don't really see why\n> we shouldn't accept such requests.\n>\n> * Frontend and backend would have to share the enum, but the list\n> is short enough that that shouldn't be a killer maintenance problem.\n> (Also, the enum field would be pretty much write-only from the\n> code's standpoint, so even if two programs were out of sync on it,\n> there would be at worst a debugging hazard.)\n>\n> * The enum does what I want for debuggability, especially if we\n> back-stop it with a name string in the methods struct as you suggested.\n>\n> * The magic value does what Andres wants for sanity checking.\n\nI'm pretty unimpressed with the enum proposal - I think it's pretty\nnasty for an extension author to have to make up a value that's not in\nthe enum. One, how are they supposed to know that they should do that?\nTwo, how are they supposed to know that the code doesn't actually\ndepend on that enum value for anything important? And three, how do\nthey know that the compiler isn't going to hose them by assuming that\nisn't a can't-happen scenario?\n\nI mean, I'd rather get a patch committed here than not, but I have a\nhard time understanding why this is a good way to go.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 10:40:59 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> an enum field right in the context. You'll have to do an extra step to\n> discover the context's type, and if you jump to the wrong conclusion\n> and do, say,\n> p *(AllocSetContext *) ptr_value\n> when it's really some other context type, there won't be anything\n> as obvious as \"type = T_GenerationContext\" in what is printed to\n> tell you you were wrong.\n\nDoesn't the proposed magic number address this concern?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 10:41:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On 2/5/20 10:20 AM, Tom Lane wrote:\n\n> * Third-party context types would have to force the compiler to take\n> context-type values that weren't among the known enum values ---\n\nDoesn't that seem like a long run for a short slide? An extension\nauthor gets expected to do something awkward-bordering-on-smelly\nso that debugging can rely on an enum saying \"this is a Foo\" rather\nthan a string saying \"this is a Foo\"?\n\nGranted, it's possible the extension-authoring situation is rare,\nand debugging often happens under time pressure and dire stakes,\nso perhaps that would be the right balance for this case. I have\ncertainly seen emails from Tom in this space with the analysis of\nsome reported bug completed preternaturally fast, so if he judges\nthat losing the enum would make that harder, that's something.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 5 Feb 2020 10:52:25 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 5, 2020 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> an enum field right in the context. You'll have to do an extra step to\n>> discover the context's type, and if you jump to the wrong conclusion\n>> and do, say,\n>> \tp *(AllocSetContext *) ptr_value\n>> when it's really some other context type, there won't be anything\n>> as obvious as \"type = T_GenerationContext\" in what is printed to\n>> tell you you were wrong.\n\n> Doesn't the proposed magic number address this concern?\n\nNo, because (a) it will be a random magic number that nobody will\nremember, and gdb won't print in any helpful form; (b) at least\nas I understood the proposal, there'd be just one magic number for\nall types of memory context.\n\nAnother issue with relying on only a magic number is that if you\nget confused and do \"p *(AllocSetContext *) ptr_value\" on something\nthat doesn't point at any sort of memory context at all, there will\nnot be *anything* except confusing field values to help you realize\nthat. One of the great advantages of the Node system, IME, is that\nwhen you try to print out a Node-subtype struct, the first field\nin what is printed is either the Node type you were expecting, or\nsome recognizable other Node code, or obvious garbage. If we don't\nhave an enum field in MemoryContexts then there's not going to be\nany similarly-obvious clue that what you're looking at isn't a memory\ncontext in the first place. I'm okay with the enum not being the\nfirst field, but much less okay with not having one at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Feb 2020 10:56:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm pretty unimpressed with the enum proposal - I think it's pretty\n> nasty for an extension author to have to make up a value that's not in\n> the enum. One, how are they supposed to know that they should do that?\n> Two, how are they supposed to know that the code doesn't actually\n> depend on that enum value for anything important? And three, how do\n> they know that the compiler isn't going to hose them by assuming that\n> isn't a can't-happen scenario?\n\nWell, as I mentioned, the enum field will be pretty much write-only from\nthe code's standpoint, so that last point is not as killer as you might\nthink. The rest of this is just documentation, and any of the proposals\non the table require documentation if you expect people to be able to\nwrite extensions to use them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Feb 2020 10:59:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 2/5/20 10:20 AM, Tom Lane wrote:\n>> * Third-party context types would have to force the compiler to take\n>> context-type values that weren't among the known enum values ---\n\n> Doesn't that seem like a long run for a short slide?\n\nWell, one thing we could do is assign an \"other\" or \"custom\" code,\nand people who were just doing one-off things could use that.\nIf they were going to publish their code, we could encourage them\nto ask for a publicly-known enum entry. We have precedent for\nthat approach, eg in pg_statistic stakind codes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Feb 2020 11:03:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Doesn't the proposed magic number address this concern?\n>\n> No, because (a) it will be a random magic number that nobody will\n> remember, and gdb won't print in any helpful form; (b) at least\n> as I understood the proposal, there'd be just one magic number for\n> all types of memory context.\n\nI don't disagree with the factual statements that you are making but I\ndon't understand why any of them represent real problems.\n\n- It's true that magic numbers are generally not chosen for easy\nmemorization, but I think that most experienced hackers don't have\nmuch trouble looking them up with 'git grep' on those (generally rare)\noccasions when they are needed.\n\n- It's true that gdb's default format is decimal and you might want\nhex, but it has a 'printf' command that can be used to do that, which\nI at least have found to be pretty darn convenient for this sort of\nthing.\n\n- And it's true that I was proposing - with your agreement, or so I\nhad understood - one magic number for all context types, but that was\nspecifically so you could tell whether you had a memory context or\nsome other thing. Once you know it's really a memory context, you\ncould print cxt->methods->name.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 11:48:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't disagree with the factual statements that you are making but I\n> don't understand why any of them represent real problems.\n\nThe core problem that I'm unhappy about is that what you want to do\nadds cognitive burden (including an increased risk of mistakes) and\nextra debugging commands in common cases around inspecting memory\ncontexts in a debugger. Sure, it's not a fatal problem, but it's\nstill a burden. I judge that putting an enum into the struct would\ngreatly reduce that burden at very small cost.\n\nThe point of the magic number, AIUI, is so that we can still have an\nequivalent of Assert(IsA(MemoryContext)) in the code in appropriate\nplaces. That need doesn't require readability in a debugger, which is\nwhy I'm okay with it being a random magic number. But there's still\na need for debugger-friendliness, and a constant string that you need\nto use extra commands to see doesn't solve that end of the problem.\nIMO anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Feb 2020 12:15:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 12:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The core problem that I'm unhappy about is that what you want to do\n> adds cognitive burden (including an increased risk of mistakes) and\n> extra debugging commands in common cases around inspecting memory\n> contexts in a debugger. Sure, it's not a fatal problem, but it's\n> still a burden. I judge that putting an enum into the struct would\n> greatly reduce that burden at very small cost.\n\nI respect that, but I disagree. I think the annoyance and strangeness\nfactor of the enum is pretty high for third-party code that wants to\nuse this, because deliberately concocting an out-of-range value for an\nenum is really odd. And I think the gain in debuggability is pretty\nlow, because even if the enum seems to have the expected value, you\nstill don't really know that things are OK unless you check the magic\nnumber and the methods field, too. On the other hand, I don't inspect\nmemory contexts in a debugger all that often, and it sounds like you\ndo, or you presumably wouldn't feel so strongly about this.\n\nWe might have to see if anybody else has an opinion. I'd rather do it\nyour way than no way, but I feel like it's such a strange design that\nI'd be afraid to commit it if anyone other than you had suggested it,\nfor fear of having you demand an immediate revert and, maybe, the\nremoval of some of my less important limbs.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Feb 2020 13:09:48 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Hello.\n\nAt Wed, 5 Feb 2020 13:09:48 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Wed, Feb 5, 2020 at 12:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The core problem that I'm unhappy about is that what you want to do\n> > adds cognitive burden (including an increased risk of mistakes) and\n> > extra debugging commands in common cases around inspecting memory\n> > contexts in a debugger. Sure, it's not a fatal problem, but it's\n> > still a burden. I judge that putting an enum into the struct would\n> > greatly reduce that burden at very small cost.\n> \n> I respect that, but I disagree. I think the annoyance and strangeness\n> factor of the enum is pretty high for third-party code that wants to\n> use this, because deliberately concocting an out-of-range value for an\n> enum is really odd. And I think the gain in debuggability is pretty\n> low, because even if the enum seems to have the expected value, you\n> still don't really know that things are OK unless you check the magic\n> number and the methods field, too. On the other hand, I don't inspect\n> memory contexts in a debugger all that often, and it sounds like you\n> do, or you presumably wouldn't feel so strongly about this.\n\nI agree that \"deliberately concocting an out-of-(enum-)range value\" is\nodd. Regardless of whether the context type is in (or out of) the enum,\nthere's no arbitrator of type numbers for custom allocators. If any,\nit would be like that for LWLock tranches, but differently from\ntranches, an allocator will fill the method field with the *correct*\nvalues and works correctly even with a bogus type number. 
That is,\nsuch an arbitration system or such type ids are merely a useless burden\nto custom allocators, and such concocted (or pseudo-random) type\nnumbers don't contribute to debuggability at all, since they cannot be\ndistinguished from truly bogus numbers on their face.\n\nSince we rarely face custom allocators, I think it's enough that we\nhave enum types for in-core allocators and one \"CUSTOM\". For the\n\"CUSTOM\" allocators, we need to look into cxt->methods->name if we\nneed to identify it, but I don't think that's too much of a bother as\nwe need to do that only when the need arises.\n\n> We might have to see if anybody else has an opinion. I'd rather do it\n> your way than no way, but I feel like it's such a strange design that\n> I'd be afraid to commit it if anyone other than you had suggested it,\n> for fear of having you demand an immediate revert and, maybe, the\n> removal of some of my less important limbs.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Feb 2020 11:46:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-05 09:28:08 -0500, Robert Haas wrote:\n> On Mon, Feb 3, 2020 at 7:26 PM Andres Freund <andres@anarazel.de> wrote:\n> > Hm. I kinda like the idea of still having one NodeTag identifying memory\n> > contexts, and then some additional field identifying the actual\n> > type. Being able to continue to rely on IsA() etc imo is nice. I think\n> > nodes.h itself only would be a problem for frontend code because we put\n> > a lot of other stuff too. We should just separate the actually generic\n> > stuff out. I think it's going to be like 2 seconds once we have memory\n> > contexts until we're e.g. going to want to also have pg_list.h - which\n> > is harder without knowing the tags.\n> \n> The problem with IsA() is that it assumes that you've got all the node\n> tags that can ever exist in one big enum. I don't see how to make that\n> work once you extend the system to work with more than one program. I\n> think it will be really confusing if frontend code starts reusing\n> random backend data structures. Like, fundamental things like List,\n> sure, that should be exposed. But if people start creating Plan or FDW\n> objects in the frontend, it's just going to be chaos. And I don't\n> think we want new objects that people may add for frontend code to be\n> visible to backend code, either.\n\nI wasn't advocating for making plannodes.h etc frontend usable. I think\nthat's a fairly different discussion than making enum NodeTag,\npg_list.h, memutils.h available. 
I don't see them having access to the\nnumerical value of node tag for backend structs as something actually\nproblematic (I'm pretty sure you can do that today already if you really\nwant to - but why would you?).\n\nI don't buy that having a separate magic number for various types that\nwe may want to use both frontend and backend is better than largely just\nhaving one set of such magic type identifiers.\n\n\n> > It seems like a good idea to still have an additional identifier for\n> > each node type, for some cross checking. How about just frobbing the\n> > pointer to the MemoryContextMethod slightly, and storing that in an\n> > additional field? That'd be something fairly unlikely to ever be a false\n> > positive, and it doesn't require dereferencing any additional memory.\n> \n> That would be fine as an internal sanity check, but if Tom is unhappy\n> with the idea of having to try to make sense of a function pointer,\n> he's probably going to be even less happy about trying to make sense\n> of a frobbed pointer. And I would actually agree with him on that\n> point.\n\nI feel the concern about identifying nodes can pretty readily be\naddressed by adding a name to the context methods - something that's\nuseful independently too. It'd e.g. be nice to have some generic print\nroutines for memory context stats.\n\n\n> I think we're all pursuing slightly different goals here. KaiGai's\n> main goal is to make it possible for third-party code to add new kinds\n> of memory contexts. My main goal is to make memory contexts not depend\n> on backend-only infrastructure. Tom is concerned about debuggability.\n> Your concern here is about sanity checking. There's some overlap\n> between those goals but the absolute best thing for any given one of\n> them might be really bad for one of the other ones; hopefully we can\n> find some compromise that gets everybody the things they care about\n> most.\n\nFWIW, I care most about the frontend usable bit too. 
I was bringing up\nthe cross check idea solely as an idea of how to not lose, but if\nanything, improve our error checking, while making things more\nextensible.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Feb 2020 19:09:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On 2020-02-05 10:40:59 -0500, Robert Haas wrote:\n> I'm pretty unimpressed with the enum proposal - I think it's pretty\n> nasty for an extension author to have to make up a value that's not in\n> the enum. One, how are they supposed to know that they should do that?\n> Two, how are they supposed to know that the code doesn't actually\n> depend on that enum value for anything important? And three, how do\n> they know that the compiler isn't going to hose them by assuming that\n> isn't a can't-happen scenario?\n>\n> I mean, I'd rather get a patch committed here than not, but I have a\n> hard time understanding why this is a good way to go.\n\n+1\n\n\n",
"msg_date": "Wed, 5 Feb 2020 19:10:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-05 10:56:42 -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Feb 5, 2020 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> an enum field right in the context. You'll have to do an extra step to\n> >> discover the context's type, and if you jump to the wrong conclusion\n> >> and do, say,\n> >> \tp *(AllocSetContext *) ptr_value\n> >> when it's really some other context type, there won't be anything\n> >> as obvious as \"type = T_GenerationContext\" in what is printed to\n> >> tell you you were wrong.\n> \n> > Doesn't the proposed magic number address this concern?\n> \n> No, because (a) it will be a random magic number that nobody will\n> remember, and gdb won't print in any helpful form; (b) at least\n> as I understood the proposal, there'd be just one magic number for\n> all types of memory context.\n\nI still don't get what reason there is to not use T_MemoryContext as the\nmagic number, instead of something randomly new. It's really not\nproblematic to expose those numerical values. And it means that the\nfirst bit of visual inspection is going to be the same as it always has\nbeen, and the same as it works for most other types one regularly\ninspects in postgres.\n\nWhat about using T_MemoryContext as the identifier that's the same for\nall types of memory contexts and additionally have a new 'const char\n*contexttype' in MemoryContextData, that points to\nMemoryContextMethods.contexttype (which is a char[32] or such). As it's\na char * debuggers will display the value, making it easy to identify\nthe specific type. 
And sure, it's 8 additional bytes instead of 4 - but\nI don't see that being a problem.\n\nAnd because contexttype points into a specific offset in the\nMemoryContextData, we can use that as a crosscheck, by Asserting that\nMemoryContext->methods + offsetof(MemoryContextMethods, contexttype) == MemoryContext->contexttype\n\n\n> Another issue with relying on only a magic number is that if you\n> get confused and do \"p *(AllocSetContext *) ptr_value\" on something\n> that doesn't point at any sort of memory context at all, there will\n> not be *anything* except confusing field values to help you realize\n> that. One of the great advantages of the Node system, IME, is that\n> when you try to print out a Node-subtype struct, the first field\n> in what is printed is either the Node type you were expecting, or\n> some recognizable other Node code, or obvious garbage. If we don't\n> have an enum field in MemoryContexts then there's not going to be\n> any similarly-obvious clue that what you're looking at isn't a memory\n> context in the first place. I'm okay with the enum not being the\n> first field, but much less okay with not having one at all.\n\nI agree that that's good - which is why I think we should simply not\ngive it up here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Feb 2020 19:23:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 10:09 PM Andres Freund <andres@anarazel.de> wrote:\n> I wasn't advocating for making plannodes.h etc frontend usable. I think\n> that's a fairly different discussion than making enum NodeTag,\n> pg_list.h, memutils.h available. I don't see them having access to the\n> numerical value of node tag for backend structs as something actually\n> problematic (I'm pretty sure you can do that today already if you really\n> want to - but why would you?).\n>\n> I don't buy that having a separate magic number for various types that\n> we may want to use both frontend and backend is better than largely just\n> having one set of such magic type identifiers.\n\nTo be honest, and I realize this is probably going to blow your mind\nand/or make you think that I'm completely insane, one concern that I\nhave here is that I have seen multiple people fail to understand that\nthe frontend and backend are, ah, not the same process. And they try\nto write code in frontend environments that makes no sense whatsoever\nthere. The fact that nodes.h could hypothetically be included in\nfrontend code doesn't really contribute to confusion in this area, but\nI'm concerned that including it in every file might, because it means\nthat a whole lot of backend-only stuff suddenly becomes visible in any\ncode that anyone writes anywhere. And as things stand that would have the\neffect of adding #include \"utils/palloc.h\" to \"postgres_fe.h\". Perhaps\nI'm worrying too much.\n\nOn a broader level, I am not convinced that having one \"enum\" to rule\nthem all is a good design. If we go that direction, then it means that\nfrontend code that wants to add its own node types (and why\nshouldn't it want to do that?) would have to have them be visible to\nthe backend and to all other frontend processes. That doesn't seem\nlike a disaster, but I don't think it's great. 
I also don't really\nlike the fact that we have one central registry of node types that has\nto be edited to add more node types, because it means that extensions\nare not free to do so. I know we're some distance from allowing any\nreal extensibility around new node types and perhaps we never will,\nbut on principle a centralized registry sucks, and I'd prefer a more\ndecentralized solution if we could find one that would be acceptable.\nI don't know what that would be, though. Even though I'm not as\ntrenchant about debuggability as you and Tom, having a magic number at\nthe beginning of every type of node in lieu of an enum would drive me\nnuts.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Feb 2020 09:17:59 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 10:23 PM Andres Freund <andres@anarazel.de> wrote:\n> I still don't get what reason there is to not use T_MemoryContext as the\n> magic number, instead of something randomly new. It's really not\n> problematic to expose those numerical values. And it means that the\n> first bit of visual inspection is going to be the same as it always has\n> been, and the same as it works for most other types one regularly\n> inspects in postgres.\n\nWithout trying to say that my thought process is necessarily correct,\nI can explain my thought process.\n\nMany years ago, I had Tom slap down a patch of mine for making\nsomething participate in the Node system unnecessarily. He pointed\nout, then and at other times, that there was no reason for everything\nto be part of the giant enum just because many things need to be. At\nthe time, I was a bit perplexed by his feedback, but over time I've\ncome to see the point. We've got lots of \"enum\" fields all over the\nbackend whose purpose it is to decide whether a particular object is\nof one sub-type or another. We've also got NodeTag, which is that same\nthing at a very large scale. I used to think that the reason why we\nhad jammed everything into NodeTag was just programming laziness or\nsome kind of desire for consistency, but now I think that the real\npoint is that making something a node allows it to participate in\nreadfuncs.c, outfuncs.c, copyfuncs.c, equalfuncs.c; so that a complex\ndata structure made entirely of nodes can be easily copied,\nserialized, etc. The MemoryContext stuff participates in none of that\nmachinery, and it would be difficult to make it do so, and therefore\ndoes not really need to be part of the Node system at all. The fact\nthat it *is* a part of the node system is just a historical accident,\nor so I think. 
Sure, it's not an inconvenient thing to see a NodeTag\non random stuff that you're inspecting with a debugger, but if we took\nthat argument to its logical conclusion we would, I think, end up\nneeding to add node tags to a lot of stuff that doesn't have them now\n- like TupleTableSlots, for example.\n\nAlso, as a general rule, every Node of a given type is expected to\nhave the same structure, which wouldn't be true here, because there\nare multiple types of memory contexts that can exist, and\nT_MemoryContext would identify them all. It's true that there are some\nother weird exceptions, but it doesn't seem like a great idea to\ncreate more.\n\nBetween those concerns, and those I wrote about in my last post, it\nseemed to me that it made more sense to try to break the dependency\nbetween palloc.h and nodes.h rather than to embrace it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Feb 2020 09:31:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Thu, 6 Feb 2020 at 11:09, Andres Freund <andres@anarazel.de> wrote:\n\n> I wasn't advocating for making plannodes.h etc frontend usable. I think\n> that's a fairly different discussion than making enum NodeTag,\n> pg_list.h, memutils.h available. I don't see them having access to the\n> numerical value of node tag for backend structs as something actually\n> problematic (I'm pretty sure you can do that today already if you really\n> want to - but why would you?).\n>\n> I don't buy that having a separate magic number for various types that\n> we may want to use both frontend and backend is better than largely just\n> having one set of such magic type identifiers.\n\nSimply using MemoryContext as the NodeTag seems very sensible based on\nthe above discussion.\n\nBut rather than adding a const char * name to point to some constant\nfor the implementation name as was proposed earlier, I think the\nexisting pointer MemoryContextData->methods is sufficient to identify\nthe context type. We could add a NameData field to\nMemoryContextMethods that the initializer sets to the implementation\nname for convenience.\n\nIt's trivial to see when debugging with a p ctx->methods->name .\nWe keep the MemoryContextData size down and we lose nothing. Though\ngdb is smart enough to annotate a pointer to the symbol\nAllocSetMethods as such when it sees it in a debuginfo build there's\nno harm in having a single static string in the const-data segment per\nmemory context type.\n\nI'd also like to add a\n\n bool (*instanceof)(MemoryContext context, MemoryContextMethods context_type);\n\nto MemoryContextMethods . 
Then replace all use of IsA(ctx,\nAllocSetContext) etc with a test like:\n\n #define Insanceof_AllocSetContext(ctx) \\\n (ctx->methods == AllocSetMethods || ctx->is_a(AllocSetMethods));\n\nIn other words, we ask the target object what it is.\n\nThis would make it possible to create wrapper implementations of\nexisting contexts that do extra memory accounting or other limited\nsorts of extensions. The short-circuit testing for the known concrete\nAllocSetMethods should make it pretty much identical in performance\nterms, which is of course rather important.\n\nThe OO-alike naming is not a coincidence.\n\nI can't help but notice that we're facing some of the same issues\nfaced by early OO patterns. Not too surprising given that Pg uses a\nlot of pseudo-OO - some level of structural inheritance and\nbehavioural inheritance, but no encapsulation, no messaging model, no\ncode-to-data binding. I'm no OO purist, I don't care much so long as\nit works and is consistent.\n\nIn OO terms what we seem to be facing is difficulty with extending\nexisting object types into new subtypes without modifying all the code\nthat knows how to work with the parent types. MemoryContext is one\nexample of this, Node is another. The underlying issue is similar.\n\nBeing able to do this is something I'm much more interested in being\nable to do for plan and parse nodes etc than for MemoryContext tbh,\nbut the same principles apply.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Mon, 10 Feb 2020 12:53:11 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "2020年2月10日(月) 13:53 Craig Ringer <craig@2ndquadrant.com>:\n>\n> On Thu, 6 Feb 2020 at 11:09, Andres Freund <andres@anarazel.de> wrote:\n>\n> > I wasn't advocating for making plannodes.h etc frontend usable. I think\n> > that's a fairly different discussion than making enum NodeTag,\n> > pg_list.h, memutils.h available. I don't see them having access to the\n> > numerical value of node tag for backend structs as something actually\n> > problematic (I'm pretty sure you can do that today already if you really\n> > want to - but why would you?).\n> >\n> > I don't buy that having a separate magic number for various types that\n> > we may want to use both frontend and backend is better than largely just\n> > having one set of such magic type identifiers.\n>\n> Simply using MemoryContext as the NodeTag seems very sensible based on\n> the above discussion.\n>\n> But rather than adding a const char * name to point to some constant\n> for the implementation name as was proposed earlier, I think the\n> existing pointer MemoryContextData->methods is sufficient to identify\n> the context type. We could add a NameData field to\n> MemoryContextMethods that the initializer sets to the implementation\n> name for convenience.\n>\n> It's trivial to see when debugging with a p ctx->methods->name .\n> We keep the MemoryContextData size down and we lose nothing. Though\n> gdb is smart enough to annotate a pointer to the symbol\n> AllocSetMethods as such when it sees it in a debuginfo build there's\n> no harm in having a single static string in the const-data segment per\n> memory context type.\n>\n> I'd also like to add a\n>\n> bool (*instanceof)(MemoryContext context, MemoryContextMethods context_type);\n>\n> to MemoryContextMethods . 
Then replace all use of IsA(ctx,\n> AllocSetContext) etc with a test like:\n>\n> #define Insanceof_AllocSetContext(ctx) \\\n> (ctx->methods == AllocSetMethods || ctx->is_a(AllocSetMethods));\n>\nAllocSetMethods is statically defined at utils/mmgr/aset.c, so this macro\nshall be available only in this source file.\n\nIsn't it sufficient to have the macro below?\n\n#define Insanceof_AllocSetContext(ctx) \\\n (IsA(ctx, MemoryContext) && \\\n strcmp(((MemoryContext)(ctx))->methods->name, \"AllocSetMethods\") == 0)\n\nAs long as an insane extension does not define a different memory context\nwith the same name, it will work.\n\n\n> In other words, we ask the target object what it is.\n>\n> This would make it possible to create wrapper implementations of\n> existing contexts that do extra memory accounting or other limited\n> sorts of extensions. The short-circuit testing for the known concrete\n> AllocSetMethods should make it pretty much identical in performance\n> terms, which is of course rather important.\n>\n> The OO-alike naming is not a coincidence.\n>\n> I can't help but notice that we're facing some of the same issues\n> faced by early OO patterns. Not too surprising given that Pg uses a\n> lot of pseudo-OO - some level of structural inheritance and\n> behavioural inheritance, but no encapsulation, no messaging model, no\n> code-to-data binding. I'm no OO purist, I don't care much so long as\n> it works and is consistent.\n>\n> In OO terms what we seem to be facing is difficulty with extending\n> existing object types into new subtypes without modifying all the code\n> that knows how to work with the parent types. MemoryContext is one\n> example of this, Node is another. 
The underlying issue is similar.\n>\n> Being able to do this is something I'm much more interested in being\n> able to do for plan and parse nodes etc than for MemoryContext tbh,\n> but the same principles apply.\n>\n>\n> --\n> Craig Ringer http://www.2ndQuadrant.com/\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Mon, 10 Feb 2020 22:18:51 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
},
{
"msg_contents": "On Mon, 10 Feb 2020 at 21:19, Kohei KaiGai <kaigai@heterodb.com> wrote:\n>\n> 2020年2月10日(月) 13:53 Craig Ringer <craig@2ndquadrant.com>:\n> >\n> > On Thu, 6 Feb 2020 at 11:09, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > I wasn't advocating for making plannodes.h etc frontend usable. I think\n> > > that's a fairly different discussion than making enum NodeTag,\n> > > pg_list.h, memutils.h available. I don't see them having access to the\n> > > numerical value of node tag for backend structs as something actually\n> > > problematic (I'm pretty sure you can do that today already if you really\n> > > want to - but why would you?).\n> > >\n> > > I don't buy that having a separate magic number for various types that\n> > > we may want to use both frontend and backend is better than largely just\n> > > having one set of such magic type identifiers.\n> >\n> > Simply using MemoryContext as the NodeTag seems very sensible based on\n> > the above discussion.\n> >\n> > But rather than adding a const char * name to point to some constant\n> > for the implementation name as was proposed earlier, I think the\n> > existing pointer MemoryContextData->methods is sufficient to identify\n> > the context type. We could add a NameData field to\n> > MemoryContextMethods that the initializer sets to the implementation\n> > name for convenience.\n> >\n> > It's trivial to see when debugging with a p ctx->methods->name .\n> > We keep the MemoryContextData size down and we lose nothing. Though\n> > gdb is smart enough to annotate a pointer to the symbol\n> > AllocSetMethods as such when it sees it in a debuginfo build there's\n> > no harm in having a single static string in the const-data segment per\n> > memory context type.\n> >\n> > I'd also like to add a\n> >\n> > bool (*instanceof)(MemoryContext context, MemoryContextMethods context_type);\n> >\n> > to MemoryContextMethods . 
Then replace all use of IsA(ctx,\n> > AllocSetContext) etc with a test like:\n> >\n> > #define Insanceof_AllocSetContext(ctx) \\\n> > (ctx->methods == AllocSetMethods || ctx->is_a(AllocSetMethods));\n> >\n> AllocSetMethods is statically defined at utils/mmgr/aset.c, so this macro\n> shall be available only in this source file.\n>\n> Isn't it sufficient to have the macro below?\n>\n> #define Insanceof_AllocSetContext(ctx) \\\n> (IsA(ctx, MemoryContext) && \\\n> strcmp(((MemoryContext)(ctx))->methods->name, \"AllocSetMethods\") == 0)\n>\n> As long as an insane extension does not define a different memory context\n> with the same name, it will work.\n\nThat wouldn't allow for the sort of extensibility I suggested for\nwrapping objects, which is why I thought we might as well ask the\nobject itself. It's not exactly a new, weird or unusual pattern. A\npointer to AllocSetMethods would need to be made non-static if we\nwanted to allow a macro or static inline to avoid the function call\nand test it for equality, but that's not IMO a big problem. Or if it\nis, well, there's always whole-program optimisation...\n\nAlso, isn't strcmp() kinda expensive compared to a simple pointer\nvalue compare anyway? I admit I'm terribly clueless about modern\nmicroarchitectures, so I may be very wrong.\n\nAll I'm saying is that if we're changing this, lets learn from what\nothers have done when writing interfaces and inheritance-type\npatterns.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Mon, 10 Feb 2020 21:47:29 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Is custom MemoryContext prohibited?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI was reviewing the libpq code for the recent SSL protocol patch, and\nnoticed two mistakes with dispsize for the following parameters:\n- channel_binding should be at 8, the largest value being \"require\".\n- gssencmode should be at 8.\n\nIn those cases the zero-terminator was forgotten in the count. A\nsimilar mistake was done in the past for sslmode that was fixed by\nf4051e36. It is unlikely that dispsize is being used, but we cannot\nbreak that on compatibility grounds, and the current numbers are\nincorrect so let's fix it.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 28 Jan 2020 14:36:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Some incorrect option sizes for PQconninfoOption in libpq"
},
{
"msg_contents": "> On 28 Jan 2020, at 06:36, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I was reviewing the libpq code for the recent SSL protocol patch, and\n> noticed two mistakes with dispsize for the following parameters:\n> - channel_binding should be at 8, the largest value being \"require\".\n> - gssencmode should be at 8.\n> \n> In those cases the zero-terminator was forgotten in the count. A\n> similar mistake was done in the past for sslmode that was fixed by\n> f4051e36. It is unlikely that dispsize is being used, but we cannot\n> break that on compatibility grounds, and the current numbers are\n> incorrect so let's fix it.\n> \n> Thoughts?\n\nNice catch! +1 on the attached patch.\n\ncheers ./daniel\n\n\n",
"msg_date": "Tue, 28 Jan 2020 11:57:19 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Some incorrect option sizes for PQconninfoOption in libpq"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 11:57:19AM +0100, Daniel Gustafsson wrote:\n> Nice catch! +1 on the attached patch.\n\nThanks, fixed and backpatched down to 12, where gssencmode has been\nadded.\n--\nMichael",
"msg_date": "Wed, 29 Jan 2020 15:10:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some incorrect option sizes for PQconninfoOption in libpq"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs mentioned on the thread dealing with concurrent indexing for\ntemporary relations, it sounds like a good thing to make the code more\ndefensive if attempting to do a REINDEX CONCURRENTLY with a backend\nstill holding references to the indexes worked on:\nhttps://www.postgresql.org/message-id/20191212213709.neopqccvdo724eha@alap3.anarazel.de\n\nOne thing is that as REINDEX CONCURRENTLY cannot be used in\ntransaction blocks, this cannot be triggered in the context of a\nfunction call, say with this patch:\n+ERROR: REINDEX CONCURRENTLY cannot be executed from a function\n+CONTEXT: SQL statement \"REINDEX INDEX CONCURRENTLY reindex_ind_ref\"\n+PL/pgSQL function reindex_func_ref() line 3 at EXECUTE\n\nAttached is a patch to do that. I am not sure if the test added in\nthe patch has much additional value, but feel free to look at it. Any\nthoughts?\n--\nMichael",
"msg_date": "Tue, 28 Jan 2020 15:26:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Adding one CheckTableNotInUse() for REINDEX CONCURRENTLY"
}
] |
[
{
"msg_contents": "Hello.\n\nWhile rebasing a patch, I found that after the commit 38a957316d\n(Sorry for overlooking that.), ReadRecord sets randAccess reverse\nway. That is, it sets randAccess to false just after a XLogBeginRead()\ncall. The attached fixes that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 28 Jan 2020 19:44:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "ReadRecord wrongly assumes randAccess after 38a957316d."
},
{
"msg_contents": "On 28/01/2020 12:44, Kyotaro Horiguchi wrote:\n> While rebasing a patch, I found that after the commit 38a957316d\n> (Sorry for overlooking that.), ReadRecord sets randAccess reverse\n> way. That is, it sets randAccess to false just after a XLogBeginRead()\n> call. The attached fixes that.\n\nThanks, applied!\n\n- Heikki\n\n\n",
"msg_date": "Tue, 28 Jan 2020 13:12:05 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecord wrongly assumes randAccess after 38a957316d."
},
{
"msg_contents": "At Tue, 28 Jan 2020 13:12:05 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> On 28/01/2020 12:44, Kyotaro Horiguchi wrote:\n> > While rebasing a patch, I found that after the commit 38a957316d\n> > (Sorry for overlooking that.), ReadRecord sets randAccess reverse\n> > way. That is, it sets randAccess to false just after a XLogBeginRead()\n> > call. The attached fixes that.\n> \n> Thanks, applied!\n\nThanks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Jan 2020 21:03:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecord wrongly assumes randAccess after 38a957316d."
}
] |
[
{
"msg_contents": "I made these casual comments. If there's any agreement on their merit, it'd be\nnice to implement at least the first for v13.\n\nIn <20190818193533.GL11185@telsasoft.com>, I wrote: \n> . What do you think about pg_restore --no-tableam; similar to \n> --no-tablespaces, it would allow restoring a table to a different AM:\n> PGOPTIONS='-c default_table_access_method=zedstore' pg_restore --no-tableam ./pg_dump.dat -d postgres\n> Otherwise, the dump says \"SET default_table_access_method=heap\", which\n> overrides any value from PGOPTIONS and precludes restoring to new AM.\n\nThat appears to be a trivial variation on no-tablespace:\n\n /* do nothing in --no-tablespaces mode */\n if (ropt->noTablespace)\n return;\n\n> . it'd be nice if there was an ALTER TABLE SET ACCESS METHOD, to allow\n> migrating data. Otherwise I think the alternative is:\n> \tbegin; lock t;\n> \tCREATE TABLE new_t LIKE (t INCLUDING ALL EXCLUDING INDEXES) USING (zedstore);\n> \tINSERT INTO new_t SELECT * FROM t;\n> \tfor index; do CREATE INDEX...; done\n> \tDROP t; RENAME new_t (and all its indices). attach/inherit, etc.\n> \tcommit;\n\nIdeally that would allow all at once various combinations of altering\ntablespace, changing AM, clustering, and reindexing, like what's discussed\nhere:\nhttps://www.postgresql.org/message-id/flat/8a8f5f73-00d3-55f8-7583-1375ca8f6a91@postgrespro.ru\n\n> . Speaking of which, I think LIKE needs a new option for ACCESS METHOD, which\n> is otherwise lost.\n\n\n",
"msg_date": "Tue, 28 Jan 2020 07:33:17 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "tableam options for pg_dump/ALTER/LIKE"
},
{
"msg_contents": "I first suggested this a couple years ago.\nIs it desirable to implement in pg_dump and pg_restore ?\nIt'd be just like --tablespace.\n\nOn Tue, Jan 28, 2020 at 07:33:17AM -0600, Justin Pryzby wrote:\n> I made these casual comments. If there's any agreement on their merit, it'd be\n> nice to implement at least the first for v13.\n> \n> In <20190818193533.GL11185@telsasoft.com>, I wrote: \n> > . What do you think about pg_restore --no-tableam; similar to \n> > --no-tablespaces, it would allow restoring a table to a different AM:\n> > PGOPTIONS='-c default_table_access_method=zedstore' pg_restore --no-tableam ./pg_dump.dat -d postgres\n> > Otherwise, the dump says \"SET default_table_access_method=heap\", which\n> > overrides any value from PGOPTIONS and precludes restoring to new AM.\n> \n> That appears to be a trivial variation on no-tablespace:\n> \n> /* do nothing in --no-tablespaces mode */\n> if (ropt->noTablespace)\n> return;\n...\n\n\n",
"msg_date": "Tue, 7 Dec 2021 09:39:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg_dump/restore --no-tableam"
},
{
"msg_contents": "I forgot but had actually implemented this 6 months ago.",
"msg_date": "Thu, 9 Dec 2021 17:26:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump/restore --no-table-am"
},
{
"msg_contents": "@cfbot: rebased",
"msg_date": "Mon, 3 Jan 2022 15:44:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump/restore --no-tableam"
},
{
"msg_contents": "On Mon, Jan 03, 2022 at 03:44:24PM -0600, Justin Pryzby wrote:\n> @cfbot: rebased\n\nHmm. This could be useful to provide more control in some logical\nreload scenarios, so I'd agree to provide this switch. I'll look at\nthe patch later..\n--\nMichael",
"msg_date": "Tue, 4 Jan 2022 21:01:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/restore --no-tableam"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-03 15:44:24 -0600, Justin Pryzby wrote:\n> @cfbot: rebased\n\n> From 69ae2ed5d00a97d351e1f6c45a9e406f33032898 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 7 Mar 2021 19:35:37 -0600\n> Subject: [PATCH] Add pg_dump/restore --no-table-am..\n> \n> This was for some reason omitted from 3b925e905.\n\nSeems the docs changes aren't quite right?\n\nhttps://cirrus-ci.com/task/5864769860141056?logs=docs_build#L344\n\n[02:43:01.356] ref/pg_dump.sgml:1162: parser error : Opening and ending tag mismatch: varlistentry line 934 and variablelist\n[02:43:01.356] </variablelist>\n[02:43:01.356] ^\n....\n\n> + <varlistentry>\n> + <varlistentry>\n\nYup...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Jan 2022 18:48:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/restore --no-tableam"
},
{
"msg_contents": "On Mon, Jan 03, 2022 at 03:44:24PM -0600, Justin Pryzby wrote:\n> + <varlistentry>\n> + <varlistentry>\n> + <term><option>--no-table-am</option></term>\n> + <listitem>\n> + <para>\n> + Do not output commands to select table access methods.\n> + With this option, all objects will be created with whichever\n> + table access method is the default during restore.\n> + </para>\n\nHmm. --no-table-am may not be the best choice. Should this be called\n--no-table-access-method instead?\n\n> -\tno_toast_compression => {\n> -\t\tdump_cmd => [\n> -\t\t\t'pg_dump', '--no-sync',\n> -\t\t\t\"--file=$tempdir/no_toast_compression.sql\",\n> -\t\t\t'--no-toast-compression', 'postgres',\n> -\t\t],\n> -\t},\n\nWhy is this command moved down?\n--\nMichael",
"msg_date": "Tue, 11 Jan 2022 16:50:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/restore --no-tableam"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 04:50:23PM +0900, Michael Paquier wrote:\n> On Mon, Jan 03, 2022 at 03:44:24PM -0600, Justin Pryzby wrote:\n> > + <varlistentry>\n> > + <varlistentry>\n> > + <term><option>--no-table-am</option></term>\n> > + <listitem>\n> > + <para>\n> > + Do not output commands to select table access methods.\n> > + With this option, all objects will be created with whichever\n> > + table access method is the default during restore.\n> > + </para>\n> \n> Hmm. --no-table-am may not be the best choice. Should this be called\n> --no-table-access-method instead?\n\nI suppose you're right - I had previously renamed it from no-tableam.\n\n> > -\tno_toast_compression => {\n> > -\t\tdump_cmd => [\n> > -\t\t\t'pg_dump', '--no-sync',\n> > -\t\t\t\"--file=$tempdir/no_toast_compression.sql\",\n> > -\t\t\t'--no-toast-compression', 'postgres',\n> > -\t\t],\n> > -\t},\n> \n> Why is this command moved down?\n\nBecause it looks like this is intended to be mostly alphabetical, but that\nwasn't preserved by 63db0ac3f. It's most apparent in \"my %full_runs\".\n\nThe same could be said of no-privs, defaults_custom_format, pg_dumpall_globals,\nsection_data, but they've been that way forever.\n\n-- \nJustin",
"msg_date": "Tue, 11 Jan 2022 22:09:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump/restore --no-tableam"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 10:09:07PM -0600, Justin Pryzby wrote:\n> I suppose you're right - I had previously renamed it from no-tableam.\n\nThanks for the new version. I have noticed that support for the\noption with pg_dumpall was missing, but that looks useful to me like\nthe other switches.\n\n> Because it looks like this is intended to be mostly alphabetical, but that\n> wasn't preserved by 63db0ac3f. It's most apparent in \"my %full_runs\".\n\nSure. Now I am not sure that this is worth poking at if we don't\nchange the back-branches, as this could cause conflicts. So I have\nleft this change out at the end.\n\nAnd, done.\n--\nMichael",
"msg_date": "Mon, 17 Jan 2022 14:55:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/restore --no-tableam"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 02:55:58PM +0900, Michael Paquier wrote:\n> On Tue, Jan 11, 2022 at 10:09:07PM -0600, Justin Pryzby wrote:\n> > I suppose you're right - I had previously renamed it from no-tableam.\n> \n> Thanks for the new version. I have noticed that support for the\n> option with pg_dumpall was missing, but that looks useful to me like\n> the other switches.\n\nI saw that you added it to pg_dumpall. But there's a typo in --help:\n\ndiff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c\nindex 1cab0dfdc75..94852e7cdbb 100644\n--- a/src/bin/pg_dump/pg_dumpall.c\n+++ b/src/bin/pg_dump/pg_dumpall.c\n@@ -655,3 +655,3 @@ help(void)\n \tprintf(_(\" --no-sync do not wait for changes to be written safely to disk\\n\"));\n-\tprintf(_(\" --no-tables-access-method do not dump table access methods\\n\"));\n+\tprintf(_(\" --no-table-access-method do not dump table access methods\\n\"));\n \tprintf(_(\" --no-tablespaces do not dump tablespace assignments\\n\"));\n\nFeel free to leave it for now, and I'll add it to my typos branch.\n\n> And, done.\n\nThanks!\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 17 Jan 2022 00:20:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump/restore --no-tableam"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 12:20:07AM -0600, Justin Pryzby wrote:\n> I saw that you added it to pg_dumpall. But there's a typo in --help:\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Mon, 17 Jan 2022 16:05:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/restore --no-tableam"
}
] |
[
{
"msg_contents": "Hi!\n\nPostgreSQL does not accept the following standard conforming statement:\n\n VALUES ROW(1,2), ROW(3,4)\n\nThere is a comment about this in the source code [0]:\n\n/*\n* We should allow ROW '(' expr_list ')' too, but that seems to require\n* making VALUES a fully reserved word, which will probably break more apps\n* than allowing the noise-word is worth.\n*/\n\nThe latest release of MySQL (8.0.19) introduced table value constructors (VALUES), but **requires** the keyword ROW [1]. Of the 9 systems I tested, only MySQL and H2 accept ROW in VALUES [2].\n\nIs it worth re-visiting this decision in order to improve standard conformance and MySQL (and H2) compability?\n\n-markus\n\nRefs:\n[0] src/backend/parser/gram.y - https://github.com/postgres/postgres/blob/30012a04a6c8127397a8ab71e160d9c7e7fbe874/src/backend/parser/gram.y#L11893\n[1] https://dev.mysql.com/doc/refman/8.0/en/values.html\n[2] Not supporting it: Db2 11.5, MariaDB 10.4, Oracle 19c, SQL Server 2019, SQLite 3.30.0, Derby 10.15.1.3.\n Some results published here: https://modern-sql.com/feature/values#compatibility\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 21:32:20 +0100",
"msg_from": "Markus Winand <markus.winand@winand.at>",
"msg_from_op": true,
"msg_subject": "VALUES ROW(...)"
},
{
"msg_contents": "Markus Winand <markus.winand@winand.at> writes:\n> PostgreSQL does not accept the following standard conforming statement:\n> VALUES ROW(1,2), ROW(3,4)\n> There is a comment about this in the source code [0]:\n\n> /*\n> * We should allow ROW '(' expr_list ')' too, but that seems to require\n> * making VALUES a fully reserved word, which will probably break more apps\n> * than allowing the noise-word is worth.\n> */\n\n> The latest release of MySQL (8.0.19) introduced table value constructors (VALUES), but **requires** the keyword ROW [1]. Of the 9 systems I tested, only MySQL and H2 accept ROW in VALUES [2].\n\n> Is it worth re-visiting this decision in order to improve standard conformance and MySQL (and H2) compability?\n\nI'd still say that making VALUES fully reserved is a bridge too far:\nyou will make many more people unhappy from that than you make happy\nbecause we can now spell this more than one way. (I'm somewhat\nguessing that some people are using \"values\" as a column or table name,\nbut I doubt that that's an unreasonable guess.)\n\nIf you want to see some movement on this, look into whether we can\nfind a way to allow it without that. I don't recall exactly what\nthe stumbling block is there, but maybe there's a way around it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 16:00:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: VALUES ROW(...)"
}
] |
[
{
"msg_contents": "\r\n> \r\n> I could do some tests with the patch on some larger machines. What exact\r\n> tests do you propose? Are there some specific postgresql.conf settings and\r\n> pgbench initialization you recommend for this? And was the test above just\r\n> running 'pgbench -S' select-only with specific -T, -j and -c parameters?\r\n> \r\n\r\nWith Andres' instructions I ran a couple of tests. With your patches I can reproduce a speedup of ~3% on single core tests reliably on a dual-socket 36-core machine for the pgbench select-only test case. When using the full scale test my results are way too noisy even for large runs unfortunately. I also tried some other queries (for example select's that return 10 or 100 rows instead of just 1), but can't see much of a speed-up there either, although it also doesn't hurt.\r\n\r\nSo I guess the most noticeable one is the select-only benchmark for 1 core:\r\n\r\n<Master>\r\ntransaction type: <builtin: select only>\r\nscaling factor: 300\r\nquery mode: prepared\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 600 s\r\nnumber of transactions actually processed: 30255419\r\nlatency average = 0.020 ms\r\nlatency stddev = 0.001 ms\r\ntps = 50425.693234 (including connections establishing)\r\ntps = 50425.841532 (excluding connections establishing)\r\n\r\n<Patched>\r\ntransaction type: <builtin: select only>\r\nscaling factor: 300\r\nquery mode: prepared\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 600 s\r\nnumber of transactions actually processed: 31363398\r\nlatency average = 0.019 ms\r\nlatency stddev = 0.001 ms\r\ntps = 52272.326597 (including connections establishing)\r\ntps = 52272.476380 (excluding connections establishing)\r\n\r\nThis is the one with 40 clients, 40 threads. 
Not really an improvement, and quite still quite noisy.\r\n<Master>\r\ntransaction type: <builtin: select only>\r\nscaling factor: 300\r\nquery mode: prepared\r\nnumber of clients: 40\r\nnumber of threads: 40\r\nduration: 600 s\r\nnumber of transactions actually processed: 876846915\r\nlatency average = 0.027 ms\r\nlatency stddev = 0.015 ms\r\ntps = 1461407.539610 (including connections establishing)\r\ntps = 1461422.084486 (excluding connections establishing)\r\n\r\n<Patched>\r\ntransaction type: <builtin: select only>\r\nscaling factor: 300\r\nquery mode: prepared\r\nnumber of clients: 40\r\nnumber of threads: 40\r\nduration: 600 s\r\nnumber of transactions actually processed: 872633979\r\nlatency average = 0.027 ms\r\nlatency stddev = 0.038 ms\r\ntps = 1454387.326179 (including connections establishing)\r\ntps = 1454396.879195 (excluding connections establishing)\r\n\r\nFor tests that don't use the full machine (eg. 10 clients, 10 threads) I see speed-ups as well, but not as high as the single-core run. It seems there are other bottlenecks (on the machine) coming into play.\r\n\r\n-Floris\r\n\r\n",
"msg_date": "Tue, 28 Jan 2020 21:34:34 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 1:34 PM Floris Van Nee <florisvannee@optiver.com> wrote:\n> With Andres' instructions I ran a couple of tests. With your patches I can reproduce a speedup of ~3% on single core tests reliably on a dual-socket 36-core machine for the pgbench select-only test case.\n\nThanks for testing!\n\nAttached is v2 of patch series, which makes the changes to 0001-*\nrequested by Andres. I restructured the loop in a way that allows the\ncompiler to assume that there will always be at least one loop\niteration -- so I'm not quite as aggressive as I was with v1. We don't\nactually delay the call to BTreeTupleGetNAtts() as such in v2.\n\nCan you test this version, Floris? The second two patches are probably\nnot helping here, so it would be useful if you could just test 0001-*,\nand then test all three together. I can toss the latter two patches if\nthere is no additional speedup.\n\nIf we're lucky, then Andres will have been right to suspect that there\nmight be a smaller stall caused by the new branch in the loop that\nexisted in v1. Maybe this will show up at higher client counts.\n\nI should point out that the deduplication patch changes the definition\nof BTreeTupleGetNAtts(), making it slightly more complicated. With the\ndeduplication patch, we have to check that the tuple isn't a posting\nlist tuple, which uses the INDEX_ALT_TID_MASK/INDEX_AM_RESERVED_BIT\nbit to indicate a non-standard tuple header format, just like the\ncurrent pivot tuple format (we need to check a BT_RESERVED_OFFSET_MASK\nbit to further differentiate posting list tuples from pivot tuples).\nThis increase in the complexity of BTreeTupleGetNAtts() will probably\nfurther tip things in favor of this patch.\n\nThere are no changes in either 0002-* or 0003-* patches for v2. I'm\nincluding the same patches here a second time for completeness.\n\n--\nPeter Geoghegan",
"msg_date": "Tue, 28 Jan 2020 14:55:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "\r\n> \r\n> Can you test this version, Floris? The second two patches are probably not\r\n> helping here, so it would be useful if you could just test 0001-*, and then test\r\n> all three together. I can toss the latter two patches if there is no additional\r\n> speedup.\r\n> \r\n\r\nHere's the results for runs with respectively 1 client, 9 clients and 30 clients on master, v2-0001, v2-0001+0002+0003 and for completeness also the previous v1 version of the patches.\r\nI ran the tests for 45 minutes each this time which seems to give more stable results.\r\nI'd say applying just v2-0001 is actually slightly hurtful for single-core performance. Applying all of them gives a good improvement though. It looks like the performance improvement is also more noticeable at higher core counts now.\r\n\r\n<master>\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 2700 s\r\nnumber of transactions actually processed: 139314796\r\nlatency average = 0.019 ms\r\nlatency stddev = 0.001 ms\r\ntps = 51598.071835 (including connections establishing)\r\ntps = 51598.098715 (excluding connections establishing)\r\n\r\n<v2-0001>\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 2700 s\r\nnumber of transactions actually processed: 137257492\r\nlatency average = 0.020 ms\r\nlatency stddev = 0.001 ms\r\ntps = 50836.107076 (including connections establishing)\r\ntps = 50836.133137 (excluding connections establishing)\r\n\r\n<v2-0001+2+3>\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 2700 s\r\nnumber of transactions actually processed: 141721881\r\nlatency average = 0.019 ms\r\nlatency stddev = 0.001 ms\r\ntps = 52489.584928 (including connections establishing)\r\ntps = 52489.611373 (excluding connections establishing)\r\n\r\n<v1-0001+2+3>\r\nnumber of clients: 1\r\nnumber of threads: 1\r\nduration: 2700 s\r\nnumber of transactions actually processed: 141663780\r\nlatency average = 0.019 ms\r\nlatency stddev = 0.001 ms\r\ntps = 52468.065549 
(including connections establishing)\r\ntps = 52468.093018 (excluding connections establishing)\r\n\r\n\r\n\r\n<master>\r\nnumber of clients: 9\r\nnumber of threads: 9\r\nduration: 2700 s\r\nnumber of transactions actually processed: 1197242115\r\nlatency average = 0.020 ms\r\nlatency stddev = 0.001 ms\r\ntps = 443422.987601 (including connections establishing)\r\ntps = 443423.306495 (excluding connections establishing)\r\n\r\n<v2-0001>\r\nnumber of clients: 9\r\nnumber of threads: 9\r\nduration: 2700 s\r\nnumber of transactions actually processed: 1187890004\r\nlatency average = 0.020 ms\r\nlatency stddev = 0.002 ms\r\ntps = 439959.241392 (including connections establishing)\r\ntps = 439959.588125 (excluding connections establishing)\r\n\r\n<v2-0001+2+3>\r\nnumber of clients: 9\r\nnumber of threads: 9\r\nduration: 2700 s\r\nnumber of transactions actually processed: 1203412941\r\nlatency average = 0.020 ms\r\nlatency stddev = 0.002 ms\r\ntps = 445708.478915 (including connections establishing)\r\ntps = 445708.798583 (excluding connections establishing)\r\n\r\n<v1-0001+2+3>\r\nnumber of clients: 9\r\nnumber of threads: 9\r\nduration: 2700 s\r\nnumber of transactions actually processed: 1195359533\r\nlatency average = 0.020 ms\r\nlatency stddev = 0.001 ms\r\ntps = 442725.734267 (including connections establishing)\r\ntps = 442726.052676 (excluding connections establishing)\r\n\r\n\r\n<master>\r\nnumber of clients: 30\r\nnumber of threads: 30\r\nduration: 2700 s\r\nnumber of transactions actually processed: 2617037081\r\nlatency average = 0.031 ms\r\nlatency stddev = 0.011 ms\r\ntps = 969272.811990 (including connections establishing)\r\ntps = 969273.960316 (excluding connections establishing)\r\n\r\n<v2-0001>\r\nnumber of clients: 30\r\nnumber of threads: 30\r\nduration: 2700 s\r\nnumber of transactions actually processed: 2736881585\r\nlatency average = 0.029 ms\r\nlatency stddev = 0.011 ms\r\ntps = 1013659.581348 (including connections establishing)\r\ntps = 
1013660.819277 (excluding connections establishing)\r\n\r\n<v2-0001+2+3>\r\nnumber of clients: 30\r\nnumber of threads: 30\r\nduration: 2700 s\r\nnumber of transactions actually processed: 2844199686\r\nlatency average = 0.028 ms\r\nlatency stddev = 0.011 ms\r\ntps = 1053407.074721 (including connections establishing)\r\ntps = 1053408.220093 (excluding connections establishing)\r\n\r\n<v1-0001+2+3>\r\nnumber of clients: 30\r\nnumber of threads: 30\r\nduration: 2700 s\r\nnumber of transactions actually processed: 2693765822\r\nlatency average = 0.030 ms\r\nlatency stddev = 0.011 ms\r\ntps = 997690.883117 (including connections establishing)\r\ntps = 997692.051005 (excluding connections establishing)\r\n\r\n",
"msg_date": "Thu, 30 Jan 2020 09:19:19 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 1:19 AM Floris Van Nee <florisvannee@optiver.com> wrote:\n> I'd say applying just v2-0001 is actually slightly hurtful for single-core performance. Applying all of them gives a good improvement though. It looks like the performance improvement is also more noticeable at higher core counts now.\n\nMany thanks for testing once again!\n\nYour tests show that the overall winner is \"<v2-0001+2+3>\", which is\nstrictly better than all other configurations you tested -- it is at\nleast slightly better than every other configuration at every client\ncount tested. I was particularly pleased to see that \"<v2-0001+2+3>\"\nis ~8.6% faster than the master branch with 30 clients! That result\ngreatly exceeded my expectations.\n\nI have been able to independently confirm that you really need the\nfirst two patches together to see the benefits -- that wasn't clear\nuntil recently.\n\nThe interesting thing now is the role of the \"negative infinity test\"\npatch (the 0003-* patch) in all of this. I suspect that it may not be\nhelping us much here. I wonder, could you test the following\nconfigurations to settle this question?\n\n* <master> with 30 clients (i.e. repeat the test that you reported on\nmost recently)\n\n* <v2-0001+2+3> with 30 clients (i.e. repeat the test that you\nreported got us that nice ~8.6% increase in TPS)\n\n* <v2-0001+2> with 30 clients -- a new test, to see if performance is\nat all helped by the \"negative infinity test\" patch (the 0003-*\npatch).\n\nIt seems like a good idea to repeat the other two tests as part of\nperforming this third test, out of general paranoia. Intel seem to\nroll out a microcode update for a spectre-like security issue about\nevery other day.\n\nThanks again\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 4 Feb 2020 16:00:22 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "> \r\n> The interesting thing now is the role of the \"negative infinity test\"\r\n> patch (the 0003-* patch) in all of this. I suspect that it may not be helping us\r\n> much here. I wonder, could you test the following configurations to settle\r\n> this question?\r\n> \r\n> * <master> with 30 clients (i.e. repeat the test that you reported on most\r\n> recently)\r\n> \r\n> * <v2-0001+2+3> with 30 clients (i.e. repeat the test that you reported got us\r\n> that nice ~8.6% increase in TPS)\r\n> \r\n> * <v2-0001+2> with 30 clients -- a new test, to see if performance is at all\r\n> helped by the \"negative infinity test\" patch (the 0003-* patch).\r\n> \r\n> It seems like a good idea to repeat the other two tests as part of performing\r\n> this third test, out of general paranoia. Intel seem to roll out a microcode\r\n> update for a spectre-like security issue about every other day.\r\n> \r\n\r\nI ran all the tests on two different machines, several times for 1 hour each time. I'm still having a hard time getting reliable results for the 30 clients case though. I'm pretty certain the patches bring a performance benefit, but how high exactly is difficult to say. As for applying only patch 1+2 or all three patches - I found no significant difference between these two cases. It looks like all the performance benefit is in the first two patches.\r\n\r\n-Floris\r\n\r\n",
"msg_date": "Mon, 10 Feb 2020 09:05:36 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Mon, Feb 10, 2020 at 10:05 PM Floris Van Nee\n<florisvannee@optiver.com> wrote:\n> > The interesting thing now is the role of the \"negative infinity test\"\n> > patch (the 0003-* patch) in all of this. I suspect that it may not be helping us\n> > much here. I wonder, could you test the following configurations to settle\n> > this question?\n> >\n> > * <master> with 30 clients (i.e. repeat the test that you reported on most\n> > recently)\n> >\n> > * <v2-0001+2+3> with 30 clients (i.e. repeat the test that you reported got us\n> > that nice ~8.6% increase in TPS)\n> >\n> > * <v2-0001+2> with 30 clients -- a new test, to see if performance is at all\n> > helped by the \"negative infinity test\" patch (the 0003-* patch).\n> >\n> > It seems like a good idea to repeat the other two tests as part of performing\n> > this third test, out of general paranoia. Intel seem to roll out a microcode\n> > update for a spectre-like security issue about every other day.\n> >\n>\n> I ran all the tests on two different machines, several times for 1 hour each time. I'm still having a hard time getting reliable results for the 30 clients case though. I'm pretty certain the patches bring a performance benefit, but how high exactly is difficult to say. As for applying only patch 1+2 or all three patches - I found no significant difference between these two cases. 
It looks like all the performance benefit is in the first two patches.\n\nThe cfbot seems to be showing \"pg_regress: initdb failed\" on Ubuntu,\nwith an assertion failure like this:\n\n#2 0x00000000008e594f in ExceptionalCondition\n(conditionName=conditionName@entry=0x949098 \"BTreeTupleGetNAtts(itup,\nrel) >= key->keysz\", errorType=errorType@entry=0x938a7d\n\"FailedAssertion\", fileName=fileName@entry=0x949292 \"nbtsearch.c\",\nlineNumber=lineNumber@entry=620) at assert.c:67\nNo locals.\n#3 0x00000000004fdbaa in _bt_compare_inl (offnum=3,\npage=0x7ff7904bdf00 \"\", key=0x7ffde7c9bfa0, rel=0x7ff7a2325c20) at\nnbtsearch.c:620\n itup = 0x7ff7904bfec8\n heapTid = <optimized out>\n ski = <optimized out>\n itupdesc = 0x7ff7a2325f50\n scankey = <optimized out>\n ntupatts = 0\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/651843143\n\nIt's passing on Windows though, so perhaps there is something\nuninitialised or otherwise unstable in the patch?\n\n\n",
"msg_date": "Wed, 19 Feb 2020 09:54:51 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 12:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The cfbot seems to be showing \"pg_regress: initdb failed\" on Ubuntu,\n> with an assertion failure like this:\n>\n> #2 0x00000000008e594f in ExceptionalCondition\n> (conditionName=conditionName@entry=0x949098 \"BTreeTupleGetNAtts(itup,\n> rel) >= key->keysz\", errorType=errorType@entry=0x938a7d\n> \"FailedAssertion\", fileName=fileName@entry=0x949292 \"nbtsearch.c\",\n\nThis is a legitimate bug in v1 of the patch, which was written in a\nhurry. v2 does not have the problem. Floris inadvertently created a\nseparate thread for this same patch, which I responded to when posting\nv2. I've now updated the CF entry for this patch [1] to have both\nthreads.\n\nBTW, I've noticed that CF Tester is wonky with patches that have\nmultiple threads with at least one patch file posted to each thread.\nThe deduplication patch [2] has this problem, for example. It would be\nnice if CF Tester knew to prefer one thread over another based on a\nsimple rule, like \"consistently look for patch files on the first\nthread connected to a CF app entry, never any other thread\".\n\nMaybe you'd rather not go that way -- I guess that it would break\nother cases, such as the CF app entry for this patch (which now\ntechnically has one thread that supersedes the other). Perhaps a\ncompromise is possible. At a minimum, CF Tester should not look for a\npatch on the (say) second thread of a CF app entry for a patch just\nbecause somebody posted an e-mail to that thread (an e-mail that did\nnot contain a new patch). CF Tester will do this even though there is\na more recent patch on the first thread of the CF app entry, that has\nalready been accepted as passing by CFTester. 
I believe that CF Tester\nwill actually pingpong back and forth between the same two patches on\neach thread as e-mail is sent to each thread, without anybody ever\nposting a new patch.\n\nThanks\n\n[1] https://commitfest.postgresql.org/27/2429/#\n[2] https://commitfest.postgresql.org/27/2202/\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 Feb 2020 16:35:38 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 1:35 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Feb 18, 2020 at 12:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > The cfbot seems to be showing \"pg_regress: initdb failed\" on Ubuntu,\n> > with an assertion failure like this:\n> >\n> > #2 0x00000000008e594f in ExceptionalCondition\n> > (conditionName=conditionName@entry=0x949098 \"BTreeTupleGetNAtts(itup,\n> > rel) >= key->keysz\", errorType=errorType@entry=0x938a7d\n> > \"FailedAssertion\", fileName=fileName@entry=0x949292 \"nbtsearch.c\",\n>\n> This is a legitimate bug in v1 of the patch, which was written in a\n> hurry. v2 does not have the problem. Floris inadvertently created a\n> separate thread for this same patch, which I responded to when posting\n> v2. I've now updated the CF entry for this patch [1] to have both\n> threads.\n>\n> BTW, I've noticed that CF Tester is wonky with patches that have\n> multiple threads with at least one patch file posted to each thread.\n> The deduplication patch [2] has this problem, for example. It would be\n> nice if CF Tester knew to prefer one thread over another based on a\n> simple rule, like \"consistently look for patch files on the first\n> thread connected to a CF app entry, never any other thread\".\n\nAhh. Well I had that rule early on, and then had the problem that\nsome discussions move entirely to a second or third thread and left it\ntesting some ancient stale patch.\n\n> Maybe you'd rather not go that way -- I guess that it would break\n> other cases, such as the CF app entry for this patch (which now\n> technically has one thread that supersedes the other). Perhaps a\n> compromise is possible. At a minimum, CF Tester should not look for a\n> patch on the (say) second thread of a CF app entry for a patch just\n> because somebody posted an e-mail to that thread (an e-mail that did\n> not contain a new patch). 
CF Tester will do this even though there is\n> a more recent patch on the first thread of the CF app entry, that has\n> already been accepted as passing by CFTester. I believe that CF Tester\n> will actually pingpong back and forth between the same two patches on\n> each thread as e-mail is sent to each thread, without anybody ever\n> posting a new patch.\n\nYeah, for CF entries with multiple threads, it currently looks at\nwhichever thread has the most recent email on it, and then it finds\nthe most recent patch set on that thread. Perhaps it should look at\nall the registered threads and find the message with the newest patch\nset across all of them, as you say. I will look into that.\n\n\n",
"msg_date": "Wed, 19 Feb 2020 13:44:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 4:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Yeah, for CF entries with multiple threads, it currently looks at\n> whichever thread has the most recent email on it, and then it finds\n> the most recent patch set on that thread. Perhaps it should look at\n> all the registered threads and find the message with the newest patch\n> set across all of them, as you say. I will look into that.\n\nThanks!\n\nI know that I am a bit unusual in that I use all of the CF app\nfeatures at the same time. But the current behavior of CF Tester\ndisincentivizes using multiple threads. This works against the goal of\nhaving good metadata about patches that are worked on over multiple\nreleases or years. We have a fair few of those.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 Feb 2020 17:37:40 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Mon, Feb 10, 2020 at 1:05 AM Floris Van Nee <florisvannee@optiver.com> wrote:\n> I ran all the tests on two different machines, several times for 1 hour each time. I'm still having a hard time getting reliable results for the 30 clients case though. I'm pretty certain the patches bring a performance benefit, but how high exactly is difficult to say. As for applying only patch 1+2 or all three patches - I found no significant difference between these two cases. It looks like all the performance benefit is in the first two patches.\n\nAttached is v3, which no longer includes the third patch/optimization.\nIt also inlines (in the second patch) by marking the _bt_compare\ndefinition as inline, while not changing anything in nbtree.h. I\nbelieve that this is portable C99 -- let's see what CF Tester thinks\nof it.\n\nI'm going to test this myself. It would be nice if you could repeat\nsomething like the previous experiments with v3, Floris. master vs v3\n(both patches together). With a variable number of clients.\n\nThanks\n-- \nPeter Geoghegan",
"msg_date": "Tue, 18 Feb 2020 18:14:24 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> It also inlines (in the second patch) by marking the _bt_compare\n> definition as inline, while not changing anything in nbtree.h. I\n> believe that this is portable C99 -- let's see what CF Tester thinks\n> of it.\n\nBoy, I'd be pretty darn hesitant to go there, even with our new\nexpectation of C99 compilers. What we found out when we last experimented\nwith non-static inlines was that the semantics were not very portable nor\ndesirable. I've forgotten the details unfortunately. But why do we need\nthis, and what exactly are you hoping the compiler will do with it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Feb 2020 15:55:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Boy, I'd be pretty darn hesitant to go there, even with our new\n> expectation of C99 compilers. What we found out when we last experimented\n> with non-static inlines was that the semantics were not very portable nor\n> desirable. I've forgotten the details unfortunately.\n\nMy original approach to inlining was to alter the nbtsearch.c\n_bt_compare() callers (the majority) to call _bt_compare_inl(). This\nfunction matches our current _bt_compare() function, except it's a\nstatic inline. A \"new\" function, _bt_compare(), is also added. That's a\nshim function that simply calls _bt_compare_inl().\n\nThis earlier approach now seems to work better than the approach I took\nin v3. In fact, my overnight testing shows that v3 was a minor loss\nfor performance. I guess we can scrap the non-static inline thing\nright away.\n\n> But why do we need\n> this, and what exactly are you hoping the compiler will do with it?\n\nWell, the patch's approach to inlining prior to v3 was kind of ugly,\nand it would have been nice to not have to do it that way. That's all.\nSome further refinement is probably possible.\n\nMore generally, my goal is to realize a pretty tangible performance\nbenefit from avoiding a pipeline stall -- this work was driven by a\ncomplaint from Andres. I haven't really explained the reason why the\ninlining matters here, and there are a few things that need to be\nweighed. I'll have to do some kind of microarchitectural analysis\nbefore proceeding with commit. This is still unsettled.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Feb 2020 13:24:26 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Feb 19, 2020 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Boy, I'd be pretty darn hesitant to go there, even with our new\n>> expectation of C99 compilers. What we found out when we last experimented\n>> with non-static inlines was that the semantics were not very portable nor\n>> desirable. I've forgotten the details unfortunately.\n\n> My original approach to inlining was to alter the nbtsearch.c\n> _bt_compare() callers (the majority) to call _bt_compare_inl(). This\n> function matches our current _bt_compare() function, except it's a\n> static inline. A \"new\" function, _bt_compare(), is also added. That's a\n> shim function that simply calls _bt_compare_inl().\n\nYeah, that's pretty much the approach we concluded was necessary\nfor portable results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Feb 2020 16:32:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-19 15:55:38 -0500, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > It also inlines (in the second patch) by marking the _bt_compare\n> > definition as inline, while not changing anything in nbtree.h. I\n> > believe that this is portable C99 -- let's see what CF Tester thinks\n> > of it.\n\n> Boy, I'd be pretty darn hesitant to go there, even with our new\n> expectation of C99 compilers. What we found out when we last experimented\n> with non-static inlines was that the semantics were not very portable nor\n> desirable. I've forgotten the details unfortunately.\n\nI think most of those problems were about putting extern inlines into\nheaders however - not about putting an inline onto an extern within one\ntranslation unit only. Given that potential fallout should be within a\nsingle file, and can fairly easily be fixed with adding wrappers etc, I\nthink it's a fairly low risk experiment to see what the buildfarm\nthinks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Feb 2020 14:38:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-02-19 15:55:38 -0500, Tom Lane wrote:\n>> Boy, I'd be pretty darn hesitant to go there, even with our new\n>> expectation of C99 compilers. What we found out when we last experimented\n>> with non-static inlines was that the semantics were not very portable nor\n>> desirable. I've forgotten the details unfortunately.\n\n> I think most of those problems were about putting extern inlines into\n> headers however - not about putting an inline onto an extern within one\n> translation unit only. Given that potential fallout should be within a\n> single file, and can fairly easily be fixed with adding wrappers etc, I\n> think it's a fairly low risk experiment to see what the buildfarm\n> thinks.\n\nThe buildfarm would only tell you if it compiles, not whether the\nperformance results are what you hoped for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Feb 2020 17:45:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 2:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The buildfarm would only tell you if it compiles, not whether the\n> performance results are what you hoped for.\n\nRight. Plus I think that more granular control might make more sense\nin this particular instance anyway.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Feb 2020 14:57:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Attached is v3, which no longer includes the third patch/optimization.\n> It also inlines (in the second patch) by marking the _bt_compare\n> definition as inline, while not changing anything in nbtree.h. I\n> believe that this is portable C99 -- let's see what CF Tester thinks\n> of it.\n\nThe cfbot thinks it doesn't even apply anymore --- conflict with the dedup\npatch, no doubt?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Mar 2020 14:41:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()"
}
]
[
{
"msg_contents": "Hello,\n\nI am interested in contributing source code changes for psqlODBC.\nThese changes cover (trivial) omissions, corrections, and potential \nenhancements.\n\nHow do I go about this? Is there a specific-mailing list (other than \nthis one) for that purpose?\nOr is there some other mechanism that is employed? Who/what is the \nfinal arbiter for actual\ncode changes that get distributed in releases?\n\nIf this information is in a \"readme\" somewhere that I have overlooked, \nplease let me know.\n\nRobert\n\n \nNOTICE from Ab Initio: This email (including any attachments) may contain information that is subject to confidentiality obligations or is legally privileged, and sender does not waive confidentiality or privilege. If received in error, please notify the sender, delete this email, and make no further use, disclosure, or distribution. \nHello,I am interested in contributing source\ncode changes for psqlODBC.These changes cover (trivial)\nomissions, corrections, and potential enhancements.How do I go about this? Is\nthere a specific-mailing list (other than this one) for that purpose?Or is there some other mechanism that\nis employed? Who/what is the final arbiter for\nactualcode changes that get distributed in\nreleases?If this information is in a \"readme\"\nsomewhere that I have overlooked, please let me know.Robert\n NOTICE from Ab Initio: This email (including any attachments) may contain information that is subject to confidentiality obligations or is legally privileged, and sender does not waive confidentiality or privilege. If received in error, please notify the sender, delete this email, and make no further use, disclosure, or distribution.",
"msg_date": "Tue, 28 Jan 2020 18:02:27 -0500",
"msg_from": "\"Robert Willis\" <rwillis@abinitio.com>",
"msg_from_op": true,
"msg_subject": "psqlODBC development"
},
{
"msg_contents": "## Robert Willis (rwillis@abinitio.com):\n\n> How do I go about this? Is there a specific-mailing list (other than \n> this one) for that purpose?\n\nhttps://odbc.postgresql.org/\n\"psqlODBC is developed and supported through the pgsql-odbc@postgresql.org\nmailing list.\"\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Wed, 29 Jan 2020 11:51:42 +0100",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: psqlODBC development"
}
]
[
{
"msg_contents": "While reviewing the partition-wise join patch, I ran into an issue that exists in master, so rather than responding to that patch, I’m starting this new thread.\n\nI noticed that this seems similar to the problem that was supposed to have been fixed in the \"Re: COLLATE: Hash partition vs UPDATE” thread. As such, I’ve included Tom and Amit in the CC list.\n\nNotice the \"ERROR: could not determine which collation to use for string hashing”\n\nThe following is extracted from the output from the test:\n\n> CREATE TABLE raw_data (a text);\n> INSERT INTO raw_data (a) VALUES ('Türkiye'),\n> ('TÜRKIYE'),\n> ('bıt'),\n> ('BIT'),\n> ('äbç'),\n> ('ÄBÇ'),\n> ('aaá'),\n> ('coté'),\n> ('Götz'),\n> ('ὀδυσσεύς'),\n> ('ὈΔΥΣΣΕΎΣ'),\n> ('を読み取り用'),\n> ('にオープンできませんでした');\n> -- Create unpartitioned tables for test\n> CREATE TABLE alpha (a TEXT COLLATE \"ja_JP\", b TEXT COLLATE \"sv_SE\");\n> CREATE TABLE beta (a TEXT COLLATE \"tr_TR\", b TEXT COLLATE \"en_US\");\n> INSERT INTO alpha (SELECT a, a FROM raw_data);\n> INSERT INTO beta (SELECT a, a FROM raw_data);\n> ANALYZE alpha;\n> ANALYZE beta;\n> EXPLAIN (COSTS OFF)\n> SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n> QUERY PLAN\n> ------------------------------------------------------------\n> Hash Join\n> Hash Cond: ((t2.a)::text = (t1.a)::text)\n> -> Seq Scan on beta t2\n> -> Hash\n> -> Seq Scan on alpha t1\n> Filter: (a = ANY ('{äbç,ὀδυσσεύς}'::text[]))\n> (6 rows)\n> \n> SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n> ERROR: could not determine which collation to use for string hashing\n> HINT: Use the COLLATE clause to set the collation explicitly.\n> \n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Jan 2020 15:36:03 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> While reviewing the partition-wise join patch, I ran into an issue that exists in master, so rather than responding to that patch, I’m starting this new thread.\n> I noticed that this seems similar to the problem that was supposed to have been fixed in the \"Re: COLLATE: Hash partition vs UPDATE” thread. As such, I’ve included Tom and Amit in the CC list.\n\nHm, I don't see any bug here. You're asking it to join\n\n>> CREATE TABLE alpha (a TEXT COLLATE \"ja_JP\", b TEXT COLLATE \"sv_SE\");\n>> CREATE TABLE beta (a TEXT COLLATE \"tr_TR\", b TEXT COLLATE \"en_US\");\n\n>> SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n\nso t1.a and t2.a have different collations, and the system can't resolve\nwhich to use for the comparison.\n\nNow, I'd be the first to agree that this error could be reported better.\nThe parser knows that it couldn't resolve a collation for t1.a = t2.a, but\nwhat it does *not* know is whether the '=' operator cares for collation.\nThrowing an error when the operator wouldn't care at runtime isn't going\nto make many people happy. On the other hand, when the operator finally\ndoes run and can't get a collation, all it knows is that it didn't get a\ncollation, not why. So we can't produce an error message as specific as\n\"ja_JP and tr_TR collations conflict\".\n\nNow that the collations feature has settled in, it'd be nice to go back\nand see if we can't improve that somehow. Not sure how.\n\n(BTW, before v12 the text '=' operator indeed did not care for collation,\nso this example would've worked. But the change in behavior is a\nnecessary consequence of having invented nondeterministic collations,\nnot a bug.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Jan 2020 21:46:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "\n\n> On Jan 28, 2020, at 6:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> While reviewing the partition-wise join patch, I ran into an issue that exists in master, so rather than responding to that patch, I’m starting this new thread.\n>> I noticed that this seems similar to the problem that was supposed to have been fixed in the \"Re: COLLATE: Hash partition vs UPDATE” thread. As such, I’ve included Tom and Amit in the CC list.\n> \n> Hm, I don't see any bug here. You're asking it to join\n> \n>>> CREATE TABLE alpha (a TEXT COLLATE \"ja_JP\", b TEXT COLLATE \"sv_SE\");\n>>> CREATE TABLE beta (a TEXT COLLATE \"tr_TR\", b TEXT COLLATE \"en_US\");\n> \n>>> SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n> \n> so t1.a and t2.a have different collations, and the system can't resolve\n> which to use for the comparison.\n> \n> Now, I'd be the first to agree that this error could be reported better.\n> The parser knows that it couldn't resolve a collation for t1.a = t2.a, but\n> what it does *not* know is whether the '=' operator cares for collation.\n> Throwing an error when the operator wouldn't care at runtime isn't going\n> to make many people happy. On the other hand, when the operator finally\n> does run and can't get a collation, all it knows is that it didn't get a\n> collation, not why. So we can't produce an error message as specific as\n> \"ja_JP and tr_TR collations conflict\".\n> \n> Now that the collations feature has settled in, it'd be nice to go back\n> and see if we can't improve that somehow. Not sure how.\n> \n> (BTW, before v12 the text '=' operator indeed did not care for collation,\n> so this example would've worked. But the change in behavior is a\n> necessary consequence of having invented nondeterministic collations,\n> not a bug.)\n\nI contemplated that for a while before submitting the report. 
I agree that for strings that are not binary equal, some collations might say the two strings are equal, and other collations may say that they are not. But when does any collation say that a string is not equal to itself? All the strings in these columns were loaded from the same source table, and they should always equal themselves, so the only problem I am aware of is if some of them equal others of them under one of the collations in question, where the other collation doesn’t think so. I’m pretty sure that does not exist in this concrete example.\n\nI guess I’m arguing that the system is giving up too soon, saying, “In theory there might be values I don’t know how to compare, so I’m going to give up now and not look”.\n\nI think what is happening here is that the system thinks, “Hey, I can use a hash join for this”, and then later realizes, “Oh, no, I can’t” and instead of falling back to something other than hash join, it gives up.\n\nIs there some more fundamental reason this query couldn’t correctly be completed? I don’t mind being enlightened about the part that I’m missing.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 19:38:59 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "\n\n> On Jan 28, 2020, at 7:38 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Jan 28, 2020, at 6:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> While reviewing the partition-wise join patch, I ran into an issue that exists in master, so rather than responding to that patch, I’m starting this new thread.\n>>> I noticed that this seems similar to the problem that was supposed to have been fixed in the \"Re: COLLATE: Hash partition vs UPDATE” thread. As such, I’ve included Tom and Amit in the CC list.\n>> \n>> Hm, I don't see any bug here. You're asking it to join\n>> \n>>>> CREATE TABLE alpha (a TEXT COLLATE \"ja_JP\", b TEXT COLLATE \"sv_SE\");\n>>>> CREATE TABLE beta (a TEXT COLLATE \"tr_TR\", b TEXT COLLATE \"en_US\");\n>> \n>>>> SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n>> \n>> so t1.a and t2.a have different collations, and the system can't resolve\n>> which to use for the comparison.\n>> \n>> Now, I'd be the first to agree that this error could be reported better.\n>> The parser knows that it couldn't resolve a collation for t1.a = t2.a, but\n>> what it does *not* know is whether the '=' operator cares for collation.\n>> Throwing an error when the operator wouldn't care at runtime isn't going\n>> to make many people happy. On the other hand, when the operator finally\n>> does run and can't get a collation, all it knows is that it didn't get a\n>> collation, not why. So we can't produce an error message as specific as\n>> \"ja_JP and tr_TR collations conflict\".\n>> \n>> Now that the collations feature has settled in, it'd be nice to go back\n>> and see if we can't improve that somehow. Not sure how.\n>> \n>> (BTW, before v12 the text '=' operator indeed did not care for collation,\n>> so this example would've worked. 
But the change in behavior is a\n>> necessary consequence of having invented nondeterministic collations,\n>> not a bug.)\n> \n> I contemplated that for a while before submitting the report. I agree that for strings that are not binary equal, some collations might say the two strings are equal, and other collations may say that they are not. But when does any collation say that a string is not equal to itself? All the strings in these columns were loaded from the same source table, and they should always equal themselves, so the only problem I am aware of is if some of them equal others of them under one of the collations in question, where the other collation doesn’t think so. I’m pretty sure that does not exist in this concrete example.\n> \n> I guess I’m arguing that the system is giving up too soon, saying, “In theory there might be values I don’t know how to compare, so I’m going to give up now and not look”.\n> \n> I think what is happening here is that the system thinks, “Hey, I can use a hash join for this”, and then later realizes, “Oh, no, I can’t” and instead of falling back to something other than hash join, it gives up.\n> \n> Is there some more fundamental reason this query couldn’t correctly be completed? I don’t mind being enlightened about the part that I’m missing.\n\nIf the answer here is just that you’d rather it always fail at planning time because that’s more deterministic than having it sometimes succeed and sometimes fail at runtime depending on which data has been loaded, ok, I can understand that. If so, then let’s put this error string into the docs, because right now, if you google\n\n\tsite:postgresql.org \"could not determine which collation to use for string hashing” \n\nyou don’t get anything from the docs telling you that this is an expected outcome.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 20:03:35 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "Hi Mark,\n\nOn Wed, Jan 29, 2020 at 1:03 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jan 28, 2020, at 7:38 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> >>> While reviewing the partition-wise join patch, I ran into an issue that exists in master, so rather than responding to that patch, I’m starting this new thread.\n> >>> I noticed that this seems similar to the problem that was supposed to have been fixed in the \"Re: COLLATE: Hash partition vs UPDATE” thread. As such, I’ve included Tom and Amit in the CC list.\n\nJust to clarify, we only intended in the quoted thread to plug\nrelevant holes of the *partitioning* code, which IIRC was more\nstraightforward to do than appears to be the case here.\n\n> If the answer here is just that you’d rather it always fail at planning time because that’s more deterministic than having it sometimes succeed and sometimes fail at runtime depending on which data has been loaded, ok, I can understand that. 
If so, then let’s put this error string into the docs, because right now, if you google\n>\n> site:postgresql.org \"could not determine which collation to use for string hashing”\n>\n> you don’t get anything from the docs telling you that this is an expected outcome.\n\nYou may have noticed that it's not only hash join that bails out:\n\nEXPLAIN (COSTS OFF) SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2\nON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n QUERY PLAN\n------------------------------------------------------------\n Hash Join\n Hash Cond: ((t2.a)::text = (t1.a)::text)\n -> Seq Scan on beta t2\n -> Hash\n -> Seq Scan on alpha t1\n Filter: (a = ANY ('{äbç,ὀδυσσεύς}'::text[]))\n(6 rows)\n\nSELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a)\nWHERE t1.a IN ('äbç', 'ὀδυσσεύς');\nERROR: could not determine which collation to use for string hashing\nHINT: Use the COLLATE clause to set the collation explicitly.\n\nSET enable_hashjoin TO off;\n-- error occurs partway through ExecInitMergeJoin(), so EXPLAIN can't finish\nEXPLAIN (COSTS OFF) SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2\nON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\nERROR: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\nSET enable_mergejoin TO off;\nEXPLAIN (COSTS OFF) SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2\nON (t1.a = t2.a) WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n QUERY PLAN\n------------------------------------------------------------\n Nested Loop\n Join Filter: ((t1.a)::text = (t2.a)::text)\n -> Seq Scan on beta t2\n -> Materialize\n -> Seq Scan on alpha t1\n Filter: (a = ANY ('{äbç,ὀδυσσεύς}'::text[]))\n(6 rows)\n\nSELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a)\nWHERE t1.a IN ('äbç', 'ὀδυσσεύς');\nERROR: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\nWith PG 11, I 
can see that hash join and nestloop join work. But with\nPG 12, this join can't possibly work without an explicit COLLATE\nclause. So it would be nice if we can report a more specific error\nmuch sooner, possibly with some parser context, given that we now know\nfor sure that a join qual without a collation assigned will not work\nat all. IOW, maybe we should aim for making the run-time collation\nerrors to be of the \"won't happen\" category as much as possible.\n\nTom said:\n> >> Now, I'd be the first to agree that this error could be reported better.\n> >> The parser knows that it couldn't resolve a collation for t1.a = t2.a, but\n> >> what it does *not* know is whether the '=' operator cares for collation.\n> >> Throwing an error when the operator wouldn't care at runtime isn't going\n> >> to make many people happy. On the other hand, when the operator finally\n> >> does run and can't get a collation, all it knows is that it didn't get a\n> >> collation, not why. So we can't produce an error message as specific as\n> >> \"ja_JP and tr_TR collations conflict\".\n> >>\n> >> Now that the collations feature has settled in, it'd be nice to go back\n> >> and see if we can't improve that somehow.\n\nWould it make sense to catch a qual with unassigned collation\nsomewhere in the planner, where the qual's operator family is\nestablished, by checking if the operator family behavior is sensitive\nto collations?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 30 Jan 2020 15:14:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "\n\n> On Jan 29, 2020, at 10:14 PM, Amit Langote <amitlangote09@gmail.com> wrote:\n> \n> \n> SELECT t1.a, t2.a FROM alpha t1 INNER JOIN beta t2 ON (t1.a = t2.a)\n> WHERE t1.a IN ('äbç', 'ὀδυσσεύς');\n> ERROR: could not determine which collation to use for string comparison\n> HINT: Use the COLLATE clause to set the collation explicitly.\n> \n> With PG 11, I can see that hash join and nestloop join work. But with\n> PG 12, this join can't possible work without an explicit COLLATE\n> clause. So it would be nice if we can report a more specific error\n> much sooner, possibly with some parser context, given that we now know\n> for sure that a join qual without a collation assigned will not work\n> at all. IOW, maybe we should aim for making the run-time collation\n> errors to be of \"won't happen\" category as much as possible.\n> \n> Tom said:\n>>>> Now, I'd be the first to agree that this error could be reported better.\n>>>> The parser knows that it couldn't resolve a collation for t1.a = t2.a, but\n>>>> what it does *not* know is whether the '=' operator cares for collation.\n>>>> Throwing an error when the operator wouldn't care at runtime isn't going\n>>>> to make many people happy. On the other hand, when the operator finally\n>>>> does run and can't get a collation, all it knows is that it didn't get a\n>>>> collation, not why. So we can't produce an error message as specific as\n>>>> \"ja_JP and tr_TR collations conflict\".\n>>>> \n>>>> Now that the collations feature has settled in, it'd be nice to go back\n>>>> and see if we can't improve that somehow. 
Not sure how.\n> \n> Would it make sense to catch a qual with unassigned collation\n> somewhere in the planner, where the qual's operator family is\n> estatblished, by checking if the operator family behavior is sensitive\n> to collations?\n\nHi Amit, I appreciate your attention to my question, but I’m not ready to delve into possible fixes, as I still don’t entirely understand the problem.\n\nAccording to Tom:\n\n> (BTW, before v12 the text '=' operator indeed did not care for collation,\n> so this example would've worked. But the change in behavior is a\n> necessary consequence of having invented nondeterministic collations,\n> not a bug.)\n\nI’m still struggling with that, because the four collations I used in the example are all deterministic. I totally understand why having more than one collation matters if you ask that your data be in sorted order, as the system needs to know which ordering to use. But for equality, I would think that deterministic collations are all interchangeable, because they all agree on whether A = B, regardless of the collation defined on column A and/or on column B. Maybe I’m wrong about that. But that’s my reading of the definition of “deterministic collation” given in the docs:\n\n> A deterministic collation uses deterministic comparisons, which means that it considers strings to be equal only if they consist of the same byte sequence. \n\nI’m reading that as “If and only if”, and maybe I’m wrong to do so. Maybe that’s my error. But assuming that part is ok, it would seem to be sufficient to know that the columns being joined use deterministic collations, and you wouldn’t need them to be the *same* collations, nor even remember which collations they were. 
You’d just need information passed down that collations can be ignored for this comparison, or that a built-in byte-for-byte equality comparator should be used rather than the collation’s equality comparator, or some such solution.\n\nI’m guessing I’m wrong about at least one of these things, and I’m hoping somebody enlightens me.\n\nThanks so much in advance,\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 30 Jan 2020 10:55:37 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "\n\n>> Would it make sense to catch a qual with unassigned collation\n>> somewhere in the planner, where the qual's operator family is\n>> estatblished, by checking if the operator family behavior is sensitive\n>> to collations?\n\nI’m not sure how to do that. pg_opfamily doesn’t seem to have a field for that. Can you recommend how I would proceed there?\n\n\n\nThere may be operators other than = and != that are worth thinking about, but for now I’m really only thinking about those two. It is hard for me to see how we could ignore a collation mismatch for any other operator, but maybe I’m just not thinking about it the right way.\n\nSo, for = and !=, I’m looking at the definition of texteq, and it calls check_collation_set as one of the very first things it does. That’s where the error that annoys me comes out. But I don’t think it really needs to be doing this. It should first be determining if collation *matters*. It can’t do that right now, because it gets that information from this line:\n\n> if (lc_collate_is_c(collid) ||\n> collid == DEFAULT_COLLATION_OID ||\n> pg_newlocale_from_collation(collid)->deterministic)\n\nWhich obviously won’t work if collid hasn’t been set. 
So three approaches come to my mind:\n\n1) Somewhere upstream from calling texteq, figure out that we don’t actually care about the collation stuff, because both the left and right side of the comparison use deterministic collations and the comparison we’re calling is equality, and then pass down a dummy collation such as “C” even though that isn’t actually true.\n\nThe problem with (1) that I see is that it could lead to the wrong collation being mentioned in error messages, though I haven’t looked, and that it’s enough of a hack that it might make coding in this area harder in the future.\n\n2) Somewhere upstream from calling texteq, pass in a new boolean flag that specifies whether collation matters, and extend texteq to take an additional argument.\n\nThis also seems very hacky to me, but for different reasons.\n\n3) Extend the concept of collations to collation sets. Right now, I’m only thinking about a collation set as having two values, the lefthand and the righthand side, but maybe there are other cases like (Left, (Left,Right)) that get built up and need to work. Anyway, at the point in the executor that the collations don’t match, instead of passing NULL down the line, pass in a collation set (Left, Right), and functions like texteq can see that they’re dealing with two different collations and decide if they can deal with that or if they need to throw an error.\n\nI bet if we went with (3), the error being thrown in the example I used to start this thread would go away, without breaking anything else. I’m going to go poke at that a bit, but I’d still appreciate any comments/concerns about my analysis.\n\n> According to Tom:\n> \n>> (BTW, before v12 the text '=' operator indeed did not care for collation,\n>> so this example would've worked. 
But the change in behavior is a\n>> necessary consequence of having invented nondeterministic collations,\n>> not a bug.)\n> \n> I’m still struggling with that, because the four collations I used in the example are all deterministic. I totally understand why having more than one collation matters if you ask that your data be in sorted order, as the system needs to know which ordering to use. But for equality, I would think that deterministic collations are all interchangeable, because they all agree on whether A = B, regardless of the collation defined on column A and/or on column B. Maybe I’m wrong about that. But that’s my reading of the definition of “deterministic collation” given in the docs:\n> \n>> A deterministic collation uses deterministic comparisons, which means that it considers strings to be equal only if they consist of the same byte sequence. \n> \n> I’m reading that as “If and only if”, and maybe I’m wrong to do so. Maybe that’s my error. But assuming that part is ok, it would seem to be sufficient to know that the columns being joined use deterministic collations, and you wouldn’t need them to be the *same* collations, nor even remember which collations they were. You’d just need information passed down that collations can be ignored for this comparison, or that a built-in byte-for-byte equality comparator should be used rather than the collation’s equality comparator, or some such solution.\n\nI’m starting to think that “consequence of having invented nondeterministic collations” in Tom’s message really should read “consequence of having invented nondeterministic collations without reworking these other interfaces”, but once again, I’m hoping to be corrected if I’ve gone off in the wrong direction here.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:43:46 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> According to Tom:\n>> (BTW, before v12 the text '=' operator indeed did not care for collation,\n>> so this example would've worked. But the change in behavior is a\n>> necessary consequence of having invented nondeterministic collations,\n>> not a bug.)\n\n> I’m still struggling with that, because the four collations I used in\n> the example are all deterministic. I totally understand why having more\n> than one collation matters if you ask that your data be in sorted order,\n> as the system needs to know which ordering to use. But for equality, I\n> would think that deterministic collations are all interchangeable,\n> because they all agree on whether A = B, regardless of the collation\n> defined on column A and/or on column B. Maybe I’m wrong about that.\n\nWell, you're not wrong, but you're assuming much finer distinctions\nthan the collation machinery actually makes (or than it'd be sane\nto ask it to make, IMO). We don't have a way to tell texteq that\n\"well, we don't know what collation to assign to this operation,\nbut it's okay to assume that it's deterministic\". Nor does the\nparser have any way to know that texteq could be satisfied by\nthat knowledge --- if it doesn't even know whether texteq cares\nabout collation, how could it know that?\n\nThere are other issues here too. Just because the query could\ntheoretically be implemented without reference to any specific\ncollation doesn't mean that that's a practical thing to do.\nIt'd be unclear for instance whether we can safely use indexes\nthat *do* have specific collations attached. 
We'd also lose\nthe option to consider plans like mergejoins.\n\nIf the parser understood that a particular operator behaved\nlike text equality --- which it does not, and I guarantee you\nI will shoot down any proposal to hard-wire a parser test for\nthat particular operator --- you could imagine assigning \"C\"\ncollation when we have an unresolvable combination of\ndeterministic collations for the inputs. That dodges the\nproblem of not having any way to represent the situation.\nBut it's still got implementation issues, in that such a\ncollation choice probably won't match the collations of any\nindexes for the input columns.\n\nAnother issue is that collations \"bubble up\" in the parse tree,\nso sneaking in a collation that's not supposed to be there per\nspec carries a risk of causing unexpected semantics further up.\nI think we could get away with that for the particular case of\nequality (which returns collation-less boolean), but this is\nanother thing that makes the case narrower and less useful.\n\nIn the end, TBH, I'm not finding your example compelling enough\nto be worth putting in weird hacks for such cases. If you're\njoining columns of dissimilar collations, you're going to be\nfinding it necessary to specify what collation to use in a lot\nof places ... so where's the value in making a weird special\ncase for equality?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 15:02:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 2:44 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> 3) Extend the concept of collations to collation sets. Right now, I’m only thinking about a collation set as having two values, the lefthand and the righthand side, but maybe there are other cases like (Left, (Left,Right)) that get built up and need to work. Anyway, at the point in the executor that the collations don’t match, instead of passing NULL down the line, pass in a collation set (Left, Right), and functions like texteq can see that they’re dealing with two different collations and decide if they can deal with that or if they need to throw an error.\n>\n> I bet if we went with (3), the error being thrown in the example I used to start this thread would go away, without breaking anything else. I’m going to go poke at that a bit, but I’d still appreciate any comments/concerns about my analysis.\n\nI assume that what would have to happen to implement this is that an\nSQL-callable function would be passed more than one collation OID,\nperhaps one per argument or something like that. Notice, however, that\nthis would require changing the way that functions get called. See the\nDirectFunctionCall{1,2,3,...}Coll() and\nFunctionCall{0,1,2,3,...}Coll() and the definition of\nFunctionCallInfoBaseData -- there's only one spot for an OID available\nright now. Allowing for more would likely have a noticeable impact on\nthe cost of calling SQL-callable functions, and that's already\nexpensive enough that people have been unhappy about it. It seems\nunlikely that it would be worth incurring more overhead here for every\nquery all the time just to make this case work.\n\nIt is, perhaps, a little strange that the only two choices for an\noperator are \"cares about collation\" and \"doesn't,\" and I somehow feel\nlike there ought to be a way to do better. 
But I don't know what it\nis.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 30 Jan 2020 15:17:32 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "\n\n> On Jan 30, 2020, at 12:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> According to Tom:\n>>> (BTW, before v12 the text '=' operator indeed did not care for collation,\n>>> so this example would've worked. But the change in behavior is a\n>>> necessary consequence of having invented nondeterministic collations,\n>>> not a bug.)\n> \n>> I’m still struggling with that, because the four collations I used in\n>> the example are all deterministic. I totally understand why having more\n>> than one collation matters if you ask that your data be in sorted order,\n>> as the system needs to know which ordering to use. But for equality, I\n>> would think that deterministic collations are all interchangeable,\n>> because they all agree on whether A = B, regardless of the collation\n>> defined on column A and/or on column B. Maybe I’m wrong about that.\n> \n> Well, you're not wrong, but you're assuming much finer distinctions\n> than the collation machinery actually makes (or than it'd be sane\n> to ask it to make, IMO). We don't have a way to tell texteq that\n> \"well, we don't know what collation to assign to this operation,\n> but it's okay to assume that it's deterministic\". Nor does the\n> parser have any way to know that texteq could be satisfied by\n> that knowledge --- if it doesn't even know whether texteq cares\n> about collation, how could it know that?\n\nI agree. Having this in the parser seems really weird and unwholesome.\n\n> There are other issues here too. Just because the query could\n> theoretically be implemented without reference to any specific\n> collation doesn't mean that that's a practical thing to do.\n> It'd be unclear for instance whether we can safely use indexes\n> that *do* have specific collations attached. 
We'd also lose\n> the option to consider plans like mergejoins.\n> \n> If the parser understood that a particular operator behaved\n> like text equality --- which it does not, and I guarantee you\n> I will shoot down any proposal to hard-wire a parser test for\n> that particular operator --- you could imagine assigning \"C\"\n> collation when we have an unresolvable combination of\n> deterministic collations for the inputs. That dodges the\n> problem of not having any way to represent the situation.\n> But it's still got implementation issues, in that such a\n> collation choice probably won't match the collations of any\n> indexes for the input columns.\n\nYeah, I disclaimed that idea in a subsequent email, but if you’re responding to my emails in the order that you receive them (which is totally reasonable), then you aren’t to know that yet.\n\n> \n> Another issue is that collations \"bubble up\" in the parse tree,\n> so sneaking in a collation that's not supposed to be there per\n> spec carries a risk of causing unexpected semantics further up.\n> I think we could get away with that for the particular case of\n> equality (which returns collation-less boolean), but this is\n> another thing that makes the case narrower and less useful.\n\nI was wondering if bubbling up (LeftCollation,RightCollation) would be ok. There are likely cases that can’t make use of that, but those places would just throw the same sort of error that they’re currently throwing, except they’d have a more useful error message because it would include which collations were mismatched.\n\n> In the end, TBH, I'm not finding your example compelling enough\n> to be worth putting in weird hacks for such cases. If you're\n> joining columns of dissimilar collations, you're going to be\n> finding it necessary to specify what collation to use in a lot\n> of places ... so where's the value in making a weird special\n> case for equality?\n\nI agree with your position against weird hacks. 
If the only way to do this is a weird hack, then forget about it.\n\nIf I’m not putting upon your time too much, could you respond to my other email in this thread as to whether it sounds any better?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 30 Jan 2020 12:18:05 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Would it make sense to catch a qual with unassigned collation\n> somewhere in the planner, where the qual's operator family is\n> estatblished, by checking if the operator family behavior is sensitive\n> to collations?\n\n> I’m not sure how to do that. pg_opfamily doesn’t seem to have a field for that. Can you recommend how I would proceed there?\n\nThere's no such information attached to opfamilies, which is more or less\nforced by the fact that individual operators don't expose it either.\nThere's not much hope of making that better without incompatible changes\nin the requirements for extensions to define operators and/or operator\nfamilies.\n\n> So, for = and !=, I’m looking at the definition of texteq, and it calls check_collation_set as one of the very first things it does. That’s where the error that annoys me comes out. But I don’t think it really needs to be doing this. It should first be determining if collation *matters*.\n\nBut of course it matters. How do you know whether the operation is\ndeterministic if you don't know the collation?\n\n> 3) Extend the concept of collations to collation sets. Right now, I’m only thinking about a collation set as having two values, the lefthand and the righthand side, but maybe there are other cases like (Left, (Left,Right)) that get built up and need to work. Anyway, at the point in the executor that the collations don’t match, instead of passing NULL down the line, pass in a collation set (Left, Right), and functions like texteq can see that they’re dealing with two different collations and decide if they can deal with that or if they need to throw an error.\n\nMaybe this could work. 
I think it would get messy when bubbling up\ncollations, but as long as you're talking about \"sets\" not \"pairs\"\nit might be possible to postpone collation resolution.\n\nTo me, though, the main advantage of this is that we could throw a\nmore explicit error like \"collations \"ja_JP\" and \"tr_TR\" cannot be\nunified\", since that information would still be there at runtime.\nI'm still pretty dubious that having texteq special-case the situation\nwhere the collations are different but all deterministic is a reasonable\nthing to do.\n\nOne practical problem is that postponing that work to runtime could be\na huge performance hit, because you'd have to do it over again on each\ncall of the operator. I suppose some caching might be possible.\n\nAnother issue is that you're still putting far too much emphasis on\nthe fact that a hash-join plan manages to avoid this error, and ignoring\nthe problem that a lot of other plans for the same query will not avoid\nit. What if the planner had chosen a merge-join, for instance? How\nuseful is it to allow the join if things still break the moment you\nadd an ORDER BY?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 15:29:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I assume that what would have to happen to implement this is that an\n> SQL-callable function would be passed more than one collation OID,\n> perhaps one per argument or something like that. Notice, however, that\n> this would require changing the way that functions get called. See the\n> DirectFunctionCall{1,2,3,...}Coll() and\n> FunctionCall{0,1,2,3,...}Coll() and the definition of\n> FunctionCallInfoBaseData -- there's only one spot for an OID available\n> right now. Allowing for more would likely have a noticeable impact on\n> the cost of calling SQL-callable functions, and that's already\n> expensive enough that people have been unhappy about it. It seems\n> unlikely that it would be worth incurring more overhead here for every\n> query all the time just to make this case work.\n\nThe implementation I was visualizing was replacing, eg,\nFuncExpr.inputcollid with an OID List, and then teaching PG_GET_COLLATION\nto throw an error if the list is longer than one element. I agree that\nthe performance implications of that would be pretty troublesome, though.\n\nIn the end, it seems like the only solution that would be remotely\npractical from a performance standpoint is to redefine things so that\ncollation-sensitive functions have to be labeled as such in pg_proc,\nand then we can have the parser throw the appropriate error if it\ncan't resolve an input collation for such a function. Perhaps the\nbackwards-compatibility hit wouldn't be as bad as it first seems,\nsince the whole thing can be ignored for functions that haven't got at\nleast one collatable input, and most of those would likely be all right\nwith a default assumption that they are collation sensitive. 
Or maybe\nbetter, we could make the default assumption be that they aren't\nsensitive, with the same error still being thrown at runtime if they are,\nso that extensions have to take positive action to get the better error\nbehavior but if they don't then things are no worse than today.\n\nMark, obviously, would then lobby for the pg_proc marking to\ninclude one state that identifies functions that only care about\ncollation when it's nondeterministic. But I'm still not very\nsure how that would work as soon as you look anyplace except at\nwhat texteq() itself would do. The questions of whether such a\nquery matches a given index, or could be implemented via mergejoin,\netc, remain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 15:50:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "\n\n> On Jan 30, 2020, at 12:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> Would it make sense to catch a qual with unassigned collation\n>> somewhere in the planner, where the qual's operator family is\n>> estatblished, by checking if the operator family behavior is sensitive\n>> to collations?\n> \n>> I’m not sure how to do that. pg_opfamily doesn’t seem to have a field for that. Can you recommend how I would proceed there?\n> \n> There's no such information attached to opfamilies, which is more or less\n> forced by the fact that individual operators don't expose it either.\n> There's not much hope of making that better without incompatible changes\n> in the requirements for extensions to define operators and/or operator\n> families.\n\nThanks, Tom, for confirming this.\n\nGiven the excellent explanations you and Robert have given, I think I’m retracting this whole idea and accepting your positions that it’s not worth it.\n\nFor the archives, I’m still going to respond to the rest of what you say:\n\n>> So, for = and !=, I’m looking at the definition of texteq, and it calls check_collation_set as one of the very first things it does. That’s where the error that annoys me comes out. But I don’t think it really needs to be doing this. It should first be determining if collation *matters*.\n> \n> But of course it matters. How do you know whether the operation is\n> deterministic if you don't know the collation?\n> \n>> 3) Extend the concept of collations to collation sets. Right now, I’m only thinking about a collation set as having two values, the lefthand and the righthand side, but maybe there are other cases like (Left, (Left,Right)) that get built up and need to work. 
Anyway, at the point in the executor that the collations don’t match, instead of passing NULL down the line, pass in a collation set (Left, Right), and functions like texteq can see that they’re dealing with two different collations and decide if they can deal with that or if they need to throw an error.\n> \n> Maybe this could work. I think it would get messy when bubbling up\n> collations, but as long as you're talking about \"sets\" not \"pairs\"\n> it might be possible to postpone collation resolution.\n> \n> To me, though, the main advantage of this is that we could throw a\n> more explicit error like \"collations \"ja_JP\" and \"tr_TR\" cannot be\n> unified\", since that information would still be there at runtime.\n> I'm still pretty dubious that having texteq special-case the situation\n> where the collations are different but all deterministic is a reasonable\n> thing to do.\n\nOn my mac, when I run “SELECT * FROM pg_collation”, every one of the 271 rows I get back have collisdeterministic true. I know that which collations you get on a system is variable, so I’m not saying that nobody has nondeterministic collations, but it seems common enough that mismatched collations will both be deterministic. That’s the common case, not some weird edge case.\n\nSo the issue here seems to be whether equality should get different treatment from other operators, and I obviously am arguing that it should, though you and Robert have both made really good points against that position.\n\n> \n> One practical problem is that postponing that work to runtime could be\n> a huge performance hit, because you'd have to do it over again on each\n> call of the operator. 
I suppose some caching might be possible.\n\nYes, Robert mentioned performance implications, too.\n\n> Another issue is that you're still putting far too much emphasis on\n> the fact that a hash-join plan manages to avoid this error, and ignoring\n> the problem that a lot of other plans for the same query will not avoid\n> it. What if the planner had chosen a merge-join, for instance? \n\nYou’re looking at the problem from the point of view of how postgres is currently and historically implemented, and seeing that this problem is hard. I was looking at it more from the perspective of a user who gets the error message and thinks, “this is stupid, the query is refusing to run for want of a collation being specified, but I can clearly see that it doesn’t actually need one.” I think that same reaction from the user would happen if the planner chose a merge-join. The user would just say, “gee, what a stupid planner, why did it choose a merge join for this when the lack of a collation clause clearly indicates that it should have limited itself to something that only needs equality comparison.”\n\nI’m not calling the planner stupid, nor the system generally, but I know that people get frustrated with systems that have unintuitive limitations like this when they don’t know the internals of the system that give rise to the limitations.\n\n> How\n> useful is it to allow the join if things still break the moment you\n> add an ORDER BY?\n\nI think that’s apples-to-oranges. If you ask the system to order the data, and you’ve got ambiguity about which ordering you mean, then of course it can’t continue until you tell it which collation you want.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 30 Jan 2020 13:15:28 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 6:15 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Jan 30, 2020, at 12:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> >> Would it make sense to catch a qual with unassigned collation\n> >> somewhere in the planner, where the qual's operator family is\n> >> estatblished, by checking if the operator family behavior is sensitive\n> >> to collations?\n> >\n> >> I’m not sure how to do that. pg_opfamily doesn’t seem to have a field for that. Can you recommend how I would proceed there?\n> >\n> > There's no such information attached to opfamilies, which is more or less\n> > forced by the fact that individual operators don't expose it either.\n> > There's not much hope of making that better without incompatible changes\n> > in the requirements for extensions to define operators and/or operator\n> > families.\n>\n> Thanks, Tom, for confirming this.\n\nJust for the record, I will explain why I brought up doing anything\nwith operator families at all. What I was really imagining is putting\na hard-coded check somewhere in the middle of equivalence processing\nto see if a given qual's operator would be sensitive to collation\nbased *only* on whether it belongs to a text operator family, such as\nTEXT_BTREE_FAM_OID, whereas the qual expression's inputcollid is 0\n(parser failed to resolve collation conflict among its arguments) and\nerroring out if so. If we do that, maybe we won't need\ncheck_collation_set() that's used in various text operators. 
Also,\nerroring out sooner might allow us to produce a more specific error\nmessage, which, as I understand it, would help with one of Mark's\ncomplaints that the error message is too ambiguous due to being emitted\nat such a low layer.\n\nI thought of the idea after running into a recent commit relevant to\nnon-deterministic collations:\n\ncommit 2810396312664bdb941e549df7dfa75218d73a1c\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sat Sep 21 16:29:17 2019 -0400\n\n Fix up handling of nondeterministic collations with pattern_ops opclasses.\n\n text_pattern_ops and its siblings can't be used with nondeterministic\n collations, because they use the text_eq operator which will not behave\n as bitwise equality if applied with a nondeterministic collation. The\n initial implementation of that restriction was to insert a run-time test\n in the related comparison functions, but that is inefficient, may throw\n misleading errors, and will throw errors in some cases that would work.\n It seems sufficient to just prevent the combination during CREATE INDEX,\n so do that instead.\n\n Lacking any better way to identify the opclasses involved, we need to\n hard-wire tests for them, which requires hand-assigned values for their\n OIDs, which forces a catversion bump because they previously had OIDs\n that would be assigned automatically. That's slightly annoying in the\n v12 branch, but fortunately we're not at rc1 yet, so just do it.\n\n Discussion: https://postgr.es/m/22566.1568675619@sss.pgh.pa.us\n\nIIUC, the above commit removes a check_collation_set() call from an\noperator class comparison function in favor of ensuring that an index\nusing that operator class can only be defined with a deterministic\ncollation in the first place. But as the above commit is purportedly\nonly a stop-gap solution due to lacking operator class infrastructure\nto consider collation in operator semantics, maybe we shouldn't spread\nsuch a hack in other places.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 31 Jan 2020 18:37:57 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash join not finding which collation to use for string hashing"
}
] |
[
{
"msg_contents": "This line\nhttps://github.com/postgres/postgres/blob/30012a04a6c8127397a8ab71e160d9c7e7fbe874/src/interfaces/libpq/fe-exec.c#L1073\ntriggers data race errors when run under ThreadSanitizer (*)\n\nAs far as I can tell, the static variable in question is a hack to allow a\ncouple of deprecated functions that are already unsafe to use\n(PQescapeString and PQescapeBytea) to be fractionally less unsafe to use.\n\nWould there be any interest in a patch changing the type of\nstatic_client_coding\nand static_std_strings\n<https://github.com/postgres/postgres/blob/30012a04a6c8127397a8ab71e160d9c7e7fbe874/src/interfaces/libpq/fe-exec.c#L49>\nto\nsome atomic equivalent, so the data race goes away?\n\nI know that as long as clients aren't daft enough to call PQescapeString or\nPQescapeBytea, then the static variables are write-only - but wiser heads\nthan mine assure me that even if nothing ever reads from the variables,\nunguarded simultaneous writes can cause issues when built with particularly\naggressive compilers... On the plus side: \"As a note for changing the\nvariable to be atomic - if it uses acquire-release semantics, it'll be free\non x86 (barring any potential compiler optimisations that we *want* to\nstop). 
And relaxed semantics should be free everywhere.\"\n\nAnd it would be nice to be able to keep running my postgres-using tests\nunder ThreadSanitizer...\n\nthanks\n\nMark\n\n(*) specifically in a test that kicks off two simultaneous threads whose\nfirst action is to open a postgres connection.",
"msg_date": "Wed, 29 Jan 2020 09:41:02 +0000",
"msg_from": "Mark Charsley <mcharsley@google.com>",
"msg_from_op": true,
"msg_subject": "Data race in interfaces/libpq/fe-exec.c"
},
{
"msg_contents": "Mark Charsley <mcharsley@google.com> writes:\n> This line\n> https://github.com/postgres/postgres/blob/30012a04a6c8127397a8ab71e160d9c7e7fbe874/src/interfaces/libpq/fe-exec.c#L1073\n> triggers data race errors when run under ThreadSanitizer (*)\n\n> As far as I can tell, the static variable in question is a hack to allow a\n> couple of deprecated functions that are already unsafe to use\n> (PQescapeString and PQescapeBytea) to be fractionally less unsafe to use.\n\nYup.\n\n> Would there be any interest in a patch changing the type of\n> static_client_coding\n> and static_std_strings\n> <https://github.com/postgres/postgres/blob/30012a04a6c8127397a8ab71e160d9c7e7fbe874/src/interfaces/libpq/fe-exec.c#L49>\n> to\n> some atomic equivalent, so the data race goes away?\n\nI don't see that making those be some other datatype would improve anything\nusefully. (1) On just about every platform known to man, int and bool are\ngoing to be atomic anyway. (2) The *actual* hazards here, as opposed to\ntheoretical ones, are that you're using more than one connection with\ndifferent settings for these values, whereupon it's not clear whether\nthose deprecated functions will see the appropriate settings when they're\nused. A different data type won't help that.\n\nIn short: this warning you're getting from ThreadSanitizer is entirely\noff-point, so contorting the code to suppress it seems useless.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:46:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Data race in interfaces/libpq/fe-exec.c"
},
{
"msg_contents": "According to folks significantly cleverer than me, this can be a problem:\nSee section 2.4 in\nhttps://www.usenix.org/legacy/events/hotpar11/tech/final_files/Boehm.pdf\n\ntl;dr in a self-fulfilling prophecy kind of way, there are no benign\ndata-races. So the compiler can assume no-one would write a data race.\nTherefore it can make aggressive optimisations that render what would\notherwise have been a benign race actively dangerous.\n\nGranted the danger here is mainly theoretical, and the main problem for me\nis that turning off ThreadSanitizer because of this issue means that other\nmore dangerous issues in my code (rather than the postgres client code)\nwon't be found. But the above is the reason why ThreadSanitizer folks don't\nwant to put in any \"you can ignore this race, it's benign\" functionality,\nand told me that the right thing to do was to contact you folks and get a\nfix in upstream...\n\nMark\n\nOn Thu, Jan 30, 2020 at 4:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Mark Charsley <mcharsley@google.com> writes:\n> > This line\n> >\n> https://github.com/postgres/postgres/blob/30012a04a6c8127397a8ab71e160d9c7e7fbe874/src/interfaces/libpq/fe-exec.c#L1073\n> > triggers data race errors when run under ThreadSanitizer (*)\n>\n> > As far as I can tell, the static variable in question is a hack to allow\n> a\n> > couple of deprecated functions that are already unsafe to use\n> > (PQescapeString and PQescapeBytea) to be fractionally less unsafe to use.\n>\n> Yup.\n>\n> > Would there be any interest in a patch changing the type of\n> > static_client_coding\n> > and static_std_strings\n> > <\n> https://github.com/postgres/postgres/blob/30012a04a6c8127397a8ab71e160d9c7e7fbe874/src/interfaces/libpq/fe-exec.c#L49\n> >\n> > to\n> > some atomic equivalent, so the data race goes away?\n>\n> I don't see that making those be some other datatype would improve anything\n> usefully. 
(1) On just about every platform known to man, int and bool are\n> going to be atomic anyway. (2) The *actual* hazards here, as opposed to\n> theoretical ones, are that you're using more than one connection with\n> different settings for these values, whereupon it's not clear whether\n> those deprecated functions will see the appropriate settings when they're\n> used. A different data type won't help that.\n>\n> In short: this warning you're getting from ThreadSanitizer is entirely\n> off-point, so contorting the code to suppress it seems useless.\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 31 Jan 2020 10:47:24 +0000",
"msg_from": "Mark Charsley <mcharsley@google.com>",
"msg_from_op": true,
"msg_subject": "Re: Data race in interfaces/libpq/fe-exec.c"
}
] |
[
{
"msg_contents": "Hi,\n\npg_basebackup reports the backup progress if --progress option is specified,\nand we can monitor it on the client side. I think that it's useful if we can\nmonitor the progress information also on the server side because, for example,\nwe can easily check that by using SQL. So I'd like to propose a\npg_stat_progress_basebackup view that allows us to monitor the progress\nof pg_basebackup on the server side. Thoughts?\n\nPOC patch is attached.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 29 Jan 2020 23:16:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_progress_basebackup - progress reporting for pg_basebackup,\n in the server side"
},
{
"msg_contents": "At Wed, 29 Jan 2020 23:16:08 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Hi,\n> \n> pg_basebackup reports the backup progress if --progress option is\n> specified,\n> and we can monitor it in the client side. I think that it's useful if\n> we can\n> monitor the progress information also in the server side because, for\n> example,\n> we can easily check that by using SQL. So I'd like to propose\n> pg_stat_progress_basebackup view that allows us to monitor the\n> progress\n> of pg_basebackup, in the server side. Thought?\n> \n> POC patch is attached.\n\n| postgres=# \\d pg_stat_progress_basebackup\n| View \"pg_catalog.pg_stat_progress_basebackup\"\n| Column | Type | Collation | Nullable | Default \n| ---------------------+---------+-----------+----------+---------\n| pid | integer | | | \n| phase | text | | | \n| backup_total | bigint | | | \n| backup_streamed | bigint | | | \n| tablespace_total | bigint | | | \n| tablespace_streamed | bigint | | | \n\nI think the view needs client identity such like host/port pair\nbesides pid (host/port pair fails identify client in the case of\nunix-sockets.). Also elapsed time from session start might be\nuseful. pg_stat_progress_acuum has datid, datname and relid.\n\n+\tif (backup_total > 0 && backup_streamed > backup_total)\n+\t{\n+\t\tbackup_total = backup_streamed;\n\nDo we need the condition \"backup_total > 0\"?\n\n\n+\t\tint\t\ttblspc_streamed = 0;\n+\n+\t\tpgstat_progress_update_param(PROGRESS_BASEBACKUP_PHASE,\n+\t\t\t\t\t\t\t\t\t PROGRESS_BASEBACKUP_PHASE_STREAM_BACKUP);\n\nThis starts \"streaming backup\" phase with backup_total = 0. Coudln't\nwe move to that phase after setting backup total and tablespace total?\nThat is, just after calling SendBackupHeader().\n\n+ WHEN 3 THEN 'stopping backup'::text\n\nI'm not sure, but the \"stop\" seems suggesting the backup is terminated\nbefore completion. 
If it is following the name of the function\npg_stop_backup, I think the name is suggesting to stop \"the state for\nperforming backup\", not a backup.\n\nIn the first place, the progress is about \"backup\" so it seems strange\nthat we have another phase after the \"stopping backup\" phase. It\nmight be better that it is \"finishing file transfer\" or such.\n\n \"initializing\"\n-> \"starting file transfer\"\n-> \"transferring files\"\n-> \"finishing file transfer\"\n-> \"transferring WAL\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 30 Jan 2020 12:58:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/01/30 12:58, Kyotaro Horiguchi wrote:\n> At Wed, 29 Jan 2020 23:16:08 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Hi,\n>>\n>> pg_basebackup reports the backup progress if --progress option is\n>> specified,\n>> and we can monitor it in the client side. I think that it's useful if\n>> we can\n>> monitor the progress information also in the server side because, for\n>> example,\n>> we can easily check that by using SQL. So I'd like to propose\n>> pg_stat_progress_basebackup view that allows us to monitor the\n>> progress\n>> of pg_basebackup, in the server side. Thought?\n>>\n>> POC patch is attached.\n> \n> | postgres=# \\d pg_stat_progress_basebackup\n> | View \"pg_catalog.pg_stat_progress_basebackup\"\n> | Column | Type | Collation | Nullable | Default\n> | ---------------------+---------+-----------+----------+---------\n> | pid | integer | | |\n> | phase | text | | |\n> | backup_total | bigint | | |\n> | backup_streamed | bigint | | |\n> | tablespace_total | bigint | | |\n> | tablespace_streamed | bigint | | |\n> \n> I think the view needs client identity such like host/port pair\n> besides pid (host/port pair fails identify client in the case of\n> unix-sockets.).\n\nI don't think this is job of a progress reporting. If those information\nis necessary, we can join this view with pg_stat_replication using\npid column as the join key.\n\n> Also elapsed time from session start might be\n> useful.\n\n+1\nI think that not only pg_stat_progress_basebackup but also all the other\nprogress views should report the time when the target command was\nstarted and the time when the phase was last changed. 
Those times\nwould be useful to estimate the remaining execution time from the\nprogress infromation.\n\n> pg_stat_progress_acuum has datid, datname and relid.\n> \n> +\tif (backup_total > 0 && backup_streamed > backup_total)\n> +\t{\n> +\t\tbackup_total = backup_streamed;\n> \n> Do we need the condition \"backup_total > 0\"?\n\nI added the condition for the case where --progress option is not supplied\nin pg_basebackup, i.e., the case where the total amount of backup is not\nestimated at the beginning of the backup. In this case, total_backup is\nalways 0.\n\n> +\t\tint\t\ttblspc_streamed = 0;\n> +\n> +\t\tpgstat_progress_update_param(PROGRESS_BASEBACKUP_PHASE,\n> +\t\t\t\t\t\t\t\t\t PROGRESS_BASEBACKUP_PHASE_STREAM_BACKUP);\n> \n> This starts \"streaming backup\" phase with backup_total = 0. Coudln't\n> we move to that phase after setting backup total and tablespace total?\n> That is, just after calling SendBackupHeader().\n\nOK, that's a bit less confusing for users. I will change in that way.\n\n> + WHEN 3 THEN 'stopping backup'::text\n> \n> I'm not sure, but the \"stop\" seems suggesting the backup is terminated\n> before completion. If it is following the name of the function\n> pg_stop_backup, I think the name is suggesting to stop \"the state for\n> performing backup\", not a backup.\n> \n> In the first place, the progress is about \"backup\" so it seems strange\n> that we have another phase after the \"stopping backup\" phase. It\n> might be better that it is \"finishing file transfer\" or such.\n> \n> \"initializing\"\n> -> \"starting file transfer\"\n> -> \"transferring files\"\n> -> \"finishing file transfer\"\n> -> \"transaferring WAL\"\n\nBetter name is always welcome! If \"stopping back\" is confusing,\nwhat about \"performing pg_stop_backup\"? 
So\n\ninitializing\nperforming pg_start_backup\nstreaming database files\nperforming pg_stop_backup\ntransferring WAL files\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 02:29:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Fri, 31 Jan 2020 at 02:29, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/01/30 12:58, Kyotaro Horiguchi wrote:\n> > At Wed, 29 Jan 2020 23:16:08 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >> Hi,\n> >>\n> >> pg_basebackup reports the backup progress if --progress option is\n> >> specified,\n> >> and we can monitor it in the client side. I think that it's useful if\n> >> we can\n> >> monitor the progress information also in the server side because, for\n> >> example,\n> >> we can easily check that by using SQL. So I'd like to propose\n> >> pg_stat_progress_basebackup view that allows us to monitor the\n> >> progress\n> >> of pg_basebackup, in the server side. Thought?\n> >>\n> >> POC patch is attached.\n> >\n> > | postgres=# \\d pg_stat_progress_basebackup\n> > | View \"pg_catalog.pg_stat_progress_basebackup\"\n> > | Column | Type | Collation | Nullable | Default\n> > | ---------------------+---------+-----------+----------+---------\n> > | pid | integer | | |\n> > | phase | text | | |\n> > | backup_total | bigint | | |\n> > | backup_streamed | bigint | | |\n> > | tablespace_total | bigint | | |\n> > | tablespace_streamed | bigint | | |\n> >\n> > I think the view needs client identity such like host/port pair\n> > besides pid (host/port pair fails identify client in the case of\n> > unix-sockets.).\n>\n> I don't think this is job of a progress reporting. If those information\n> is necessary, we can join this view with pg_stat_replication using\n> pid column as the join key.\n>\n> > Also elapsed time from session start might be\n> > useful.\n>\n> +1\n> I think that not only pg_stat_progress_basebackup but also all the other\n> progress views should report the time when the target command was\n> started and the time when the phase was last changed. 
Those times\n> would be useful to estimate the remaining execution time from the\n> progress infromation.\n>\n> > pg_stat_progress_acuum has datid, datname and relid.\n> >\n> > + if (backup_total > 0 && backup_streamed > backup_total)\n> > + {\n> > + backup_total = backup_streamed;\n> >\n> > Do we need the condition \"backup_total > 0\"?\n>\n> I added the condition for the case where --progress option is not supplied\n> in pg_basebackup, i.e., the case where the total amount of backup is not\n> estimated at the beginning of the backup. In this case, total_backup is\n> always 0.\n>\n> > + int tblspc_streamed = 0;\n> > +\n> > + pgstat_progress_update_param(PROGRESS_BASEBACKUP_PHASE,\n> > + PROGRESS_BASEBACKUP_PHASE_STREAM_BACKUP);\n> >\n> > This starts \"streaming backup\" phase with backup_total = 0. Coudln't\n> > we move to that phase after setting backup total and tablespace total?\n> > That is, just after calling SendBackupHeader().\n>\n> OK, that's a bit less confusing for users. I will change in that way.\n>\n> > + WHEN 3 THEN 'stopping backup'::text\n> >\n> > I'm not sure, but the \"stop\" seems suggesting the backup is terminated\n> > before completion. If it is following the name of the function\n> > pg_stop_backup, I think the name is suggesting to stop \"the state for\n> > performing backup\", not a backup.\n> >\n> > In the first place, the progress is about \"backup\" so it seems strange\n> > that we have another phase after the \"stopping backup\" phase. It\n> > might be better that it is \"finishing file transfer\" or such.\n> >\n> > \"initializing\"\n> > -> \"starting file transfer\"\n> > -> \"transferring files\"\n> > -> \"finishing file transfer\"\n> > -> \"transaferring WAL\"\n>\n> Better name is always welcome! If \"stopping back\" is confusing,\n> what about \"performing pg_stop_backup\"? 
So\n>\n> initializing\n> performing pg_start_backup\n> streaming database files\n> performing pg_stop_backup\n> transferring WAL files\n\nAnother idea I came up with is to show steps that take time instead of\npg_start_backup/pg_stop_backup, for better understanding the\nsituation. That is, \"performing checkpoint\" and \"performing WAL\narchive\" etc., which take up most of the time of these functions.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 2 Feb 2020 14:59:56 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/02/02 14:59, Masahiko Sawada wrote:\n> On Fri, 31 Jan 2020 at 02:29, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/01/30 12:58, Kyotaro Horiguchi wrote:\n>>> At Wed, 29 Jan 2020 23:16:08 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>> Hi,\n>>>>\n>>>> pg_basebackup reports the backup progress if --progress option is\n>>>> specified,\n>>>> and we can monitor it in the client side. I think that it's useful if\n>>>> we can\n>>>> monitor the progress information also in the server side because, for\n>>>> example,\n>>>> we can easily check that by using SQL. So I'd like to propose\n>>>> pg_stat_progress_basebackup view that allows us to monitor the\n>>>> progress\n>>>> of pg_basebackup, in the server side. Thought?\n>>>>\n>>>> POC patch is attached.\n>>>\n>>> | postgres=# \\d pg_stat_progress_basebackup\n>>> | View \"pg_catalog.pg_stat_progress_basebackup\"\n>>> | Column | Type | Collation | Nullable | Default\n>>> | ---------------------+---------+-----------+----------+---------\n>>> | pid | integer | | |\n>>> | phase | text | | |\n>>> | backup_total | bigint | | |\n>>> | backup_streamed | bigint | | |\n>>> | tablespace_total | bigint | | |\n>>> | tablespace_streamed | bigint | | |\n>>>\n>>> I think the view needs client identity such like host/port pair\n>>> besides pid (host/port pair fails identify client in the case of\n>>> unix-sockets.).\n>>\n>> I don't think this is job of a progress reporting. If those information\n>> is necessary, we can join this view with pg_stat_replication using\n>> pid column as the join key.\n>>\n>>> Also elapsed time from session start might be\n>>> useful.\n>>\n>> +1\n>> I think that not only pg_stat_progress_basebackup but also all the other\n>> progress views should report the time when the target command was\n>> started and the time when the phase was last changed. 
Those times\n>> would be useful to estimate the remaining execution time from the\n>> progress infromation.\n>>\n>>> pg_stat_progress_acuum has datid, datname and relid.\n>>>\n>>> + if (backup_total > 0 && backup_streamed > backup_total)\n>>> + {\n>>> + backup_total = backup_streamed;\n>>>\n>>> Do we need the condition \"backup_total > 0\"?\n>>\n>> I added the condition for the case where --progress option is not supplied\n>> in pg_basebackup, i.e., the case where the total amount of backup is not\n>> estimated at the beginning of the backup. In this case, total_backup is\n>> always 0.\n>>\n>>> + int tblspc_streamed = 0;\n>>> +\n>>> + pgstat_progress_update_param(PROGRESS_BASEBACKUP_PHASE,\n>>> + PROGRESS_BASEBACKUP_PHASE_STREAM_BACKUP);\n>>>\n>>> This starts \"streaming backup\" phase with backup_total = 0. Coudln't\n>>> we move to that phase after setting backup total and tablespace total?\n>>> That is, just after calling SendBackupHeader().\n>>\n>> OK, that's a bit less confusing for users. I will change in that way.\n\nFixed. Attached is the updated version of the patch.\nI also fixed the regression test failure.\n\n>>\n>>> + WHEN 3 THEN 'stopping backup'::text\n>>>\n>>> I'm not sure, but the \"stop\" seems suggesting the backup is terminated\n>>> before completion. If it is following the name of the function\n>>> pg_stop_backup, I think the name is suggesting to stop \"the state for\n>>> performing backup\", not a backup.\n>>>\n>>> In the first place, the progress is about \"backup\" so it seems strange\n>>> that we have another phase after the \"stopping backup\" phase. It\n>>> might be better that it is \"finishing file transfer\" or such.\n>>>\n>>> \"initializing\"\n>>> -> \"starting file transfer\"\n>>> -> \"transferring files\"\n>>> -> \"finishing file transfer\"\n>>> -> \"transaferring WAL\"\n>>\n>> Better name is always welcome! If \"stopping back\" is confusing,\n>> what about \"performing pg_stop_backup\"? 
So\n>>\n>> initializing\n>> performing pg_start_backup\n>> streaming database files\n>> performing pg_stop_backup\n>> transferring WAL files\n> \n> Another idea I came up with is to show steps that take time instead of\n> pg_start_backup/pg_stop_backup, for better understanding the\n> situation. That is, \"performing checkpoint\" and \"performing WAL\n> archive\" etc., which take up most of the time of these functions.\n\nYeah, that's an idea. ISTM that \"waiting for WAL archiving\" sounds\nbetter than \"performing WAL archive\". Thoughts?\nI've not applied this change in the patch yet, but if there is no\nother idea, I'd like to adopt this.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Mon, 3 Feb 2020 13:17:17 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 1:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/02 14:59, Masahiko Sawada wrote:\n> > On Fri, 31 Jan 2020 at 02:29, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> On 2020/01/30 12:58, Kyotaro Horiguchi wrote:\n> >>> + WHEN 3 THEN 'stopping backup'::text\n> >>>\n> >>> I'm not sure, but the \"stop\" seems suggesting the backup is terminated\n> >>> before completion. If it is following the name of the function\n> >>> pg_stop_backup, I think the name is suggesting to stop \"the state for\n> >>> performing backup\", not a backup.\n> >>>\n> >>> In the first place, the progress is about \"backup\" so it seems strange\n> >>> that we have another phase after the \"stopping backup\" phase. It\n> >>> might be better that it is \"finishing file transfer\" or such.\n> >>>\n> >>> \"initializing\"\n> >>> -> \"starting file transfer\"\n> >>> -> \"transferring files\"\n> >>> -> \"finishing file transfer\"\n> >>> -> \"transaferring WAL\"\n> >>\n> >> Better name is always welcome! If \"stopping back\" is confusing,\n> >> what about \"performing pg_stop_backup\"? So\n> >>\n> >> initializing\n> >> performing pg_start_backup\n> >> streaming database files\n> >> performing pg_stop_backup\n> >> transfering WAL files\n> >\n> > Another idea I came up with is to show steps that take time instead of\n> > pg_start_backup/pg_stop_backup, for better understanding the\n> > situation. That is, \"performing checkpoint\" and \"performing WAL\n> > archive\" etc, which engage the most of the time of these functions.\n>\n> Yeah, that's an idea. ISTM that \"waiting for WAL archiving\" sounds\n> better than \"performing WAL archive\". Thought?\n> I've not applied this change in the patch yet, but if there is no\n> other idea, I'd like to adopt this.\n\nIf we are trying to \"pg_stop_backup\" in phase name, maybe we should\navoid \"pg_start_backup\"? 
Then maybe:\n\ninitializing\nstarting backup / waiting for [ backup start ] checkpoint to finish\ntransferring database files\nfinishing backup / waiting for WAL archiving to finish\ntransferring WAL files\n\n?\n\nSome comments on documentation changes in v2 patch:\n\n+ Amount of data already streamed.\n\n\"already\" may be redundant. For example, in pg_stat_progress_vacuum,\nheap_blks_scanned is described as \"...blocks scanned\", not \"...blocks\nalready scanned\".\n\n+ <entry><structfield>tablespace_total</structfield></entry>\n+ <entry><structfield>tablespace_streamed</structfield></entry>\n\nBetter to use plural tablespaces_total and tablespaces_streamed for consistency?\n\n+ The WAL sender process is currently performing\n+ <function>pg_start_backup</function> and setting up for\n+ making a base backup.\n\nHow about \"taking\" instead of \"making\" in the above sentence?\n\n- <varlistentry>\n+ <varlistentry id=\"protocol-replication-base-backup\" xreflabel=\"BASE_BACKUP\">\n\nI don't see any new text in the documentation patch that uses above\nxref, so no need to define it?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 3 Feb 2020 16:28:30 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> If we are trying to \"pg_stop_backup\" in phase name, maybe we should\n> avoid \"pg_start_backup\"? Then maybe:\n\nSorry, I meant to write:\n\nIf we are trying to avoid \"pg_stop_backup\" in phase name, maybe we\nshould avoid \"pg_start_backup\"? Then maybe:\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 3 Feb 2020 16:30:01 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/02/03 16:28, Amit Langote wrote:\n> On Mon, Feb 3, 2020 at 1:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/02 14:59, Masahiko Sawada wrote:\n>>> On Fri, 31 Jan 2020 at 02:29, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> On 2020/01/30 12:58, Kyotaro Horiguchi wrote:\n>>>>> + WHEN 3 THEN 'stopping backup'::text\n>>>>>\n>>>>> I'm not sure, but the \"stop\" seems suggesting the backup is terminated\n>>>>> before completion. If it is following the name of the function\n>>>>> pg_stop_backup, I think the name is suggesting to stop \"the state for\n>>>>> performing backup\", not a backup.\n>>>>>\n>>>>> In the first place, the progress is about \"backup\" so it seems strange\n>>>>> that we have another phase after the \"stopping backup\" phase. It\n>>>>> might be better that it is \"finishing file transfer\" or such.\n>>>>>\n>>>>> \"initializing\"\n>>>>> -> \"starting file transfer\"\n>>>>> -> \"transferring files\"\n>>>>> -> \"finishing file transfer\"\n>>>>> -> \"transaferring WAL\"\n>>>>\n>>>> Better name is always welcome! If \"stopping back\" is confusing,\n>>>> what about \"performing pg_stop_backup\"? So\n>>>>\n>>>> initializing\n>>>> performing pg_start_backup\n>>>> streaming database files\n>>>> performing pg_stop_backup\n>>>> transfering WAL files\n>>>\n>>> Another idea I came up with is to show steps that take time instead of\n>>> pg_start_backup/pg_stop_backup, for better understanding the\n>>> situation. That is, \"performing checkpoint\" and \"performing WAL\n>>> archive\" etc, which engage the most of the time of these functions.\n>>\n>> Yeah, that's an idea. ISTM that \"waiting for WAL archiving\" sounds\n>> better than \"performing WAL archive\". Thought?\n>> I've not applied this change in the patch yet, but if there is no\n>> other idea, I'd like to adopt this.\n> \n> If we are trying to \"pg_stop_backup\" in phase name, maybe we should\n> avoid \"pg_start_backup\"? 
Then maybe:\n> \n> initializing\n> starting backup / waiting for [ backup start ] checkpoint to finish\n> transferring database files\n> finishing backup / waiting for WAL archiving to finish\n> transferring WAL files\n\nSo we now have the following ideas about the phase names for pg_basebackup.\n\n1.\ninitializing\n\n2.\n2-1. starting backup\n2-2. starting file transfer\n2-3. performing pg_start_backup\n2-4. performing checkpoint\n2-5. waiting for [ backup start ] checkpoint to finish\n\n3.\n3-1. streaming backup\n3-2. transferring database files\n3-3. streaming database files\n3-4. transferring files\n\n4.\n4-1. stopping backup\n4-2. finishing file transfer\n4-3. performing pg_stop_backup\n4-4. finishing backup\n4-5. waiting for WAL archiving to finish\n4-6. performing WAL archive\n\n5.\n5-1. transferring wal\n5-2. transferring WAL files\n\nWhat combination of these do you prefer?\n\n> Some comments on documentation changes in v2 patch:\n> \n> + Amount of data already streamed.\n\nOk, fixed.\n\n> \"already\" may be redundant. For example, in pg_stat_progress_vacuum,\n> heap_blks_scanned is described as \"...blocks scanned\", not \"...blocks\n> already scanned\".\n> \n> + <entry><structfield>tablespace_total</structfield></entry>\n> + <entry><structfield>tablespace_streamed</structfield></entry>\n> \n> Better to use plural tablespaces_total and tablespaces_streamed for consistency?\n\nFixed.\n\n> + The WAL sender process is currently performing\n> + <function>pg_start_backup</function> and setting up for\n> + making a base backup.\n> \n> How about \"taking\" instead of \"making\" in the above sentence?\n\nFixed. 
Attached is the updated version of the patch.\n\n> \n> - <varlistentry>\n> + <varlistentry id=\"protocol-replication-base-backup\" xreflabel=\"BASE_BACKUP\">\n> \n> I don't see any new text in the documentation patch that uses above\n> xref, so no need to define it?\n\nThe following description that I added uses this.\n\n certain commands during command execution. Currently, the only commands\n which support progress reporting are <command>ANALYZE</command>,\n <command>CLUSTER</command>,\n- <command>CREATE INDEX</command>, and <command>VACUUM</command>.\n+ <command>CREATE INDEX</command>, <command>VACUUM</command>,\n+ and <xref linkend=\"protocol-replication-base-backup\"/> (i.e., replication\n+ command that <xref linkend=\"app-pgbasebackup\"/> issues to take\n+ a base backup).\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Mon, 3 Feb 2020 23:04:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 11:04 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> So we now have the following ideas about the phase names for pg_basebackup.\n>\n> 1.\n> initializing\n>\n> 2.\n> 2-1. starting backup\n> 2-2. starting file transfer\n> 2-3. performing pg_start_backup\n> 2-4. performing checkpoint\n> 2-5. waiting for [ backup start ] checkpoint to finish\n>\n> 3.\n> 3-1. streaming backup\n> 3-2. transferring database files\n> 3-3. streaming database files\n> 3-4. transferring files\n>\n> 4.\n> 4-1. stopping backup\n> 4-2. finishing file transfer\n> 4-3. performing pg_stop_backup\n> 4-4. finishing backup\n> 4-5. waiting for WAL archiving to finish\n> 4-6. performing WAL archive\n>\n> 5.\n> 1. transferring wal\n> 2. transferring WAL files\n>\n> What conbination of these do you prefer?\n\nI like:\n\n1. initializing\n2-5 waiting for backup start checkpoint to finish\n3-3 streaming database files\n4-5 waiting for wal archiving to finish\n5-1 transferring wal (or streaming wal)\n\n> > - <varlistentry>\n> > + <varlistentry id=\"protocol-replication-base-backup\" xreflabel=\"BASE_BACKUP\">\n> >\n> > I don't see any new text in the documentation patch that uses above\n> > xref, so no need to define it?\n>\n> The following description that I added uses this.\n>\n> certain commands during command execution. Currently, the only commands\n> which support progress reporting are <command>ANALYZE</command>,\n> <command>CLUSTER</command>,\n> - <command>CREATE INDEX</command>, and <command>VACUUM</command>.\n> + <command>CREATE INDEX</command>, <command>VACUUM</command>,\n> + and <xref linkend=\"protocol-replication-base-backup\"/> (i.e., replication\n> + command that <xref linkend=\"app-pgbasebackup\"/> issues to take\n> + a base backup).\n\nSorry, I missed that. I was mistakenly expecting a different value of linkend.\n\nSome comments on v3:\n\n+ <entry>Process ID of a WAL sender process.</entry>\n\n\"a\" sounds redundant. 
Maybe:\n\nof this WAL sender process or\nof WAL sender process\n\nReading this:\n\n+ <entry><structfield>backup_total</structfield></entry>\n+ <entry><type>bigint</type></entry>\n+ <entry>\n+ Total amount of data that will be streamed. If progress reporting\n+ is not enabled in <application>pg_basebackup</application>\n+ (i.e., <literal>--progress</literal> option is not specified),\n+ this is <literal>0</literal>.\n\nI wonder how hard it would be to change basebackup.c to always set\nbackup_total, irrespective of whether --progress is specified with\npg_basebackup or not? It seems quite misleading to leave it set to 0,\nbecause one may panic that they have lost their data, that is, if they\nhaven't first read the documentation.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 4 Feb 2020 10:34:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Tue, Feb 4, 2020 at 10:34 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Reading this:\n>\n> + <entry><structfield>backup_total</structfield></entry>\n> + <entry><type>bigint</type></entry>\n> + <entry>\n> + Total amount of data that will be streamed. If progress reporting\n> + is not enabled in <application>pg_basebackup</application>\n> + (i.e., <literal>--progress</literal> option is not specified),\n> + this is <literal>0</literal>.\n>\n> I wonder how hard would it be to change basebackup.c to always set\n> backup_total, irrespective of whether --progress is specified with\n> pg_basebackup or not? It seems quite misleading to leave it set to 0,\n> because one may panic that they have lost their data, that is, if they\n> haven't first read the documentation.\n\nFor example, the attached patch changes basebackup.c to always set\ntablespaceinfo.size, irrespective of whether --progress was passed\nwith BASE_BACKUP command. It passes make check-world, so maybe safe.\nMaybe it would be a good idea to add a couple of more comments around\ntablespaceinfo struct definition, such as how 'size' is to be\ninterpreted.\n\nThanks,\nAmit",
"msg_date": "Tue, 4 Feb 2020 15:59:46 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/02/04 10:34, Amit Langote wrote:\n> On Mon, Feb 3, 2020 at 11:04 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> So we now have the following ideas about the phase names for pg_basebackup.\n>>\n>> 1.\n>> initializing\n>>\n>> 2.\n>> 2-1. starting backup\n>> 2-2. starting file transfer\n>> 2-3. performing pg_start_backup\n>> 2-4. performing checkpoint\n>> 2-5. waiting for [ backup start ] checkpoint to finish\n>>\n>> 3.\n>> 3-1. streaming backup\n>> 3-2. transferring database files\n>> 3-3. streaming database files\n>> 3-4. transferring files\n>>\n>> 4.\n>> 4-1. stopping backup\n>> 4-2. finishing file transfer\n>> 4-3. performing pg_stop_backup\n>> 4-4. finishing backup\n>> 4-5. waiting for WAL archiving to finish\n>> 4-6. performing WAL archive\n>>\n>> 5.\n>> 1. transferring wal\n>> 2. transferring WAL files\n>>\n>> What conbination of these do you prefer?\n> \n> I like:\n\nThanks for reviewing the patch!\n\n> 1. initializing\n> 2-5 waiting for backup start checkpoint to finish\n\nCan we shorten this to \"waiting for checkpoint\"? IMO the simpler\nphase name is better and \"to finish\" sounds a bit redundant. Also\nin the description of pg_stat_progress_create_index, basically\n\"waiting for xxx\" is used.\n\n> 3-3 streaming database files\n> 4-5 waiting for wal archiving to finish\n\nCan we shorten this to \"waiting for wal archiving\" because of\nthe same reason as above?\n\n> 5-1 transferring wal (or streaming wal)\n\nIMO \"transferring wal\" is better because this phase happens only when\n\"--wal-method=fetch\" is specified in pg_basebackup. 
\"streaming wal\"\nseems to implie \"--wal-method=stream\", instead.\n\n>>> - <varlistentry>\n>>> + <varlistentry id=\"protocol-replication-base-backup\" xreflabel=\"BASE_BACKUP\">\n>>>\n>>> I don't see any new text in the documentation patch that uses above\n>>> xref, so no need to define it?\n>>\n>> The following description that I added uses this.\n>>\n>> certain commands during command execution. Currently, the only commands\n>> which support progress reporting are <command>ANALYZE</command>,\n>> <command>CLUSTER</command>,\n>> - <command>CREATE INDEX</command>, and <command>VACUUM</command>.\n>> + <command>CREATE INDEX</command>, <command>VACUUM</command>,\n>> + and <xref linkend=\"protocol-replication-base-backup\"/> (i.e., replication\n>> + command that <xref linkend=\"app-pgbasebackup\"/> issues to take\n>> + a base backup).\n> \n> Sorry, I missed that. I was mistakenly expecting a different value of linkend.\n> \n> Some comments on v3:\n> \n> + <entry>Process ID of a WAL sender process.</entry>\n> \n> \"a\" sounds redundant. Maybe:\n> \n> of this WAL sender process or\n> of WAL sender process\n\nI borrowed \"Process ID of a WAL sender process\" from the description\nof pg_stat_replication.pid. But if it's better to get rid of \"a\",\nI'm happy to do that!\n\n> Reading this:\n> \n> + <entry><structfield>backup_total</structfield></entry>\n> + <entry><type>bigint</type></entry>\n> + <entry>\n> + Total amount of data that will be streamed. If progress reporting\n> + is not enabled in <application>pg_basebackup</application>\n> + (i.e., <literal>--progress</literal> option is not specified),\n> + this is <literal>0</literal>.\n> \n> I wonder how hard would it be to change basebackup.c to always set\n> backup_total, irrespective of whether --progress is specified with\n> pg_basebackup or not? 
It seems quite misleading to leave it set to 0,\n> because one may panic that they have lost their data, that is, if they\n> haven't first read the documentation.\n\nYeah, I understand your concern. The pg_basebackup document explains\nthe risk when --progress is specified, as follows. Since I imagined that\nsomeone may explicitly disable --progress to avoid this risk, I made\nthe server estimate the total size only when --progress is specified.\nBut do you think that this overhead by --progress is negligibly small?\n\n--------------------\nWhen this is enabled, the backup will start by enumerating the size of\nthe entire database, and then go back and send the actual contents.\nThis may make the backup take slightly longer, and in particular it will\ntake longer before the first data is sent.\n--------------------\n\nIf we really can always estimate the total size, whether --progress is\nspecified or not, we should get rid of the PROGRESS option from BASE_BACKUP\nreplication command because it will no longer be necessary, I think.\n\nRegards, \n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 5 Feb 2020 15:36:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 3:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/04 10:34, Amit Langote wrote:\n> > I like:\n>\n> Thanks for reviewing the patch!\n>\n> > 1. initializing\n> > 2-5 waiting for backup start checkpoint to finish\n>\n> Can we shorten this to \"waiting for checkpoint\"? IMO the simpler\n> phase name is better and \"to finish\" sounds a bit redundant. Also\n> in the description of pg_stat_progress_create_index, basically\n> \"waiting for xxx\" is used.\n\n\"waiting for checkpoint\" works for me.\n\n> > 3-3 streaming database files\n> > 4-5 waiting for wal archiving to finish\n>\n> Can we shorten this to \"waiting for wal archiving\" because of\n> the same reason as above?\n\nYes.\n\n> > 5-1 transferring wal (or streaming wal)\n>\n> IMO \"transferring wal\" is better because this phase happens only when\n> \"--wal-method=fetch\" is specified in pg_basebackup. \"streaming wal\"\n> seems to implie \"--wal-method=stream\", instead.\n\nAh, okay,\n\n> > Reading this:\n> >\n> > + <entry><structfield>backup_total</structfield></entry>\n> > + <entry><type>bigint</type></entry>\n> > + <entry>\n> > + Total amount of data that will be streamed. If progress reporting\n> > + is not enabled in <application>pg_basebackup</application>\n> > + (i.e., <literal>--progress</literal> option is not specified),\n> > + this is <literal>0</literal>.\n> >\n> > I wonder how hard would it be to change basebackup.c to always set\n> > backup_total, irrespective of whether --progress is specified with\n> > pg_basebackup or not? It seems quite misleading to leave it set to 0,\n> > because one may panic that they have lost their data, that is, if they\n> > haven't first read the documentation.\n>\n> Yeah, I understand your concern. The pg_basebackup document explains\n> the risk when --progress is specified, as follows. 
Since I imagined that\n> someone may explicitly disable --progress to avoid this risk, I made\n> the server estimate the total size only when --progress is specified.\n> But you think that this overhead by --progress is negligibly small?\n>\n> --------------------\n> When this is enabled, the backup will start by enumerating the size of\n> the entire database, and then go back and send the actual contents.\n> This may make the backup take slightly longer, and in particular it will\n> take longer before the first data is sent.\n> --------------------\n\nSorry, I hadn't read this before. So, my proposal would make this a lie.\n\nStill, if \"streaming database files\" is the longest phase, then not\nhaving even an approximation of how much data is to be streamed over\ndoesn't much help estimating progress, at least as long as one only\nhas this view to look at.\n\nThat said, the overhead of checking the size before sending any data\nmay be worse for some people than others, so having the option to\navoid that might be good after all.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 5 Feb 2020 16:29:54 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "At Wed, 5 Feb 2020 16:29:54 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> On Wed, Feb 5, 2020 at 3:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > On 2020/02/04 10:34, Amit Langote wrote:\n> > > I like:\n> >\n> > Thanks for reviewing the patch!\n> >\n> > > 1. initializing\n> > > 2-5 waiting for backup start checkpoint to finish\n> >\n> > Can we shorten this to \"waiting for checkpoint\"? IMO the simpler\n> > phase name is better and \"to finish\" sounds a bit redundant. Also\n> > in the description of pg_stat_progress_create_index, basically\n> > \"waiting for xxx\" is used.\n> \n> \"waiting for checkpoint\" works for me.\n\nI'm not sure, but doesn't that mean \"waiting for a checkpoint to\nstart\"? Sorry in advance if that is not the case.\n\n> > > 3-3 streaming database files\n> > > 4-5 waiting for wal archiving to finish\n> >\n> > Can we shorten this to \"waiting for wal archiving\" because of\n> > the same reason as above?\n> \n> Yes.\n> \n> > > 5-1 transferring wal (or streaming wal)\n> >\n> > IMO \"transferring wal\" is better because this phase happens only when\n> > \"--wal-method=fetch\" is specified in pg_basebackup. \"streaming wal\"\n> > seems to implie \"--wal-method=stream\", instead.\n> \n> Ah, okay,\n> \n> > > Reading this:\n> > >\n> > > + <entry><structfield>backup_total</structfield></entry>\n> > > + <entry><type>bigint</type></entry>\n> > > + <entry>\n> > > + Total amount of data that will be streamed. If progress reporting\n> > > + is not enabled in <application>pg_basebackup</application>\n> > > + (i.e., <literal>--progress</literal> option is not specified),\n> > > + this is <literal>0</literal>.\n> > >\n> > > I wonder how hard would it be to change basebackup.c to always set\n> > > backup_total, irrespective of whether --progress is specified with\n> > > pg_basebackup or not? 
It seems quite misleading to leave it set to 0,\n> > > because one may panic that they have lost their data, that is, if they\n> > > haven't first read the documentation.\n> >\n> > Yeah, I understand your concern. The pg_basebackup document explains\n> > the risk when --progress is specified, as follows. Since I imagined that\n> > someone may explicitly disable --progress to avoid this risk, I made\n> > the server estimate the total size only when --progress is specified.\n> > But you think that this overhead by --progress is negligibly small?\n> >\n> > --------------------\n> > When this is enabled, the backup will start by enumerating the size of\n> > the entire database, and then go back and send the actual contents.\n> > This may make the backup take slightly longer, and in particular it will\n> > take longer before the first data is sent.\n> > --------------------\n> \n> Sorry, I hadn't read this before. So, my proposal would make this a lie.\n> \n> Still, if \"streaming database files\" is the longest phase, then not\n> having even an approximation of how much data is to be streamed over\n> doesn't much help estimating progress, at least as long as one only\n> has this view to look at.\n> \n> That said, the overhead of checking the size before sending any data\n> may be worse for some people than others, so having the option to\n> avoid that might be good after all.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 05 Feb 2020 17:30:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 5:32 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 5 Feb 2020 16:29:54 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > On Wed, Feb 5, 2020 at 3:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > On 2020/02/04 10:34, Amit Langote wrote:\n> > > > I like:\n> > >\n> > > Thanks for reviewing the patch!\n> > >\n> > > > 1. initializing\n> > > > 2-5 waiting for backup start checkpoint to finish\n> > >\n> > > Can we shorten this to \"waiting for checkpoint\"? IMO the simpler\n> > > phase name is better and \"to finish\" sounds a bit redundant. Also\n> > > in the description of pg_stat_progress_create_index, basically\n> > > \"waiting for xxx\" is used.\n> >\n> > \"waiting for checkpoint\" works for me.\n>\n> I'm not sure, but doesn't that mean \"waiting for a checkpoint to\n> start\"? Sorry in advance if that is not the case.\n\nNo, I really meant \"to finish\". As Sawada-san said upthread, we\nshould really use text that describes the activity that usually takes\nlong. While it takes only a moment to actually start the\ncheckpoint, it might take long for it to finish. As Fujii-san says,\nthough, we don't need the noise words \"to finish\".\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 5 Feb 2020 17:54:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 18:25 Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Wed, Feb 5, 2020 at 6:15 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > 2020年2月5日(水) 17:54 Amit Langote <amitlangote09@gmail.com>:\n> >>\n> >> > I'm not sure, but doesn't that mean \"waiting for a checkpoint to\n> >> > start\"? Sorry in advance if that is not the case.\n> >>\n> >> No, I really meant \"to finish\". As Sawada-san said upthread, we\n> >> should really use text that describes the activity that usually takes\n> >> long. While it takes takes only a moment to actually start the\n> >> checkpoint, it might take long for it to finish.\n> >\n> > I meant that the wording might sound as if it implies \"to start\", but..\n>\n> Ah, I misunderstood then, sorry.\n>\n> So, maybe you're saying that \"waiting for checkpoint\" is ambiguous and\n> most people will assume it means \"...to start\". As for me, I assume\n> it ends with \"...to finish\".\n>\n> >> As Fujii-san says\n> >> though we don't need the noise words \"to finish\".\n> >\n> > Understood, sorry for my noise.\n>\n> Actually, that's an important point to consider and we should strive\n> to use words that are unambiguous.\n\n\nLast two messages weren’t sent to the list.\n\nThanks,\nAmit\n\n>\n",
"msg_date": "Wed, 5 Feb 2020 18:53:19 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "At Wed, 5 Feb 2020 18:53:19 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> Last two messages weren’t sent to the list.\n\nOops! Sorry, I made a mistake sending the mail.\n\n> On Wed, Feb 5, 2020 at 18:25 Amit Langote <amitlangote09@gmail.com> wrote:\n> \n> > On Wed, Feb 5, 2020 at 6:15 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > 2020年2月5日(水) 17:54 Amit Langote <amitlangote09@gmail.com>:\n> > >>\n> > So, maybe you're saying that \"waiting for checkpoint\" is ambiguous and\n> > most people will assume it means \"...to start\". As for me, I assume\n> > it ends with \"...to finish\".\n\nI'm not sure whether \"most people will assume\" that, which is why I said \"I'm not\nsure\". For example, it feels strange to me to use \"I'm waiting for Amit\"\nto express that I'm waiting for Amit to leave. That phrase gives me\nthat kind of uneasiness.\n\nI thought of \"establishing checkpoint\" or \"running a checkpoint\" as\nother candidates.\n\n> > >> As Fujii-san says\n> > >> though we don't need the noise words \"to finish\".\n> > >\n> > > Understood, sorry for my noise.\n> >\n> > Actually, that's an important point to consider and we should strive\n> > to use words that are unambiguous.\n\nI think it's not ambiguous if you know what happens during a backup, so my\nconcern was not ambiguity, but that the sentence might feel strange\nin that context.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Feb 2020 09:50:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Thu, Feb 6, 2020 at 9:51 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > On Wed, Feb 5, 2020 at 18:25 Amit Langote <amitlangote09@gmail.com> wrote:\n> > > So, maybe you're saying that \"waiting for checkpoint\" is ambiguous and\n> > > most people will assume it means \"...to start\". As for me, I assume\n> > > it ends with \"...to finish\".\n>\n> I'm not sure whether \"most people will assume\" that, which is why I said \"I'm not\n> sure\". For example, it feels strange to me to use \"I'm waiting for Amit\"\n> to express that I'm waiting for Amit to leave. That phrase gives me\n> that kind of uneasiness.\n>\n> I thought of \"establishing checkpoint\" or \"running a checkpoint\" as\n> other candidates.\n\nOkay, I understand. I am fine with \"running checkpoint\", although I\nthink \"waiting for checkpoint\" isn't totally wrong either.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 6 Feb 2020 11:07:22 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 4:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Feb 5, 2020 at 3:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Yeah, I understand your concern. The pg_basebackup document explains\n> > the risk when --progress is specified, as follows. Since I imagined that\n> > someone may explicitly disable --progress to avoid this risk, I made\n> > the server estimate the total size only when --progress is specified.\n> > But you think that this overhead by --progress is negligibly small?\n> >\n> > --------------------\n> > When this is enabled, the backup will start by enumerating the size of\n> > the entire database, and then go back and send the actual contents.\n> > This may make the backup take slightly longer, and in particular it will\n> > take longer before the first data is sent.\n> > --------------------\n>\n> Sorry, I hadn't read this before. So, my proposal would make this a lie.\n>\n> Still, if \"streaming database files\" is the longest phase, then not\n> having even an approximation of how much data is to be streamed over\n> doesn't much help estimating progress, at least as long as one only\n> has this view to look at.\n>\n> That said, the overhead of checking the size before sending any data\n> may be worse for some people than others, so having the option to\n> avoid that might be good after all.\n\nBy the way, if calculating backup total size can take significantly\nlong in some cases, that is when requested by specifying --progress,\nthen it might be a good idea to define a separate phase for that, like\n\"estimating backup size\" or some such. 
Currently, it's part of\n\"starting backup\", which covers both running the checkpoint and size\nestimation which run one after another.\n\nI suspect people might never get stuck on \"estimating backup size\" as\nthey might on \"running checkpoint\", which perhaps only strengthens the\ncase for *always* calculating the size before sending the backup\nheader.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 6 Feb 2020 11:35:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/02/06 11:07, Amit Langote wrote:\n> On Thu, Feb 6, 2020 at 9:51 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>>> On Wed, Feb 5, 2020 at 18:25 Amit Langote <amitlangote09@gmail.com> wrote:\n>>>> So, maybe you're saying that \"waiting for checkpoint\" is ambiguous and\n>>>> most people will assume it means \"...to start\". As for me, I assume\n>>>> it ends with \"...to finish\".\n>>\n>> I'm not sure \"most peple will assume\" or not, so I said \"I'm not\n>> sure\". For example, I feel strangeness to use \"I'm waiting for Amit\"\n>> to express that I'm waiting Amit to leave there. That phrase gives me\n>> such kind of uneasiness.\n>>\n>> I thought of \"establishing checkpoint\" or \"running a checkpoint\" as\n>> other candidates.\n> \n> Okay, I understand. I am fine with \"running checkpoint\", although I\n> think \"waiting for checkpoint\" isn't totally wrong either.\n\nYeah, but if \"waiting for XXX\" sounds a bit confusing to some people,\nI'm OK to back to \"waiting for XXX to finish\" that you originally\nproposed.\n\nAttached the updated version of the patch. This patch uses the following\ndescriptions of the phases.\n\n waiting for checkpoint to finish\n estimating backup size\n streaming database files\n waiting for wal archiving to finish\n transferring wal files\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Mon, 17 Feb 2020 22:00:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/02/06 11:35, Amit Langote wrote:\n> On Wed, Feb 5, 2020 at 4:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Wed, Feb 5, 2020 at 3:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Yeah, I understand your concern. The pg_basebackup document explains\n>>> the risk when --progress is specified, as follows. Since I imagined that\n>>> someone may explicitly disable --progress to avoid this risk, I made\n>>> the server estimate the total size only when --progress is specified.\n>>> But you think that this overhead by --progress is negligibly small?\n>>>\n>>> --------------------\n>>> When this is enabled, the backup will start by enumerating the size of\n>>> the entire database, and then go back and send the actual contents.\n>>> This may make the backup take slightly longer, and in particular it will\n>>> take longer before the first data is sent.\n>>> --------------------\n>>\n>> Sorry, I hadn't read this before. So, my proposal would make this a lie.\n>>\n>> Still, if \"streaming database files\" is the longest phase, then not\n>> having even an approximation of how much data is to be streamed over\n>> doesn't much help estimating progress, at least as long as one only\n>> has this view to look at.\n>>\n>> That said, the overhead of checking the size before sending any data\n>> may be worse for some people than others, so having the option to\n>> avoid that might be good after all.\n> \n> By the way, if calculating backup total size can take significantly\n> long in some cases, that is when requested by specifying --progress,\n> then it might be a good idea to define a separate phase for that, like\n> \"estimating backup size\" or some such. 
Currently, it's part of\n> \"starting backup\", which covers both running the checkpoint and size\n> estimation which run one after another.\n\nOK, I added this phase in the latest patch that I posted upthread.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 17 Feb 2020 22:01:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Mon, Feb 17, 2020 at 10:00 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/06 11:07, Amit Langote wrote:\n> > On Thu, Feb 6, 2020 at 9:51 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >> I thought of \"establishing checkpoint\" or \"running a checkpoint\" as\n> >> other candidates.\n> >\n> > Okay, I understand. I am fine with \"running checkpoint\", although I\n> > think \"waiting for checkpoint\" isn't totally wrong either.\n>\n> Yeah, but if \"waiting for XXX\" sounds a bit confusing to some people,\n> I'm OK to back to \"waiting for XXX to finish\" that you originally\n> proposed.\n>\n> Attached the updated version of the patch. This patch uses the following\n> descriptions of the phases.\n>\n> waiting for checkpoint to finish\n> estimating backup size\n> streaming database files\n> waiting for wal archiving to finish\n> transferring wal files\n\nThanks for the new patch.\n\nI noticed that there is missing </para> tag in the documentation changes:\n\n+ <row>\n+ <entry><literal>waiting for checkpoint to finish</literal></entry>\n+ <entry>\n+ The WAL sender process is currently performing\n+ <function>pg_start_backup</function> to set up for\n+ taking a base backup, and waiting for backup start\n+ checkpoint to finish.\n+ </entry>\n+ <row>\n\nThere should be a </row> between </entry> and <row> at the end of the\nhunk shown above.\n\nSorry for not saying it before, but the following text needs revisiting:\n\n+ <para>\n+ Whenever <application>pg_basebackup</application> is taking a base\n+ backup, the <structname>pg_stat_progress_basebackup</structname>\n+ view will contain a row for each WAL sender process that is currently\n+ running <command>BASE_BACKUP</command> replication command\n+ and streaming the backup.\n\nI understand that you wrote \"Whenever pg_basebackup is taking a\nbackup...\", because description of other views contains a similar\nstarting line. 
But, it may not only be pg_basebackup that would be\nserved by this view, no? It could be any tool that speaks Postgres'\nreplication protocol and thus be able to send a BASE_BACKUP command.\nIf that is correct, I would write something like \"When an application\nis taking a backup\" or some such without specific reference to\npg_basebackup. Thoughts?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 18 Feb 2020 16:02:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/02/18 16:02, Amit Langote wrote:\n> On Mon, Feb 17, 2020 at 10:00 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/06 11:07, Amit Langote wrote:\n>>> On Thu, Feb 6, 2020 at 9:51 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>>> I thought of \"establishing checkpoint\" or \"running a checkpoint\" as\n>>>> other candidates.\n>>>\n>>> Okay, I understand. I am fine with \"running checkpoint\", although I\n>>> think \"waiting for checkpoint\" isn't totally wrong either.\n>>\n>> Yeah, but if \"waiting for XXX\" sounds a bit confusing to some people,\n>> I'm OK to back to \"waiting for XXX to finish\" that you originally\n>> proposed.\n>>\n>> Attached the updated version of the patch. This patch uses the following\n>> descriptions of the phases.\n>>\n>> waiting for checkpoint to finish\n>> estimating backup size\n>> streaming database files\n>> waiting for wal archiving to finish\n>> transferring wal files\n> \n> Thanks for the new patch.\n\nThanks for reviewing the patch!\n\n> I noticed that there is missing </para> tag in the documentation changes:\n\nCould you tell me where I should add </para> tag?\n\n> + <row>\n> + <entry><literal>waiting for checkpoint to finish</literal></entry>\n> + <entry>\n> + The WAL sender process is currently performing\n> + <function>pg_start_backup</function> to set up for\n> + taking a base backup, and waiting for backup start\n> + checkpoint to finish.\n> + </entry>\n> + <row>\n> \n> There should be a </row> between </entry> and <row> at the end of the\n> hunk shown above.\n\nWill fix. Thanks!\n\n> Sorry for not saying it before, but the following text needs revisiting:\n\nOf course, no problem. 
I'm happy to improve the patch!\n\n> + <para>\n> + Whenever <application>pg_basebackup</application> is taking a base\n> + backup, the <structname>pg_stat_progress_basebackup</structname>\n> + view will contain a row for each WAL sender process that is currently\n> + running <command>BASE_BACKUP</command> replication command\n> + and streaming the backup.\n> \n> I understand that you wrote \"Whenever pg_basebackup is taking a\n> backup...\", because description of other views contains a similar\n> starting line. But, it may not only be pg_basebackup that would be\n> served by this view, no? It could be any tool that speaks Postgres'\n> replication protocol and thus be able to send a BASE_BACKUP command.\n> If that is correct, I would write something like \"When an application\n> is taking a backup\" or some such without specific reference to\n> pg_basebackup. Thoughts?\n\nYeah, there may be some such applications. But most users would\nuse pg_basebackup, so getting rid of the reference to pg_basebackup\nwould make the description a bit difficult-to-read. Also I can imagine\nthat an user of those backup applications would get to know\nthe progress reporting view from their documents. So I prefer\nthe existing one or something like \"Whenever an application like\n pg_basebackup ...\". Thought?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 18 Feb 2020 16:42:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 4:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/18 16:02, Amit Langote wrote:\n> > I noticed that there is missing </para> tag in the documentation changes:\n>\n> Could you tell me where I should add </para> tag?\n>\n> > + <row>\n> > + <entry><literal>waiting for checkpoint to finish</literal></entry>\n> > + <entry>\n> > + The WAL sender process is currently performing\n> > + <function>pg_start_backup</function> to set up for\n> > + taking a base backup, and waiting for backup start\n> > + checkpoint to finish.\n> > + </entry>\n> > + <row>\n> >\n> > There should be a </row> between </entry> and <row> at the end of the\n> > hunk shown above.\n>\n> Will fix. Thanks!\n\nJust to clarify, that's the missing </para> tag I am talking about above.\n\n> > + <para>\n> > + Whenever <application>pg_basebackup</application> is taking a base\n> > + backup, the <structname>pg_stat_progress_basebackup</structname>\n> > + view will contain a row for each WAL sender process that is currently\n> > + running <command>BASE_BACKUP</command> replication command\n> > + and streaming the backup.\n> >\n> > I understand that you wrote \"Whenever pg_basebackup is taking a\n> > backup...\", because description of other views contains a similar\n> > starting line. But, it may not only be pg_basebackup that would be\n> > served by this view, no? It could be any tool that speaks Postgres'\n> > replication protocol and thus be able to send a BASE_BACKUP command.\n> > If that is correct, I would write something like \"When an application\n> > is taking a backup\" or some such without specific reference to\n> > pg_basebackup. Thoughts?\n>\n> Yeah, there may be some such applications. But most users would\n> use pg_basebackup, so getting rid of the reference to pg_basebackup\n> would make the description a bit difficult-to-read. 
Also I can imagine\n> that an user of those backup applications would get to know\n> the progress reporting view from their documents. So I prefer\n> the existing one or something like \"Whenever an application like\n> pg_basebackup ...\". Thought?\n\nSure, \"an application like pg_basebackup\" sounds fine to me.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 18 Feb 2020 16:53:04 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/02/18 16:53, Amit Langote wrote:\n> On Tue, Feb 18, 2020 at 4:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/18 16:02, Amit Langote wrote:\n>>> I noticed that there is missing </para> tag in the documentation changes:\n>>\n>> Could you tell me where I should add </para> tag?\n>>\n>>> + <row>\n>>> + <entry><literal>waiting for checkpoint to finish</literal></entry>\n>>> + <entry>\n>>> + The WAL sender process is currently performing\n>>> + <function>pg_start_backup</function> to set up for\n>>> + taking a base backup, and waiting for backup start\n>>> + checkpoint to finish.\n>>> + </entry>\n>>> + <row>\n>>>\n>>> There should be a </row> between </entry> and <row> at the end of the\n>>> hunk shown above.\n>>\n>> Will fix. Thanks!\n> \n> Just to clarify, that's the missing </para> tag I am talking about above.\n\nOK, so I added </row> tag just after the above </entry>.\n\n>>> + <para>\n>>> + Whenever <application>pg_basebackup</application> is taking a base\n>>> + backup, the <structname>pg_stat_progress_basebackup</structname>\n>>> + view will contain a row for each WAL sender process that is currently\n>>> + running <command>BASE_BACKUP</command> replication command\n>>> + and streaming the backup.\n>>>\n>>> I understand that you wrote \"Whenever pg_basebackup is taking a\n>>> backup...\", because description of other views contains a similar\n>>> starting line. But, it may not only be pg_basebackup that would be\n>>> served by this view, no? It could be any tool that speaks Postgres'\n>>> replication protocol and thus be able to send a BASE_BACKUP command.\n>>> If that is correct, I would write something like \"When an application\n>>> is taking a backup\" or some such without specific reference to\n>>> pg_basebackup. Thoughts?\n>>\n>> Yeah, there may be some such applications. 
But most users would\n>> use pg_basebackup, so getting rid of the reference to pg_basebackup\n>> would make the description a bit difficult-to-read. Also I can imagine\n>> that an user of those backup applications would get to know\n>> the progress reporting view from their documents. So I prefer\n>> the existing one or something like \"Whenever an application like\n>> pg_basebackup ...\". Thought?\n> \n> Sure, \"an application like pg_basebackup\" sounds fine to me.\n\nOK, I changed the doc that way. Attached the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Tue, 18 Feb 2020 21:31:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 9:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> OK, I changed the doc that way. Attached the updated version of the patch.\n\nThank you. Looks good to me.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 19 Feb 2020 11:22:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/02/19 11:22, Amit Langote wrote:\n> On Tue, Feb 18, 2020 at 9:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> OK, I changed the doc that way. Attached the updated version of the patch.\n> \n> Thank you. Looks good to me.\n\nThanks for the review!\nDo you think the patch can be marked as \"ready for committer\"?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 19 Feb 2020 21:49:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 9:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/19 11:22, Amit Langote wrote:\n> > On Tue, Feb 18, 2020 at 9:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> OK, I changed the doc that way. Attached the updated version of the patch.\n> >\n> > Thank you. Looks good to me.\n>\n> Thanks for the review!\n> You think that the patch can be marked as \"ready for committer\"?\n\nAs far as I am concerned, yes. :)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 19 Feb 2020 21:51:32 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/02/18 21:31, Fujii Masao wrote:\n> \n> \n> On 2020/02/18 16:53, Amit Langote wrote:\n>> On Tue, Feb 18, 2020 at 4:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> On 2020/02/18 16:02, Amit Langote wrote:\n>>>> I noticed that there is missing </para> tag in the documentation changes:\n>>>\n>>> Could you tell me where I should add </para> tag?\n>>>\n>>>> + <row>\n>>>> + <entry><literal>waiting for checkpoint to finish</literal></entry>\n>>>> + <entry>\n>>>> + The WAL sender process is currently performing\n>>>> + <function>pg_start_backup</function> to set up for\n>>>> + taking a base backup, and waiting for backup start\n>>>> + checkpoint to finish.\n>>>> + </entry>\n>>>> + <row>\n>>>>\n>>>> There should be a </row> between </entry> and <row> at the end of the\n>>>> hunk shown above.\n>>>\n>>> Will fix. Thanks!\n>>\n>> Just to clarify, that's the missing </para> tag I am talking about above.\n> \n> OK, so I added </row> tag just after the above </entry>.\n> \n>>>> + <para>\n>>>> + Whenever <application>pg_basebackup</application> is taking a base\n>>>> + backup, the <structname>pg_stat_progress_basebackup</structname>\n>>>> + view will contain a row for each WAL sender process that is currently\n>>>> + running <command>BASE_BACKUP</command> replication command\n>>>> + and streaming the backup.\n>>>>\n>>>> I understand that you wrote \"Whenever pg_basebackup is taking a\n>>>> backup...\", because description of other views contains a similar\n>>>> starting line. But, it may not only be pg_basebackup that would be\n>>>> served by this view, no? It could be any tool that speaks Postgres'\n>>>> replication protocol and thus be able to send a BASE_BACKUP command.\n>>>> If that is correct, I would write something like \"When an application\n>>>> is taking a backup\" or some such without specific reference to\n>>>> pg_basebackup. Thoughts?\n>>>\n>>> Yeah, there may be some such applications. 
But most users would\n>>> use pg_basebackup, so getting rid of the reference to pg_basebackup\n>>> would make the description a bit difficult-to-read. Also I can imagine\n>>> that an user of those backup applications would get to know\n>>> the progress reporting view from their documents. So I prefer\n>>> the existing one or something like \"Whenever an application like\n>>> pg_basebackup ...\". Thought?\n>>\n>> Sure, \"an application like pg_basebackup\" sounds fine to me.\n> \n> OK, I changed the doc that way. Attached the updated version of the patch.\n\nAttached is the updated version of the patch.\n\nThe previous patch used only pgstat_progress_update_param()\neven when updating multiple values. Since those updates are\nnot atomic, this can cause readers of the values to see\nthe intermediate states. To avoid this issue, the latest patch\nuses pgstat_progress_update_multi_param(), instead.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 26 Feb 2020 23:18:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/02/26 23:18, Fujii Masao wrote:\n> \n> \n> On 2020/02/18 21:31, Fujii Masao wrote:\n>>\n>>\n>> On 2020/02/18 16:53, Amit Langote wrote:\n>>> On Tue, Feb 18, 2020 at 4:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> On 2020/02/18 16:02, Amit Langote wrote:\n>>>>> I noticed that there is missing </para> tag in the documentation changes:\n>>>>\n>>>> Could you tell me where I should add </para> tag?\n>>>>\n>>>>> + <row>\n>>>>> + <entry><literal>waiting for checkpoint to finish</literal></entry>\n>>>>> + <entry>\n>>>>> + The WAL sender process is currently performing\n>>>>> + <function>pg_start_backup</function> to set up for\n>>>>> + taking a base backup, and waiting for backup start\n>>>>> + checkpoint to finish.\n>>>>> + </entry>\n>>>>> + <row>\n>>>>>\n>>>>> There should be a </row> between </entry> and <row> at the end of the\n>>>>> hunk shown above.\n>>>>\n>>>> Will fix. Thanks!\n>>>\n>>> Just to clarify, that's the missing </para> tag I am talking about above.\n>>\n>> OK, so I added </row> tag just after the above </entry>.\n>>\n>>>>> + <para>\n>>>>> + Whenever <application>pg_basebackup</application> is taking a base\n>>>>> + backup, the <structname>pg_stat_progress_basebackup</structname>\n>>>>> + view will contain a row for each WAL sender process that is currently\n>>>>> + running <command>BASE_BACKUP</command> replication command\n>>>>> + and streaming the backup.\n>>>>>\n>>>>> I understand that you wrote \"Whenever pg_basebackup is taking a\n>>>>> backup...\", because description of other views contains a similar\n>>>>> starting line. But, it may not only be pg_basebackup that would be\n>>>>> served by this view, no? It could be any tool that speaks Postgres'\n>>>>> replication protocol and thus be able to send a BASE_BACKUP command.\n>>>>> If that is correct, I would write something like \"When an application\n>>>>> is taking a backup\" or some such without specific reference to\n>>>>> pg_basebackup. 
Thoughts?\n>>>>\n>>>> Yeah, there may be some such applications. But most users would\n>>>> use pg_basebackup, so getting rid of the reference to pg_basebackup\n>>>> would make the description a bit difficult-to-read. Also I can imagine\n>>>> that an user of those backup applications would get to know\n>>>> the progress reporting view from their documents. So I prefer\n>>>> the existing one or something like \"Whenever an application like\n>>>> pg_basebackup ...\". Thought?\n>>>\n>>> Sure, \"an application like pg_basebackup\" sounds fine to me.\n>>\n>> OK, I changed the doc that way. Attached the updated version of the patch.\n> \n> Attached is the updated version of the patch.\n> \n> The previous patch used only pgstat_progress_update_param()\n> even when updating multiple values. Since those updates are\n> not atomic, this can cause readers of the values to see\n> the intermediate states. To avoid this issue, the latest patch\n> uses pgstat_progress_update_multi_param(), instead.\n\nAttached the updated version of the patch.\nBarring any objections, I plan to commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Mon, 2 Mar 2020 17:29:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "Hello\n\nI reviewed a recently published patch. Looks good for me.\nOne small note: the values for the new definitions in progress.h seems not to be aligned vertically. However, pgindent doesn't objects.\n\nregards, Sergei\n\n\n",
"msg_date": "Mon, 02 Mar 2020 13:27:23 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "At Mon, 2 Mar 2020 17:29:30 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > Attached is the updated version of the patch.\n> > The previous patch used only pgstat_progress_update_param()\n> > even when updating multiple values. Since those updates are\n> > not atomic, this can cause readers of the values to see\n> > the intermediate states. To avoid this issue, the latest patch\n> > uses pgstat_progress_update_multi_param(), instead.\n> \n> Attached the updated version of the patch.\n> Barring any objections, I plan to commit this patch.\n\nIt is working as designed and the status names are fine to me.\n\nThe last one comment from me.\nThe newly defined symbols have inconsistent indents.\n\n===\n#define PROGRESS_BASEBACKUP_PHASE\t\t\t\t\t\t0\n#define PROGRESS_BASEBACKUP_BACKUP_TOTAL\t\t\t1\n#define PROGRESS_BASEBACKUP_BACKUP_STREAMED\t\t\t2\n#define PROGRESS_BASEBACKUP_TBLSPC_TOTAL\t\t\t\t3\n#define PROGRESS_BASEBACKUP_TBLSPC_STREAMED\t\t\t4\n\n/* Phases of pg_basebackup (as advertised via PROGRESS_BASEBACKUP_PHASE) */\n#define PROGRESS_BASEBACKUP_PHASE_WAIT_CHECKPOINT\t\t1\n#define PROGRESS_BASEBACKUP_PHASE_ESTIMATE_BACKUP_SIZE\t\t2\n#define PROGRESS_BASEBACKUP_PHASE_STREAM_BACKUP\t\t3\n#define PROGRESS_BASEBACKUP_PHASE_WAIT_WAL_ARCHIVE\t\t4\n#define PROGRESS_BASEBACKUP_PHASE_TRANSFER_WAL\t\t5\n====\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 03 Mar 2020 09:27:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/02 19:27, Sergei Kornilov wrote:\n> Hello\n> \n> I reviewed a recently published patch. Looks good for me.\n\nThanks for the review! I pushed the patch.\n\n> One small note: the values for the new definitions in progress.h seems not to be aligned vertically. However, pgindent doesn't objects.\n\nYes, I fixed that.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 3 Mar 2020 12:08:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/03 9:27, Kyotaro Horiguchi wrote:\n> At Mon, 2 Mar 2020 17:29:30 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> Attached is the updated version of the patch.\n>>> The previous patch used only pgstat_progress_update_param()\n>>> even when updating multiple values. Since those updates are\n>>> not atomic, this can cause readers of the values to see\n>>> the intermediate states. To avoid this issue, the latest patch\n>>> uses pgstat_progress_update_multi_param(), instead.\n>>\n>> Attached the updated version of the patch.\n>> Barring any objections, I plan to commit this patch.\n> \n> It is working as designed and the status names are fine to me.\n\nThanks for the review! I pushed the patch.\n\n> The last one comment from me.\n> The newly defined symbols have inconsistent indents.\n\nYes, I fixed that.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 3 Mar 2020 12:09:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "Hi, \r\n\r\nThank you for developing good features.\r\nThe attached patch is a small fix to the committed documentation. This patch fixes the description literal for the backup_streamed column.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Fujii Masao [mailto:masao.fujii@oss.nttdata.com] \r\nSent: Tuesday, March 3, 2020 12:09 PM\r\nTo: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\nCc: amitlangote09@gmail.com; masahiko.sawada@2ndquadrant.com; pgsql-hackers@postgresql.org\r\nSubject: Re: pg_stat_progress_basebackup - progress reporting for pg_basebackup, in the server side\r\n\r\n\r\n\r\nOn 2020/03/03 9:27, Kyotaro Horiguchi wrote:\r\n> At Mon, 2 Mar 2020 17:29:30 +0900, Fujii Masao \r\n> <masao.fujii@oss.nttdata.com> wrote in\r\n>>> Attached is the updated version of the patch.\r\n>>> The previous patch used only pgstat_progress_update_param() even \r\n>>> when updating multiple values. Since those updates are not atomic, \r\n>>> this can cause readers of the values to see the intermediate states. \r\n>>> To avoid this issue, the latest patch uses \r\n>>> pgstat_progress_update_multi_param(), instead.\r\n>>\r\n>> Attached the updated version of the patch.\r\n>> Barring any objections, I plan to commit this patch.\r\n> \r\n> It is working as designed and the status names are fine to me.\r\n\r\nThanks for the review! I pushed the patch.\r\n\r\n> The last one comment from me.\r\n> The newly defined symbols have inconsistent indents.\r\n\r\nYes, I fixed that.\r\n\r\nRegards,\r\n\r\n\r\n--\r\nFujii Masao\r\nNTT DATA CORPORATION\r\nAdvanced Platform Technology Group\r\nResearch and Development Headquarters",
"msg_date": "Tue, 3 Mar 2020 05:37:57 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/03 14:37, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n> Hi,\n> \n> Thank you for developing good features.\n> The attached patch is a small fix to the committed documentation. This patch fixes the description literal for the backup_streamed column.\n\nThanks for the report and patch! Pushed.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 3 Mar 2020 15:03:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 10:03 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/03 14:37, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n> > Hi,\n> >\n> > Thank you for developing good features.\n> > The attached patch is a small fix to the committed documentation. This patch fixes the description literal for the backup_streamed column.\n>\n> Thanks for the report and patch! Pushed.\n\nThis patch requires, AIUI, that you add -P to the pg_basebackup\ncommandline in order to get the progress tracked in details\nserverside. But this also generates output in the client that one\nmight not want.\n\nShould we perhaps have a switch in pg_basebackup that enables the\nserver side tracking only, without generating output in the client?\n\n//Magnus\n\n\n",
"msg_date": "Wed, 4 Mar 2020 16:31:47 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/05 9:31, Magnus Hagander wrote:\n> On Mon, Mar 2, 2020 at 10:03 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/03 14:37, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n>>> Hi,\n>>>\n>>> Thank you for developing good features.\n>>> The attached patch is a small fix to the committed documentation. This patch fixes the description literal for the backup_streamed column.\n>>\n>> Thanks for the report and patch! Pushed.\n> \n> This patch requires, AIUI, that you add -P to the pg_basebackup\n> commandline in order to get the progress tracked in details\n> serverside.\n\nWhether --progress is enabled or not, the pg_stat_progress_basebackup\nview report the progress of the backup in the server side. But the total\namount of data that will be streamed is estimated and reported only when\nthis option is enabled.\n\n> But this also generates output in the client that one\n> might not want.\n> \n> Should we perhaps have a switch in pg_basebackup that enables the\n> server side tracking only, without generating output in the client?\n\nYes, this sounds reasonable.\n\nI have two ideas.\n\n(1) Extend --progress option so that it accepts the setting values like\n none, server, both (better names?). If both is specified, PROGRESS\n option is specified in BASE_BACKUP replication command and\n the total backup size is estimated in the server side, but the progress\n is not reported in a client side. If none, PROGRESS option is not\n specified in BASE_BACKUP. The demerit of this idea is that --progress\n option without argument is not supported yet and the existing\n application using --progress option when using pg_basebackup needs\n to be updated when upgrading PostgreSQL version to v13.\n\n(2) Add something like --estimate-backup-size (better name?) option\n to pg_basebackup. 
If it's specified, PROGRESS option is specified but\n the progress is not reported in a client side.\n\nThought?\n\nOr, as another approach, it might be worth considering to make\nthe server always estimate the total backup size whether --progress is\nspecified or not, as Amit argued upthread. If the time required to\nestimate the backup size is negligible compared to total backup time,\nIMO this approach seems better. If we adopt this, we can also get\nrid of PROGESS option from BASE_BACKUP replication command.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 5 Mar 2020 13:53:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020-03-05 05:53, Fujii Masao wrote:\n> Or, as another approach, it might be worth considering to make\n> the server always estimate the total backup size whether --progress is\n> specified or not, as Amit argued upthread. If the time required to\n> estimate the backup size is negligible compared to total backup time,\n> IMO this approach seems better. If we adopt this, we can also get\n> rid of PROGESS option from BASE_BACKUP replication command.\n\nI think that would be preferable.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 5 Mar 2020 08:15:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Thu, Mar 5, 2020 at 8:15 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-03-05 05:53, Fujii Masao wrote:\n> > Or, as another approach, it might be worth considering to make\n> > the server always estimate the total backup size whether --progress is\n> > specified or not, as Amit argued upthread. If the time required to\n> > estimate the backup size is negligible compared to total backup time,\n> > IMO this approach seems better. If we adopt this, we can also get\n> > rid of PROGESS option from BASE_BACKUP replication command.\n>\n> I think that would be preferable.\n\n+1\n\n\n",
"msg_date": "Thu, 5 Mar 2020 10:32:45 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "At Thu, 5 Mar 2020 10:32:45 +0100, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Thu, Mar 5, 2020 at 8:15 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > On 2020-03-05 05:53, Fujii Masao wrote:\n> > > Or, as another approach, it might be worth considering to make\n> > > the server always estimate the total backup size whether --progress is\n> > > specified or not, as Amit argued upthread. If the time required to\n> > > estimate the backup size is negligible compared to total backup time,\n> > > IMO this approach seems better. If we adopt this, we can also get\n> > > rid of PROGESS option from BASE_BACKUP replication command.\n> >\n> > I think that would be preferable.\n> \n> +1\n\n+1\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Mar 2020 18:41:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Thu, 5 Mar 2020 10:32:45 +0100\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Mar 5, 2020 at 8:15 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > On 2020-03-05 05:53, Fujii Masao wrote: \n> > > Or, as another approach, it might be worth considering to make\n> > > the server always estimate the total backup size whether --progress is\n> > > specified or not, as Amit argued upthread. If the time required to\n> > > estimate the backup size is negligible compared to total backup time,\n> > > IMO this approach seems better. If we adopt this, we can also get\n> > > rid of PROGESS option from BASE_BACKUP replication command. \n> >\n> > I think that would be preferable. \n> \n> +1\n\n+1\n\n\n",
"msg_date": "Thu, 5 Mar 2020 10:41:02 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Mar 4, 2020 at 11:15 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-03-05 05:53, Fujii Masao wrote:\n> > Or, as another approach, it might be worth considering to make\n> > the server always estimate the total backup size whether --progress is\n> > specified or not, as Amit argued upthread. If the time required to\n> > estimate the backup size is negligible compared to total backup time,\n> > IMO this approach seems better. If we adopt this, we can also get\n> > rid of PROGESS option from BASE_BACKUP replication command.\n>\n> I think that would be preferable.\n\n From a UI perspective I definitely agree.\n\nThe problem with that one is that it can take a non-trivlal amount of\ntime, that's why it was made an option (in the protocol) in the first\nplace. Particularly if you have a database with many small objets.\n\nIs it enough to care about? I'm not sure, but it's definitely\nsomething to consider. It was not negligible in some tests I ran then,\nbut it is quite some time ago and reality has definitely changed since\nthen.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 5 Mar 2020 07:45:24 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/06 0:45, Magnus Hagander wrote:\n> On Wed, Mar 4, 2020 at 11:15 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> On 2020-03-05 05:53, Fujii Masao wrote:\n>>> Or, as another approach, it might be worth considering to make\n>>> the server always estimate the total backup size whether --progress is\n>>> specified or not, as Amit argued upthread. If the time required to\n>>> estimate the backup size is negligible compared to total backup time,\n>>> IMO this approach seems better. If we adopt this, we can also get\n>>> rid of PROGESS option from BASE_BACKUP replication command.\n>>\n>> I think that would be preferable.\n> \n> From a UI perspective I definitely agree.\n> \n> The problem with that one is that it can take a non-trivlal amount of\n> time, that's why it was made an option (in the protocol) in the first\n> place. Particularly if you have a database with many small objets.\n\nYeah, this is why I made the server estimate the total backup size\nonly when --progress is specified.\n\nAnother idea is;\n- Make pg_basebackup specify PROGRESS option in BASE_BACKUP command\n whether --progress is specified or not. This causes the server to estimate\n the total backup size even when users don't specify --progress.\n- Change pg_basebackup so that it treats --progress option as just a knob to\n determine whether to report the progress in a client-side.\n- Add new option like --no-estimate-backup-size (better name?) to\n pg_basebackup. If this option is specified, pg_basebackup doesn't use\n PROGRESS in BASE_BACKUP and the server doesn't estimate the backup size.\n\nI believe that the time required to estimate the backup size is not so large\nin most cases, so in the above idea, most users don't need to specify more\noption for the estimation. 
This is good for UI perspective.\n\nOTOH, users who are worried about the estimation time can use\n--no-estimate-backup-size option and skip the time-consuming estimation.\n\nThought?\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 6 Mar 2020 18:51:55 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Fri, Mar 6, 2020 at 1:51 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/06 0:45, Magnus Hagander wrote:\n> > On Wed, Mar 4, 2020 at 11:15 PM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> >>\n> >> On 2020-03-05 05:53, Fujii Masao wrote:\n> >>> Or, as another approach, it might be worth considering to make\n> >>> the server always estimate the total backup size whether --progress is\n> >>> specified or not, as Amit argued upthread. If the time required to\n> >>> estimate the backup size is negligible compared to total backup time,\n> >>> IMO this approach seems better. If we adopt this, we can also get\n> >>> rid of PROGESS option from BASE_BACKUP replication command.\n> >>\n> >> I think that would be preferable.\n> >\n> > From a UI perspective I definitely agree.\n> >\n> > The problem with that one is that it can take a non-trivlal amount of\n> > time, that's why it was made an option (in the protocol) in the first\n> > place. Particularly if you have a database with many small objets.\n>\n> Yeah, this is why I made the server estimate the total backup size\n> only when --progress is specified.\n>\n> Another idea is;\n> - Make pg_basebackup specify PROGRESS option in BASE_BACKUP command\n> whether --progress is specified or not. This causes the server to estimate\n> the total backup size even when users don't specify --progress.\n> - Change pg_basebackup so that it treats --progress option as just a knob to\n> determine whether to report the progress in a client-side.\n> - Add new option like --no-estimate-backup-size (better name?) to\n> pg_basebackup. If this option is specified, pg_basebackup doesn't use\n> PROGRESS in BASE_BACKUP and the server doesn't estimate the backup size.\n>\n> I believe that the time required to estimate the backup size is not so large\n> in most cases, so in the above idea, most users don't need to specify more\n> option for the estimation. 
This is good for UI perspective.\n>\n> OTOH, users who are worried about the estimation time can use\n> --no-estimate-backup-size option and skip the time-consuming estimation.\n\nPersonally, I think this is the best idea. it brings a \"reasonable\ndefault\", since most people are not going to have this problem, and\nyet a good way to get out from the issue for those that potentially\nhave it. Especially since we are now already showing the state that\n\"walsender is estimating the size\", it should be easy enugh for people\nto determine if they need to use this flag or not.\n\nIn nitpicking mode, I'd just call the flag --no-estimate-size -- it's\npretty clear things are about backups when you call pg_basebackup, and\nit keeps the option a bit more reasonable in length.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 6 Mar 2020 09:54:09 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "At Fri, 6 Mar 2020 09:54:09 -0800, Magnus Hagander <magnus@hagander.net> wrote in \n> On Fri, Mar 6, 2020 at 1:51 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > I believe that the time required to estimate the backup size is not so large\n> > in most cases, so in the above idea, most users don't need to specify more\n> > option for the estimation. This is good for UI perspective.\n> >\n> > OTOH, users who are worried about the estimation time can use\n> > --no-estimate-backup-size option and skip the time-consuming estimation.\n> \n> Personally, I think this is the best idea. it brings a \"reasonable\n> default\", since most people are not going to have this problem, and\n> yet a good way to get out from the issue for those that potentially\n> have it. Especially since we are now already showing the state that\n> \"walsender is estimating the size\", it should be easy enugh for people\n> to determine if they need to use this flag or not.\n> \n> In nitpicking mode, I'd just call the flag --no-estimate-size -- it's\n> pretty clear things are about backups when you call pg_basebackup, and\n> it keeps the option a bit more reasonable in length.\n\nI agree to the negative option and the shortened name. What if both\n--no-estimate-size and -P are specifed? Rejecting as conflicting\noptions or -P supercedes? I would choose the former because we don't\nknow which of them has priority.\n\n$ pg_basebackup --no-estimate-size -P\npg_basebackup: -P requires size estimate.\n$ \n\nregads.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 09 Mar 2020 14:12:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Sun, Mar 8, 2020 at 10:13 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 6 Mar 2020 09:54:09 -0800, Magnus Hagander <magnus@hagander.net> wrote in\n> > On Fri, Mar 6, 2020 at 1:51 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > I believe that the time required to estimate the backup size is not so large\n> > > in most cases, so in the above idea, most users don't need to specify more\n> > > option for the estimation. This is good for UI perspective.\n> > >\n> > > OTOH, users who are worried about the estimation time can use\n> > > --no-estimate-backup-size option and skip the time-consuming estimation.\n> >\n> > Personally, I think this is the best idea. it brings a \"reasonable\n> > default\", since most people are not going to have this problem, and\n> > yet a good way to get out from the issue for those that potentially\n> > have it. Especially since we are now already showing the state that\n> > \"walsender is estimating the size\", it should be easy enugh for people\n> > to determine if they need to use this flag or not.\n> >\n> > In nitpicking mode, I'd just call the flag --no-estimate-size -- it's\n> > pretty clear things are about backups when you call pg_basebackup, and\n> > it keeps the option a bit more reasonable in length.\n>\n> I agree to the negative option and the shortened name. What if both\n> --no-estimate-size and -P are specifed? Rejecting as conflicting\n> options or -P supercedes? I would choose the former because we don't\n> know which of them has priority.\n\nI would definitely prefer rejecting an invalid combination of options.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 8 Mar 2020 22:21:22 -0700",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/09 14:21, Magnus Hagander wrote:\n> On Sun, Mar 8, 2020 at 10:13 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>>\n>> At Fri, 6 Mar 2020 09:54:09 -0800, Magnus Hagander <magnus@hagander.net> wrote in\n>>> On Fri, Mar 6, 2020 at 1:51 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> I believe that the time required to estimate the backup size is not so large\n>>>> in most cases, so in the above idea, most users don't need to specify more\n>>>> option for the estimation. This is good for UI perspective.\n>>>>\n>>>> OTOH, users who are worried about the estimation time can use\n>>>> --no-estimate-backup-size option and skip the time-consuming estimation.\n>>>\n>>> Personally, I think this is the best idea. it brings a \"reasonable\n>>> default\", since most people are not going to have this problem, and\n>>> yet a good way to get out from the issue for those that potentially\n>>> have it. Especially since we are now already showing the state that\n>>> \"walsender is estimating the size\", it should be easy enugh for people\n>>> to determine if they need to use this flag or not.\n>>>\n>>> In nitpicking mode, I'd just call the flag --no-estimate-size -- it's\n>>> pretty clear things are about backups when you call pg_basebackup, and\n>>> it keeps the option a bit more reasonable in length.\n\n+1\n\n>> I agree to the negative option and the shortened name. What if both\n>> --no-estimate-size and -P are specifed? Rejecting as conflicting\n>> options or -P supercedes? I would choose the former because we don't\n>> know which of them has priority.\n> \n> I would definitely prefer rejecting an invalid combination of options.\n\n+1\n\nSo, I will make the patch adding support for --no-estimate-size option\nin pg_basebackup.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 10 Mar 2020 11:36:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/03/10 11:36, Fujii Masao wrote:\n> \n> \n> On 2020/03/09 14:21, Magnus Hagander wrote:\n>> On Sun, Mar 8, 2020 at 10:13 PM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>>>\n>>> At Fri, 6 Mar 2020 09:54:09 -0800, Magnus Hagander <magnus@hagander.net> wrote in\n>>>> On Fri, Mar 6, 2020 at 1:51 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>> I believe that the time required to estimate the backup size is not so large\n>>>>> in most cases, so in the above idea, most users don't need to specify more\n>>>>> option for the estimation. This is good for UI perspective.\n>>>>>\n>>>>> OTOH, users who are worried about the estimation time can use\n>>>>> --no-estimate-backup-size option and skip the time-consuming estimation.\n>>>>\n>>>> Personally, I think this is the best idea. it brings a \"reasonable\n>>>> default\", since most people are not going to have this problem, and\n>>>> yet a good way to get out from the issue for those that potentially\n>>>> have it. Especially since we are now already showing the state that\n>>>> \"walsender is estimating the size\", it should be easy enugh for people\n>>>> to determine if they need to use this flag or not.\n>>>>\n>>>> In nitpicking mode, I'd just call the flag --no-estimate-size -- it's\n>>>> pretty clear things are about backups when you call pg_basebackup, and\n>>>> it keeps the option a bit more reasonable in length.\n> \n> +1\n> \n>>> I agree to the negative option and the shortened name. What if both\n>>> --no-estimate-size and -P are specifed? Rejecting as conflicting\n>>> options or -P supercedes? 
I would choose the former because we don't\n>>> know which of them has priority.\n>>\n>> I would definitely prefer rejecting an invalid combination of options.\n> \n> +1\n> \n> So, I will make the patch adding support for --no-estimate-size option\n> in pg_basebackup.\n\nPatch attached.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Tue, 10 Mar 2020 18:09:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > So, I will make the patch adding support for --no-estimate-size option\n> > in pg_basebackup.\n>\n> Patch attached.\n\nLike the idea and the patch looks mostly good.\n\n+ total size. If the estimation is disabled in\n+ <application>pg_basebackup</application>\n+ (i.e., <literal>--no-estimate-size</literal> option is specified),\n+ this is always <literal>0</literal>.\n\n\"always\" seems unnecessary.\n\n+ This option prevents the server from estimating the total\n+ amount of backup data that will be streamed. In other words,\n+ <literal>backup_total</literal> column in the\n+ <structname>pg_stat_progress_basebackup</structname>\n+ view always indicates <literal>0</literal> if this option is enabled.\n\nHere too.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 10 Mar 2020 22:43:46 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/03/10 22:43, Amit Langote wrote:\n> On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> So, I will make the patch adding support for --no-estimate-size option\n>>> in pg_basebackup.\n>>\n>> Patch attached.\n> \n> Like the idea and the patch looks mostly good.\n\nThanks for reviewing the patch!\n\n> + total size. If the estimation is disabled in\n> + <application>pg_basebackup</application>\n> + (i.e., <literal>--no-estimate-size</literal> option is specified),\n> + this is always <literal>0</literal>.\n> \n> \"always\" seems unnecessary.\n\nFixed.\n\n> + This option prevents the server from estimating the total\n> + amount of backup data that will be streamed. In other words,\n> + <literal>backup_total</literal> column in the\n> + <structname>pg_stat_progress_basebackup</structname>\n> + view always indicates <literal>0</literal> if this option is enabled.\n> \n> Here too.\n\nFixed.\n\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 11 Mar 2020 02:19:14 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/10 22:43, Amit Langote wrote:\n> > On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>> So, I will make the patch adding support for --no-estimate-size option\n> >>> in pg_basebackup.\n> >>\n> >> Patch attached.\n> >\n> > Like the idea and the patch looks mostly good.\n>\n> Thanks for reviewing the patch!\n>\n> > + total size. If the estimation is disabled in\n> > + <application>pg_basebackup</application>\n> > + (i.e., <literal>--no-estimate-size</literal> option is specified),\n> > + this is always <literal>0</literal>.\n> >\n> > \"always\" seems unnecessary.\n>\n> Fixed.\n>\n> > + This option prevents the server from estimating the total\n> > + amount of backup data that will be streamed. In other words,\n> > + <literal>backup_total</literal> column in the\n> > + <structname>pg_stat_progress_basebackup</structname>\n> > + view always indicates <literal>0</literal> if this option is enabled.\n> >\n> > Here too.\n>\n> Fixed.\n>\n> Attached is the updated version of the patch.\n\nWould it perhaps be better to return NULL instead of 0 in the\nstatistics view if there is no data?\n\nAlso, should it really be the server version that decides how this\nfeature behaves, and not the pg_basebackup version? Given that the\nimplementation is entirely in the client, it seems that's more\nlogical?\n\n\nand a few docs nitpicks:\n\n <para>\n Whether this is enabled or not, the\n <structname>pg_stat_progress_basebackup</structname> view\n- report the progress of the backup in the server side. But note\n- that the total amount of data that will be streamed is estimated\n- and reported only when this option is enabled. In other words,\n- <literal>backup_total</literal> column in the view always\n- indicates <literal>0</literal> if this option is disabled.\n+ report the progress of the backup in the server side.\n+ </para>\n+ <para>\n+ This option is not allowed when using\n+ <option>--no-estimate-size</option>.\n </para>\n\nI think you should just remove that whole paragraph. The details are\nnow listed under the disable parameter.\n\n+ This option prevents the server from estimating the total\n+ amount of backup data that will be streamed. In other words,\n+ <literal>backup_total</literal> column in the\n+ <structname>pg_stat_progress_basebackup</structname>\n+ view indicates <literal>0</literal> if this option is enabled.\n\nI suggest just \"This option prevents the server from estimating the\ntotal amount of backup data that will be streamed, resulting in the\nackup_total column in pg_stat_progress_basebackup to be (zero or NULL\ndepending on above)\".\n\n(Markup needed on that of course ,but you get the idea)\n\n+ When this is disabled, the backup will start by enumerating\n\nI'd try to avoid the double negation, with something \"without this\nparameter, the backup will start...\"\n\n\n\n+ <para>\n+ <application>pg_basebackup</application> asks the server to estimate\n+ the total amount of data that will be streamed by default (unless\n+ <option>--no-estimate-size</option> is specified) in version 13 or later,\n+ and does that only when <option>--progress</option> is specified in\n+ the older versions.\n+ </para>\n\nThat's an item for the release notes, not for the reference page, I\nthink. It's already explained under the --disable parameter, so I\nsuggest removing this paragraph as well.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 10 Mar 2020 19:39:32 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Mar 11, 2020 at 3:39 AM Magnus Hagander <magnus@hagander.net> wrote:\n> On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Attached is the updated version of the patch.\n>\n> Would it perhaps be better to return NULL instead of 0 in the\n> statistics view if there is no data?\n\nNULL sounds better than 0.\n\nThank you,\nAmit\n\n\n",
"msg_date": "Wed, 11 Mar 2020 13:44:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/11 13:44, Amit Langote wrote:\n> On Wed, Mar 11, 2020 at 3:39 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Attached is the updated version of the patch.\n>>\n>> Would it perhaps be better to return NULL instead of 0 in the\n>> statistics view if there is no data?\n> \n> NULL sounds better than 0.\n\nOk, I will make the patch that changes the view so that NULL is\nreturned instead of 0. I'm thinking to commit that patch\nafter applying the change of pg_basebackup client side first.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 11 Mar 2020 13:51:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/03/11 3:39, Magnus Hagander wrote:\n> On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/10 22:43, Amit Langote wrote:\n>>> On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>> So, I will make the patch adding support for --no-estimate-size option\n>>>>> in pg_basebackup.\n>>>>\n>>>> Patch attached.\n>>>\n>>> Like the idea and the patch looks mostly good.\n>>\n>> Thanks for reviewing the patch!\n>>\n>>> + total size. If the estimation is disabled in\n>>> + <application>pg_basebackup</application>\n>>> + (i.e., <literal>--no-estimate-size</literal> option is specified),\n>>> + this is always <literal>0</literal>.\n>>>\n>>> \"always\" seems unnecessary.\n>>\n>> Fixed.\n>>\n>>> + This option prevents the server from estimating the total\n>>> + amount of backup data that will be streamed. In other words,\n>>> + <literal>backup_total</literal> column in the\n>>> + <structname>pg_stat_progress_basebackup</structname>\n>>> + view always indicates <literal>0</literal> if this option is enabled.\n>>>\n>>> Here too.\n>>\n>> Fixed.\n>>\n>> Attached is the updated version of the patch.\n> \n> Would it perhaps be better to return NULL instead of 0 in the\n> statistics view if there is no data?\n> \n> Also, should it really be the server version that decides how this\n> feature behaves, and not the pg_basebackup version? Given that the\n> implementation is entirely in the client, it seems that's more\n> logical?\n\nYeah, you're right. I changed the patch that way.\nAttached is the updated version of the patch.\n \n> and a few docs nitpicks:\n> \n> <para>\n> Whether this is enabled or not, the\n> <structname>pg_stat_progress_basebackup</structname> view\n> - report the progress of the backup in the server side. But note\n> - that the total amount of data that will be streamed is estimated\n> - and reported only when this option is enabled. In other words,\n> - <literal>backup_total</literal> column in the view always\n> - indicates <literal>0</literal> if this option is disabled.\n> + report the progress of the backup in the server side.\n> + </para>\n> + <para>\n> + This option is not allowed when using\n> + <option>--no-estimate-size</option>.\n> </para>\n> \n> I think you should just remove that whole paragraph. The details are\n> now listed under the disable parameter.\n\nFixed.\n\n> + This option prevents the server from estimating the total\n> + amount of backup data that will be streamed. In other words,\n> + <literal>backup_total</literal> column in the\n> + <structname>pg_stat_progress_basebackup</structname>\n> + view indicates <literal>0</literal> if this option is enabled.\n> \n> I suggest just \"This option prevents the server from estimating the\n> total amount of backup data that will be streamed, resulting in the\n> ackup_total column in pg_stat_progress_basebackup to be (zero or NULL\n> depending on above)\".\n> \n> (Markup needed on that of course ,but you get the idea)\n\nYes, fixed.\n\n> + When this is disabled, the backup will start by enumerating\n> \n> I'd try to avoid the double negation, with something \"without this\n> parameter, the backup will start...\"\n\nFixed. I used \"Without using this option ...\".\n \n> + <para>\n> + <application>pg_basebackup</application> asks the server to estimate\n> + the total amount of data that will be streamed by default (unless\n> + <option>--no-estimate-size</option> is specified) in version 13 or later,\n> + and does that only when <option>--progress</option> is specified in\n> + the older versions.\n> + </para>\n> \n> That's an item for the release notes, not for the reference page, I\n> think. It's already explained under the --disable parameter, so I\n> suggest removing this paragraph as well.\n\nFixed.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 11 Mar 2020 13:53:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Mar 11, 2020 at 5:53 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/11 3:39, Magnus Hagander wrote:\n> > On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/10 22:43, Amit Langote wrote:\n> >>> On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>> So, I will make the patch adding support for --no-estimate-size option\n> >>>>> in pg_basebackup.\n> >>>>\n> >>>> Patch attached.\n> >>>\n> >>> Like the idea and the patch looks mostly good.\n> >>\n> >> Thanks for reviewing the patch!\n> >>\n> >>> + total size. If the estimation is disabled in\n> >>> + <application>pg_basebackup</application>\n> >>> + (i.e., <literal>--no-estimate-size</literal> option is specified),\n> >>> + this is always <literal>0</literal>.\n> >>>\n> >>> \"always\" seems unnecessary.\n> >>\n> >> Fixed.\n> >>\n> >>> + This option prevents the server from estimating the total\n> >>> + amount of backup data that will be streamed. In other words,\n> >>> + <literal>backup_total</literal> column in the\n> >>> + <structname>pg_stat_progress_basebackup</structname>\n> >>> + view always indicates <literal>0</literal> if this option is enabled.\n> >>>\n> >>> Here too.\n> >>\n> >> Fixed.\n> >>\n> >> Attached is the updated version of the patch.\n> >\n> > Would it perhaps be better to return NULL instead of 0 in the\n> > statistics view if there is no data?\n\nDid you miss this comment, or not agree? :)\n\n\n> > Also, should it really be the server version that decides how this\n> > feature behaves, and not the pg_basebackup version? Given that the\n> > implementation is entirely in the client, it seems that's more\n> > logical?\n>\n> Yeah, you're right. I changed the patch that way.\n> Attached is the updated version of the patch.\n\nThe other changes in it look good!\n\n//Magnus\n\n\n",
"msg_date": "Wed, 18 Mar 2020 16:37:21 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020/03/19 0:37, Magnus Hagander wrote:\n> On Wed, Mar 11, 2020 at 5:53 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/11 3:39, Magnus Hagander wrote:\n>>> On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/10 22:43, Amit Langote wrote:\n>>>>> On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>> So, I will make the patch adding support for --no-estimate-size option\n>>>>>>> in pg_basebackup.\n>>>>>>\n>>>>>> Patch attached.\n>>>>>\n>>>>> Like the idea and the patch looks mostly good.\n>>>>\n>>>> Thanks for reviewing the patch!\n>>>>\n>>>>> + total size. If the estimation is disabled in\n>>>>> + <application>pg_basebackup</application>\n>>>>> + (i.e., <literal>--no-estimate-size</literal> option is specified),\n>>>>> + this is always <literal>0</literal>.\n>>>>>\n>>>>> \"always\" seems unnecessary.\n>>>>\n>>>> Fixed.\n>>>>\n>>>>> + This option prevents the server from estimating the total\n>>>>> + amount of backup data that will be streamed. In other words,\n>>>>> + <literal>backup_total</literal> column in the\n>>>>> + <structname>pg_stat_progress_basebackup</structname>\n>>>>> + view always indicates <literal>0</literal> if this option is enabled.\n>>>>>\n>>>>> Here too.\n>>>>\n>>>> Fixed.\n>>>>\n>>>> Attached is the updated version of the patch.\n>>>\n>>> Would it perhaps be better to return NULL instead of 0 in the\n>>> statistics view if there is no data?\n> \n> Did you miss this comment, or not agree? :)\n\nOh, I forgot to attached the patch... Patch attached.\nThis patch needs to be applied after applying\nadd_no_estimate_size_v3.patch.\n\n>>> Also, should it really be the server version that decides how this\n>>> feature behaves, and not the pg_basebackup version? Given that the\n>>> implementation is entirely in the client, it seems that's more\n>>> logical?\n>>\n>> Yeah, you're right. I changed the patch that way.\n>> Attached is the updated version of the patch.\n> \n> The other changes in it look good!\n\nThanks for the review!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Thu, 19 Mar 2020 01:13:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Wed, Mar 18, 2020 at 5:14 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/19 0:37, Magnus Hagander wrote:\n> > On Wed, Mar 11, 2020 at 5:53 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/11 3:39, Magnus Hagander wrote:\n> >>> On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/03/10 22:43, Amit Langote wrote:\n> >>>>> On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>> So, I will make the patch adding support for --no-estimate-size option\n> >>>>>>> in pg_basebackup.\n> >>>>>>\n> >>>>>> Patch attached.\n> >>>>>\n> >>>>> Like the idea and the patch looks mostly good.\n> >>>>\n> >>>> Thanks for reviewing the patch!\n> >>>>\n> >>>>> + total size. If the estimation is disabled in\n> >>>>> + <application>pg_basebackup</application>\n> >>>>> + (i.e., <literal>--no-estimate-size</literal> option is specified),\n> >>>>> + this is always <literal>0</literal>.\n> >>>>>\n> >>>>> \"always\" seems unnecessary.\n> >>>>\n> >>>> Fixed.\n> >>>>\n> >>>>> + This option prevents the server from estimating the total\n> >>>>> + amount of backup data that will be streamed. In other words,\n> >>>>> + <literal>backup_total</literal> column in the\n> >>>>> + <structname>pg_stat_progress_basebackup</structname>\n> >>>>> + view always indicates <literal>0</literal> if this option is enabled.\n> >>>>>\n> >>>>> Here too.\n> >>>>\n> >>>> Fixed.\n> >>>>\n> >>>> Attached is the updated version of the patch.\n> >>>\n> >>> Would it perhaps be better to return NULL instead of 0 in the\n> >>> statistics view if there is no data?\n> >\n> > Did you miss this comment, or not agree? :)\n>\n> Oh, I forgot to attached the patch... Patch attached.\n> This patch needs to be applied after applying\n> add_no_estimate_size_v3.patch.\n\n:)\n\nHmm. I'm slightly irked by doing the -1 -> NULL conversion in the SQL\nview. I wonder if it might be worth teaching\npg_stat_get_progress_info() about returning NULL?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 18 Mar 2020 17:22:00 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/19 1:22, Magnus Hagander wrote:\n> On Wed, Mar 18, 2020 at 5:14 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/19 0:37, Magnus Hagander wrote:\n>>> On Wed, Mar 11, 2020 at 5:53 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/11 3:39, Magnus Hagander wrote:\n>>>>> On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/03/10 22:43, Amit Langote wrote:\n>>>>>>> On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>> So, I will make the patch adding support for --no-estimate-size option\n>>>>>>>>> in pg_basebackup.\n>>>>>>>>\n>>>>>>>> Patch attached.\n>>>>>>>\n>>>>>>> Like the idea and the patch looks mostly good.\n>>>>>>\n>>>>>> Thanks for reviewing the patch!\n>>>>>>\n>>>>>>> + total size. If the estimation is disabled in\n>>>>>>> + <application>pg_basebackup</application>\n>>>>>>> + (i.e., <literal>--no-estimate-size</literal> option is specified),\n>>>>>>> + this is always <literal>0</literal>.\n>>>>>>>\n>>>>>>> \"always\" seems unnecessary.\n>>>>>>\n>>>>>> Fixed.\n>>>>>>\n>>>>>>> + This option prevents the server from estimating the total\n>>>>>>> + amount of backup data that will be streamed. In other words,\n>>>>>>> + <literal>backup_total</literal> column in the\n>>>>>>> + <structname>pg_stat_progress_basebackup</structname>\n>>>>>>> + view always indicates <literal>0</literal> if this option is enabled.\n>>>>>>>\n>>>>>>> Here too.\n>>>>>>\n>>>>>> Fixed.\n>>>>>>\n>>>>>> Attached is the updated version of the patch.\n>>>>>\n>>>>> Would it perhaps be better to return NULL instead of 0 in the\n>>>>> statistics view if there is no data?\n>>>\n>>> Did you miss this comment, or not agree? :)\n>>\n>> Oh, I forgot to attached the patch... Patch attached.\n>> This patch needs to be applied after applying\n>> add_no_estimate_size_v3.patch.\n> \n> :)\n> \n> Hmm. I'm slightly irked by doing the -1 -> NULL conversion in the SQL\n> view. I wonder if it might be worth teaching\n> pg_stat_get_progress_info() about returning NULL?\n\nThat's possible by\n- adding the boolean array like st_progress_null[PGSTAT_NUM_PROGRESS_PARAM]\n that indicates whether each column is NULL or not, into PgBackendStatus\n- extending pgstat_progress_update_param() and pgstat_progress_update_multi_param()\n so that they can update the boolean array for NULL\n- updating the progress reporting code so that the extended versions of\n function are used\n\nI didn't adopt this idea because it looks a bit overkill for the purpose.\nOTOH, this would be good improvement for the progress reporting\ninfrastructure and I'm fine to implement it.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 19 Mar 2020 01:47:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "Hello,\n\nOn Thu, Mar 19, 2020 at 1:47 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/03/19 1:22, Magnus Hagander wrote:\n> >>>>> Would it perhaps be better to return NULL instead of 0 in the\n> >>>>> statistics view if there is no data?\n> >>>\n> >>> Did you miss this comment, or not agree? :)\n> >>\n> >> Oh, I forgot to attached the patch... Patch attached.\n> >> This patch needs to be applied after applying\n> >> add_no_estimate_size_v3.patch.\n> >\n> > :)\n> >\n> > Hmm. I'm slightly irked by doing the -1 -> NULL conversion in the SQL\n> > view. I wonder if it might be worth teaching\n> > pg_stat_get_progress_info() about returning NULL?\n>\n> That's possible by\n> - adding the boolean array like st_progress_null[PGSTAT_NUM_PROGRESS_PARAM]\n> that indicates whether each column is NULL or not, into PgBackendStatus\n> - extending pgstat_progress_update_param() and pgstat_progress_update_multi_param()\n> so that they can update the boolean array for NULL\n> - updating the progress reporting code so that the extended versions of\n> function are used\n>\n> I didn't adopt this idea because it looks a bit overkill for the purpose.\n\nI tend to agree that this would be too many changes for something only\none place currently needs to use.\n\n> OTOH, this would be good improvement for the progress reporting\n> infrastructure and I'm fine to implement it.\n\nMagnus' idea of checking the values in pg_stat_get_progress_info() to\ndetermine whether to return NULL seems fine to me. We will need to\nupdate the documentation of st_progress_param, because it currently\nsays:\n\n * ...but the meaning of each element in the\n * st_progress_param array is command-specific.\n */\n ProgressCommandType st_progress_command;\n Oid st_progress_command_target;\n int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n} PgBackendStatus;\n\nIf we are to define -1 in st_progress_param[] as NULL to the users,\nthat must be mentioned here.\n\n-- \nThank you,\nAmit\n\n\n",
"msg_date": "Thu, 19 Mar 2020 10:02:14 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020-Mar-19, Amit Langote wrote:\n\n> Magnus' idea of checking the values in pg_stat_get_progress_info() to\n> determine whether to return NULL seems fine to me. We will need to\n> update the documentation of st_progress_param, because it currently\n> says:\n> \n> * ...but the meaning of each element in the\n> * st_progress_param array is command-specific.\n> */\n> ProgressCommandType st_progress_command;\n> Oid st_progress_command_target;\n> int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n> } PgBackendStatus;\n> \n> If we are to define -1 in st_progress_param[] as NULL to the users,\n> that must be mentioned here.\n\nHmm, why -1? It seems like a value that we might want to use for other\npurposes in other params. Maybe INT64_MIN is a better choice?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Mar 2020 23:24:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 11:24 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2020-Mar-19, Amit Langote wrote:\n>\n> > Magnus' idea of checking the values in pg_stat_get_progress_info() to\n> > determine whether to return NULL seems fine to me. We will need to\n> > update the documentation of st_progress_param, because it currently\n> > says:\n> >\n> > * ...but the meaning of each element in the\n> > * st_progress_param array is command-specific.\n> > */\n> > ProgressCommandType st_progress_command;\n> > Oid st_progress_command_target;\n> > int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n> > } PgBackendStatus;\n> >\n> > If we are to define -1 in st_progress_param[] as NULL to the users,\n> > that must be mentioned here.\n>\n> Hmm, why -1? It seems like a value that we might want to use for other\n> purposes in other params. Maybe INT64_MIN is a better choice?\n\nYes, maybe.\n\n-- \nThank you,\nAmit\n\n\n",
"msg_date": "Thu, 19 Mar 2020 11:32:37 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On 2020-Mar-19, Amit Langote wrote:\n\n> On Thu, Mar 19, 2020 at 11:24 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > On 2020-Mar-19, Amit Langote wrote:\n> >\n> > > Magnus' idea of checking the values in pg_stat_get_progress_info() to\n> > > determine whether to return NULL seems fine to me. We will need to\n> > > update the documentation of st_progress_param, because it currently\n> > > says:\n> > >\n> > > * ...but the meaning of each element in the\n> > > * st_progress_param array is command-specific.\n> > > */\n> > > ProgressCommandType st_progress_command;\n> > > Oid st_progress_command_target;\n> > > int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n> > > } PgBackendStatus;\n> > >\n> > > If we are to define -1 in st_progress_param[] as NULL to the users,\n> > > that must be mentioned here.\n> >\n> > Hmm, why -1? It seems like a value that we might want to use for other\n> > purposes in other params. Maybe INT64_MIN is a better choice?\n> \n> Yes, maybe.\n\nLooking at the code involved, I think it's okay to use -1 in that\nspecific param and teach the SQL query to display null when it finds\nthat value. We have plenty of magic numbers in the progress params, and\nit's always the definition of the view that interprets them.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Mar 2020 23:38:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/19 11:32, Amit Langote wrote:\n> On Thu, Mar 19, 2020 at 11:24 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n>> On 2020-Mar-19, Amit Langote wrote:\n>>\n>>> Magnus' idea of checking the values in pg_stat_get_progress_info() to\n>>> determine whether to return NULL seems fine to me.\n\nSo you think that the latest patch is good enough?\n\n>>> We will need to\n>>> update the documentation of st_progress_param, because it currently\n>>> says:\n>>>\n>>> * ...but the meaning of each element in the\n>>> * st_progress_param array is command-specific.\n>>> */\n>>> ProgressCommandType st_progress_command;\n>>> Oid st_progress_command_target;\n>>> int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n>>> } PgBackendStatus;\n>>>\n>>> If we are to define -1 in st_progress_param[] as NULL to the users,\n>>> that must be mentioned here.\n>>\n>> Hmm, why -1? It seems like a value that we might want to use for other\n>> purposes in other params. Maybe INT64_MIN is a better choice?\n> \n> Yes, maybe.\n\nI don't think that we need to define the specific value like -1 as NULL globally.\nWhich value should be used for that purpose may vary by each command. Only for\npg_stat_progress_basebackup.backup_total, IMO using -1 as special value for\nNULL is not so bad idea.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 19 Mar 2020 11:45:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 11:45 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2020/03/19 11:32, Amit Langote wrote:\n> > On Thu, Mar 19, 2020 at 11:24 AM Alvaro Herrera\n> > <alvherre@2ndquadrant.com> wrote:\n> >> On 2020-Mar-19, Amit Langote wrote:\n> >>\n> >>> Magnus' idea of checking the values in pg_stat_get_progress_info() to\n> >>> determine whether to return NULL seems fine to me.\n>\n> So you think that the latest patch is good enough?\n\nI see that the latest patch modifies pg_stat_progress_basebackup view\nto return NULL, so not exactly. IIUC, Magnus seems to be advocating\nto *centralize* this in pg_stat_get_progress_info(), which all views\nare based on, which means we need to globally define a NULL param\nvalue, as Alvaro also pointed out.\n\nBut...\n\n> >>> We will need to\n> >>> update the documentation of st_progress_param, because it currently\n> >>> says:\n> >>>\n> >>> * ...but the meaning of each element in the\n> >>> * st_progress_param array is command-specific.\n> >>> */\n> >>> ProgressCommandType st_progress_command;\n> >>> Oid st_progress_command_target;\n> >>> int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n> >>> } PgBackendStatus;\n> >>>\n> >>> If we are to define -1 in st_progress_param[] as NULL to the users,\n> >>> that must be mentioned here.\n> >>\n> >> Hmm, why -1? It seems like a value that we might want to use for other\n> >> purposes in other params. Maybe INT64_MIN is a better choice?\n> >\n> > Yes, maybe.\n>\n> I don't think that we need to define the specific value like -1 as NULL globally.\n> Which value should be used for that purpose may vary by each command. Only for\n> pg_stat_progress_basebackup.backup_total, IMO using -1 as special value for\n> NULL is not so bad idea.\n\nThis is the first instance of needing to display NULL in a progress\nview, so a non-general solution may be enough for now. IOW, your\nlatest patch is good enough for that. :)\n\n--\nThank you,\nAmit\n\n\n",
"msg_date": "Thu, 19 Mar 2020 12:02:57 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/19 1:13, Fujii Masao wrote:\n> \n> \n> On 2020/03/19 0:37, Magnus Hagander wrote:\n>> On Wed, Mar 11, 2020 at 5:53 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/03/11 3:39, Magnus Hagander wrote:\n>>>> On Tue, Mar 10, 2020 at 6:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/03/10 22:43, Amit Langote wrote:\n>>>>>> On Tue, Mar 10, 2020 at 6:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>> So, I will make the patch adding support for --no-estimate-size option\n>>>>>>>> in pg_basebackup.\n>>>>>>>\n>>>>>>> Patch attached.\n>>>>>>\n>>>>>> Like the idea and the patch looks mostly good.\n>>>>>\n>>>>> Thanks for reviewing the patch!\n>>>>>\n>>>>>> + total size. If the estimation is disabled in\n>>>>>> + <application>pg_basebackup</application>\n>>>>>> + (i.e., <literal>--no-estimate-size</literal> option is specified),\n>>>>>> + this is always <literal>0</literal>.\n>>>>>>\n>>>>>> \"always\" seems unnecessary.\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>>> + This option prevents the server from estimating the total\n>>>>>> + amount of backup data that will be streamed. In other words,\n>>>>>> + <literal>backup_total</literal> column in the\n>>>>>> + <structname>pg_stat_progress_basebackup</structname>\n>>>>>> + view always indicates <literal>0</literal> if this option is enabled.\n>>>>>>\n>>>>>> Here too.\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>> Attached is the updated version of the patch.\n>>>>\n>>>> Would it perhaps be better to return NULL instead of 0 in the\n>>>> statistics view if there is no data?\n>>\n>> Did you miss this comment, or not agree? :)\n> \n> Oh, I forgot to attached the patch... Patch attached.\n> This patch needs to be applied after applying\n> add_no_estimate_size_v3.patch.\n> \n>>>> Also, should it really be the server version that decides how this\n>>>> feature behaves, and not the pg_basebackup version? Given that the\n>>>> implementation is entirely in the client, it seems that's more\n>>>> logical?\n>>>\n>>> Yeah, you're right. I changed the patch that way.\n>>> Attached is the updated version of the patch.\n>>\n>> The other changes in it look good!\n> \n> Thanks for the review!\n\nPushed! Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 19 Mar 2020 17:21:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/19 12:02, Amit Langote wrote:\n> On Thu, Mar 19, 2020 at 11:45 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/03/19 11:32, Amit Langote wrote:\n>>> On Thu, Mar 19, 2020 at 11:24 AM Alvaro Herrera\n>>> <alvherre@2ndquadrant.com> wrote:\n>>>> On 2020-Mar-19, Amit Langote wrote:\n>>>>\n>>>>> Magnus' idea of checking the values in pg_stat_get_progress_info() to\n>>>>> determine whether to return NULL seems fine to me.\n>>\n>> So you think that the latest patch is good enough?\n> \n> I see that the latest patch modifies pg_stat_progress_basebackup view\n> to return NULL, so not exactly. IIUC, Magnus seems to be advocating\n> to *centralize* this in pg_stat_get_progress_info(), which all views\n> are based on, which means we need to globally define a NULL param\n> value, as Alvaro also pointed out.\n> \n> But...\n> \n>>>>> We will need to\n>>>>> update the documentation of st_progress_param, because it currently\n>>>>> says:\n>>>>>\n>>>>> * ...but the meaning of each element in the\n>>>>> * st_progress_param array is command-specific.\n>>>>> */\n>>>>> ProgressCommandType st_progress_command;\n>>>>> Oid st_progress_command_target;\n>>>>> int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n>>>>> } PgBackendStatus;\n>>>>>\n>>>>> If we are to define -1 in st_progress_param[] as NULL to the users,\n>>>>> that must be mentioned here.\n>>>>\n>>>> Hmm, why -1? It seems like a value that we might want to use for other\n>>>> purposes in other params. Maybe INT64_MIN is a better choice?\n>>>\n>>> Yes, maybe.\n>>\n>> I don't think that we need to define the specific value like -1 as NULL globally.\n>> Which value should be used for that purpose may vary by each command. Only for\n>> pg_stat_progress_basebackup.backup_total, IMO using -1 as special value for\n>> NULL is not so bad idea.\n> \n> This is the first instance of needing to display NULL in a progress\n> view, so a non-general solution may be enough for now. 
IOW, your\n> latest patch is good enough for that. :)\n\nOk, so barring any objection, I will commit the latest patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 19 Mar 2020 17:22:42 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-19 17:21:38 +0900, Fujii Masao wrote:\n> Pushed! Thanks!\n\nFWIW, I'm a bit doubtful that incurring the overhead of this by default\non everybody is a nice thing. On filesystems with high latency and with\na lot of small relations the overhead of stat()ing a lot of files can be\nalmost as high as the actual base backup.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Mar 2020 11:39:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/20 3:39, Andres Freund wrote:\n> Hi,\n> \n> On 2020-03-19 17:21:38 +0900, Fujii Masao wrote:\n>> Pushed! Thanks!\n> \n> FWIW, I'm a bit doubtful that incurring the overhead of this by default\n> on everybody is a nice thing. On filesystems with high latency and with\n> a lot of small relations the overhead of stat()ing a lot of files can be\n> almost as high as the actual base backup.\n\nYeah, so if we receive lots of complaints like that during the beta and\nRC phases, we should consider changing the default behavior.\n\nAlso, maybe I should measure how long the estimation takes on an env\nwhere, for example, ten thousand tables (i.e., files) exist, in order to\nsee whether the default behavior is really time-consuming or not?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 23 Mar 2020 16:06:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
},
{
"msg_contents": "\n\nOn 2020/03/19 17:22, Fujii Masao wrote:\n> \n> \n> On 2020/03/19 12:02, Amit Langote wrote:\n>> On Thu, Mar 19, 2020 at 11:45 AM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> On 2020/03/19 11:32, Amit Langote wrote:\n>>>> On Thu, Mar 19, 2020 at 11:24 AM Alvaro Herrera\n>>>> <alvherre@2ndquadrant.com> wrote:\n>>>>> On 2020-Mar-19, Amit Langote wrote:\n>>>>>\n>>>>>> Magnus' idea of checking the values in pg_stat_get_progress_info() to\n>>>>>> determine whether to return NULL seems fine to me.\n>>>\n>>> So you think that the latest patch is good enough?\n>>\n>> I see that the latest patch modifies pg_stat_progress_basebackup view\n>> to return NULL, so not exactly. IIUC, Magnus seems to be advocating\n>> to *centralize* this in pg_stat_get_progress_info(), which all views\n>> are based on, which means we need to globally define a NULL param\n>> value, as Alvaro also pointed out.\n>>\n>> But...\n>>\n>>>>>> We will need to\n>>>>>> update the documentation of st_progress_param, because it currently\n>>>>>> says:\n>>>>>>\n>>>>>> * ...but the meaning of each element in the\n>>>>>> * st_progress_param array is command-specific.\n>>>>>> */\n>>>>>> ProgressCommandType st_progress_command;\n>>>>>> Oid st_progress_command_target;\n>>>>>> int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n>>>>>> } PgBackendStatus;\n>>>>>>\n>>>>>> If we are to define -1 in st_progress_param[] as NULL to the users,\n>>>>>> that must be mentioned here.\n>>>>>\n>>>>> Hmm, why -1? It seems like a value that we might want to use for other\n>>>>> purposes in other params. Maybe INT64_MIN is a better choice?\n>>>>\n>>>> Yes, maybe.\n>>>\n>>> I don't think that we need to define the specific value like -1 as NULL globally.\n>>> Which value should be used for that purpose may vary by each command. 
Only for\n>>> pg_stat_progress_basebackup.backup_total, IMO using -1 as special value for\n>>> NULL is not so bad idea.\n>>\n>> This is the first instance of needing to display NULL in a progress\n>> view, so a non-general solution may be enough for now. IOW, your\n>> latest patch is good enough for that. :)\n> \n> Ok, so barring any objection, I will commit the latest patch.\n\nPushed! Thanks all!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 24 Mar 2020 10:46:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_basebackup - progress reporting for\n pg_basebackup, in the server side"
}
] |
[
{
"msg_contents": "Now that we're just about there on the patch to invent trusted\nextensions [1], I'd like to start a discussion about whether to mark\nanything besides the trusted PLs as trusted. I think generally\nwe ought to mark contrib modules as trusted if it's sane to do so;\nthere's not much point in handing people plperl (even sandboxed)\nbut not, say, hstore. I trawled through what's in contrib today\nand broke things down like this:\n\nCertainly NO, as these allow external or low-level access:\n\nadminpack\ndblink\nfile_fdw\npostgres_fdw\npageinspect\npg_buffercache\npg_freespacemap\npg_visibility\npgstattuple\n\nProbably NO, if only because you'd need additional privileges\nto use these anyway:\n\namcheck\ndict_xsyn\nhstore_plperlu\nhstore_plpython2u\nhstore_plpython3u\nhstore_plpythonu\njsonb_plperlu\njsonb_plpython2u\njsonb_plpython3u\njsonb_plpythonu\nltree_plpython2u\nltree_plpython3u\nltree_plpythonu\npg_prewarm\npg_stat_statements\n\nDefinitely candidates to mark trusted:\n\ncitext\ncube\ndict_int\t\t(unlike dict_xsyn, this needs no external file)\nearthdistance\t\t(marginal usefulness though)\nfuzzystrmatch\nhstore\nhstore_plperl\nintagg\t\t\t(marginal usefulness though)\nintarray\nisn\njsonb_plperl\nlo\nltree\npg_trgm\npgcrypto\nseg\ntablefunc\ntcn\ntsm_system_rows\ntsm_system_time\nunaccent\t\t(needs external file, but the default one is useful)\nuuid-ossp\n\nNot sure what I think about these:\n\nbloom\t\t\t(are these useful in production?)\nbtree_gin\nbtree_gist\npgrowlocks\t\t(seems safe, but are there security issues?)\nspi/autoinc\t\t(I doubt that these four are production grade)\nspi/insert_username\nspi/moddatetime\nspi/refint\nsslinfo\t\t\t(seems safe, but are there security issues?)\nxml2\t\t\t(nominally safe, but deprecated, and libxml2\n\t\t\t has been a fertile source of security issues)\n\nAny opinions about these, particularly the on-the-edge cases?\n\nAlso, how should we document this, if we do it? 
Add a boilerplate\nsentence to each module's description about whether it is trusted\nor not? Put a table up at the front of Appendix F? Both?\n\nI'm happy to go make this happen, once we have consensus on what\nshould happen.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/5889.1566415762%40sss.pgh.pa.us\n\n\n",
"msg_date": "Wed, 29 Jan 2020 14:41:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "On 2020-Jan-29, Tom Lane wrote:\n\n> Not sure what I think about these:\n> \n> bloom\t\t\t(are these useful in production?)\n> btree_gin\n> btree_gist\n> pgrowlocks\t\t(seems safe, but are there security issues?)\n> spi/autoinc\t\t(I doubt that these four are production grade)\n> spi/insert_username\n> spi/moddatetime\n> spi/refint\n> sslinfo\t\t\t(seems safe, but are there security issues?)\n> xml2\t\t\t(nominally safe, but deprecated, and libxml2\n> \t\t\t has been a fertile source of security issues)\n\nOf these, btree_gist is definitely useful from a user perspective,\nbecause it enables creation of certain exclusion constraints.\n\nI've never heard of anyone using bloom indexes in production. I'd\nargue that if the feature is useful, then we should turn it into a\ncore-included index AM with regular WAL logging for improved\nperformance, and add a stripped-down version to src/test/modules to\ncover the WAL-log testing needs. Maybe exposing it more, as promoting\nit as a trusted extension would do, would help find more use cases for\nit.\n\n> Also, how should we document this, if we do it? Add a boilerplate\n> sentence to each module's description about whether it is trusted\n> or not? Put a table up at the front of Appendix F? Both?\n\nIf it were possible to do both from a single source of truth, that would\nbe great. Failing that, I'd just list it in each module's section.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 29 Jan 2020 17:29:19 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Hello,\n\n\n> btree_gin\n> btree_gist\n\n\nI would even ask for btree_gin and btree_gist to be moved to core.\n\nbtree_gist ships opclasses for built-in types to be used in gist\nindexes. The btree_* part of the name is confusing, suggesting there's some\nmagic happening linking btree and gist.\n\ngist is the most popular way to get geometric indexes, and these often need\nto be combined with some class identifier that's used in lookups together.\nCREATE INDEX on geom_table using gist (zoom_level, geom); fails for no\nreason without btree_gist - the types are shipped in core,\ngist itself is not an extension, but letting one core mechanism be used with\nanother in an obvious way is for some reason split out.\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Wed, 29 Jan 2020 23:45:37 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net> writes:\n>> btree_gin\n>> btree_gist\n\n> I would even ask btree_gin and btree_gist to be moved to core.\n\nThat's not in scope here. Our past experience with trying to move\nextensions into core is that it creates a pretty painful upgrade\nexperience for users, so that's not something I'm interested in doing\n... especially for relatively marginal cases like these.\n\nThere's also a more generic question of why we should want to move\nanything to core anymore. The trusted-extension mechanism removes\none of the biggest remaining gripes about extensions, namely the\npain level for installing them. (But please, let's not have that\ndebate on this thread.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Jan 2020 16:27:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 9:46 PM Darafei \"Komяpa\" Praliaskouski\n<me@komzpa.net> wrote:\n>\n> Hello,\n>\n>>\n>> btree_gin\n>> btree_gist\n>\n>\n> I would even ask btree_gin and btree_gist to be moved to core.\n\nWithout going that far, I also agree that I have relied on those extensions\nquite often, so +1 for marking them as trusted.\n\n>> Probably NO, if only because you'd need additional privileges\n>> to use these anyway:\n>> pg_stat_statements\n\nBut the additional privileges are global, so assuming the extension\nhas been properly set up, wouldn't it be sensible to ease the\nper-database installation? If not properly set up, there's no harm in\ncreating the extension anyway.\n\n\n",
"msg_date": "Wed, 29 Jan 2020 22:28:22 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n>>> Probably NO, if only because you'd need additional privileges\n>>> to use these anyway:\n>>> pg_stat_statements\n\n> But the additional privileges are global, so assuming the extension\n> has been properly setup, wouldn't it be sensible to ease the\n> per-database installation? If not properly setup, there's no harm in\n> creating the extension anyway.\n\nMmm, I'm not convinced --- the ability to see what statements are being\nexecuted in other sessions (even other databases) is something that\nparanoid installations might not be so happy about. Our previous\ndiscussions about what privilege level is needed to look at\npg_stat_statements info were all made against a background assumption\nthat you needed some extra privilege to set up the view in the first\nplace. I think that would need another look or two before being\ncomfortable that we're not shifting the goal posts too far.\n\nThe bigger picture here is that I don't want to get push-back that\nwe've broken somebody's security posture by marking too many extensions\ntrusted. So for anything where there's any question about security\nimplications, we should err in the conservative direction of leaving\nit untrusted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Jan 2020 16:38:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "On Wed, 29 Jan 2020 at 21:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> >>> pg_stat_statements\n>\n> Mmm, I'm not convinced --- the ability to see what statements are being\n> executed in other sessions (even other databases) is something that\n> paranoid installations might not be so happy about. Our previous\n> discussions about what privilege level is needed to look at\n> pg_stat_statements info were all made against a background assumption\n> that you needed some extra privilege to set up the view in the first\n> place. I think that would need another look or two before being\n> comfortable that we're not shifting the goal posts too far.\n>\n> The bigger picture here is that I don't want to get push-back that\n> we've broken somebody's security posture by marking too many extensions\n> trusted. So for anything where there's any question about security\n> implications, we should err in the conservative direction of leaving\n> it untrusted.\n>\n\n+1\n\nI wonder if the same could be said about pgrowlocks.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 31 Jan 2020 09:40:32 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Wed, 29 Jan 2020 at 21:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The bigger picture here is that I don't want to get push-back that\n>> we've broken somebody's security posture by marking too many extensions\n>> trusted. So for anything where there's any question about security\n>> implications, we should err in the conservative direction of leaving\n>> it untrusted.\n\n> I wonder if the same could be said about pgrowlocks.\n\nGood point. I had figured it was probably OK given that it's\nanalogous to the pg_locks view (which is unrestricted AFAIR),\nand that it already has some restrictions on what you can see.\nI'd have no hesitation about dropping it off this list though,\nsince it's probably not used that much and it could also be seen\nas exposing internals.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 Jan 2020 10:13:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "After looking more closely at these modules, I'm kind of inclined\n*not* to put the trusted marker on intagg. That module is just\na backwards-compatibility wrapper around functionality that\nexists in the core code nowadays. So I think what we ought to be\ndoing with it is deprecating and eventually removing it, not\nencouraging people to keep using it.\n\nGiven that and the other discussion in this thread, I think the\ninitial list of modules to trust is:\n\nbtree_gin\nbtree_gist\ncitext\ncube\ndict_int\nearthdistance\nfuzzystrmatch\nhstore\nhstore_plperl\nintarray\nisn\njsonb_plperl\nlo\nltree\npg_trgm\npgcrypto\nseg\ntablefunc\ntcn\ntsm_system_rows\ntsm_system_time\nunaccent\nuuid-ossp\n\nSo attached is a patch to do that. The code changes are trivial; just\nadd \"trusted = true\" to each control file. We don't need to bump the\nmodule version numbers, since this doesn't change the contents of any\nextension, just who can install it. I do not think any regression\ntest changes are needed either. (Note that commit 50fc694e4 already\nadded a test that trusted extensions behave as expected, see\nsrc/pl/plperl/sql/plperl_setup.sql.) So it seems like the only thing\nthat needs much discussion is the documentation changes. I adjusted\ncontrib.sgml's discussion of how to install these modules in general,\nand then labeled the individual modules if they are trusted.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 07 Feb 2020 20:40:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> >>> Probably NO, if only because you'd need additional privileges\n> >>> to use these anyway:\n> >>> pg_stat_statements\n> \n> > But the additional privileges are global, so assuming the extension\n> > has been properly setup, wouldn't it be sensible to ease the\n> > per-database installation? If not properly setup, there's no harm in\n> > creating the extension anyway.\n> \n> Mmm, I'm not convinced --- the ability to see what statements are being\n> executed in other sessions (even other databases) is something that\n> paranoid installations might not be so happy about.\n\nOf course, but that's why we have a default role which allows\ninstallations to control access to that kind of information- and it's\nalready being checked in the pg_stat_statements case and in the\npg_stat_activity case:\n\n/* Superusers or members of pg_read_all_stats members are allowed */\nis_allowed_role = is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS);\n\n> Our previous\n> discussions about what privilege level is needed to look at\n> pg_stat_statements info were all made against a background assumption\n> that you needed some extra privilege to set up the view in the first\n> place. I think that would need another look or two before being\n> comfortable that we're not shifting the goal posts too far.\n\nWhile you could maybe argue that's true for pg_stat_statements, it's\ncertainly not true for pg_stat_activity, so I don't buy it for either.\nThis looks like revisionist history to justify paranoia. 
I understand\nthe general concern, but if we were really depending on the mere\ninstallation of the extension to provide security then we wouldn't have\nbothered putting in checks like the one above, and, worse, I think our\nusers would be routinely complaining that our extensions don't follow\nour security model and how they can't install them.\n\nLots of people want to use pg_stat_statements, even in environments\nwhere not everyone on the database server, or even in the database you\nwant pg_stat_statements in, is trusted, and therefore we have to have\nthese additional checks inside the extension itself.\n\nThe same goes for just about everything else (I sure hope, at least) in\nour extensions set- none of the core extensions should be allowing\naccess to things which break our security model, even if they're\ninstalled, unless some additional privileges are granted out. The act\nof installing a core extension should not create a security risk for our\nusers- if it did, it'd be a security issue and CVE-worthy.\n\nAs such, I really don't agree with this entire line of thinking when it\ncomes to our core extensions. I view the 'trusted extension' model as\nreally for things where the extension author doesn't care about, and\ndoesn't wish to care about, dealing with our security model and making\nsure that it's followed. We do care, and we do maintain, the security\nmodel that we have throughout the core extensions. \n\nWhat I expect and hope will happen is that people will realize that, now\nthat they can have non-superusers installing these extensions and\ntherefore they don't have to give out superuser-level rights as much,\nthere will be asks for more default roles to allow granting out of\naccess to formerly superuser-only capabilities. 
There's a bit of a\ncomplication there since there might be privileges that only make sense\nfor a specific extension, but an extension can't really install a new\ndefault role (and, even if it did, the role would have to be only\navailable to the superuser initially anyway), so we might have to try\nand come up with some more generic and reusable default role for that\ncase. Still, we can try to deal with that when it happens.\n\nConsider that you may wish to have a system that, once installed, a\nsuperuser will virtually never access again, but one where you want\nusers to be able to install and use extensions like postgis and\npg_stat_statements. That can be done with these changes, and that's\nfantastic progress- you just install PG, create a non-superuser role,\nmake them the DB owner, and then GRANT things like pg_read_all_stats to\ntheir role with admin rights, and boom, they're good to go and you\ndidn't have to hack up the PG source code at all.\n\n> The bigger picture here is that I don't want to get push-back that\n> we've broken somebody's security posture by marking too many extensions\n> trusted. 
So for anything where there's any question about security\n> implications, we should err in the conservative direction of leaving\n> it untrusted.\n\nThis is just going to a) cause our users to complain about not being\nable to install extensions that they've routinely installed in the past,\nand b) make our users wonder what it is about these extensions that\nwe've decided can't be trusted to even just be installed and if they're\nat risk today because they've installed them.\n\nWhile it might not seem obvious, the discussion over on the thread about\nDEFAULT PRIVILEGES and pg_init_privs is actually a lot more relevant\nhere- there's extensions we have that expect certain functions, once\ninstalled, to be owned by a superuser (which will still be the case\nhere, thanks to how you've addressed that), but then to not have EXECUTE\nrights GRANT'd to anyone (thanks to the REVOKE FROM PUBLIC in the\ninstallation), but that falls apart when someone's decided to set\nup DEFAULT PRIVILEGES for the superuser. While no one seems to want to\ndiscuss that with me, unfortunately, it's becoming more and more clear\nthat we need to skip DEFAULT PRIVILEGES from being applied during\nextension creation.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 8 Feb 2020 08:54:30 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Our previous\n>> discussions about what privilege level is needed to look at\n>> pg_stat_statements info were all made against a background assumption\n>> that you needed some extra privilege to set up the view in the first\n>> place. I think that would need another look or two before being\n>> comfortable that we're not shifting the goal posts too far.\n\n> While you could maybe argue that's true for pg_stat_statements, it's\n> certainly not true for pg_stat_activity, so I don't buy it for either.\n\nThe analogy of pg_stat_activity certainly suggests that there shouldn't be\na reason *in principle* why pg_stat_statements couldn't be made trusted.\nThere's a difference between that statement and saying that *in practice*\npg_stat_statements is ready to be trusted right now with no further\nchanges. I haven't done the analysis needed to conclude that, and don't\ncare to do so as part of this patch proposal.\n\n> The same goes for just about everything else (I sure hope, at least) in\n> our extensions set- none of the core extensions should be allowing\n> access to things which break our security model, even if they're\n> installed, unless some additional privileges are granted out.\n\nMaybe not, but the principle of defense-in-depth still says that admins\ncould reasonably want to not let dangerous tools get installed in the\nfirst place.\n\n> As such, I really don't agree with this entire line of thinking when it\n> comes to our core extensions. I view the 'trusted extension' model as\n> really for things where the extension author doesn't care about, and\n> doesn't wish to care about, dealing with our security model and making\n> sure that it's followed. We do care, and we do maintain, the security\n> model that we have throughout the core extensions. \n\nI am confused as to what \"entire line of thinking\" you are objecting\nto. 
Are you now saying that we should forget the trusted-extension\nmodel? Or maybe that we can just mark *everything* we ship as trusted?\nI'm not going to agree with either.\n\n>> The bigger picture here is that I don't want to get push-back that\n>> we've broken somebody's security posture by marking too many extensions\n>> trusted. So for anything where there's any question about security\n>> implications, we should err in the conservative direction of leaving\n>> it untrusted.\n\n> This is just going to a) cause our users to complain about not being\n> able to install extensions that they've routinely installed in the past,\n\nThat's utter nonsense. Nothing here is taking away privileges that\nexisted before; if you could install $whatever as superuser before,\nyou still can. OTOH, we *would* have a problem of that sort if we\nmarked $whatever as trusted and then later had to undo it. So I\nthink there's plenty of reason to be conservative about the first\nwave of what-to-mark-as-trusted. Once we've got more experience\nwith this mechanism under our belts, we might decide we can be more\nliberal about it.\n\n> and b) make our users wonder what it is about these extensions that\n> we've decided can't be trusted to even just be installed and if they're\n> at risk today because they've installed them.\n\nYep, you're right, this patch does make value judgements of that\nsort, and I'll stand behind them. 
Giving people the impression that,\nsay, postgres_fdw isn't any more dangerous than cube isn't helpful.\n\n> While it might not seem obvious, the discussion over on the thread about\n> DEFAULT PRIVILEGES and pg_init_privs is actually a lot more relevant\n> here- there's extensions we have that expect certain functions, once\n> installed, to be owned by a superuser (which will still be the case\n> here, thanks to how you've addressed that), but then to not have EXECUTE\n> rights GRANT'd to anyone (thanks to the REVERT FROM PUBLIC in the\n> installation), but that falls apart when someone's decided to set\n> up DEFAULT PRIVILEGES for the superuser. While no one seems to want to\n> discuss that with me, unfortunately, it's becoming more and more clear\n> that we need to skip DEFAULT PRIVILEGES from being applied during\n> extension creation.\n\nOr that we can't let people apply default privileges to superuser-created\nobjects at all. But I agree that that's a different discussion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Feb 2020 12:34:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-29 14:41:16 -0500, Tom Lane wrote:\n> pgcrypto\n\nFWIW, given the code quality, I'm doubtful about putting it into the trusted\nsection.\n\n\nHave you audited how safe the create/upgrade scripts are against being\nused to elevate privileges?\n\nEspecially with FROM UNPACKAGED it seems like it'd be fairly easy to get\nan extension script to do dangerous things (as superuser). One could\njust create pre-existing objects that have *not* been created by a\nprevious version, and some upgrade scripts would do pretty weird\nstuff. There's several that do things like updating catalogs directly\netc. It seems to me that FROM UNPACKAGED shouldn't support trusted.\n\nRegards,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Feb 2020 15:30:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-01-29 14:41:16 -0500, Tom Lane wrote:\n>> pgcrypto\n\n> FWIW, given the code quality, I'm doubtful about putting it into the trusted\n> section.\n\nI don't particularly have an opinion about that --- is it really that\nawful? If there is anything broken in it, wouldn't we consider that\na security problem anyhow?\n\n> Especially with FROM UNPACKAGED it seems like it'd be fairly easy to get\n> an extension script to do dangerous things (as superuser). One could\n> just create pre-existing objects that have *not* been created by a\n> previous version, and some upgrade scripts would do pretty weird\n> stuff. There's several that do things like updating catalogs directly\n> etc. It seems to me that FROM UNPACKAGED shouldn't support trusted.\n\nHmm, seems like a reasonable idea, but I'm not quite sure how to mechanize\nit given that \"unpackaged\" isn't magic in any way so far as extension.c\nis concerned. Maybe we could decide that the time for supporting easy\nupdates from pre-9.1 is past, and just remove all the unpackaged-to-XXX\nscripts? Maybe even remove the \"FROM version\" option altogether.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Feb 2020 18:57:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> It seems to me that FROM UNPACKAGED shouldn't support trusted.\n\n> Hmm, seems like a reasonable idea, but I'm not quite sure how to mechanize\n> it given that \"unpackaged\" isn't magic in any way so far as extension.c\n> is concerned. Maybe we could decide that the time for supporting easy\n> updates from pre-9.1 is past, and just remove all the unpackaged-to-XXX\n> scripts? Maybe even remove the \"FROM version\" option altogether.\n\n[ thinks some more... ] A less invasive idea would be to insist that\nyou be superuser to use the FROM option. But I'm thinking that the\nunpackaged-to-XXX scripts are pretty much dead letters anymore. Has\nanyone even tested them in years? How much longer do we want to be\non the hook to fix them?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Feb 2020 19:09:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-13 18:57:10 -0500, Tom Lane wrote:\n> Maybe we could decide that the time for supporting easy updates from\n> pre-9.1 is past, and just remove all the unpackaged-to-XXX scripts?\n> Maybe even remove the \"FROM version\" option altogether.\n\nYea, that strikes me as a reasonable thing to do. These days that just\nseems to be dangerous, without much advantage.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Feb 2020 17:26:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-02-13 18:57:10 -0500, Tom Lane wrote:\n>> Maybe we could decide that the time for supporting easy updates from\n>> pre-9.1 is past, and just remove all the unpackaged-to-XXX scripts?\n>> Maybe even remove the \"FROM version\" option altogether.\n\n> Yea, that strikes me as a reasonable thing to do. These days that just\n> seems to be dangerous, without much advantage.\n\nHere's a patch to remove the core-code support and documentation for\nthat. I have not included the actual deletion of the contrib modules'\n'unpackaged' scripts, as that seems both long and boring.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 14 Feb 2020 12:39:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-01-29 14:41:16 -0500, Tom Lane wrote:\n> >> pgcrypto\n> \n> > FWIW, given the code quality, I'm doubtful about putting it into the trusted\n> > section.\n> \n> I don't particularly have an opinion about that --- is it really that\n> awful? If there is anything broken in it, wouldn't we consider that\n> a security problem anyhow?\n\nI would certainly hope so- and I would expect that to go for any of the\nother extensions which are included in core. If we aren't going to\nmaintain them and deal with security issues in them, then we should drop\nthem.\n\nWhich goes back to my earlier complaint that having extensions in core\nwhich aren't or can't be marked as trusted is not a position we should\nput our users in. Either they're maintained and have been vetted\nthrough our commit process, or they aren't and should be removed.\n\n> > Especially with FROM UNPACKAGED it seems like it'd be fairly easy to get\n> > an extension script to do dangerous things (as superuser). One could\n> > just create pre-existing objects that have *not* been created by a\n> > previous version, and some upgrade scripts would do pretty weird\n> > stuff. There's several that do things like updating catalogs directly\n> > etc. It seems to me that FROM UNPACKAGED shouldn't support trusted.\n> \n> Hmm, seems like a reasonable idea, but I'm not quite sure how to mechanize\n> it given that \"unpackaged\" isn't magic in any way so far as extension.c\n> is concerned. Maybe we could decide that the time for supporting easy\n> updates from pre-9.1 is past, and just remove all the unpackaged-to-XXX\n> scripts? Maybe even remove the \"FROM version\" option altogether.\n\nI agree in general with dropping the unpackaged-to-XXX bits.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 18 Feb 2020 10:13:50 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "On Thu, Feb 13, 2020 at 07:09:18PM -0500, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> It seems to me that FROM UNPACKAGED shouldn't support trusted.\n> \n> > Hmm, seems like a reasonable idea, but I'm not quite sure how to mechanize\n> > it given that \"unpackaged\" isn't magic in any way so far as extension.c\n> > is concerned. Maybe we could decide that the time for supporting easy\n> > updates from pre-9.1 is past, and just remove all the unpackaged-to-XXX\n> > scripts? Maybe even remove the \"FROM version\" option altogether.\n> \n> [ thinks some more... ] A less invasive idea would be to insist that\n> you be superuser to use the FROM option. But I'm thinking that the\n> unpackaged-to-XXX scripts are pretty much dead letters anymore. Has\n> anyone even tested them in years? How much longer do we want to be\n> on the hook to fix them?\n\nPostGIS uses unpackaged-to-XXX pretty heavily, and has it under\nautomated testing (which broke since \"FROM unpackaged\" support was\nremoved, see 14514.1581638958@sss.pgh.pa.us)\n\nWe'd be ok with requiring SUPERUSER for doing that, since that's\nwhat is currently required so nothing would change for us.\n\nInstead, dropping UPGRADE..FROM completely puts us in trouble of\nhaving to find another way to \"package\" postgis objects.\n\n--strk;\n\n\n",
"msg_date": "Wed, 26 Feb 2020 09:11:21 +0100",
"msg_from": "Sandro Santilli <strk@kbt.io>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-26 09:11:21 +0100, Sandro Santilli wrote:\n> PostGIS uses unpackaged-to-XXX pretty heavily, and has it under\n> automated testing (which broke since \"FROM unpackaged\" support was\n> removed, see 14514.1581638958@sss.pgh.pa.us)\n> \n> We'd be ok with requiring SUPERUSER for doing that, since that's\n> what is currently required so nothing would change for us.\n> \n> Instead, dropping UPGRADE..FROM completely puts us in trouble of\n> having to find another way to \"package\" postgis objects.\n\nCould you explain what postgis is trying to achieve with FROM unpackaged?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 Feb 2020 00:17:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
},
{
"msg_contents": "On Wed, Feb 26, 2020 at 12:17:37AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2020-02-26 09:11:21 +0100, Sandro Santilli wrote:\n> > PostGIS uses unpackaged-to-XXX pretty heavily, and has it under\n> > automated testing (which broke since \"FROM unpackaged\" support was\n> > removed, see 14514.1581638958@sss.pgh.pa.us)\n> > \n> > We'd be ok with requiring SUPERUSER for doing that, since that's\n> > what is currently required so nothing would change for us.\n> > \n> > Instead, dropping UPGRADE..FROM completely puts us in trouble of\n> > having to find another way to \"package\" postgis objects.\n> \n> Could you explain what postgis is trying to achieve with FROM unpackaged?\n\nWe're turning a non-extension based install into an extension-based\none. It's a common need for those who came to PostGIS way before EXTENSION\nwas even invented and for those who remained there for the greater\nflexibility (for example to avoid the raster component, which was\nunavoidable with the EXTENSION mechanism until PostGIS 3.0).\n\nFor the upgrades to 3.0.0 when coming from a previous version we're\nusing that `FROM unpackaged` syntax for re-packaging the raster\ncomponent for those who still want it (raster objects are unpackaged\nfrom the 'postgis' extension on EXTENSION UPDATE because there was no other\nway to move them from one extension to another).\n\nI guess it would be ok for us to do the packaging directly from the\nscripts that would run on `CREATE EXTENSION postgis`, but would that\nmean we'd take the security risk you're trying to avoid by dropping\nthe `FROM unpackaged` syntax?\n\n--strk;\n\n\n",
"msg_date": "Wed, 26 Feb 2020 09:46:32 +0100",
"msg_from": "Sandro Santilli <strk@kbt.io>",
"msg_from_op": false,
"msg_subject": "Re: Marking some contrib modules as trusted extensions"
}
]
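For readers landing on this thread without context: the "trusted extension" mechanism being debated is driven by a flag in the extension's control file. A hypothetical sketch, with an illustrative extension name and comment that are not from this thread:

```ini
# my_extension.control -- hypothetical example of the marker under discussion
comment = 'example extension'
default_version = '1.0'
relocatable = true
# With trusted = true, a user holding CREATE privilege on the current
# database may run CREATE EXTENSION without being superuser; the install
# script still runs with elevated rights, which is why the thread is
# cautious about which contrib modules should carry this flag.
trusted = true
```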
[
{
"msg_contents": "Some ereport calls have excess sets of parentheses. Patch 0001 removes\nthe ones I found in a very quick grep.\n\n0002 removes newlines immediately following parens. These were\npreviously useful because pgindent would move arguments further to the\nleft so that the line would fit under 80 chars. However, pgindent no\nlonger does that, so the newlines are pointless and ugly.\n\nThese being cosmetic cleanups, they're not intended for backpatch,\nthough an argument could be made that doing that would save some future\nbackpatching pain. If there are sufficient votes for that, I'm open to\ndoing it. (Of course, 0002 would not be backpatched further back than\npg10, the first release that uses the \"new\" pgindent rules.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 29 Jan 2020 17:04:01 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "parens cleanup"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Some ereport calls have excess sets of parentheses. patch 0001 removes\n> the ones I found in a very quick grep.\n\n+1 ... kind of looks like somebody got this wrong long ago, and then\nvarious people copied-and-pasted from a bad example.\n\n> 0002 removes newlines immediately following parens. These were\n> previously useful because pgindent would move arguments further to the\n> left so that the line would fit under 80 chars. However, pgindent no\n> longer does that, so the newlines are pointless and ugly.\n\n+1 except for the changes in zic.c. Those line breaks are following\nthe upstream code, so I'd just put them back in the next merge ...\n\n> These being cosmetic cleanups, they're not intended for backpatch,\n> though an argument could be made that doing that would save some future\n> backpatching pain. If ther are sufficient votes for that, I'm open to\n> doing it. (Of course, 0002 would not be backpatched further back than\n> pg10, the first release that uses the \"new\" pgindent rules.)\n\nMeh, -0.1 or so on back-patching.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Jan 2020 16:47:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: parens cleanup"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 04:47:19PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> 0002 removes newlines immediately following parens. These were\n>> previously useful because pgindent would move arguments further to the\n>> left so that the line would fit under 80 chars. However, pgindent no\n>> longer does that, so the newlines are pointless and ugly.\n> \n> +1 except for the changes in zic.c. Those line breaks are following\n> the upstream code, so I'd just put them back in the next merge ...\n\n+1.\n\n>> These being cosmetic cleanups, they're not intended for backpatch,\n>> though an argument could be made that doing that would save some future\n>> backpatching pain. If there are sufficient votes for that, I'm open to\n>> doing it. (Of course, 0002 would not be backpatched further back than\n>> pg10, the first release that uses the \"new\" pgindent rules.)\n> \n> Meh, -0.1 or so on back-patching.\n\nI am not sure that this is worth a back-patch.\n--\nMichael",
"msg_date": "Thu, 30 Jan 2020 11:27:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: parens cleanup"
},
{
"msg_contents": "Thanks both for looking! I have pushed those, removing the zic.c\nchanges.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 Jan 2020 14:13:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: parens cleanup"
}
]
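As background for why those parentheses existed at all: the classic ereport() macro receives its auxiliary reports as a single argument built with the comma operator, so one enclosing pair is required and any further pair is purely redundant. A toy stand-in (not the real PostgreSQL macro; names are illustrative) showing that the extra pair changes nothing:

```c
#include <stdio.h>

/* rest arrives as a single macro argument; the comma operator inside
 * the parentheses evaluates each call left to right */
#define report(rest) ((void) (rest))

static int parts_emitted;       /* counts auxiliary calls made */

static int part(const char *s)
{
    parts_emitted++;
    return printf("%s ", s);
}

/* One enclosing pair is required to smuggle the comma-separated calls
 * through the preprocessor as a single argument... */
static void required_parens(void)
{
    report((part("errcode"), part("errmsg")));
}

/* ...while a second pair is legal but purely redundant -- the kind of
 * excess parentheses the cleanup patch strips. */
static void redundant_parens(void)
{
    report(((part("errcode"), part("errmsg"))));
}
```

Both forms expand to the same evaluation order and side effects; only the token count differs.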
[
{
"msg_contents": "Hi\n\nWe encountered an unfortunate case of $SUBJECT the other day where it\nwould have been preferable to catch the error before rather than after\npg_basebackup ran.\n\nI can't think of any practical reason why pg_basebackup would ever need to\nbe run as root; we disallow that for initdb, pg_ctl and pg_upgrade, so it\nseems reasonable to do the same for pg_basebackup. Trivial patch attached,\nwhich as with the other cases will allow only the --help/--version options\nto be executed as root, otherwise nothing else.\n\nThe patch doesn't update the pg_basebackup documentation page; we don't\nmention it in the pg_ctl and pg_upgrade pages either and it doesn't seem\nparticularly important to mention it explicitly.\n\nI'll add this to the March CF.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 30 Jan 2020 14:29:06 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Prevent pg_basebackup running as root"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 02:29:06PM +0900, Ian Barwick wrote:\n> I can't think of any practical reason why pg_basebackup would ever need to\n> be run as root; we disallow that for initdb, pg_ctl and pg_upgrade, so it\n> seems reasonable to do the same for pg_basebackup. Trivial patch attached,\n> which as with the other cases will allow only the --help/--version options\n> to be executed as root, otherwise nothing else.\n\nMy take on the matter is that we should prevent anything that creates or\nmodifies the data directory from running as root if we would finish with\npermissions incompatible with what a postmaster expects. So +1.\n\n> The patch doesn't update the pg_basebackup documentation page; we don't\n> mention it in the pg_ctl and pg_upgrade pages either and it doesn't seem\n> particularly important to mention it explicitly.\n\nWe don't mention that in the docs of pg_rewind either. Note also that\nbefore 5d5aedd pg_rewind printed an error without exiting :)\n\n> +\t/*\n> +\t * Disallow running as root, as PostgreSQL will be unable to start\n> +\t * with root-owned files.\n> +\t */\n\nHere is a suggestion:\n/*\n * Don't allow pg_basebackup to be run as root, to avoid creating\n * files in the data directory with ownership rights incompatible\n * with the postmaster. 
We need only check for root -- any other user\n * won't have sufficient permissions to modify files in the data\n * directory.\n */ \n\n> +\t#ifndef WIN32\n\nIndentation here.\n\n> +\tif (geteuid() == 0)\t\t\t/* 0 is root's uid */\n> +\t{\n> +\t\tpg_log_error(\"cannot be run as root\");\n> +\t\tfprintf(stderr,\n> +\t\t\t\t_(\"Please log in (using, e.g., \\\"su\\\") as the (unprivileged) user that will\\n\"\n> +\t\t\t\t \"own the server process.\\n\"));\n> +\t\texit(1);\n> +\t}\n> +#endif\n\nI would recommend to map with the existing message of pg_rewind for\nconsistency:\n pg_log_error(\"cannot be executed by \\\"root\\\"\");\n fprintf(stderr, _(\"You must run %s as the PostgreSQL superuser.\\n\"),\n progname); \n\nA backpatch could be surprising for some users as that's a behavior\nchange, so I would recommend not to do a backpatch.\n--\nMichael",
"msg_date": "Thu, 30 Jan 2020 14:57:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent pg_basebackup running as root"
},
{
"msg_contents": "2020年1月30日(木) 14:57 Michael Paquier <michael@paquier.xyz>:\n>\n> On Thu, Jan 30, 2020 at 02:29:06PM +0900, Ian Barwick wrote:\n> > I can't think of any practical reason why pg_basebackup would ever need to\n> > be run as root; we disallow that for initdb, pg_ctl and pg_upgrade, so it\n> > seems reasonable to do the same for pg_basebackup. Trivial patch attached,\n> > which as with the other cases will allow only the --help/--version options\n> > to be executed as root, otherwise nothing else.\n>\n> My take on the matter is that we should prevent anything creating or\n> modifying the data directory to run as root if we finish with\n> permissions incompatible with what a postmaster expects. So +1.\n>\n> > The patch doesn't update the pg_basebackup documentation page; we don't\n> > mention it in the pg_ctl and pg_upgrade pages either and it doesn't seem\n> > particularly important to mention it explicitly.\n>\n> We don't mention that in the docs of pg_rewind either. Note also that\n> before 5d5aedd pg_rewind printed an error without exiting :)\n\nOuch.\n\n> > + /*\n> > + * Disallow running as root, as PostgreSQL will be unable to start\n> > + * with root-owned files.\n> > + */\n>\n> Here is a suggestion:\n> /*\n> * Don't allow pg_basebackup to be run as root, to avoid creating\n> * files in the data directory with ownership rights incompatible\n> * with the postmaster. We need only check for root -- any other user\n> * won't have sufficient permissions to modify files in the data\n> * directory.\n> */\n\nI think we can skip the second sentence altogether. 
It'd be theoretically\neasy enough to come up with some combination of group permissions,\nsticky bits, umask, ACL settings etc. which would allow one user to\nmodify the files owned by another user.\n\n> > + #ifndef WIN32\n>\n> Indentation here.\n\nWhoops, that's what comes from typing on the train ;)\n\n> > + if (geteuid() == 0) /* 0 is root's uid */\n> > + {\n> > + pg_log_error(\"cannot be run as root\");\n> > + fprintf(stderr,\n> > + _(\"Please log in (using, e.g., \\\"su\\\") as the (unprivileged) user that will\\n\"\n> > + \"own the server process.\\n\"));\n> > + exit(1);\n> > + }\n> > +#endif\n>\n> I would recommend to map with the existing message of pg_rewind for\n> consistency:\n> pg_log_error(\"cannot be executed by \\\"root\\\"\");\n> fprintf(stderr, _(\"You must run %s as the PostgreSQL superuser.\\n\"),\n> progname);\n\nHmm, I was using the existing message from initdb and pg_ctl for consistency:\n\n src/bin/initdb/initdb.c:\n\n if (geteuid() == 0) /* 0 is root's uid */\n {\n pg_log_error(\"cannot be run as root\");\n fprintf(stderr,\n _(\"Please log in (using, e.g., \\\"su\\\") as the\n(unprivileged) user that will\\n\"\n \"own the server process.\\n\"));\n exit(1);\n }\n\n src/bin/pg_ctl/pg_ctl.c:\n\n if (geteuid() == 0)\n {\n write_stderr(_(\"%s: cannot be run as root\\n\"\n \"Please log in (using, e.g., \\\"su\\\") as the \"\n \"(unprivileged) user that will\\n\"\n \"own the server process.\\n\"),\n progname);\n exit(1);\n }\n\n src/bin/pg_upgrade/option.c:\n\n if (os_user_effective_id == 0)\n pg_fatal(\"%s: cannot be run as root\\n\", os_info.progname);\n\nI wonder if it would be worth settling on a common message and way of emitting\nit; each utility does it slightly differently.\n\n> A backpatch could be surprising for some users as that's a behavior\n> change, so I would recommend not to do a backpatch.\n\nAgreed.\n\n\nRegards\n\nIan Barwick",
"msg_date": "Thu, 30 Jan 2020 15:38:54 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Prevent pg_basebackup running as root"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 03:38:54PM +0900, Ian Barwick wrote:\n> 2020年1月30日(木) 14:57 Michael Paquier <michael@paquier.xyz>:\n\nI have never noticed that your client was configured in Japanese :)\n\n> I think we can skip the second sentence altogether. It'd be theoretically\n> easy enough to up with some combination of group permissions,\n> sticky bits, umask, ACL settings etc/ which would allow one user to\n> modify the files owned by another user,\n\nOkay, fine by me.\n\n> Hmm, I was using the existing message from initdb and pg_ctl for consistency:\n\nAhh, indeed. pg_rewind has inherited its message from pg_resetwal.\n\n> I wonder if it would be worth settling on a common message and way of emitting\n> it, each utility does it slightly differently.\n\nNot sure that's a good idea. Each tool has its own properties, so it\nis good to keep some flexibility in the error message produced.\n\nAnyway, your patch looks like a good idea to me, so let's see if\nothers have opinions or objections about it.\n--\nMichael",
"msg_date": "Thu, 30 Jan 2020 16:00:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent pg_basebackup running as root"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 04:00:40PM +0900, Michael Paquier wrote:\n> Anyway, your patch looks like a good idea to me, so let's see if\n> others have opinions or objections about it.\n\nSeeing nothing, committed v2.\n--\nMichael",
"msg_date": "Sat, 1 Feb 2020 18:34:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent pg_basebackup running as root"
},
{
"msg_contents": "On 2020/02/01 18:34, Michael Paquier wrote:\n> On Thu, Jan 30, 2020 at 04:00:40PM +0900, Michael Paquier wrote:\n>> Anyway, your patch looks like a good idea to me, so let's see if\n>> others have opinions or objections about it.\n> \n> Seeing nothing, committed v2.\n\nThanks!\n\n\nRegards\n\nIan Barwick\n\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 3 Feb 2020 09:14:23 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Prevent pg_basebackup running as root"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-01 18:34:33 +0900, Michael Paquier wrote:\n> Seeing nothing, committed v2.\n\nFor reference: As a consequence of the discussion starting at\nhttps://www.postgresql.org/message-id/20200205172259.GW3195%40tamriel.snowman.net\nthis patch has been reverted, at least for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Feb 2020 18:08:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Prevent pg_basebackup running as root"
}
]
[
{
"msg_contents": "Hello,\n\nCreateParallelContext() can return a context with seg == NULL. That\ncauses CREATE INDEX to segfault. Instead, it should fall back to\nnon-parallel build. See attached.\n\nThis probably explains a segfault reported over on pgsql-general[1].\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BcfjHy63hXEOc-CRZEPcUhu9%3DP3gKk_W9OiXzj-dfV_g%40mail.gmail.com",
"msg_date": "Thu, 30 Jan 2020 20:37:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Parallel CREATE INDEX vs DSM starvation"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 11:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> CreateParallelContext() can return a context with seg == NULL. That\n> causes CREATE INDEX to segfault. Instead, it should fall back to\n> non-parallel build. See attached.\n\nI guess we can't call _bt_end_parallel() here. So your patch LGTM.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 29 Jan 2020 23:47:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX vs DSM starvation"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 8:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Jan 29, 2020 at 11:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > CreateParallelContext() can return a context with seg == NULL. That\n> > causes CREATE INDEX to segfault. Instead, it should fall back to\n> > non-parallel build. See attached.\n>\n> I guess we can't call _bt_end_parallel() here. So your patch LGTM.\n\nThanks. Pushed.\n\n\n",
"msg_date": "Fri, 31 Jan 2020 11:35:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX vs DSM starvation"
}
]
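The shape of the fix, reduced to a toy model (the names are illustrative stand-ins, not the real parallel.c/nbtsort.c API): when the context comes back without a shared-memory segment, destroy it and fall back to a serial build instead of dereferencing seg.

```c
#include <stdlib.h>

/* Toy parallel context: seg is NULL when no DSM slot was available. */
typedef struct
{
    void *seg;
} ParallelContext;

static ParallelContext *create_parallel_context(int dsm_slots_left)
{
    ParallelContext *pcxt = malloc(sizeof *pcxt);

    pcxt->seg = (dsm_slots_left > 0) ? malloc(16) : NULL;
    return pcxt;
}

static void destroy_parallel_context(ParallelContext *pcxt)
{
    free(pcxt->seg);            /* free(NULL) is a no-op */
    free(pcxt);
}

/* Returns the number of participants; 0 means "fell back to serial". */
static int begin_parallel_build(int dsm_slots_left, int nworkers)
{
    ParallelContext *pcxt = create_parallel_context(dsm_slots_left);

    if (pcxt->seg == NULL)
    {
        /* DSM starvation: clean up and let the caller build serially */
        destroy_parallel_context(pcxt);
        return 0;
    }
    /* ... set up shared sort state and launch workers here ... */
    destroy_parallel_context(pcxt);
    return nworkers;
}
```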
[
{
"msg_contents": "Hello,\n\nAs reported over on pgsql-general[1], we leak shared memory when we\nrun out of DSM slots. To see this, add the random-run-out-of-slots\nhack I showed in that thread, create and analyze a table t(i) with a\nmillion integers, run with dynamic_shared_memory_type=mmap, and try\nSELECT COUNT(*) FROM t t1 JOIN t t2 USING (i) a few times and you'll\nsee that pgbase/pg_dynshmem fills up with leaked memory segments each\ntime an out-of-slots error is raised. (It happens with all DSM\ntypes, but then the way to list the segments varies or there isn't\none, depending on type and OS.) Here's a draft patch to fix that.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2Bzw87b70yJp%2BOzz6LqS6s9QvdO4%2BhQuZc%3DDWLMi6Od6A%40mail.gmail.com",
"msg_date": "Thu, 30 Jan 2020 22:53:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Shared memory leak on DSM slot exhaustion"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 4:54 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As reported over on pgsql-general[1], we leak shared memory when we\n> run out of DSM slots. To see this, add the random-run-out-of-slots\n> hack I showed in that thread, create and analyze a table t(i) with a\n> million integers, run with dynamic_shared_memory_type=mmap, and try\n> SELECT COUNT(*) FROM t t1 JOIN t t2 USING (i) a few times and you'll\n> see that pgbase/pg_dynshmem fills up with leaked memory segments each\n> time an out-of-slots errors is raised. (It happens with all DSM\n> types, but then the way to list the segments varies or there isn't\n> one, depending on type and OS.) Here's a draft patch to fix that.\n\nWhoops. The patch looks OK to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 31 Jan 2020 13:37:40 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory leak on DSM slot exhaustion"
},
{
"msg_contents": "On Sat, Feb 1, 2020 at 7:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Whoops. The patch looks OK to me.\n\nPushed.\n\n\n",
"msg_date": "Sat, 1 Feb 2020 15:37:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory leak on DSM slot exhaustion"
}
]
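A toy model of the leak and the fix described above (the names are illustrative, not the real dsm.c API): segment creation happens before the slot in the control segment is claimed, so when no slot is free, the just-created OS-level segment must be destroyed before raising the error, rather than leaked on the error path.

```c
#include <stdbool.h>

/* Roughly: how many segment files exist under pg_dynshmem. */
static int os_segments_alive;

static void impl_create(void)  { os_segments_alive++; }
static void impl_destroy(void) { os_segments_alive--; }

/* Sketch of dsm_create(): create the OS segment first, then try to
 * claim a slot.  On slot exhaustion, clean up before "erroring". */
static bool dsm_create(bool slot_available)
{
    impl_create();              /* segment now exists on disk */
    if (!slot_available)
    {
        impl_destroy();         /* the fix: tear down before erroring */
        return false;           /* stands in for ereport(ERROR, ...) */
    }
    return true;
}
```

Without the impl_destroy() call on the failure branch, each out-of-slots error would leave one segment behind, matching the symptom reported at the top of the thread.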
[
{
"msg_contents": "Hello\n\nCurrently during point-in-time recovery with recovery_target_action = 'pause' we print log lines:\n\n> LOG: recovery has paused\n> HINT: Execute pg_wal_replay_resume() to continue.\n\nMy colleague told me that this is a terrible moment: to continue what exactly? It sounds like \"to continue replay\", similar to normal pg_wal_replay_pause/pg_wal_replay_resume behavior. We have just a small note in the documentation:\n\n> The paused state can be resumed by using pg_wal_replay_resume() (see Table 9.81), which then causes recovery to end.\n\nBut I think this is an important place and can be improved.\n\nAlso the database does not respond to the promote signals at this stage. The attached patch 0001 adds a test which will fail.\n\nPatch 0002 contains my proposed ideas:\n- introduce a separate message for pause due to a pg_wal_replay_pause() call and for recovery_target_action.\n- check for standby triggers only for recovery_target_action - I am not sure this would be safe for the pg_wal_replay_pause() call case\n\nMaybe a more verbose hint would be appropriate:\n\n> Execute pg_promote() to end recovery or shut down the server, change the recovery target settings to a later target and restart to continue recovery\n\nThoughts?\n\nregards, Sergei",
"msg_date": "Thu, 30 Jan 2020 14:00:43 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/01/30 20:00, Sergei Kornilov wrote:\n> Hello\n> \n> Currently during point-in-time recovery with recovery_target_action = 'pause' we print log lines:\n> \n>> LOG: recovery has paused\n>> HINT: Execute pg_wal_replay_resume() to continue.\n> \n> My colleague told me that this is a terrible moment: to continue what exactly? It sounds like \"to continue replay\", similar to normal pg_wal_replay_pause/pg_wal_replay_resume behavior. We have just a small note in the documentation:\n> \n>> The paused state can be resumed by using pg_wal_replay_resume() (see Table 9.81), which then causes recovery to end.\n> \n> But I think this is an important place and can be improved.\n> \n> Also the database does not respond to the promote signals at this stage. The attached patch 0001 adds a test which will fail.\n> \n> Patch 0002 contains my proposed ideas:\n> - introduce a separate message for pause due to a pg_wal_replay_pause() call and for recovery_target_action.\n\n+1\n\n> - check for standby triggers only for recovery_target_action - I am not sure this would be safe for the pg_wal_replay_pause() call case\n\nAgreed. Basically I think that recoveryPausesHere() should check the promotion\ntrigger whether the recovery target is reached or not. But one question is:\nhow should the recovery behave if the recovery target is reached with\nrecovery_target_action=pause after the promotion is requested?\nShould it pause? Or promote?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 02:00:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "Hi Sergei,\n\nOn 1/30/20 12:00 PM, Fujii Masao wrote:\n> \n>> - check for standby triggers only for recovery_target_action - I am \n>> not sure this would be safe for the pg_wal_replay_pause() call case\n> \n> Agreed. Basically I think that recoveryPausesHere() should check the promotion\n> trigger whether the recovery target is reached or not. But one question is:\n> how should the recovery behave if the recovery target is reached with\n> recovery_target_action=pause after the promotion is requested?\n> Should it pause? Or promote?\n\nDo you have thoughts on Fujii's comments?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 9 Mar 2020 08:30:53 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/03/09 21:30, David Steele wrote:\n> Hi Sergei,\n> \n> On 1/30/20 12:00 PM, Fujii Masao wrote:\n>>\n>>> - check for standby triggers only for recovery_target_action - I am not sure this would be safe for pg_wal_replay_pause() call case\n>>\n>> Agreed. Basically I think that recoveryPausesHere() should the promotion\n>> trigger whether recovery target is reached or not. But one question is;\n>> how should the recovery behave if recovery target is reached with\n>> recovery_target_action=pause after the promotion is requested?\n>> It should pause? Or promote?\n> \n> Do you have thoughts on Fujii's comments?\n\nThanks for the ping! And sorry for not reporting the current status.\n\nI started the discussion about the topic very related to this.\nI'm thinking to apply the change that Sergei proposes after applying\nthe patch I attached in this thread.\nhttps://postgr.es/m/00c194b2-dbbb-2e8a-5b39-13f14048ef0a@oss.nttdata.com\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 10 Mar 2020 01:03:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "On Mon, Mar 9, 2020 at 12:03 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> I started the discussion about the topic very related to this.\n> I'm thinking to apply the change that Sergei proposes after applying\n> the patch I attached in this thread.\n> https://postgr.es/m/00c194b2-dbbb-2e8a-5b39-13f14048ef0a@oss.nttdata.com\n\nI think it would be good to change the primary message, not just the hint. e.g.\n\nLOG: pausing at end of recovery\nHINT: Execute pg_wal_replay_resume() to promote.\n\nvs.\n\nLOG: recovery has paused\nHINT: Execute pg_wal_replay_resume() to continue.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Mar 2020 13:27:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "On 2020/03/10 2:27, Robert Haas wrote:\n> On Mon, Mar 9, 2020 at 12:03 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> I started the discussion about the topic very related to this.\n>> I'm thinking to apply the change that Sergei proposes after applying\n>> the patch I attached in this thread.\n>> https://postgr.es/m/00c194b2-dbbb-2e8a-5b39-13f14048ef0a@oss.nttdata.com\n> \n> I think it would be good to change the primary message, not just the hint. e.g.\n> \n> LOG: pausing at end of recovery\n> HINT: Execute pg_wal_replay_resume() to promote.\n> \n> vs.\n> \n> LOG: recovery has paused\n> HINT: Execute pg_wal_replay_resume() to continue.\n\nLooks good to me.\nAttached is the updated version of the patch.\nThis is based on Sergei's patch.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Tue, 24 Mar 2020 15:39:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "Hello,\r\n\r\nWhen I test the patch, I find an issue: I start a stream with 'promote_trigger_file'\r\nGUC valid, and exec pg_wal_replay_pause() during recovery and as below it\r\nshows success to pause at the first time. I think it use a initialize\r\n'SharedPromoteIsTriggered' value first time I exec the pg_wal_replay_pause().\r\n\r\n#####################################\r\npostgres=# select pg_wal_replay_pause();\r\n pg_wal_replay_pause \r\n---------------------\r\n \r\n(1 row)\r\n\r\npostgres=# select pg_wal_replay_pause();\r\nERROR: standby promotion is ongoing\r\nHINT: pg_wal_replay_pause() cannot be executed after promotion is triggered.\r\npostgres=# select pg_wal_replay_pause();\r\nERROR: recovery is not in progress\r\nHINT: Recovery control functions can only be executed during recovery.\r\npostgres=#\r\n##############################################################\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 31 Mar 2020 08:48:44 +0000",
"msg_from": "movead li <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "Hello\n\n> When I test the patch, I find an issue: I start a stream with 'promote_trigger_file'\n> GUC valid, and exec pg_wal_replay_pause() during recovery and as below it\n> shows success to pause at the first time. I think it use a initialize\n> 'SharedPromoteIsTriggered' value first time I exec the pg_wal_replay_pause().\n\nhm. Are you sure this is related to this patch? Could you explain the exact timing? I mean log_statement = all and relevant logs.\nMost likely this is expected change by https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=496ee647ecd2917369ffcf1eaa0b2cdca07c8730\n\nMy proposal does not change the behavior after this commit, only changing the lines in the logs.\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 31 Mar 2020 12:59:50 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/03/31 18:59, Sergei Kornilov wrote:\n> Hello\n> \n>> When I test the patch, I find an issue: I start a stream with 'promote_trigger_file'\n>> GUC valid, and exec pg_wal_replay_pause() during recovery and as below it\n>> shows success to pause at the first time. I think it use a initialize\n>> 'SharedPromoteIsTriggered' value first time I exec the pg_wal_replay_pause().\n> \n> hm. Are you sure this is related to this patch? Could you explain the exact timing? I mean log_statement = all and relevant logs.\n> Most likely this is expected change by https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=496ee647ecd2917369ffcf1eaa0b2cdca07c8730\n\nYeah, if the problem exists there, that would be my fault in the commit.\nmovead li, could you share the detail procedure to reproduce the issue?\nI'm happy to investigate it.\n\n> My proposal does not change the behavior after this commit, only changing the lines in the logs.\n\nYes. What's your opinion about the latest patch based on Robert's idea?\nBarring any ojection, I'd like to commit that.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 31 Mar 2020 19:19:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "Hello\n\n>> My proposal does not change the behavior after this commit, only changing the lines in the logs.\n>\n> Yes. What's your opinion about the latest patch based on Robert's idea?\n> Barring any ojection, I'd like to commit that.\n\nOh, sorry. Looks good and solves my issue\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 31 Mar 2020 13:33:28 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/03/31 19:33, Sergei Kornilov wrote:\n> Hello\n> \n>>> My proposal does not change the behavior after this commit, only changing the lines in the logs.\n>>\n>> Yes. What's your opinion about the latest patch based on Robert's idea?\n>> Barring any ojection, I'd like to commit that.\n> \n> Oh, sorry. Looks good and solves my issue\n\nThanks for reviewing the patch! Pushed!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 1 Apr 2020 03:37:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">>>> My proposal does not change the behavior after this commit, only changing the lines in the logs.\n>>>\n>>> Yes. What's your opinion about the latest patch based on Robert's idea?\n>>> Barring any ojection, I'd like to commit that.\n>>\n>> Oh, sorry. Looks good and solves my issue\n>\n> Thanks for reviewing the patch! Pushed!\n\nThank you!\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 31 Mar 2020 21:47:48 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">> When I test the patch, I find an issue: I start a stream with 'promote_trigger_file'\r\n>> GUC valid, and exec pg_wal_replay_pause() during recovery and as below it\r\n>> shows success to pause at the first time. I think it use a initialize\r\n>> 'SharedPromoteIsTriggered' value first time I exec the pg_wal_replay_pause().\r\n \r\n>hm. Are you sure this is related to this patch? Could you explain the exact timing? I mean log_statement = all and relevant logs.\r\n>Most likely this is expected change by https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=496ee647ecd2917369ffcf1eaa0b2cdca07c8730 \r\n>My proposal does not change the behavior after this commit, only changing the lines in the logs.\r\n \r\nI test it again with (92d31085e926253aa650b9d1e1f2f09934d0ddfc), and the\r\nissue appeared again. Here is my test method which quite simple:\r\n1. Setup a base backup by pg_basebackup.\r\n2. Insert lots of data in master for the purpose I have enough time to exec\r\n pg_wal_replay_pause() when startup the replication.\r\n3. Configure the 'promote_trigger_file' GUC and create the trigger file.\r\n4. Start the backup(standby), connect it immediately, and exec pg_wal_replay_pause()\r\nThen it appears, and a test log attached.\r\n\r\nI means when I exec the pg_wal_replay_pause() first time, nobody has check the trigger state\r\nby CheckForStandbyTrigger(), it use a Initialized 'SharedPromoteIsTriggered' value. \r\nAnd patch attached can solve the issue.\r\n\r\n\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Wed, 1 Apr 2020 10:42:37 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/04/01 11:42, movead.li@highgo.ca wrote:\n> \n>>> When I test the patch, I find an issue: I start a stream with 'promote_trigger_file'\n>> > GUC valid, and exec pg_wal_replay_pause() during recovery and as below it\n> >>�shows success to pause at the first time. I think it use a initialize\n> >>�'SharedPromoteIsTriggered' value first time I exec the pg_wal_replay_pause().\n>>hm. Are you sure this is related to this patch? Could you explain the exact timing? I mean log_statement = all and relevant logs.\n>>Most likely this is expected change by https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=496ee647ecd2917369ffcf1eaa0b2cdca07c8730\n>>My proposal does not change the behavior after this commit, only changing the lines in the logs.\n> I test it again with (92d31085e926253aa650b9d1e1f2f09934d0ddfc), and the\n> issue appeared again. Here is my test method which quite simple:\n> 1. Setup a base backup by pg_basebackup.\n> 2. Insert lots of data in master for the purpose I have enough time to exec\n> � �pg_wal_replay_pause() when startup the replication.\n> 3. Configure the 'promote_trigger_file' GUC and create the trigger file.\n> 4. Start the backup(standby), connect it immediately, and exec pg_wal_replay_pause()\n> Then it appears, and a test log attached.\n> \n> I means when I exec the pg_wal_replay_pause() first time, nobody has check the trigger state\n> by CheckForStandbyTrigger(), it use a Initialized 'SharedPromoteIsTriggered' value.\n> And patch attached can solve the issue.\n\nThanks for the explanation!\n\nBut, sorry,,, I failed to understand the issue that you reported, yet...\nYou mean that the first call of pg_wal_replay_pause() in the step #2\nshould check whether the trigger file exists or not? If so, could you\ntell me why we should do that?\n\nBTW, right now only the startup process is allowed to call\nCheckForStandbyTrigger(). 
So the backend process calling\npg_wal_replay_pause() and PromoteIsTriggered() is not allowed to call\nCheckForStandbyTrigger(). The current logic is that the startup process\nis responsible for checking the trigger file and set the flag in the shmem\nif promotion is triggered. Then other processes like backend know\nwhether promotion is ongoing or not from the shmem. So basically\nthe backend doesn't need to check the trigger file itself.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 1 Apr 2020 16:22:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">But, sorry,,, I failed to understand the issue that you reported, yet...\r\n>You mean that the first call of pg_wal_replay_pause() in the step #2\r\n>should check whether the trigger file exists or not? If so, could you\r\n>tell me why we should do that?\r\nSorry about my pool english. The 'pg_wal_replay_pause()' is executed\r\nin step 4. I mention it in step 2 is for explain why it need lots of insert\r\ndata. \r\n\r\n>BTW, right now only the startup process is allowed to call\r\n>CheckForStandbyTrigger(). So the backend process calling\r\n>pg_wal_replay_pause() and PromoteIsTriggered() is not allowed to call\r\n>CheckForStandbyTrigger(). The current logic is that the startup process\r\n>is responsible for checking the trigger file and set the flag in the shmem\r\nIt's here, startup process does not call CheckForStandbyTrigger() to check\r\nthe trigger file until a pg_wal_replay_pause() or PromoteIsTriggered() comes,\r\nso first time to call the pg_wal_replay_pause(), it use a wrong \r\n'SharedPromoteIsTriggered' value.\r\n\r\n\r\n>if promotion is triggered. Then other processes like backend know\r\n>whether promotion is ongoing or not from the shmem. So basically\r\n>the backend doesn't need to check the trigger file itself.\r\nIf backend is not allowed to call CheckForStandbyTrigger(), then you should\r\nfind a way to hold it.\r\nIn another word, during the recovery if I add the trigger file, the starup process\r\ndo not know it at all until after a pg_wal_replay_pause() come.\r\n\r\nIn addition, although the first time I exec 'pg_wal_replay_pause' it shows success,\r\nthe startup process is keeping redo(no stop). 
\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>But, sorry,,, I failed to understand the issue that you reported, yet....>You mean that the first call of pg_wal_replay_pause() in the step #2>should check whether the trigger file exists or not? If so, could you>tell me why we should do that?Sorry about my pool english. The 'pg_wal_replay_pause()' is executedin step 4. I mention it in step 2 is for explain why it need lots of insertdata. >BTW, right now only the startup process is allowed to call>CheckForStandbyTrigger(). So the backend process calling>pg_wal_replay_pause() and PromoteIsTriggered() is not allowed to call>CheckForStandbyTrigger(). The current logic is that the startup process>is responsible for checking the trigger file and set the flag in the shmemIt's here, startup process does not call CheckForStandbyTrigger() to checkthe trigger file until a pg_wal_replay_pause() or PromoteIsTriggered() comes,so first time to call the pg_wal_replay_pause(), it use a wrong 'SharedPromoteIsTriggered' value.>if promotion is triggered. Then other processes like backend know>whether promotion is ongoing or not from the shmem. So basically>the backend doesn't need to check the trigger file itself.\nIf backend is not allowed to call CheckForStandbyTrigger(), then you shouldfind a way to hold it.In another word, during the recovery if I add the trigger file, the starup processdo not know it at all until after a pg_wal_replay_pause() come.In addition, although the first time I exec 'pg_wal_replay_pause' it shows success,the startup process is keeping redo(no stop). \nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Wed, 1 Apr 2020 15:53:37 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/04/01 16:53, movead.li@highgo.ca wrote:\n> \n>>But, sorry,,, I failed to understand the issue that you reported, yet....\n>>You mean that the first call of pg_wal_replay_pause() in the step #2\n>>should check whether the trigger file exists or not? If so, could you\n>>tell me why we should do that?\n> Sorry about my pool english.� The 'pg_wal_replay_pause()' is executed\n> in step 4. I mention it in step 2 is for explain why it need lots of insert\n> data.\n> \n>>BTW, right now only the startup process is allowed to call\n>>CheckForStandbyTrigger(). So the backend process calling\n>>pg_wal_replay_pause() and PromoteIsTriggered() is not allowed to call\n>>CheckForStandbyTrigger(). The current logic is that the startup process\n>>is responsible for checking the trigger file and set the flag in the shmem\n> It's here, startup process does not call CheckForStandbyTrigger() to check\n> the trigger file until a pg_wal_replay_pause() or PromoteIsTriggered() comes,\n> so first time to call the pg_wal_replay_pause(), it use a wrong\n> 'SharedPromoteIsTriggered' value.\n> \n> \n>>if promotion is triggered. Then other processes like backend know\n>>whether promotion is ongoing or not from the shmem. So basically\n>>the backend doesn't need to check the trigger file itself.\n> If backend is not allowed to call CheckForStandbyTrigger(), then you should\n> find a way to hold it.\n> In another word, during the recovery if I add the trigger file, the starup process\n> do not know it at all until after a pg_wal_replay_pause() come.\n\nThanks for the explanation again! 
Maybe I understand your point.\n\nAs far as I read the code, in the standby mode, the startup process\nperiodically checks the trigger file in WaitForWALToBecomeAvailable().\nNo?\n\nThere can be small delay between the creation of the trigger file\nand the periodic call to CheckForStandbyTrigger() by the startup process.\nIf you execute pg_wal_replay_pause() during that delay, it would suceed.\n\nBut you'd like to get rid of that delay completely? In other words,\nboth the startup process and the backend calling pg_wal_replay_pause()\nshould detect the existence of the trigger file immdiately after\nit's created?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 1 Apr 2020 17:42:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">Thanks for the explanation again! Maybe I understand your point.\r\nGreat.\r\n\r\n>As far as I read the code, in the standby mode, the startup process\r\n>periodically checks the trigger file in WaitForWALToBecomeAvailable().\r\n>No?\r\nYes it is. \r\n\r\n>There can be small delay between the creation of the trigger file\r\n>and the periodic call to CheckForStandbyTrigger() by the startup process.\r\n>If you execute pg_wal_replay_pause() during that delay, it would suceed.\r\nIf there be a huge number of wal segments need a standby to rewind,\r\nthen it can not be a 'small delay'.\r\n\r\n>But you'd like to get rid of that delay completely? In other words,\r\n>both the startup process and the backend calling pg_wal_replay_pause()\r\n>should detect the existence of the trigger file immdiately after\r\n>it's created?\r\nI want to point out the thing, the pg_wal_replay_pause() shows success but\r\nthe startup process keeping redo, it may cause something confused. So it's\r\nbetter to solve the issue.\r\n\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>Thanks for the explanation again! Maybe I understand your point.Great.>As far as I read the code, in the standby mode, the startup process>periodically checks the trigger file in WaitForWALToBecomeAvailable().>No?Yes it is. >There can be small delay between the creation of the trigger file>and the periodic call to CheckForStandbyTrigger() by the startup process.>If you execute pg_wal_replay_pause() during that delay, it would suceed.If there be a huge number of wal segments need a standby to rewind,then it can not be a 'small delay'.>But you'd like to get rid of that delay completely? 
In other words,>both the startup process and the backend calling pg_wal_replay_pause()>should detect the existence of the trigger file immdiately after>it's created?I want to point out the thing, the pg_wal_replay_pause() shows success butthe startup process keeping redo, it may cause something confused. So it'sbetter to solve the issue.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Wed, 1 Apr 2020 16:58:01 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/04/01 17:58, movead.li@highgo.ca wrote:\n>>Thanks for the explanation again! Maybe I understand your point.\n> Great.\n> \n>>As far as I read the code, in the standby mode, the startup process\n>>periodically checks the trigger file in WaitForWALToBecomeAvailable().\n>>No?\n> Yes it is.\n> \n>>There can be small delay between the creation of the trigger file\n>>and the periodic call to CheckForStandbyTrigger() by the startup process.\n>>If you execute pg_wal_replay_pause() during that delay, it would suceed.\n> If there be a huge number of wal segments need a standby to rewind,\n> then it can not be a 'small delay'.\n\nYeah, that's true.\n\n>>But you'd like to get rid of that delay completely? In other words,\n>>both the startup process and the backend calling pg_wal_replay_pause()\n>>should detect the existence of the trigger file immdiately after\n>>it's created?\n> I want to point out the thing, the pg_wal_replay_pause() shows success but\n> the startup process keeping redo, it may cause something confused. So it's\n> better to solve the issue.\n\nThis happens because the startup process detects the trigger file\nafter pg_wal_replay_pause() succeeds, and then make the recovery\nget out of the paused state. It might be problematic to end the paused\nstate silently? So, to make the situation less confusing, what about\nemitting a log message when ending the paused state because of\nthe promotion?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 1 Apr 2020 18:35:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">This happens because the startup process detects the trigger file\r\n>after pg_wal_replay_pause() succeeds, and then make the recovery\r\n>get out of the paused state. \r\nYes that is.\r\n\r\n>It might be problematic to end the paused\r\n>state silently? So, to make the situation less confusing, what about\r\n>emitting a log message when ending the paused state because of\r\n>the promotion?\r\nBut where to emit it? I think it not so good by emitting to log file,\r\nbecause the user will not check it everytime. BTW, why\r\nCheckForStandbyTrigger() can not be called in backend.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>This happens because the startup process detects the trigger file>after pg_wal_replay_pause() succeeds, and then make the recovery>get out of the paused state. Yes that is.>It might be problematic to end the paused>state silently? So, to make the situation less confusing, what about>emitting a log message when ending the paused state because of>the promotion?But where to emit it? I think it not so good by emitting to log file,because the user will not check it everytime. BTW, whyCheckForStandbyTrigger() can not be called in backend.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Wed, 1 Apr 2020 17:56:14 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/04/01 18:56, movead.li@highgo.ca wrote:\n> \n> >This happens because the startup process detects the trigger file\n>>after pg_wal_replay_pause() succeeds, and then make the recovery\n>>get out of the paused state.\n> Yes that is.\n> \n> >It might be problematic to end the paused\n>>state silently? So, to make the situation less confusing, what about\n>>emitting a log message when ending the paused state because of\n>>the promotion?\n> But where to emit it? I think it not so good by emitting to log file,\n> because the user will not check it everytime.\n\nYeah, I'm thinking to emit the message to log file, like the startup process\ndoes in other places :)\n\n> BTW, why\n> CheckForStandbyTrigger()�can not be called in backend.\n\nBecause\n\n1) promote_signaled flag that IsPromoteSignaled() sees is set\n only in the startup process\n2) the trigger file can be removed or promote_trigger_file can be\n changed after the backend detects it but before the startup process\n does. That is, the result of the trigger file detection can be different\n between the processes.\n\nOf course we can change CheckForStandbyTrigger() so that it can\nbe called by backends, but I'm not sure if it's worth doing that.\n\nOr another idea to reduce the delay between the request for\nthe promotion and the detection of it is to make the startup process\ncall CheckForStandbyTrigger() more frequently. Calling that every\nreplay of WAL record would be overkill and decrease the recovery\nperformance. Calling thst every WAL file might be ok. I'm not sure\nif this is really improvement or not, though...\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 1 Apr 2020 19:37:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "On 2020/04/01 19:37, Fujii Masao wrote:\n> \n> \n> On 2020/04/01 18:56, movead.li@highgo.ca wrote:\n>>\n>> �>This happens because the startup process detects the trigger file\n>>> after pg_wal_replay_pause() succeeds, and then make the recovery\n>>> get out of the paused state.\n>> Yes that is.\n>>\n>> �>It might be problematic to end the paused\n>>> state silently? So, to make the situation less confusing, what about\n>>> emitting a log message when ending the paused state because of\n>>> the promotion?\n>> But where to emit it? I think it not so good by emitting to log file,\n>> because the user will not check it everytime.\n> \n> Yeah, I'm thinking to emit the message to log file, like the startup process\n> does in other places :)\n\nSo I'd like to propose the attached patch. The patch changes the message\nlogged when a promotion is requested, based on whether the recovery is\nin paused state or not.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 1 Apr 2020 20:45:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">So I'd like to propose the attached patch. The patch changes the message\r\n>logged when a promotion is requested, based on whether the recovery is\r\n>in paused state or not.\r\nIt is a compromise, we should notice it in document I think.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>So I'd like to propose the attached patch. The patch changes the message>logged when a promotion is requested, based on whether the recovery is>in paused state or not.\nIt is a compromise, we should notice it in document I think.\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Thu, 2 Apr 2020 09:41:48 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/04/02 10:41, movead.li@highgo.ca wrote:\n> \n> >So I'd like to propose the attached patch. The patch changes the message\n>>logged when a promotion is requested, based on whether the recovery is\n>>in paused state or not.\n> It is a compromise,\n\nOk, so barring any objection, I will commit the patch.\n\n> we should notice it in document I think.\n\nThere is the following explation about the relationship the recovery\npause and the promotion, in the document. You may want to add more\ndescriptions into the docs?\n\n------------------------------\nIf a promotion is triggered while recovery is paused, the paused\nstate ends and a promotion continues.\n------------------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 8 Apr 2020 15:59:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">> we should notice it in document I think. \r\n>There is the following explation about the relationship the recovery\r\n>pause and the promotion, in the document. You may want to add more\r\n>descriptions into the docs?\r\n>------------------------------\r\n>If a promotion is triggered while recovery is paused, the paused\r\n>state ends and a promotion continues.\r\n>------------------------------\r\n\r\nFor example we can add this words:\r\nFirst-time pg_wal_replay_pause() called during recovery which triggered\r\nas promotion, pg_wal_replay_pause() show success but it did not really\r\npause the recovery.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>> we should notice it in document I think. >There is the following explation about the relationship the recovery>pause and the promotion, in the document. You may want to add more>descriptions into the docs?>------------------------------>If a promotion is triggered while recovery is paused, the paused>state ends and a promotion continues.>------------------------------For example we can add this words:First-time pg_wal_replay_pause() called during recovery which triggeredas promotion, pg_wal_replay_pause() show success but it did not reallypause the recovery.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Wed, 8 Apr 2020 15:41:40 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": "\n\nOn 2020/04/08 16:41, movead.li@highgo.ca wrote:\n> \n> >> we should notice it in document I think.\n>>There is the following explation about the relationship the recovery\n>>pause and the promotion, in the document. You may want to add more\n>>descriptions into the docs?\n> >------------------------------\n>>If a promotion is triggered while recovery is paused, the paused\n>>state ends and a promotion continues.\n>>------------------------------\n> \n> For example we can add this words:\n> First-time pg_wal_replay_pause() called during recovery which triggered\n> as promotion, pg_wal_replay_pause() show success but it did not really\n> pause the recovery.\n\nI think this is not true. In your case, pg_wal_replay_pause() succeeded\nbecause the startup process had not detected the promotion request yet\nat that moment.\n\nTo cover your case, what about adding the following description?\n\n-----------------\nThere can be delay between a promotion request by users and the trigger of\na promotion in the server. Note that pg_wal_replay_pause() succeeds\nduring that delay, i.e., until a promotion is actually triggered.\n-----------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 8 Apr 2020 18:25:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
},
{
"msg_contents": ">To cover your case, what about adding the following description?\r\n \r\n>-----------------\r\n>There can be delay between a promotion request by users and the trigger of\r\n>a promotion in the server. Note that pg_wal_replay_pause() succeeds\r\n>during that delay, i.e., until a promotion is actually triggered.\r\n>-----------------\r\nYes that's it.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>To cover your case, what about adding the following description? >----------------->There can be delay between a promotion request by users and the trigger of>a promotion in the server. Note that pg_wal_replay_pause() succeeds>during that delay, i.e., until a promotion is actually triggered.>-----------------\nYes that's it.\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Wed, 8 Apr 2020 17:35:05 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target_action=pause with confusing hint"
}
]